Dataset columns:
- in_source_id: string (length 13 to 58)
- issue: string (length 3 to 241k)
- before_files: list (length 0 to 3)
- after_files: list (length 0 to 3)
- pr_diff: string (length 109 to 107M)
ManimCommunity__manim-2770
proportion_from_point is not working for some cases

## Description of bug / unexpected behavior

Hello, I'm trying to get the proportions of the vertices of a triangle outline, but the result is not correct in some cases. For easy testing, I tried a regular triangle that is shifted and scaled. The result I got was `0, 0.33333333333333337, 16336461.942820664` for the three vertices instead of `0, 0.3333333333333333, 0.6666666666666667`. Obviously, the proportion of the third vertex is wrong.

## Expected behavior

In the case mentioned above, I should get `0, 0.3333333333333333, 0.6666666666666667` for the vertices of any regular triangle, regardless of whether it is shifted and/or scaled.

## How to reproduce the issue

Here is the code I used for testing the regular-triangle case:

<details><summary>Code for reproducing the problem</summary>

## 1. Without shift and scale, the result is ok.

```py
class TestProportion(Scene):
    def construct(self):
        from math import sqrt
        A = sqrt(3) * UP
        B = LEFT
        C = RIGHT
        abc = Polygon(A, B, C)
        for p in abc.get_vertices():
            print(abc.proportion_from_point(p))

================================
output:
0.0
0.3333333333333333
0.6666666666666666
```

## 2. With shift only, the result is ok.

```py
class TestProportion(Scene):
    def construct(self):
        from math import sqrt
        A = sqrt(3) * UP
        B = LEFT
        C = RIGHT
        abc = Polygon(A, B, C)
        abc.shift(LEFT)
        for p in abc.get_vertices():
            print(abc.proportion_from_point(p))

================================
output:
0.0
0.3333333333333333
0.6666666666666666
```

## 3. With scale only, the result is ok.

```py
class TestProportion(Scene):
    def construct(self):
        from math import sqrt
        A = sqrt(3) * UP
        B = LEFT
        C = RIGHT
        abc = Polygon(A, B, C)
        abc.scale(0.8)
        for p in abc.get_vertices():
            print(abc.proportion_from_point(p))

================================
output:
0.0
0.3333333333333333
0.6666666666666666
```

## 4. With shift and scale, the result is NOT ok.

```py
class TestProportion(Scene):
    def construct(self):
        from math import sqrt
        A = sqrt(3) * UP
        B = LEFT
        C = RIGHT
        abc = Polygon(A, B, C)
        abc.shift(LEFT)
        abc.scale(0.8)
        for p in abc.get_vertices():
            print(abc.proportion_from_point(p))

================================
output:
0.0
0.33333333333333337
16336461.942820664
```

</details>

## Additional comments

I found that the wrong value `16336461.942820664` comes from `proportions_along_bezier_curve_for_point`; it could be caused by loss of accuracy while finding the roots of the Bézier polynomial. As per the [NumPy reference](https://numpy.org/doc/stable/reference/generated/numpy.polynomial.polynomial.Polynomial.roots.html?highlight=roots#numpy.polynomial.polynomial.Polynomial.roots):

> the accuracy of the roots decrease the further outside the domain they lie.

- Maybe we could apply some correction to the coefficients (as we already round the roots) before solving the polynomial?
- Or we could discard all roots that do not lie in [0, 1]. I'm not sure why this isn't checked currently; is there a reason I missed?

P.S. The two approaches above were roughly tested on my case and work, but there may be better fixes that I'm not aware of. Finally, thanks for the great library, it really makes producing math videos much easier.
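To make the second suggestion concrete, here is a minimal sketch of the root-filtering idea. The helper name `filter_valid_roots` is hypothetical and not part of Manim; the change that was actually merged is shown in the `pr_diff` below.

```python
import numpy as np

# Sketch only (not Manim's actual code): after solving the per-dimension
# Bezier polynomials, keep only (nearly) real roots that lie inside [0, 1].
# A huge root such as 16336461.94... is a numerical artifact of the solver
# and can never be a valid curve parameter.
def filter_valid_roots(roots, atol=1e-6):
    roots = np.asarray(roots)
    real = roots[np.abs(roots.imag) < atol].real
    return real[(real >= -atol) & (real <= 1 + atol)]

print(filter_valid_roots([0.6666666666666667 + 0j, 16336461.942820664 + 0j]))
# -> [0.66666667]
```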
[ { "content": "\"\"\"Utility functions related to Bézier curves.\"\"\"\n\nfrom __future__ import annotations\n\n__all__ = [\n \"bezier\",\n \"partial_bezier_points\",\n \"partial_quadratic_bezier_points\",\n \"interpolate\",\n \"integer_interpolate\",\n \"mid\",\n \"inverse_interpolate\",\n \"match_interpolate\",\n \"get_smooth_handle_points\",\n \"get_smooth_cubic_bezier_handle_points\",\n \"diag_to_matrix\",\n \"is_closed\",\n \"proportions_along_bezier_curve_for_point\",\n \"point_lies_on_bezier\",\n]\n\n\nimport typing\nfrom functools import reduce\n\nimport numpy as np\nfrom scipy import linalg\n\nfrom ..utils.simple_functions import choose\nfrom ..utils.space_ops import cross2d, find_intersection\n\n\ndef bezier(\n points: np.ndarray,\n) -> typing.Callable[[float], int | typing.Iterable]:\n \"\"\"Classic implementation of a bezier curve.\n\n Parameters\n ----------\n points : np.ndarray\n points defining the desired bezier curve.\n\n Returns\n -------\n typing.Callable[[float], typing.Union[int, typing.Iterable]]\n function describing the bezier curve.\n \"\"\"\n n = len(points) - 1\n\n # Cubic Bezier curve\n if n == 3:\n return (\n lambda t: (1 - t) ** 3 * points[0]\n + 3 * t * (1 - t) ** 2 * points[1]\n + 3 * (1 - t) * t**2 * points[2]\n + t**3 * points[3]\n )\n # Quadratic Bezier curve\n if n == 2:\n return (\n lambda t: (1 - t) ** 2 * points[0]\n + 2 * t * (1 - t) * points[1]\n + t**2 * points[2]\n )\n\n return lambda t: sum(\n ((1 - t) ** (n - k)) * (t**k) * choose(n, k) * point\n for k, point in enumerate(points)\n )\n\n\ndef partial_bezier_points(points: np.ndarray, a: float, b: float) -> np.ndarray:\n \"\"\"Given an array of points which define bezier curve, and two numbers 0<=a<b<=1, return an array of the same size,\n which describes the portion of the original bezier curve on the interval [a, b].\n\n This algorithm is pretty nifty, and pretty dense.\n\n Parameters\n ----------\n points : np.ndarray\n set of points defining the bezier curve.\n a : float\n lower bound of the desired partial bezier curve.\n b : float\n upper bound of the desired partial bezier curve.\n\n Returns\n -------\n np.ndarray\n Set of points defining the partial bezier curve.\n \"\"\"\n if a == 1:\n return [points[-1]] * len(points)\n\n a_to_1 = np.array([bezier(points[i:])(a) for i in range(len(points))])\n end_prop = (b - a) / (1.0 - a)\n return np.array([bezier(a_to_1[: i + 1])(end_prop) for i in range(len(points))])\n\n\n# Shortened version of partial_bezier_points just for quadratics,\n# since this is called a fair amount\ndef partial_quadratic_bezier_points(points, a, b):\n if a == 1:\n return 3 * [points[-1]]\n\n def curve(t):\n return (\n points[0] * (1 - t) * (1 - t)\n + 2 * points[1] * t * (1 - t)\n + points[2] * t * t\n )\n\n # bezier(points)\n h0 = curve(a) if a > 0 else points[0]\n h2 = curve(b) if b < 1 else points[2]\n h1_prime = (1 - a) * points[1] + a * points[2]\n end_prop = (b - a) / (1.0 - a)\n h1 = (1 - end_prop) * h0 + end_prop * h1_prime\n return [h0, h1, h2]\n\n\n# Linear interpolation variants\n\n\ndef interpolate(start: int, end: int, alpha: float) -> float:\n return (1 - alpha) * start + alpha * end\n\n\ndef integer_interpolate(\n start: float,\n end: float,\n alpha: float,\n) -> tuple[int, float]:\n \"\"\"\n Alpha is a float between 0 and 1. 
This returns\n an integer between start and end (inclusive) representing\n appropriate interpolation between them, along with a\n \"residue\" representing a new proportion between the\n returned integer and the next one of the\n list.\n\n For example, if start=0, end=10, alpha=0.46, This\n would return (4, 0.6).\n \"\"\"\n if alpha >= 1:\n return (end - 1, 1.0)\n if alpha <= 0:\n return (start, 0)\n value = int(interpolate(start, end, alpha))\n residue = ((end - start) * alpha) % 1\n return (value, residue)\n\n\ndef mid(start: float, end: float) -> float:\n return (start + end) / 2.0\n\n\ndef inverse_interpolate(start: float, end: float, value: float) -> np.ndarray:\n return np.true_divide(value - start, end - start)\n\n\ndef match_interpolate(\n new_start: float,\n new_end: float,\n old_start: float,\n old_end: float,\n old_value: float,\n) -> np.ndarray:\n return interpolate(\n new_start,\n new_end,\n inverse_interpolate(old_start, old_end, old_value),\n )\n\n\n# Figuring out which bezier curves most smoothly connect a sequence of points\n\n\ndef get_smooth_cubic_bezier_handle_points(points):\n points = np.array(points)\n num_handles = len(points) - 1\n dim = points.shape[1]\n if num_handles < 1:\n return np.zeros((0, dim)), np.zeros((0, dim))\n # Must solve 2*num_handles equations to get the handles.\n # l and u are the number of lower an upper diagonal rows\n # in the matrix to solve.\n l, u = 2, 1\n # diag is a representation of the matrix in diagonal form\n # See https://www.particleincell.com/2012/bezier-splines/\n # for how to arrive at these equations\n diag = np.zeros((l + u + 1, 2 * num_handles))\n diag[0, 1::2] = -1\n diag[0, 2::2] = 1\n diag[1, 0::2] = 2\n diag[1, 1::2] = 1\n diag[2, 1:-2:2] = -2\n diag[3, 0:-3:2] = 1\n # last\n diag[2, -2] = -1\n diag[1, -1] = 2\n # This is the b as in Ax = b, where we are solving for x,\n # and A is represented using diag. 
However, think of entries\n # to x and b as being points in space, not numbers\n b = np.zeros((2 * num_handles, dim))\n b[1::2] = 2 * points[1:]\n b[0] = points[0]\n b[-1] = points[-1]\n\n def solve_func(b):\n return linalg.solve_banded((l, u), diag, b)\n\n use_closed_solve_function = is_closed(points)\n if use_closed_solve_function:\n # Get equations to relate first and last points\n matrix = diag_to_matrix((l, u), diag)\n # last row handles second derivative\n matrix[-1, [0, 1, -2, -1]] = [2, -1, 1, -2]\n # first row handles first derivative\n matrix[0, :] = np.zeros(matrix.shape[1])\n matrix[0, [0, -1]] = [1, 1]\n b[0] = 2 * points[0]\n b[-1] = np.zeros(dim)\n\n def closed_curve_solve_func(b):\n return linalg.solve(matrix, b)\n\n handle_pairs = np.zeros((2 * num_handles, dim))\n for i in range(dim):\n if use_closed_solve_function:\n handle_pairs[:, i] = closed_curve_solve_func(b[:, i])\n else:\n handle_pairs[:, i] = solve_func(b[:, i])\n return handle_pairs[0::2], handle_pairs[1::2]\n\n\ndef get_smooth_handle_points(\n points: np.ndarray,\n) -> tuple[np.ndarray, np.ndarray]:\n \"\"\"Given some anchors (points), compute handles so the resulting bezier curve is smooth.\n\n Parameters\n ----------\n points : np.ndarray\n Anchors.\n\n Returns\n -------\n typing.Tuple[np.ndarray, np.ndarray]\n Computed handles.\n \"\"\"\n # NOTE points here are anchors.\n points = np.array(points)\n num_handles = len(points) - 1\n dim = points.shape[1]\n if num_handles < 1:\n return np.zeros((0, dim)), np.zeros((0, dim))\n # Must solve 2*num_handles equations to get the handles.\n # l and u are the number of lower an upper diagonal rows\n # in the matrix to solve.\n l, u = 2, 1\n # diag is a representation of the matrix in diagonal form\n # See https://www.particleincell.com/2012/bezier-splines/\n # for how to arrive at these equations\n diag = np.zeros((l + u + 1, 2 * num_handles))\n diag[0, 1::2] = -1\n diag[0, 2::2] = 1\n diag[1, 0::2] = 2\n diag[1, 1::2] = 1\n diag[2, 1:-2:2] = -2\n diag[3, 0:-3:2] = 1\n # last\n diag[2, -2] = -1\n diag[1, -1] = 2\n # This is the b as in Ax = b, where we are solving for x,\n # and A is represented using diag. 
However, think of entries\n # to x and b as being points in space, not numbers\n b = np.zeros((2 * num_handles, dim))\n b[1::2] = 2 * points[1:]\n b[0] = points[0]\n b[-1] = points[-1]\n\n def solve_func(b: np.ndarray) -> np.ndarray:\n return linalg.solve_banded((l, u), diag, b)\n\n use_closed_solve_function = is_closed(points)\n if use_closed_solve_function:\n # Get equations to relate first and last points\n matrix = diag_to_matrix((l, u), diag)\n # last row handles second derivative\n matrix[-1, [0, 1, -2, -1]] = [2, -1, 1, -2]\n # first row handles first derivative\n matrix[0, :] = np.zeros(matrix.shape[1])\n matrix[0, [0, -1]] = [1, 1]\n b[0] = 2 * points[0]\n b[-1] = np.zeros(dim)\n\n def closed_curve_solve_func(b: np.ndarray) -> np.ndarray:\n return linalg.solve(matrix, b)\n\n handle_pairs = np.zeros((2 * num_handles, dim))\n for i in range(dim):\n if use_closed_solve_function:\n handle_pairs[:, i] = closed_curve_solve_func(b[:, i])\n else:\n handle_pairs[:, i] = solve_func(b[:, i])\n return handle_pairs[0::2], handle_pairs[1::2]\n\n\ndef diag_to_matrix(l_and_u: tuple[int, int], diag: np.ndarray) -> np.ndarray:\n \"\"\"\n Converts array whose rows represent diagonal\n entries of a matrix into the matrix itself.\n See scipy.linalg.solve_banded\n \"\"\"\n l, u = l_and_u\n dim = diag.shape[1]\n matrix = np.zeros((dim, dim))\n for i in range(l + u + 1):\n np.fill_diagonal(\n matrix[max(0, i - u) :, max(0, u - i) :],\n diag[i, max(0, u - i) :],\n )\n return matrix\n\n\n# Given 4 control points for a cubic bezier curve (or arrays of such)\n# return control points for 2 quadratics (or 2n quadratics) approximating them.\ndef get_quadratic_approximation_of_cubic(a0, h0, h1, a1):\n a0 = np.array(a0, ndmin=2)\n h0 = np.array(h0, ndmin=2)\n h1 = np.array(h1, ndmin=2)\n a1 = np.array(a1, ndmin=2)\n # Tangent vectors at the start and end.\n T0 = h0 - a0\n T1 = a1 - h1\n\n # Search for inflection points. 
If none are found, use the\n # midpoint as a cut point.\n # Based on http://www.caffeineowl.com/graphics/2d/vectorial/cubic-inflexion.html\n has_infl = np.ones(len(a0), dtype=bool)\n\n p = h0 - a0\n q = h1 - 2 * h0 + a0\n r = a1 - 3 * h1 + 3 * h0 - a0\n\n a = cross2d(q, r)\n b = cross2d(p, r)\n c = cross2d(p, q)\n\n disc = b * b - 4 * a * c\n has_infl &= disc > 0\n sqrt_disc = np.sqrt(np.abs(disc))\n settings = np.seterr(all=\"ignore\")\n ti_bounds = []\n for sgn in [-1, +1]:\n ti = (-b + sgn * sqrt_disc) / (2 * a)\n ti[a == 0] = (-c / b)[a == 0]\n ti[(a == 0) & (b == 0)] = 0\n ti_bounds.append(ti)\n ti_min, ti_max = ti_bounds\n np.seterr(**settings)\n ti_min_in_range = has_infl & (0 < ti_min) & (ti_min < 1)\n ti_max_in_range = has_infl & (0 < ti_max) & (ti_max < 1)\n\n # Choose a value of t which starts at 0.5,\n # but is updated to one of the inflection points\n # if they lie between 0 and 1\n\n t_mid = 0.5 * np.ones(len(a0))\n t_mid[ti_min_in_range] = ti_min[ti_min_in_range]\n t_mid[ti_max_in_range] = ti_max[ti_max_in_range]\n\n m, n = a0.shape\n t_mid = t_mid.repeat(n).reshape((m, n))\n\n # Compute bezier point and tangent at the chosen value of t\n mid = bezier([a0, h0, h1, a1])(t_mid)\n Tm = bezier([h0 - a0, h1 - h0, a1 - h1])(t_mid)\n\n # Intersection between tangent lines at end points\n # and tangent in the middle\n i0 = find_intersection(a0, T0, mid, Tm)\n i1 = find_intersection(a1, T1, mid, Tm)\n\n m, n = np.shape(a0)\n result = np.zeros((6 * m, n))\n result[0::6] = a0\n result[1::6] = i0\n result[2::6] = mid\n result[3::6] = mid\n result[4::6] = i1\n result[5::6] = a1\n return result\n\n\ndef is_closed(points: tuple[np.ndarray, np.ndarray]) -> bool:\n return np.allclose(points[0], points[-1])\n\n\ndef proportions_along_bezier_curve_for_point(\n point: typing.Iterable[float | int],\n control_points: typing.Iterable[typing.Iterable[float | int]],\n round_to: float | int | None = 1e-6,\n) -> np.ndarray:\n \"\"\"Obtains the proportion along the bezier curve corresponding to a given point\n given the bezier curve's control points.\n\n The bezier polynomial is constructed using the coordinates of the given point\n as well as the bezier curve's control points. On solving the polynomial for each dimension,\n if there are roots common to every dimension, those roots give the proportion along the\n curve the point is at. If there are no real roots, the point does not lie on the curve.\n\n Parameters\n ----------\n point\n The Cartesian Coordinates of the point whose parameter\n should be obtained.\n control_points\n The Cartesian Coordinates of the ordered control\n points of the bezier curve on which the point may\n or may not lie.\n round_to\n A float whose number of decimal places all values\n such as coordinates of points will be rounded.\n\n Returns\n -------\n np.ndarray[float]\n List containing possible parameters (the proportions along the bezier curve)\n for the given point on the given bezier curve.\n This usually only contains one or zero elements, but if the\n point is, say, at the beginning/end of a closed loop, may return\n a list with more than 1 value, corresponding to the beginning and\n end etc. 
of the loop.\n\n Raises\n ------\n :class:`ValueError`\n When ``point`` and the control points have different shapes.\n \"\"\"\n # Method taken from\n # http://polymathprogrammer.com/2012/04/03/does-point-lie-on-bezier-curve/\n\n if not all(np.shape(point) == np.shape(c_p) for c_p in control_points):\n raise ValueError(\n f\"Point {point} and Control Points {control_points} have different shapes.\",\n )\n\n control_points = np.array(control_points)\n n = len(control_points) - 1\n\n roots = []\n for dim, coord in enumerate(point):\n control_coords = control_points[:, dim]\n terms = []\n for term_power in range(n, -1, -1):\n outercoeff = choose(n, term_power)\n term = []\n sign = 1\n for subterm_num in range(term_power, -1, -1):\n innercoeff = choose(term_power, subterm_num) * sign\n subterm = innercoeff * control_coords[subterm_num]\n if term_power == 0:\n subterm -= coord\n term.append(subterm)\n sign *= -1\n terms.append(outercoeff * sum(np.array(term)))\n if all(term == 0 for term in terms):\n # Then both Bezier curve and Point lie on the same plane.\n # Roots will be none, but in this specific instance, we don't need to consider that.\n continue\n bezier_polynom = np.polynomial.Polynomial(terms[::-1])\n polynom_roots = bezier_polynom.roots()\n if len(polynom_roots) > 0:\n polynom_roots = np.around(polynom_roots, int(np.log10(1 / round_to)))\n roots.append(polynom_roots)\n\n roots = [[root for root in rootlist if root.imag == 0] for rootlist in roots]\n roots = reduce(np.intersect1d, roots) # Get common roots.\n roots = np.array([r.real for r in roots])\n return roots\n\n\ndef point_lies_on_bezier(\n point: typing.Iterable[float | int],\n control_points: typing.Iterable[typing.Iterable[float | int]],\n round_to: float | int | None = 1e-6,\n) -> bool:\n \"\"\"Checks if a given point lies on the bezier curves with the given control points.\n\n This is done by solving the bezier polynomial with the point as the constant term; if\n any real roots exist, the point lies on the bezier curve.\n\n Parameters\n ----------\n point\n The Cartesian Coordinates of the point to check.\n control_points\n The Cartesian Coordinates of the ordered control\n points of the bezier curve on which the point may\n or may not lie.\n round_to\n A float whose number of decimal places all values\n such as coordinates of points will be rounded.\n\n Returns\n -------\n bool\n Whether the point lies on the curve.\n \"\"\"\n\n roots = proportions_along_bezier_curve_for_point(point, control_points, round_to)\n\n return len(roots) > 0\n", "path": "manim/utils/bezier.py" } ]
[ { "content": "\"\"\"Utility functions related to Bézier curves.\"\"\"\n\nfrom __future__ import annotations\n\n__all__ = [\n \"bezier\",\n \"partial_bezier_points\",\n \"partial_quadratic_bezier_points\",\n \"interpolate\",\n \"integer_interpolate\",\n \"mid\",\n \"inverse_interpolate\",\n \"match_interpolate\",\n \"get_smooth_handle_points\",\n \"get_smooth_cubic_bezier_handle_points\",\n \"diag_to_matrix\",\n \"is_closed\",\n \"proportions_along_bezier_curve_for_point\",\n \"point_lies_on_bezier\",\n]\n\n\nimport typing\nfrom functools import reduce\n\nimport numpy as np\nfrom scipy import linalg\n\nfrom ..utils.simple_functions import choose\nfrom ..utils.space_ops import cross2d, find_intersection\n\n\ndef bezier(\n points: np.ndarray,\n) -> typing.Callable[[float], int | typing.Iterable]:\n \"\"\"Classic implementation of a bezier curve.\n\n Parameters\n ----------\n points : np.ndarray\n points defining the desired bezier curve.\n\n Returns\n -------\n typing.Callable[[float], typing.Union[int, typing.Iterable]]\n function describing the bezier curve.\n \"\"\"\n n = len(points) - 1\n\n # Cubic Bezier curve\n if n == 3:\n return (\n lambda t: (1 - t) ** 3 * points[0]\n + 3 * t * (1 - t) ** 2 * points[1]\n + 3 * (1 - t) * t**2 * points[2]\n + t**3 * points[3]\n )\n # Quadratic Bezier curve\n if n == 2:\n return (\n lambda t: (1 - t) ** 2 * points[0]\n + 2 * t * (1 - t) * points[1]\n + t**2 * points[2]\n )\n\n return lambda t: sum(\n ((1 - t) ** (n - k)) * (t**k) * choose(n, k) * point\n for k, point in enumerate(points)\n )\n\n\ndef partial_bezier_points(points: np.ndarray, a: float, b: float) -> np.ndarray:\n \"\"\"Given an array of points which define bezier curve, and two numbers 0<=a<b<=1, return an array of the same size,\n which describes the portion of the original bezier curve on the interval [a, b].\n\n This algorithm is pretty nifty, and pretty dense.\n\n Parameters\n ----------\n points : np.ndarray\n set of points defining the bezier curve.\n a : float\n lower bound of the desired partial bezier curve.\n b : float\n upper bound of the desired partial bezier curve.\n\n Returns\n -------\n np.ndarray\n Set of points defining the partial bezier curve.\n \"\"\"\n if a == 1:\n return [points[-1]] * len(points)\n\n a_to_1 = np.array([bezier(points[i:])(a) for i in range(len(points))])\n end_prop = (b - a) / (1.0 - a)\n return np.array([bezier(a_to_1[: i + 1])(end_prop) for i in range(len(points))])\n\n\n# Shortened version of partial_bezier_points just for quadratics,\n# since this is called a fair amount\ndef partial_quadratic_bezier_points(points, a, b):\n if a == 1:\n return 3 * [points[-1]]\n\n def curve(t):\n return (\n points[0] * (1 - t) * (1 - t)\n + 2 * points[1] * t * (1 - t)\n + points[2] * t * t\n )\n\n # bezier(points)\n h0 = curve(a) if a > 0 else points[0]\n h2 = curve(b) if b < 1 else points[2]\n h1_prime = (1 - a) * points[1] + a * points[2]\n end_prop = (b - a) / (1.0 - a)\n h1 = (1 - end_prop) * h0 + end_prop * h1_prime\n return [h0, h1, h2]\n\n\n# Linear interpolation variants\n\n\ndef interpolate(start: int, end: int, alpha: float) -> float:\n return (1 - alpha) * start + alpha * end\n\n\ndef integer_interpolate(\n start: float,\n end: float,\n alpha: float,\n) -> tuple[int, float]:\n \"\"\"\n Alpha is a float between 0 and 1. 
This returns\n an integer between start and end (inclusive) representing\n appropriate interpolation between them, along with a\n \"residue\" representing a new proportion between the\n returned integer and the next one of the\n list.\n\n For example, if start=0, end=10, alpha=0.46, This\n would return (4, 0.6).\n \"\"\"\n if alpha >= 1:\n return (end - 1, 1.0)\n if alpha <= 0:\n return (start, 0)\n value = int(interpolate(start, end, alpha))\n residue = ((end - start) * alpha) % 1\n return (value, residue)\n\n\ndef mid(start: float, end: float) -> float:\n return (start + end) / 2.0\n\n\ndef inverse_interpolate(start: float, end: float, value: float) -> np.ndarray:\n return np.true_divide(value - start, end - start)\n\n\ndef match_interpolate(\n new_start: float,\n new_end: float,\n old_start: float,\n old_end: float,\n old_value: float,\n) -> np.ndarray:\n return interpolate(\n new_start,\n new_end,\n inverse_interpolate(old_start, old_end, old_value),\n )\n\n\n# Figuring out which bezier curves most smoothly connect a sequence of points\n\n\ndef get_smooth_cubic_bezier_handle_points(points):\n points = np.array(points)\n num_handles = len(points) - 1\n dim = points.shape[1]\n if num_handles < 1:\n return np.zeros((0, dim)), np.zeros((0, dim))\n # Must solve 2*num_handles equations to get the handles.\n # l and u are the number of lower an upper diagonal rows\n # in the matrix to solve.\n l, u = 2, 1\n # diag is a representation of the matrix in diagonal form\n # See https://www.particleincell.com/2012/bezier-splines/\n # for how to arrive at these equations\n diag = np.zeros((l + u + 1, 2 * num_handles))\n diag[0, 1::2] = -1\n diag[0, 2::2] = 1\n diag[1, 0::2] = 2\n diag[1, 1::2] = 1\n diag[2, 1:-2:2] = -2\n diag[3, 0:-3:2] = 1\n # last\n diag[2, -2] = -1\n diag[1, -1] = 2\n # This is the b as in Ax = b, where we are solving for x,\n # and A is represented using diag. 
However, think of entries\n # to x and b as being points in space, not numbers\n b = np.zeros((2 * num_handles, dim))\n b[1::2] = 2 * points[1:]\n b[0] = points[0]\n b[-1] = points[-1]\n\n def solve_func(b):\n return linalg.solve_banded((l, u), diag, b)\n\n use_closed_solve_function = is_closed(points)\n if use_closed_solve_function:\n # Get equations to relate first and last points\n matrix = diag_to_matrix((l, u), diag)\n # last row handles second derivative\n matrix[-1, [0, 1, -2, -1]] = [2, -1, 1, -2]\n # first row handles first derivative\n matrix[0, :] = np.zeros(matrix.shape[1])\n matrix[0, [0, -1]] = [1, 1]\n b[0] = 2 * points[0]\n b[-1] = np.zeros(dim)\n\n def closed_curve_solve_func(b):\n return linalg.solve(matrix, b)\n\n handle_pairs = np.zeros((2 * num_handles, dim))\n for i in range(dim):\n if use_closed_solve_function:\n handle_pairs[:, i] = closed_curve_solve_func(b[:, i])\n else:\n handle_pairs[:, i] = solve_func(b[:, i])\n return handle_pairs[0::2], handle_pairs[1::2]\n\n\ndef get_smooth_handle_points(\n points: np.ndarray,\n) -> tuple[np.ndarray, np.ndarray]:\n \"\"\"Given some anchors (points), compute handles so the resulting bezier curve is smooth.\n\n Parameters\n ----------\n points : np.ndarray\n Anchors.\n\n Returns\n -------\n typing.Tuple[np.ndarray, np.ndarray]\n Computed handles.\n \"\"\"\n # NOTE points here are anchors.\n points = np.array(points)\n num_handles = len(points) - 1\n dim = points.shape[1]\n if num_handles < 1:\n return np.zeros((0, dim)), np.zeros((0, dim))\n # Must solve 2*num_handles equations to get the handles.\n # l and u are the number of lower an upper diagonal rows\n # in the matrix to solve.\n l, u = 2, 1\n # diag is a representation of the matrix in diagonal form\n # See https://www.particleincell.com/2012/bezier-splines/\n # for how to arrive at these equations\n diag = np.zeros((l + u + 1, 2 * num_handles))\n diag[0, 1::2] = -1\n diag[0, 2::2] = 1\n diag[1, 0::2] = 2\n diag[1, 1::2] = 1\n diag[2, 1:-2:2] = -2\n diag[3, 0:-3:2] = 1\n # last\n diag[2, -2] = -1\n diag[1, -1] = 2\n # This is the b as in Ax = b, where we are solving for x,\n # and A is represented using diag. 
However, think of entries\n # to x and b as being points in space, not numbers\n b = np.zeros((2 * num_handles, dim))\n b[1::2] = 2 * points[1:]\n b[0] = points[0]\n b[-1] = points[-1]\n\n def solve_func(b: np.ndarray) -> np.ndarray:\n return linalg.solve_banded((l, u), diag, b)\n\n use_closed_solve_function = is_closed(points)\n if use_closed_solve_function:\n # Get equations to relate first and last points\n matrix = diag_to_matrix((l, u), diag)\n # last row handles second derivative\n matrix[-1, [0, 1, -2, -1]] = [2, -1, 1, -2]\n # first row handles first derivative\n matrix[0, :] = np.zeros(matrix.shape[1])\n matrix[0, [0, -1]] = [1, 1]\n b[0] = 2 * points[0]\n b[-1] = np.zeros(dim)\n\n def closed_curve_solve_func(b: np.ndarray) -> np.ndarray:\n return linalg.solve(matrix, b)\n\n handle_pairs = np.zeros((2 * num_handles, dim))\n for i in range(dim):\n if use_closed_solve_function:\n handle_pairs[:, i] = closed_curve_solve_func(b[:, i])\n else:\n handle_pairs[:, i] = solve_func(b[:, i])\n return handle_pairs[0::2], handle_pairs[1::2]\n\n\ndef diag_to_matrix(l_and_u: tuple[int, int], diag: np.ndarray) -> np.ndarray:\n \"\"\"\n Converts array whose rows represent diagonal\n entries of a matrix into the matrix itself.\n See scipy.linalg.solve_banded\n \"\"\"\n l, u = l_and_u\n dim = diag.shape[1]\n matrix = np.zeros((dim, dim))\n for i in range(l + u + 1):\n np.fill_diagonal(\n matrix[max(0, i - u) :, max(0, u - i) :],\n diag[i, max(0, u - i) :],\n )\n return matrix\n\n\n# Given 4 control points for a cubic bezier curve (or arrays of such)\n# return control points for 2 quadratics (or 2n quadratics) approximating them.\ndef get_quadratic_approximation_of_cubic(a0, h0, h1, a1):\n a0 = np.array(a0, ndmin=2)\n h0 = np.array(h0, ndmin=2)\n h1 = np.array(h1, ndmin=2)\n a1 = np.array(a1, ndmin=2)\n # Tangent vectors at the start and end.\n T0 = h0 - a0\n T1 = a1 - h1\n\n # Search for inflection points. 
If none are found, use the\n # midpoint as a cut point.\n # Based on http://www.caffeineowl.com/graphics/2d/vectorial/cubic-inflexion.html\n has_infl = np.ones(len(a0), dtype=bool)\n\n p = h0 - a0\n q = h1 - 2 * h0 + a0\n r = a1 - 3 * h1 + 3 * h0 - a0\n\n a = cross2d(q, r)\n b = cross2d(p, r)\n c = cross2d(p, q)\n\n disc = b * b - 4 * a * c\n has_infl &= disc > 0\n sqrt_disc = np.sqrt(np.abs(disc))\n settings = np.seterr(all=\"ignore\")\n ti_bounds = []\n for sgn in [-1, +1]:\n ti = (-b + sgn * sqrt_disc) / (2 * a)\n ti[a == 0] = (-c / b)[a == 0]\n ti[(a == 0) & (b == 0)] = 0\n ti_bounds.append(ti)\n ti_min, ti_max = ti_bounds\n np.seterr(**settings)\n ti_min_in_range = has_infl & (0 < ti_min) & (ti_min < 1)\n ti_max_in_range = has_infl & (0 < ti_max) & (ti_max < 1)\n\n # Choose a value of t which starts at 0.5,\n # but is updated to one of the inflection points\n # if they lie between 0 and 1\n\n t_mid = 0.5 * np.ones(len(a0))\n t_mid[ti_min_in_range] = ti_min[ti_min_in_range]\n t_mid[ti_max_in_range] = ti_max[ti_max_in_range]\n\n m, n = a0.shape\n t_mid = t_mid.repeat(n).reshape((m, n))\n\n # Compute bezier point and tangent at the chosen value of t\n mid = bezier([a0, h0, h1, a1])(t_mid)\n Tm = bezier([h0 - a0, h1 - h0, a1 - h1])(t_mid)\n\n # Intersection between tangent lines at end points\n # and tangent in the middle\n i0 = find_intersection(a0, T0, mid, Tm)\n i1 = find_intersection(a1, T1, mid, Tm)\n\n m, n = np.shape(a0)\n result = np.zeros((6 * m, n))\n result[0::6] = a0\n result[1::6] = i0\n result[2::6] = mid\n result[3::6] = mid\n result[4::6] = i1\n result[5::6] = a1\n return result\n\n\ndef is_closed(points: tuple[np.ndarray, np.ndarray]) -> bool:\n return np.allclose(points[0], points[-1])\n\n\ndef proportions_along_bezier_curve_for_point(\n point: typing.Iterable[float | int],\n control_points: typing.Iterable[typing.Iterable[float | int]],\n round_to: float | int | None = 1e-6,\n) -> np.ndarray:\n \"\"\"Obtains the proportion along the bezier curve corresponding to a given point\n given the bezier curve's control points.\n\n The bezier polynomial is constructed using the coordinates of the given point\n as well as the bezier curve's control points. On solving the polynomial for each dimension,\n if there are roots common to every dimension, those roots give the proportion along the\n curve the point is at. If there are no real roots, the point does not lie on the curve.\n\n Parameters\n ----------\n point\n The Cartesian Coordinates of the point whose parameter\n should be obtained.\n control_points\n The Cartesian Coordinates of the ordered control\n points of the bezier curve on which the point may\n or may not lie.\n round_to\n A float whose number of decimal places all values\n such as coordinates of points will be rounded.\n\n Returns\n -------\n np.ndarray[float]\n List containing possible parameters (the proportions along the bezier curve)\n for the given point on the given bezier curve.\n This usually only contains one or zero elements, but if the\n point is, say, at the beginning/end of a closed loop, may return\n a list with more than 1 value, corresponding to the beginning and\n end etc. 
of the loop.\n\n Raises\n ------\n :class:`ValueError`\n When ``point`` and the control points have different shapes.\n \"\"\"\n # Method taken from\n # http://polymathprogrammer.com/2012/04/03/does-point-lie-on-bezier-curve/\n\n if not all(np.shape(point) == np.shape(c_p) for c_p in control_points):\n raise ValueError(\n f\"Point {point} and Control Points {control_points} have different shapes.\",\n )\n\n control_points = np.array(control_points)\n n = len(control_points) - 1\n\n roots = []\n for dim, coord in enumerate(point):\n control_coords = control_points[:, dim]\n terms = []\n for term_power in range(n, -1, -1):\n outercoeff = choose(n, term_power)\n term = []\n sign = 1\n for subterm_num in range(term_power, -1, -1):\n innercoeff = choose(term_power, subterm_num) * sign\n subterm = innercoeff * control_coords[subterm_num]\n if term_power == 0:\n subterm -= coord\n term.append(subterm)\n sign *= -1\n terms.append(outercoeff * sum(np.array(term)))\n if all(term == 0 for term in terms):\n # Then both Bezier curve and Point lie on the same plane.\n # Roots will be none, but in this specific instance, we don't need to consider that.\n continue\n bezier_polynom = np.polynomial.Polynomial(terms[::-1])\n polynom_roots = bezier_polynom.roots()\n if len(polynom_roots) > 0:\n polynom_roots = np.around(polynom_roots, int(np.log10(1 / round_to)))\n roots.append(polynom_roots)\n\n roots = [[root for root in rootlist if root.imag == 0] for rootlist in roots]\n roots = reduce(np.intersect1d, roots) # Get common roots.\n roots = np.array([r.real for r in roots if 0 <= r.real <= 1])\n return roots\n\n\ndef point_lies_on_bezier(\n point: typing.Iterable[float | int],\n control_points: typing.Iterable[typing.Iterable[float | int]],\n round_to: float | int | None = 1e-6,\n) -> bool:\n \"\"\"Checks if a given point lies on the bezier curves with the given control points.\n\n This is done by solving the bezier polynomial with the point as the constant term; if\n any real roots exist, the point lies on the bezier curve.\n\n Parameters\n ----------\n point\n The Cartesian Coordinates of the point to check.\n control_points\n The Cartesian Coordinates of the ordered control\n points of the bezier curve on which the point may\n or may not lie.\n round_to\n A float whose number of decimal places all values\n such as coordinates of points will be rounded.\n\n Returns\n -------\n bool\n Whether the point lies on the curve.\n \"\"\"\n\n roots = proportions_along_bezier_curve_for_point(point, control_points, round_to)\n\n return len(roots) > 0\n", "path": "manim/utils/bezier.py" } ]
diff --git a/manim/utils/bezier.py b/manim/utils/bezier.py index 6f8187eef5..4a489318bd 100644 --- a/manim/utils/bezier.py +++ b/manim/utils/bezier.py @@ -478,7 +478,7 @@ def proportions_along_bezier_curve_for_point( roots = [[root for root in rootlist if root.imag == 0] for rootlist in roots] roots = reduce(np.intersect1d, roots) # Get common roots. - roots = np.array([r.real for r in roots]) + roots = np.array([r.real for r in roots if 0 <= r.real <= 1]) return roots diff --git a/tests/test_vectorized_mobject.py b/tests/test_vectorized_mobject.py index 181efc0444..5e87c820ca 100644 --- a/tests/test_vectorized_mobject.py +++ b/tests/test_vectorized_mobject.py @@ -3,7 +3,17 @@ import numpy as np import pytest -from manim import Circle, Line, Mobject, RegularPolygon, Square, VDict, VGroup, VMobject +from manim import ( + Circle, + Line, + Mobject, + Polygon, + RegularPolygon, + Square, + VDict, + VGroup, + VMobject, +) from manim.constants import PI @@ -295,3 +305,14 @@ def test_vmobject_point_at_angle(): a = Circle() p = a.point_at_angle(4 * PI) np.testing.assert_array_equal(a.points[0], p) + + +def test_proportion_from_point(): + A = np.sqrt(3) * np.array([0, 1, 0]) + B = np.array([-1, 0, 0]) + C = np.array([1, 0, 0]) + abc = Polygon(A, B, C) + abc.shift(np.array([-1, 0, 0])) + abc.scale(0.8) + props = [abc.proportion_from_point(p) for p in abc.get_vertices()] + np.testing.assert_allclose(props, [0, 1 / 3, 2 / 3])
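For reference, the regression test added in the diff above can also be run as a standalone check. This is a hedged sketch that simply mirrors the test body, assuming a Manim installation with the patch applied.

```python
import numpy as np
from manim import Polygon  # assumes a patched Manim is importable

# Reproduces the failing case from the issue; with the [0, 1] root filter
# in place, proportion_from_point returns the expected vertex proportions.
A = np.sqrt(3) * np.array([0, 1, 0])
B = np.array([-1, 0, 0])
C = np.array([1, 0, 0])
abc = Polygon(A, B, C)
abc.shift(np.array([-1, 0, 0]))
abc.scale(0.8)
props = [abc.proportion_from_point(p) for p in abc.get_vertices()]
np.testing.assert_allclose(props, [0, 1 / 3, 2 / 3])
```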
jupyterhub__jupyterhub-3646
New user token returns `200` instead of `201`

### Bug description

The API docs for stable & latest list the following response status:

```
201 Created
The newly created token
```

But the endpoint currently returns a status of `200`.

#### Expected behaviour

Should return a response status of `201`.

#### Actual behaviour

Currently returns a response status of `200`.

### How to reproduce

Make a request to `POST /users/{name}/tokens`.

See https://jupyterhub.readthedocs.io/en/latest/_static/rest-api/index.html#path--users--name--tokens
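In the handler file below, `UserTokenListAPIHandler.post` writes the token model without setting an explicit status, so Tornado's default `200` is returned, while sibling handlers in the same file (for example `UserAPIHandler.post`) already call `self.set_status(201)`. The following is a minimal, self-contained sketch of that Tornado pattern, illustrative only and not the actual JupyterHub patch:

```python
import json

from tornado import web


class TokenCreateHandler(web.RequestHandler):
    """Toy handler showing the create-then-201 pattern (hypothetical class)."""

    def post(self):
        # ... create the token here ...
        token_model = {"id": "a1", "kind": "api_token"}  # placeholder model
        self.write(json.dumps(token_model))
        # Tornado defaults to 200; set 201 explicitly so the response
        # matches the documented "201 Created" status.
        self.set_status(201)
```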
[ { "content": "\"\"\"User handlers\"\"\"\n# Copyright (c) Jupyter Development Team.\n# Distributed under the terms of the Modified BSD License.\nimport asyncio\nimport json\nfrom datetime import datetime\nfrom datetime import timedelta\nfrom datetime import timezone\n\nfrom async_generator import aclosing\nfrom dateutil.parser import parse as parse_date\nfrom sqlalchemy import func\nfrom sqlalchemy import or_\nfrom tornado import web\nfrom tornado.iostream import StreamClosedError\n\nfrom .. import orm\nfrom .. import scopes\nfrom ..roles import assign_default_roles\nfrom ..scopes import needs_scope\nfrom ..user import User\nfrom ..utils import isoformat\nfrom ..utils import iterate_until\nfrom ..utils import maybe_future\nfrom ..utils import url_path_join\nfrom .base import APIHandler\n\n\nclass SelfAPIHandler(APIHandler):\n \"\"\"Return the authenticated user's model\n\n Based on the authentication info. Acts as a 'whoami' for auth tokens.\n \"\"\"\n\n async def get(self):\n user = self.current_user\n if user is None:\n raise web.HTTPError(403)\n\n _added_scopes = set()\n if isinstance(user, orm.Service):\n # ensure we have the minimal 'identify' scopes for the token owner\n identify_scopes = scopes.identify_scopes(user)\n get_model = self.service_model\n else:\n identify_scopes = scopes.identify_scopes(user.orm_user)\n get_model = self.user_model\n\n # ensure we have permission to identify ourselves\n # all tokens can do this on this endpoint\n for scope in identify_scopes:\n if scope not in self.expanded_scopes:\n _added_scopes.add(scope)\n self.expanded_scopes.add(scope)\n if _added_scopes:\n # re-parse with new scopes\n self.parsed_scopes = scopes.parse_scopes(self.expanded_scopes)\n\n model = get_model(user)\n\n # add scopes to identify model,\n # but not the scopes we added to ensure we could read our own model\n model[\"scopes\"] = sorted(self.expanded_scopes.difference(_added_scopes))\n self.write(json.dumps(model))\n\n\nclass UserListAPIHandler(APIHandler):\n def _user_has_ready_spawner(self, orm_user):\n \"\"\"Return True if a user has *any* ready spawners\n\n Used for filtering from active -> ready\n \"\"\"\n user = self.users[orm_user]\n return any(spawner.ready for spawner in user.spawners.values())\n\n @needs_scope('list:users')\n def get(self):\n state_filter = self.get_argument(\"state\", None)\n offset, limit = self.get_api_pagination()\n\n # post_filter\n post_filter = None\n\n if state_filter in {\"active\", \"ready\"}:\n # only get users with active servers\n # an 'active' Spawner has a server record in the database\n # which means Spawner.server != None\n # it may still be in a pending start/stop state.\n # join filters out users with no Spawners\n query = (\n self.db.query(orm.User)\n # join filters out any Users with no Spawners\n .join(orm.Spawner)\n # this implicitly gets Users with *any* active server\n .filter(orm.Spawner.server != None)\n )\n if state_filter == \"ready\":\n # have to post-process query results because active vs ready\n # can only be distinguished with in-memory Spawner properties\n post_filter = self._user_has_ready_spawner\n\n elif state_filter == \"inactive\":\n # only get users with *no* active servers\n # as opposed to users with *any inactive servers*\n # this is the complement to the above query.\n # how expensive is this with lots of servers?\n query = (\n self.db.query(orm.User)\n .outerjoin(orm.Spawner)\n .outerjoin(orm.Server)\n .group_by(orm.User.id)\n .having(func.count(orm.Server.id) == 0)\n )\n elif state_filter:\n raise 
web.HTTPError(400, \"Unrecognized state filter: %r\" % state_filter)\n else:\n # no filter, return all users\n query = self.db.query(orm.User)\n\n sub_scope = self.parsed_scopes['list:users']\n if sub_scope != scopes.Scope.ALL:\n if not set(sub_scope).issubset({'group', 'user'}):\n # don't expand invalid !server=x filter to all users!\n self.log.warning(\n \"Invalid filter on list:user for {self.current_user}: {sub_scope}\"\n )\n raise web.HTTPError(403)\n filters = []\n if 'user' in sub_scope:\n filters.append(orm.User.name.in_(sub_scope['user']))\n if 'group' in sub_scope:\n filters.append(\n orm.User.groups.any(\n orm.Group.name.in_(sub_scope['group']),\n )\n )\n\n if len(filters) == 1:\n query = query.filter(filters[0])\n else:\n query = query.filter(or_(*filters))\n\n full_query = query\n query = query.order_by(orm.User.id.asc()).offset(offset).limit(limit)\n\n user_list = []\n for u in query:\n if post_filter is None or post_filter(u):\n user_model = self.user_model(u)\n if user_model:\n user_list.append(user_model)\n\n total_count = full_query.count()\n if self.accepts_pagination:\n data = self.paginated_model(user_list, offset, limit, total_count)\n else:\n query_count = query.count()\n if offset == 0 and total_count > query_count:\n self.log.warning(\n f\"Truncated user list in request that does not expect pagination. Processing {query_count} of {total_count} total users.\"\n )\n data = user_list\n\n self.write(json.dumps(data))\n\n @needs_scope('admin:users')\n async def post(self):\n data = self.get_json_body()\n if not data or not isinstance(data, dict) or not data.get('usernames'):\n raise web.HTTPError(400, \"Must specify at least one user to create\")\n\n usernames = data.pop('usernames')\n self._check_user_model(data)\n # admin is set for all users\n # to create admin and non-admin users requires at least two API requests\n admin = data.get('admin', False)\n\n to_create = []\n invalid_names = []\n for name in usernames:\n name = self.authenticator.normalize_username(name)\n if not self.authenticator.validate_username(name):\n invalid_names.append(name)\n continue\n user = self.find_user(name)\n if user is not None:\n self.log.warning(\"User %s already exists\" % name)\n else:\n to_create.append(name)\n\n if invalid_names:\n if len(invalid_names) == 1:\n msg = \"Invalid username: %s\" % invalid_names[0]\n else:\n msg = \"Invalid usernames: %s\" % ', '.join(invalid_names)\n raise web.HTTPError(400, msg)\n\n if not to_create:\n raise web.HTTPError(409, \"All %i users already exist\" % len(usernames))\n\n created = []\n for name in to_create:\n user = self.user_from_username(name)\n if admin:\n user.admin = True\n assign_default_roles(self.db, entity=user)\n self.db.commit()\n try:\n await maybe_future(self.authenticator.add_user(user))\n except Exception as e:\n self.log.error(\"Failed to create user: %s\" % name, exc_info=True)\n self.users.delete(user)\n raise web.HTTPError(400, f\"Failed to create user {name}: {e}\")\n else:\n created.append(user)\n\n self.write(json.dumps([self.user_model(u) for u in created]))\n self.set_status(201)\n\n\nclass UserAPIHandler(APIHandler):\n @needs_scope(\n 'read:users',\n 'read:users:name',\n 'read:servers',\n 'read:users:groups',\n 'read:users:activity',\n 'read:roles:users',\n )\n async def get(self, user_name):\n user = self.find_user(user_name)\n if user is None:\n raise web.HTTPError(404)\n model = self.user_model(user)\n # auth state will only be shown if the requester is an admin\n # this means users can't see their own auth state 
unless they\n # are admins, Hub admins often are also marked as admins so they\n # will see their auth state but normal users won't\n if 'auth_state' in model:\n model['auth_state'] = await user.get_auth_state()\n self.write(json.dumps(model))\n\n @needs_scope('admin:users')\n async def post(self, user_name):\n data = self.get_json_body()\n user = self.find_user(user_name)\n if user is not None:\n raise web.HTTPError(409, \"User %s already exists\" % user_name)\n\n user = self.user_from_username(user_name)\n if data:\n self._check_user_model(data)\n if 'admin' in data:\n user.admin = data['admin']\n assign_default_roles(self.db, entity=user)\n self.db.commit()\n\n try:\n await maybe_future(self.authenticator.add_user(user))\n except Exception:\n self.log.error(\"Failed to create user: %s\" % user_name, exc_info=True)\n # remove from registry\n self.users.delete(user)\n raise web.HTTPError(400, \"Failed to create user: %s\" % user_name)\n\n self.write(json.dumps(self.user_model(user)))\n self.set_status(201)\n\n @needs_scope('delete:users')\n async def delete(self, user_name):\n user = self.find_user(user_name)\n if user is None:\n raise web.HTTPError(404)\n if user.name == self.current_user.name:\n raise web.HTTPError(400, \"Cannot delete yourself!\")\n if user.spawner._stop_pending:\n raise web.HTTPError(\n 400,\n \"%s's server is in the process of stopping, please wait.\" % user_name,\n )\n if user.running:\n await self.stop_single_user(user)\n if user.spawner._stop_pending:\n raise web.HTTPError(\n 400,\n \"%s's server is in the process of stopping, please wait.\"\n % user_name,\n )\n\n await maybe_future(self.authenticator.delete_user(user))\n\n await user.delete_spawners()\n\n # remove from registry\n self.users.delete(user)\n\n self.set_status(204)\n\n @needs_scope('admin:users')\n async def patch(self, user_name):\n user = self.find_user(user_name)\n if user is None:\n raise web.HTTPError(404)\n data = self.get_json_body()\n self._check_user_model(data)\n if 'name' in data and data['name'] != user_name:\n # check if the new name is already taken inside db\n if self.find_user(data['name']):\n raise web.HTTPError(\n 400,\n \"User %s already exists, username must be unique\" % data['name'],\n )\n for key, value in data.items():\n if key == 'auth_state':\n await user.save_auth_state(value)\n else:\n setattr(user, key, value)\n if key == 'admin':\n assign_default_roles(self.db, entity=user)\n self.db.commit()\n user_ = self.user_model(user)\n user_['auth_state'] = await user.get_auth_state()\n self.write(json.dumps(user_))\n\n\nclass UserTokenListAPIHandler(APIHandler):\n \"\"\"API endpoint for listing/creating tokens\"\"\"\n\n @needs_scope('read:tokens')\n def get(self, user_name):\n \"\"\"Get tokens for a given user\"\"\"\n user = self.find_user(user_name)\n if not user:\n raise web.HTTPError(404, \"No such user: %s\" % user_name)\n\n now = datetime.utcnow()\n api_tokens = []\n\n def sort_key(token):\n return token.last_activity or token.created\n\n for token in sorted(user.api_tokens, key=sort_key):\n if token.expires_at and token.expires_at < now:\n # exclude expired tokens\n self.db.delete(token)\n self.db.commit()\n continue\n api_tokens.append(self.token_model(token))\n\n self.write(json.dumps({'api_tokens': api_tokens}))\n\n async def post(self, user_name):\n body = self.get_json_body() or {}\n if not isinstance(body, dict):\n raise web.HTTPError(400, \"Body must be a JSON dict or empty\")\n\n requester = self.current_user\n if requester is None:\n # defer to Authenticator for 
identifying the user\n # can be username+password or an upstream auth token\n try:\n name = await self.authenticate(body.get('auth'))\n if isinstance(name, dict):\n # not a simple string so it has to be a dict\n name = name.get('name')\n except web.HTTPError as e:\n # turn any authentication error into 403\n raise web.HTTPError(403)\n except Exception as e:\n # suppress and log error here in case Authenticator\n # isn't prepared to handle auth via this data\n self.log.error(\n \"Error authenticating request for %s: %s\", self.request.uri, e\n )\n raise web.HTTPError(403)\n requester = self.find_user(name)\n if requester is None:\n # couldn't identify requester\n raise web.HTTPError(403)\n self._jupyterhub_user = requester\n self._resolve_roles_and_scopes()\n user = self.find_user(user_name)\n kind = 'user' if isinstance(requester, User) else 'service'\n scope_filter = self.get_scope_filter('tokens')\n if user is None or not scope_filter(user, kind):\n raise web.HTTPError(\n 403,\n f\"{kind.title()} {user_name} not found or no permissions to generate tokens\",\n )\n\n note = body.get('note')\n if not note:\n note = \"Requested via api\"\n if requester is not user:\n note += f\" by {kind} {requester.name}\"\n\n token_roles = body.get('roles')\n try:\n api_token = user.new_api_token(\n note=note, expires_in=body.get('expires_in', None), roles=token_roles\n )\n except NameError:\n raise web.HTTPError(404, \"Requested roles %r not found\" % token_roles)\n except ValueError:\n raise web.HTTPError(\n 403,\n \"Requested roles %r cannot have higher permissions than the token owner\"\n % token_roles,\n )\n if requester is not user:\n self.log.info(\n \"%s %s requested API token for %s\",\n kind.title(),\n requester.name,\n user.name,\n )\n else:\n user_kind = 'user' if isinstance(user, User) else 'service'\n self.log.info(\"%s %s requested new API token\", user_kind.title(), user.name)\n # retrieve the model\n token_model = self.token_model(orm.APIToken.find(self.db, api_token))\n token_model['token'] = api_token\n self.write(json.dumps(token_model))\n\n\nclass UserTokenAPIHandler(APIHandler):\n \"\"\"API endpoint for retrieving/deleting individual tokens\"\"\"\n\n def find_token_by_id(self, user, token_id):\n \"\"\"Find a token object by token-id key\n\n Raises 404 if not found for any reason\n (e.g. 
wrong owner, invalid key format, etc.)\n \"\"\"\n not_found = f\"No such token {token_id} for user {user.name}\"\n prefix, id_ = token_id[:1], token_id[1:]\n if prefix != 'a':\n raise web.HTTPError(404, not_found)\n try:\n id_ = int(id_)\n except ValueError:\n raise web.HTTPError(404, not_found)\n\n orm_token = self.db.query(orm.APIToken).filter_by(id=id_).first()\n if orm_token is None or orm_token.user is not user.orm_user:\n raise web.HTTPError(404, \"Token not found %s\", orm_token)\n return orm_token\n\n @needs_scope('read:tokens')\n def get(self, user_name, token_id):\n \"\"\"\"\"\"\n user = self.find_user(user_name)\n if not user:\n raise web.HTTPError(404, \"No such user: %s\" % user_name)\n token = self.find_token_by_id(user, token_id)\n self.write(json.dumps(self.token_model(token)))\n\n @needs_scope('tokens')\n def delete(self, user_name, token_id):\n \"\"\"Delete a token\"\"\"\n user = self.find_user(user_name)\n if not user:\n raise web.HTTPError(404, \"No such user: %s\" % user_name)\n token = self.find_token_by_id(user, token_id)\n # deleting an oauth token deletes *all* oauth tokens for that client\n client_id = token.client_id\n if token.client_id != \"jupyterhub\":\n tokens = [\n token for token in user.api_tokens if token.client_id == client_id\n ]\n else:\n tokens = [token]\n for token in tokens:\n self.db.delete(token)\n self.db.commit()\n self.set_header('Content-Type', 'text/plain')\n self.set_status(204)\n\n\nclass UserServerAPIHandler(APIHandler):\n \"\"\"Start and stop single-user servers\"\"\"\n\n @needs_scope('servers')\n async def post(self, user_name, server_name=''):\n user = self.find_user(user_name)\n if server_name:\n if not self.allow_named_servers:\n raise web.HTTPError(400, \"Named servers are not enabled.\")\n if (\n self.named_server_limit_per_user > 0\n and server_name not in user.orm_spawners\n ):\n named_spawners = list(user.all_spawners(include_default=False))\n if self.named_server_limit_per_user <= len(named_spawners):\n raise web.HTTPError(\n 400,\n \"User {} already has the maximum of {} named servers.\"\n \" One must be deleted before a new server can be created\".format(\n user_name, self.named_server_limit_per_user\n ),\n )\n spawner = user.spawners[server_name]\n pending = spawner.pending\n if pending == 'spawn':\n self.set_header('Content-Type', 'text/plain')\n self.set_status(202)\n return\n elif pending:\n raise web.HTTPError(400, f\"{spawner._log_name} is pending {pending}\")\n\n if spawner.ready:\n # include notify, so that a server that died is noticed immediately\n # set _spawn_pending flag to prevent races while we wait\n spawner._spawn_pending = True\n try:\n state = await spawner.poll_and_notify()\n finally:\n spawner._spawn_pending = False\n if state is None:\n raise web.HTTPError(400, \"%s is already running\" % spawner._log_name)\n\n options = self.get_json_body()\n await self.spawn_single_user(user, server_name, options=options)\n status = 202 if spawner.pending == 'spawn' else 201\n self.set_header('Content-Type', 'text/plain')\n self.set_status(status)\n\n @needs_scope('delete:servers')\n async def delete(self, user_name, server_name=''):\n user = self.find_user(user_name)\n options = self.get_json_body()\n remove = (options or {}).get('remove', False)\n\n async def _remove_spawner(f=None):\n \"\"\"Remove the spawner object\n\n only called after it stops successfully\n \"\"\"\n if f:\n # await f, stop on error,\n # leaving resources in the db in case of failure to stop\n await f\n self.log.info(\"Deleting spawner %s\", 
spawner._log_name)\n await maybe_future(user._delete_spawner(spawner))\n\n self.db.delete(spawner.orm_spawner)\n user.spawners.pop(server_name, None)\n self.db.commit()\n\n if server_name:\n if not self.allow_named_servers:\n raise web.HTTPError(400, \"Named servers are not enabled.\")\n if server_name not in user.orm_spawners:\n raise web.HTTPError(\n 404, f\"{user_name} has no server named '{server_name}'\"\n )\n elif remove:\n raise web.HTTPError(400, \"Cannot delete the default server\")\n\n spawner = user.spawners[server_name]\n if spawner.pending == 'stop':\n self.log.debug(\"%s already stopping\", spawner._log_name)\n self.set_header('Content-Type', 'text/plain')\n self.set_status(202)\n if remove:\n # schedule remove when stop completes\n asyncio.ensure_future(_remove_spawner(spawner._stop_future))\n return\n\n if spawner.pending:\n raise web.HTTPError(\n 400,\n f\"{spawner._log_name} is pending {spawner.pending}, please wait\",\n )\n\n stop_future = None\n if spawner.ready:\n # include notify, so that a server that died is noticed immediately\n status = await spawner.poll_and_notify()\n if status is None:\n stop_future = await self.stop_single_user(user, server_name)\n\n if remove:\n if stop_future:\n # schedule remove when stop completes\n asyncio.ensure_future(_remove_spawner(spawner._stop_future))\n else:\n await _remove_spawner()\n\n status = 202 if spawner._stop_pending else 204\n self.set_header('Content-Type', 'text/plain')\n self.set_status(status)\n\n\nclass UserAdminAccessAPIHandler(APIHandler):\n \"\"\"Grant admins access to single-user servers\n\n This handler sets the necessary cookie for an admin to login to a single-user server.\n \"\"\"\n\n @needs_scope('servers')\n def post(self, user_name):\n self.log.warning(\n \"Deprecated in JupyterHub 0.8.\"\n \" Admin access API is not needed now that we use OAuth.\"\n )\n current = self.current_user\n self.log.warning(\n \"Admin user %s has requested access to %s's server\", current.name, user_name\n )\n if not self.settings.get('admin_access', False):\n raise web.HTTPError(403, \"admin access to user servers disabled\")\n user = self.find_user(user_name)\n if user is None:\n raise web.HTTPError(404)\n\n\nclass SpawnProgressAPIHandler(APIHandler):\n \"\"\"EventStream handler for pending spawns\"\"\"\n\n keepalive_interval = 8\n\n def get_content_type(self):\n return 'text/event-stream'\n\n async def send_event(self, event):\n try:\n self.write(f'data: {json.dumps(event)}\\n\\n')\n await self.flush()\n except StreamClosedError:\n self.log.warning(\"Stream closed while handling %s\", self.request.uri)\n # raise Finish to halt the handler\n raise web.Finish()\n\n def initialize(self):\n super().initialize()\n self._finish_future = asyncio.Future()\n\n def on_finish(self):\n self._finish_future.set_result(None)\n\n async def keepalive(self):\n \"\"\"Write empty lines periodically\n\n to avoid being closed by intermediate proxies\n when there's a large gap between events.\n \"\"\"\n while not self._finish_future.done():\n try:\n self.write(\"\\n\\n\")\n await self.flush()\n except (StreamClosedError, RuntimeError):\n return\n\n await asyncio.wait([self._finish_future], timeout=self.keepalive_interval)\n\n @needs_scope('read:servers')\n async def get(self, user_name, server_name=''):\n self.set_header('Cache-Control', 'no-cache')\n if server_name is None:\n server_name = ''\n user = self.find_user(user_name)\n if user is None:\n # no such user\n raise web.HTTPError(404)\n if server_name not in user.spawners:\n # user has no such 
server\n raise web.HTTPError(404)\n spawner = user.spawners[server_name]\n\n # start sending keepalive to avoid proxies closing the connection\n asyncio.ensure_future(self.keepalive())\n # cases:\n # - spawner already started and ready\n # - spawner not running at all\n # - spawner failed\n # - spawner pending start (what we expect)\n url = url_path_join(user.url, server_name, '/')\n ready_event = {\n 'progress': 100,\n 'ready': True,\n 'message': f\"Server ready at {url}\",\n 'html_message': 'Server ready at <a href=\"{0}\">{0}</a>'.format(url),\n 'url': url,\n }\n failed_event = {'progress': 100, 'failed': True, 'message': \"Spawn failed\"}\n\n if spawner.ready:\n # spawner already ready. Trigger progress-completion immediately\n self.log.info(\"Server %s is already started\", spawner._log_name)\n await self.send_event(ready_event)\n return\n\n spawn_future = spawner._spawn_future\n\n if not spawner._spawn_pending:\n # not pending, no progress to fetch\n # check if spawner has just failed\n f = spawn_future\n if f and f.done() and f.exception():\n failed_event['message'] = \"Spawn failed: %s\" % f.exception()\n await self.send_event(failed_event)\n return\n else:\n raise web.HTTPError(400, \"%s is not starting...\", spawner._log_name)\n\n # retrieve progress events from the Spawner\n async with aclosing(\n iterate_until(spawn_future, spawner._generate_progress())\n ) as events:\n try:\n async for event in events:\n # don't allow events to sneakily set the 'ready' flag\n if 'ready' in event:\n event.pop('ready', None)\n await self.send_event(event)\n except asyncio.CancelledError:\n pass\n\n # progress finished, wait for spawn to actually resolve,\n # in case progress finished early\n # (ignore errors, which will be logged elsewhere)\n await asyncio.wait([spawn_future])\n\n # progress and spawn finished, check if spawn succeeded\n if spawner.ready:\n # spawner is ready, signal completion and redirect\n self.log.info(\"Server %s is ready\", spawner._log_name)\n await self.send_event(ready_event)\n else:\n # what happened? 
Maybe spawn failed?\n f = spawn_future\n if f and f.done() and f.exception():\n failed_event['message'] = \"Spawn failed: %s\" % f.exception()\n else:\n self.log.warning(\n \"Server %s didn't start for unknown reason\", spawner._log_name\n )\n await self.send_event(failed_event)\n\n\ndef _parse_timestamp(timestamp):\n \"\"\"Parse and return a utc timestamp\n\n - raise HTTPError(400) on parse error\n - handle and strip tz info for internal consistency\n (we use naive utc timestamps everywhere)\n \"\"\"\n try:\n dt = parse_date(timestamp)\n except Exception:\n raise web.HTTPError(400, \"Not a valid timestamp: %r\", timestamp)\n if dt.tzinfo:\n # strip timezone info to naive UTC datetime\n dt = dt.astimezone(timezone.utc).replace(tzinfo=None)\n\n now = datetime.utcnow()\n if (dt - now) > timedelta(minutes=59):\n raise web.HTTPError(\n 400,\n \"Rejecting activity from more than an hour in the future: {}\".format(\n isoformat(dt)\n ),\n )\n return dt\n\n\nclass ActivityAPIHandler(APIHandler):\n def _validate_servers(self, user, servers):\n \"\"\"Validate servers dict argument\n\n - types are correct\n - each server exists\n - last_activity fields are parsed into datetime objects\n \"\"\"\n msg = \"servers must be a dict of the form {server_name: {last_activity: timestamp}}\"\n if not isinstance(servers, dict):\n raise web.HTTPError(400, msg)\n\n spawners = user.orm_spawners\n for server_name, server_info in servers.items():\n if server_name not in spawners:\n raise web.HTTPError(\n 400,\n f\"No such server '{server_name}' for user {user.name}\",\n )\n # check that each per-server field is a dict\n if not isinstance(server_info, dict):\n raise web.HTTPError(400, msg)\n # check that last_activity is defined for each per-server dict\n if 'last_activity' not in server_info:\n raise web.HTTPError(400, msg)\n # parse last_activity timestamps\n # _parse_timestamp above is responsible for raising errors\n server_info['last_activity'] = _parse_timestamp(\n server_info['last_activity']\n )\n return servers\n\n @needs_scope('users:activity')\n def post(self, user_name):\n user = self.find_user(user_name)\n if user is None:\n # no such user\n raise web.HTTPError(404, \"No such user: %r\", user_name)\n\n body = self.get_json_body()\n if not isinstance(body, dict):\n raise web.HTTPError(400, \"body must be a json dict\")\n\n last_activity_timestamp = body.get('last_activity')\n servers = body.get('servers')\n if not last_activity_timestamp and not servers:\n raise web.HTTPError(\n 400, \"body must contain at least one of `last_activity` or `servers`\"\n )\n\n if servers:\n # validate server args\n servers = self._validate_servers(user, servers)\n # at this point we know that the servers dict\n # is valid and contains only servers that exist\n # and last_activity is defined and a valid datetime object\n\n # update user.last_activity if specified\n if last_activity_timestamp:\n last_activity = _parse_timestamp(last_activity_timestamp)\n if (not user.last_activity) or last_activity > user.last_activity:\n self.log.debug(\n \"Activity for user %s: %s\", user.name, isoformat(last_activity)\n )\n user.last_activity = last_activity\n else:\n self.log.debug(\n \"Not updating activity for %s: %s < %s\",\n user,\n isoformat(last_activity),\n isoformat(user.last_activity),\n )\n\n if servers:\n for server_name, server_info in servers.items():\n last_activity = server_info['last_activity']\n spawner = user.orm_spawners[server_name]\n\n if (not spawner.last_activity) or last_activity > spawner.last_activity:\n 
self.log.debug(\n \"Activity on server %s/%s: %s\",\n user.name,\n server_name,\n isoformat(last_activity),\n )\n spawner.last_activity = last_activity\n else:\n self.log.debug(\n \"Not updating server activity on %s/%s: %s < %s\",\n user.name,\n server_name,\n isoformat(last_activity),\n isoformat(user.last_activity),\n )\n\n self.db.commit()\n\n\ndefault_handlers = [\n (r\"/api/user\", SelfAPIHandler),\n (r\"/api/users\", UserListAPIHandler),\n (r\"/api/users/([^/]+)\", UserAPIHandler),\n (r\"/api/users/([^/]+)/server\", UserServerAPIHandler),\n (r\"/api/users/([^/]+)/server/progress\", SpawnProgressAPIHandler),\n (r\"/api/users/([^/]+)/tokens\", UserTokenListAPIHandler),\n (r\"/api/users/([^/]+)/tokens/([^/]*)\", UserTokenAPIHandler),\n (r\"/api/users/([^/]+)/servers/([^/]*)\", UserServerAPIHandler),\n (r\"/api/users/([^/]+)/servers/([^/]*)/progress\", SpawnProgressAPIHandler),\n (r\"/api/users/([^/]+)/activity\", ActivityAPIHandler),\n (r\"/api/users/([^/]+)/admin-access\", UserAdminAccessAPIHandler),\n]\n", "path": "jupyterhub/apihandlers/users.py" } ]
[ { "content": "\"\"\"User handlers\"\"\"\n# Copyright (c) Jupyter Development Team.\n# Distributed under the terms of the Modified BSD License.\nimport asyncio\nimport json\nfrom datetime import datetime\nfrom datetime import timedelta\nfrom datetime import timezone\n\nfrom async_generator import aclosing\nfrom dateutil.parser import parse as parse_date\nfrom sqlalchemy import func\nfrom sqlalchemy import or_\nfrom tornado import web\nfrom tornado.iostream import StreamClosedError\n\nfrom .. import orm\nfrom .. import scopes\nfrom ..roles import assign_default_roles\nfrom ..scopes import needs_scope\nfrom ..user import User\nfrom ..utils import isoformat\nfrom ..utils import iterate_until\nfrom ..utils import maybe_future\nfrom ..utils import url_path_join\nfrom .base import APIHandler\n\n\nclass SelfAPIHandler(APIHandler):\n \"\"\"Return the authenticated user's model\n\n Based on the authentication info. Acts as a 'whoami' for auth tokens.\n \"\"\"\n\n async def get(self):\n user = self.current_user\n if user is None:\n raise web.HTTPError(403)\n\n _added_scopes = set()\n if isinstance(user, orm.Service):\n # ensure we have the minimal 'identify' scopes for the token owner\n identify_scopes = scopes.identify_scopes(user)\n get_model = self.service_model\n else:\n identify_scopes = scopes.identify_scopes(user.orm_user)\n get_model = self.user_model\n\n # ensure we have permission to identify ourselves\n # all tokens can do this on this endpoint\n for scope in identify_scopes:\n if scope not in self.expanded_scopes:\n _added_scopes.add(scope)\n self.expanded_scopes.add(scope)\n if _added_scopes:\n # re-parse with new scopes\n self.parsed_scopes = scopes.parse_scopes(self.expanded_scopes)\n\n model = get_model(user)\n\n # add scopes to identify model,\n # but not the scopes we added to ensure we could read our own model\n model[\"scopes\"] = sorted(self.expanded_scopes.difference(_added_scopes))\n self.write(json.dumps(model))\n\n\nclass UserListAPIHandler(APIHandler):\n def _user_has_ready_spawner(self, orm_user):\n \"\"\"Return True if a user has *any* ready spawners\n\n Used for filtering from active -> ready\n \"\"\"\n user = self.users[orm_user]\n return any(spawner.ready for spawner in user.spawners.values())\n\n @needs_scope('list:users')\n def get(self):\n state_filter = self.get_argument(\"state\", None)\n offset, limit = self.get_api_pagination()\n\n # post_filter\n post_filter = None\n\n if state_filter in {\"active\", \"ready\"}:\n # only get users with active servers\n # an 'active' Spawner has a server record in the database\n # which means Spawner.server != None\n # it may still be in a pending start/stop state.\n # join filters out users with no Spawners\n query = (\n self.db.query(orm.User)\n # join filters out any Users with no Spawners\n .join(orm.Spawner)\n # this implicitly gets Users with *any* active server\n .filter(orm.Spawner.server != None)\n )\n if state_filter == \"ready\":\n # have to post-process query results because active vs ready\n # can only be distinguished with in-memory Spawner properties\n post_filter = self._user_has_ready_spawner\n\n elif state_filter == \"inactive\":\n # only get users with *no* active servers\n # as opposed to users with *any inactive servers*\n # this is the complement to the above query.\n # how expensive is this with lots of servers?\n query = (\n self.db.query(orm.User)\n .outerjoin(orm.Spawner)\n .outerjoin(orm.Server)\n .group_by(orm.User.id)\n .having(func.count(orm.Server.id) == 0)\n )\n elif state_filter:\n raise 
web.HTTPError(400, \"Unrecognized state filter: %r\" % state_filter)\n else:\n # no filter, return all users\n query = self.db.query(orm.User)\n\n sub_scope = self.parsed_scopes['list:users']\n if sub_scope != scopes.Scope.ALL:\n if not set(sub_scope).issubset({'group', 'user'}):\n # don't expand invalid !server=x filter to all users!\n self.log.warning(\n \"Invalid filter on list:user for {self.current_user}: {sub_scope}\"\n )\n raise web.HTTPError(403)\n filters = []\n if 'user' in sub_scope:\n filters.append(orm.User.name.in_(sub_scope['user']))\n if 'group' in sub_scope:\n filters.append(\n orm.User.groups.any(\n orm.Group.name.in_(sub_scope['group']),\n )\n )\n\n if len(filters) == 1:\n query = query.filter(filters[0])\n else:\n query = query.filter(or_(*filters))\n\n full_query = query\n query = query.order_by(orm.User.id.asc()).offset(offset).limit(limit)\n\n user_list = []\n for u in query:\n if post_filter is None or post_filter(u):\n user_model = self.user_model(u)\n if user_model:\n user_list.append(user_model)\n\n total_count = full_query.count()\n if self.accepts_pagination:\n data = self.paginated_model(user_list, offset, limit, total_count)\n else:\n query_count = query.count()\n if offset == 0 and total_count > query_count:\n self.log.warning(\n f\"Truncated user list in request that does not expect pagination. Processing {query_count} of {total_count} total users.\"\n )\n data = user_list\n\n self.write(json.dumps(data))\n\n @needs_scope('admin:users')\n async def post(self):\n data = self.get_json_body()\n if not data or not isinstance(data, dict) or not data.get('usernames'):\n raise web.HTTPError(400, \"Must specify at least one user to create\")\n\n usernames = data.pop('usernames')\n self._check_user_model(data)\n # admin is set for all users\n # to create admin and non-admin users requires at least two API requests\n admin = data.get('admin', False)\n\n to_create = []\n invalid_names = []\n for name in usernames:\n name = self.authenticator.normalize_username(name)\n if not self.authenticator.validate_username(name):\n invalid_names.append(name)\n continue\n user = self.find_user(name)\n if user is not None:\n self.log.warning(\"User %s already exists\" % name)\n else:\n to_create.append(name)\n\n if invalid_names:\n if len(invalid_names) == 1:\n msg = \"Invalid username: %s\" % invalid_names[0]\n else:\n msg = \"Invalid usernames: %s\" % ', '.join(invalid_names)\n raise web.HTTPError(400, msg)\n\n if not to_create:\n raise web.HTTPError(409, \"All %i users already exist\" % len(usernames))\n\n created = []\n for name in to_create:\n user = self.user_from_username(name)\n if admin:\n user.admin = True\n assign_default_roles(self.db, entity=user)\n self.db.commit()\n try:\n await maybe_future(self.authenticator.add_user(user))\n except Exception as e:\n self.log.error(\"Failed to create user: %s\" % name, exc_info=True)\n self.users.delete(user)\n raise web.HTTPError(400, f\"Failed to create user {name}: {e}\")\n else:\n created.append(user)\n\n self.write(json.dumps([self.user_model(u) for u in created]))\n self.set_status(201)\n\n\nclass UserAPIHandler(APIHandler):\n @needs_scope(\n 'read:users',\n 'read:users:name',\n 'read:servers',\n 'read:users:groups',\n 'read:users:activity',\n 'read:roles:users',\n )\n async def get(self, user_name):\n user = self.find_user(user_name)\n if user is None:\n raise web.HTTPError(404)\n model = self.user_model(user)\n # auth state will only be shown if the requester is an admin\n # this means users can't see their own auth state 
unless they\n # are admins, Hub admins often are also marked as admins so they\n # will see their auth state but normal users won't\n if 'auth_state' in model:\n model['auth_state'] = await user.get_auth_state()\n self.write(json.dumps(model))\n\n @needs_scope('admin:users')\n async def post(self, user_name):\n data = self.get_json_body()\n user = self.find_user(user_name)\n if user is not None:\n raise web.HTTPError(409, \"User %s already exists\" % user_name)\n\n user = self.user_from_username(user_name)\n if data:\n self._check_user_model(data)\n if 'admin' in data:\n user.admin = data['admin']\n assign_default_roles(self.db, entity=user)\n self.db.commit()\n\n try:\n await maybe_future(self.authenticator.add_user(user))\n except Exception:\n self.log.error(\"Failed to create user: %s\" % user_name, exc_info=True)\n # remove from registry\n self.users.delete(user)\n raise web.HTTPError(400, \"Failed to create user: %s\" % user_name)\n\n self.write(json.dumps(self.user_model(user)))\n self.set_status(201)\n\n @needs_scope('delete:users')\n async def delete(self, user_name):\n user = self.find_user(user_name)\n if user is None:\n raise web.HTTPError(404)\n if user.name == self.current_user.name:\n raise web.HTTPError(400, \"Cannot delete yourself!\")\n if user.spawner._stop_pending:\n raise web.HTTPError(\n 400,\n \"%s's server is in the process of stopping, please wait.\" % user_name,\n )\n if user.running:\n await self.stop_single_user(user)\n if user.spawner._stop_pending:\n raise web.HTTPError(\n 400,\n \"%s's server is in the process of stopping, please wait.\"\n % user_name,\n )\n\n await maybe_future(self.authenticator.delete_user(user))\n\n await user.delete_spawners()\n\n # remove from registry\n self.users.delete(user)\n\n self.set_status(204)\n\n @needs_scope('admin:users')\n async def patch(self, user_name):\n user = self.find_user(user_name)\n if user is None:\n raise web.HTTPError(404)\n data = self.get_json_body()\n self._check_user_model(data)\n if 'name' in data and data['name'] != user_name:\n # check if the new name is already taken inside db\n if self.find_user(data['name']):\n raise web.HTTPError(\n 400,\n \"User %s already exists, username must be unique\" % data['name'],\n )\n for key, value in data.items():\n if key == 'auth_state':\n await user.save_auth_state(value)\n else:\n setattr(user, key, value)\n if key == 'admin':\n assign_default_roles(self.db, entity=user)\n self.db.commit()\n user_ = self.user_model(user)\n user_['auth_state'] = await user.get_auth_state()\n self.write(json.dumps(user_))\n\n\nclass UserTokenListAPIHandler(APIHandler):\n \"\"\"API endpoint for listing/creating tokens\"\"\"\n\n @needs_scope('read:tokens')\n def get(self, user_name):\n \"\"\"Get tokens for a given user\"\"\"\n user = self.find_user(user_name)\n if not user:\n raise web.HTTPError(404, \"No such user: %s\" % user_name)\n\n now = datetime.utcnow()\n api_tokens = []\n\n def sort_key(token):\n return token.last_activity or token.created\n\n for token in sorted(user.api_tokens, key=sort_key):\n if token.expires_at and token.expires_at < now:\n # exclude expired tokens\n self.db.delete(token)\n self.db.commit()\n continue\n api_tokens.append(self.token_model(token))\n\n self.write(json.dumps({'api_tokens': api_tokens}))\n\n async def post(self, user_name):\n body = self.get_json_body() or {}\n if not isinstance(body, dict):\n raise web.HTTPError(400, \"Body must be a JSON dict or empty\")\n\n requester = self.current_user\n if requester is None:\n # defer to Authenticator for 
identifying the user\n # can be username+password or an upstream auth token\n try:\n name = await self.authenticate(body.get('auth'))\n if isinstance(name, dict):\n # not a simple string so it has to be a dict\n name = name.get('name')\n except web.HTTPError as e:\n # turn any authentication error into 403\n raise web.HTTPError(403)\n except Exception as e:\n # suppress and log error here in case Authenticator\n # isn't prepared to handle auth via this data\n self.log.error(\n \"Error authenticating request for %s: %s\", self.request.uri, e\n )\n raise web.HTTPError(403)\n requester = self.find_user(name)\n if requester is None:\n # couldn't identify requester\n raise web.HTTPError(403)\n self._jupyterhub_user = requester\n self._resolve_roles_and_scopes()\n user = self.find_user(user_name)\n kind = 'user' if isinstance(requester, User) else 'service'\n scope_filter = self.get_scope_filter('tokens')\n if user is None or not scope_filter(user, kind):\n raise web.HTTPError(\n 403,\n f\"{kind.title()} {user_name} not found or no permissions to generate tokens\",\n )\n\n note = body.get('note')\n if not note:\n note = \"Requested via api\"\n if requester is not user:\n note += f\" by {kind} {requester.name}\"\n\n token_roles = body.get('roles')\n try:\n api_token = user.new_api_token(\n note=note, expires_in=body.get('expires_in', None), roles=token_roles\n )\n except NameError:\n raise web.HTTPError(404, \"Requested roles %r not found\" % token_roles)\n except ValueError:\n raise web.HTTPError(\n 403,\n \"Requested roles %r cannot have higher permissions than the token owner\"\n % token_roles,\n )\n if requester is not user:\n self.log.info(\n \"%s %s requested API token for %s\",\n kind.title(),\n requester.name,\n user.name,\n )\n else:\n user_kind = 'user' if isinstance(user, User) else 'service'\n self.log.info(\"%s %s requested new API token\", user_kind.title(), user.name)\n # retrieve the model\n token_model = self.token_model(orm.APIToken.find(self.db, api_token))\n token_model['token'] = api_token\n self.write(json.dumps(token_model))\n self.set_status(201)\n\n\nclass UserTokenAPIHandler(APIHandler):\n \"\"\"API endpoint for retrieving/deleting individual tokens\"\"\"\n\n def find_token_by_id(self, user, token_id):\n \"\"\"Find a token object by token-id key\n\n Raises 404 if not found for any reason\n (e.g. 
wrong owner, invalid key format, etc.)\n \"\"\"\n not_found = f\"No such token {token_id} for user {user.name}\"\n prefix, id_ = token_id[:1], token_id[1:]\n if prefix != 'a':\n raise web.HTTPError(404, not_found)\n try:\n id_ = int(id_)\n except ValueError:\n raise web.HTTPError(404, not_found)\n\n orm_token = self.db.query(orm.APIToken).filter_by(id=id_).first()\n if orm_token is None or orm_token.user is not user.orm_user:\n raise web.HTTPError(404, \"Token not found %s\", orm_token)\n return orm_token\n\n @needs_scope('read:tokens')\n def get(self, user_name, token_id):\n \"\"\"\"\"\"\n user = self.find_user(user_name)\n if not user:\n raise web.HTTPError(404, \"No such user: %s\" % user_name)\n token = self.find_token_by_id(user, token_id)\n self.write(json.dumps(self.token_model(token)))\n\n @needs_scope('tokens')\n def delete(self, user_name, token_id):\n \"\"\"Delete a token\"\"\"\n user = self.find_user(user_name)\n if not user:\n raise web.HTTPError(404, \"No such user: %s\" % user_name)\n token = self.find_token_by_id(user, token_id)\n # deleting an oauth token deletes *all* oauth tokens for that client\n client_id = token.client_id\n if token.client_id != \"jupyterhub\":\n tokens = [\n token for token in user.api_tokens if token.client_id == client_id\n ]\n else:\n tokens = [token]\n for token in tokens:\n self.db.delete(token)\n self.db.commit()\n self.set_header('Content-Type', 'text/plain')\n self.set_status(204)\n\n\nclass UserServerAPIHandler(APIHandler):\n \"\"\"Start and stop single-user servers\"\"\"\n\n @needs_scope('servers')\n async def post(self, user_name, server_name=''):\n user = self.find_user(user_name)\n if server_name:\n if not self.allow_named_servers:\n raise web.HTTPError(400, \"Named servers are not enabled.\")\n if (\n self.named_server_limit_per_user > 0\n and server_name not in user.orm_spawners\n ):\n named_spawners = list(user.all_spawners(include_default=False))\n if self.named_server_limit_per_user <= len(named_spawners):\n raise web.HTTPError(\n 400,\n \"User {} already has the maximum of {} named servers.\"\n \" One must be deleted before a new server can be created\".format(\n user_name, self.named_server_limit_per_user\n ),\n )\n spawner = user.spawners[server_name]\n pending = spawner.pending\n if pending == 'spawn':\n self.set_header('Content-Type', 'text/plain')\n self.set_status(202)\n return\n elif pending:\n raise web.HTTPError(400, f\"{spawner._log_name} is pending {pending}\")\n\n if spawner.ready:\n # include notify, so that a server that died is noticed immediately\n # set _spawn_pending flag to prevent races while we wait\n spawner._spawn_pending = True\n try:\n state = await spawner.poll_and_notify()\n finally:\n spawner._spawn_pending = False\n if state is None:\n raise web.HTTPError(400, \"%s is already running\" % spawner._log_name)\n\n options = self.get_json_body()\n await self.spawn_single_user(user, server_name, options=options)\n status = 202 if spawner.pending == 'spawn' else 201\n self.set_header('Content-Type', 'text/plain')\n self.set_status(status)\n\n @needs_scope('delete:servers')\n async def delete(self, user_name, server_name=''):\n user = self.find_user(user_name)\n options = self.get_json_body()\n remove = (options or {}).get('remove', False)\n\n async def _remove_spawner(f=None):\n \"\"\"Remove the spawner object\n\n only called after it stops successfully\n \"\"\"\n if f:\n # await f, stop on error,\n # leaving resources in the db in case of failure to stop\n await f\n self.log.info(\"Deleting spawner %s\", 
spawner._log_name)\n await maybe_future(user._delete_spawner(spawner))\n\n self.db.delete(spawner.orm_spawner)\n user.spawners.pop(server_name, None)\n self.db.commit()\n\n if server_name:\n if not self.allow_named_servers:\n raise web.HTTPError(400, \"Named servers are not enabled.\")\n if server_name not in user.orm_spawners:\n raise web.HTTPError(\n 404, f\"{user_name} has no server named '{server_name}'\"\n )\n elif remove:\n raise web.HTTPError(400, \"Cannot delete the default server\")\n\n spawner = user.spawners[server_name]\n if spawner.pending == 'stop':\n self.log.debug(\"%s already stopping\", spawner._log_name)\n self.set_header('Content-Type', 'text/plain')\n self.set_status(202)\n if remove:\n # schedule remove when stop completes\n asyncio.ensure_future(_remove_spawner(spawner._stop_future))\n return\n\n if spawner.pending:\n raise web.HTTPError(\n 400,\n f\"{spawner._log_name} is pending {spawner.pending}, please wait\",\n )\n\n stop_future = None\n if spawner.ready:\n # include notify, so that a server that died is noticed immediately\n status = await spawner.poll_and_notify()\n if status is None:\n stop_future = await self.stop_single_user(user, server_name)\n\n if remove:\n if stop_future:\n # schedule remove when stop completes\n asyncio.ensure_future(_remove_spawner(spawner._stop_future))\n else:\n await _remove_spawner()\n\n status = 202 if spawner._stop_pending else 204\n self.set_header('Content-Type', 'text/plain')\n self.set_status(status)\n\n\nclass UserAdminAccessAPIHandler(APIHandler):\n \"\"\"Grant admins access to single-user servers\n\n This handler sets the necessary cookie for an admin to login to a single-user server.\n \"\"\"\n\n @needs_scope('servers')\n def post(self, user_name):\n self.log.warning(\n \"Deprecated in JupyterHub 0.8.\"\n \" Admin access API is not needed now that we use OAuth.\"\n )\n current = self.current_user\n self.log.warning(\n \"Admin user %s has requested access to %s's server\", current.name, user_name\n )\n if not self.settings.get('admin_access', False):\n raise web.HTTPError(403, \"admin access to user servers disabled\")\n user = self.find_user(user_name)\n if user is None:\n raise web.HTTPError(404)\n\n\nclass SpawnProgressAPIHandler(APIHandler):\n \"\"\"EventStream handler for pending spawns\"\"\"\n\n keepalive_interval = 8\n\n def get_content_type(self):\n return 'text/event-stream'\n\n async def send_event(self, event):\n try:\n self.write(f'data: {json.dumps(event)}\\n\\n')\n await self.flush()\n except StreamClosedError:\n self.log.warning(\"Stream closed while handling %s\", self.request.uri)\n # raise Finish to halt the handler\n raise web.Finish()\n\n def initialize(self):\n super().initialize()\n self._finish_future = asyncio.Future()\n\n def on_finish(self):\n self._finish_future.set_result(None)\n\n async def keepalive(self):\n \"\"\"Write empty lines periodically\n\n to avoid being closed by intermediate proxies\n when there's a large gap between events.\n \"\"\"\n while not self._finish_future.done():\n try:\n self.write(\"\\n\\n\")\n await self.flush()\n except (StreamClosedError, RuntimeError):\n return\n\n await asyncio.wait([self._finish_future], timeout=self.keepalive_interval)\n\n @needs_scope('read:servers')\n async def get(self, user_name, server_name=''):\n self.set_header('Cache-Control', 'no-cache')\n if server_name is None:\n server_name = ''\n user = self.find_user(user_name)\n if user is None:\n # no such user\n raise web.HTTPError(404)\n if server_name not in user.spawners:\n # user has no such 
server\n raise web.HTTPError(404)\n spawner = user.spawners[server_name]\n\n # start sending keepalive to avoid proxies closing the connection\n asyncio.ensure_future(self.keepalive())\n # cases:\n # - spawner already started and ready\n # - spawner not running at all\n # - spawner failed\n # - spawner pending start (what we expect)\n url = url_path_join(user.url, server_name, '/')\n ready_event = {\n 'progress': 100,\n 'ready': True,\n 'message': f\"Server ready at {url}\",\n 'html_message': 'Server ready at <a href=\"{0}\">{0}</a>'.format(url),\n 'url': url,\n }\n failed_event = {'progress': 100, 'failed': True, 'message': \"Spawn failed\"}\n\n if spawner.ready:\n # spawner already ready. Trigger progress-completion immediately\n self.log.info(\"Server %s is already started\", spawner._log_name)\n await self.send_event(ready_event)\n return\n\n spawn_future = spawner._spawn_future\n\n if not spawner._spawn_pending:\n # not pending, no progress to fetch\n # check if spawner has just failed\n f = spawn_future\n if f and f.done() and f.exception():\n failed_event['message'] = \"Spawn failed: %s\" % f.exception()\n await self.send_event(failed_event)\n return\n else:\n raise web.HTTPError(400, \"%s is not starting...\", spawner._log_name)\n\n # retrieve progress events from the Spawner\n async with aclosing(\n iterate_until(spawn_future, spawner._generate_progress())\n ) as events:\n try:\n async for event in events:\n # don't allow events to sneakily set the 'ready' flag\n if 'ready' in event:\n event.pop('ready', None)\n await self.send_event(event)\n except asyncio.CancelledError:\n pass\n\n # progress finished, wait for spawn to actually resolve,\n # in case progress finished early\n # (ignore errors, which will be logged elsewhere)\n await asyncio.wait([spawn_future])\n\n # progress and spawn finished, check if spawn succeeded\n if spawner.ready:\n # spawner is ready, signal completion and redirect\n self.log.info(\"Server %s is ready\", spawner._log_name)\n await self.send_event(ready_event)\n else:\n # what happened? 
Maybe spawn failed?\n f = spawn_future\n if f and f.done() and f.exception():\n failed_event['message'] = \"Spawn failed: %s\" % f.exception()\n else:\n self.log.warning(\n \"Server %s didn't start for unknown reason\", spawner._log_name\n )\n await self.send_event(failed_event)\n\n\ndef _parse_timestamp(timestamp):\n \"\"\"Parse and return a utc timestamp\n\n - raise HTTPError(400) on parse error\n - handle and strip tz info for internal consistency\n (we use naive utc timestamps everywhere)\n \"\"\"\n try:\n dt = parse_date(timestamp)\n except Exception:\n raise web.HTTPError(400, \"Not a valid timestamp: %r\", timestamp)\n if dt.tzinfo:\n # strip timezone info to naive UTC datetime\n dt = dt.astimezone(timezone.utc).replace(tzinfo=None)\n\n now = datetime.utcnow()\n if (dt - now) > timedelta(minutes=59):\n raise web.HTTPError(\n 400,\n \"Rejecting activity from more than an hour in the future: {}\".format(\n isoformat(dt)\n ),\n )\n return dt\n\n\nclass ActivityAPIHandler(APIHandler):\n def _validate_servers(self, user, servers):\n \"\"\"Validate servers dict argument\n\n - types are correct\n - each server exists\n - last_activity fields are parsed into datetime objects\n \"\"\"\n msg = \"servers must be a dict of the form {server_name: {last_activity: timestamp}}\"\n if not isinstance(servers, dict):\n raise web.HTTPError(400, msg)\n\n spawners = user.orm_spawners\n for server_name, server_info in servers.items():\n if server_name not in spawners:\n raise web.HTTPError(\n 400,\n f\"No such server '{server_name}' for user {user.name}\",\n )\n # check that each per-server field is a dict\n if not isinstance(server_info, dict):\n raise web.HTTPError(400, msg)\n # check that last_activity is defined for each per-server dict\n if 'last_activity' not in server_info:\n raise web.HTTPError(400, msg)\n # parse last_activity timestamps\n # _parse_timestamp above is responsible for raising errors\n server_info['last_activity'] = _parse_timestamp(\n server_info['last_activity']\n )\n return servers\n\n @needs_scope('users:activity')\n def post(self, user_name):\n user = self.find_user(user_name)\n if user is None:\n # no such user\n raise web.HTTPError(404, \"No such user: %r\", user_name)\n\n body = self.get_json_body()\n if not isinstance(body, dict):\n raise web.HTTPError(400, \"body must be a json dict\")\n\n last_activity_timestamp = body.get('last_activity')\n servers = body.get('servers')\n if not last_activity_timestamp and not servers:\n raise web.HTTPError(\n 400, \"body must contain at least one of `last_activity` or `servers`\"\n )\n\n if servers:\n # validate server args\n servers = self._validate_servers(user, servers)\n # at this point we know that the servers dict\n # is valid and contains only servers that exist\n # and last_activity is defined and a valid datetime object\n\n # update user.last_activity if specified\n if last_activity_timestamp:\n last_activity = _parse_timestamp(last_activity_timestamp)\n if (not user.last_activity) or last_activity > user.last_activity:\n self.log.debug(\n \"Activity for user %s: %s\", user.name, isoformat(last_activity)\n )\n user.last_activity = last_activity\n else:\n self.log.debug(\n \"Not updating activity for %s: %s < %s\",\n user,\n isoformat(last_activity),\n isoformat(user.last_activity),\n )\n\n if servers:\n for server_name, server_info in servers.items():\n last_activity = server_info['last_activity']\n spawner = user.orm_spawners[server_name]\n\n if (not spawner.last_activity) or last_activity > spawner.last_activity:\n 
self.log.debug(\n \"Activity on server %s/%s: %s\",\n user.name,\n server_name,\n isoformat(last_activity),\n )\n spawner.last_activity = last_activity\n else:\n self.log.debug(\n \"Not updating server activity on %s/%s: %s < %s\",\n user.name,\n server_name,\n isoformat(last_activity),\n isoformat(user.last_activity),\n )\n\n self.db.commit()\n\n\ndefault_handlers = [\n (r\"/api/user\", SelfAPIHandler),\n (r\"/api/users\", UserListAPIHandler),\n (r\"/api/users/([^/]+)\", UserAPIHandler),\n (r\"/api/users/([^/]+)/server\", UserServerAPIHandler),\n (r\"/api/users/([^/]+)/server/progress\", SpawnProgressAPIHandler),\n (r\"/api/users/([^/]+)/tokens\", UserTokenListAPIHandler),\n (r\"/api/users/([^/]+)/tokens/([^/]*)\", UserTokenAPIHandler),\n (r\"/api/users/([^/]+)/servers/([^/]*)\", UserServerAPIHandler),\n (r\"/api/users/([^/]+)/servers/([^/]*)/progress\", SpawnProgressAPIHandler),\n (r\"/api/users/([^/]+)/activity\", ActivityAPIHandler),\n (r\"/api/users/([^/]+)/admin-access\", UserAdminAccessAPIHandler),\n]\n", "path": "jupyterhub/apihandlers/users.py" } ]
diff --git a/jupyterhub/apihandlers/users.py b/jupyterhub/apihandlers/users.py index e36faf9eba..97aa3a87f8 100644 --- a/jupyterhub/apihandlers/users.py +++ b/jupyterhub/apihandlers/users.py @@ -421,6 +421,7 @@ async def post(self, user_name): token_model = self.token_model(orm.APIToken.find(self.db, api_token)) token_model['token'] = api_token self.write(json.dumps(token_model)) + self.set_status(201) class UserTokenAPIHandler(APIHandler): diff --git a/jupyterhub/tests/test_api.py b/jupyterhub/tests/test_api.py index bbbb7f3f00..b20566259a 100644 --- a/jupyterhub/tests/test_api.py +++ b/jupyterhub/tests/test_api.py @@ -1366,8 +1366,8 @@ async def test_get_new_token_deprecated(app, headers, status): @mark.parametrize( "headers, status, note, expires_in", [ - ({}, 200, 'test note', None), - ({}, 200, '', 100), + ({}, 201, 'test note', None), + ({}, 201, '', 100), ({'Authorization': 'token bad'}, 403, '', None), ], ) @@ -1386,7 +1386,7 @@ async def test_get_new_token(app, headers, status, note, expires_in): app, 'users/admin/tokens', method='post', headers=headers, data=body ) assert r.status_code == status - if status != 200: + if status != 201: return # check the new-token reply reply = r.json() @@ -1424,10 +1424,10 @@ async def test_get_new_token(app, headers, status, note, expires_in): @mark.parametrize( "as_user, for_user, status", [ - ('admin', 'other', 200), + ('admin', 'other', 201), ('admin', 'missing', 403), ('user', 'other', 403), - ('user', 'user', 200), + ('user', 'user', 201), ], ) async def test_token_for_user(app, as_user, for_user, status): @@ -1448,7 +1448,7 @@ async def test_token_for_user(app, as_user, for_user, status): ) assert r.status_code == status reply = r.json() - if status != 200: + if status != 201: return assert 'token' in reply @@ -1486,7 +1486,7 @@ async def test_token_authenticator_noauth(app): data=json.dumps(data) if data else None, noauth=True, ) - assert r.status_code == 200 + assert r.status_code == 201 reply = r.json() assert 'token' in reply r = await api_request(app, 'authorizations', 'token', reply['token']) @@ -1509,7 +1509,7 @@ async def test_token_authenticator_dict_noauth(app): data=json.dumps(data) if data else None, noauth=True, ) - assert r.status_code == 200 + assert r.status_code == 201 reply = r.json() assert 'token' in reply r = await api_request(app, 'authorizations', 'token', reply['token']) diff --git a/jupyterhub/tests/test_roles.py b/jupyterhub/tests/test_roles.py index 9c6e66fb65..ed9ede2e48 100644 --- a/jupyterhub/tests/test_roles.py +++ b/jupyterhub/tests/test_roles.py @@ -661,11 +661,11 @@ async def test_load_roles_user_tokens(tmpdir, request): "headers, rolename, scopes, status", [ # no role requested - gets default 'token' role - ({}, None, None, 200), + ({}, None, None, 201), # role scopes within the user's default 'user' role - ({}, 'self-reader', ['read:users'], 200), + ({}, 'self-reader', ['read:users'], 201), # role scopes outside of the user's role but within the group's role scopes of which the user is a member - ({}, 'groups-reader', ['read:groups'], 200), + ({}, 'groups-reader', ['read:groups'], 201), # non-existing role request ({}, 'non-existing', [], 404), # role scopes outside of both user's role and group's role scopes
facebookresearch__hydra-2729
CI failing: `./tools/configen/configen/utils.py:4:1: F401 'typing.Tuple' imported but unused`

```
./tools/configen/configen/utils.py:4:1: F401 'typing.Tuple' imported but unused
nox > [2023-07-24 22:16:52,631] Command flake8 --config .flake8 failed with exit code 1
nox > [2023-07-24 22:16:52,632] Session lint-3.10 failed.
```
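The fix implied by the lint output is simply dropping the unused name from the `typing` import at the top of `tools/configen/configen/utils.py`; `Tuple` is no longer referenced anywhere in the module, since tuple annotations are detected via `getattr(type_, "__origin__", None) is tuple`. A minimal sketch of the corrected import line (the rest of the module stays unchanged):

```python
# tools/configen/configen/utils.py, top of file.
# 'Tuple' was imported but never used, which trips flake8's F401 check;
# removing it from the import list is enough to make the lint session pass.
from typing import Any, Dict, Iterable, List, Optional, Set
```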
[ { "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\nimport sys\nfrom enum import Enum\nfrom typing import Any, Dict, Iterable, List, Optional, Set, Tuple\n\nfrom omegaconf._utils import (\n _resolve_optional,\n get_dict_key_value_types,\n get_list_element_type,\n is_dict_annotation,\n is_list_annotation,\n is_primitive_type_annotation,\n)\n\n\n# borrowed from OmegaConf\ndef type_str(t: Any) -> str:\n is_optional, t = _resolve_optional(t)\n if t is None:\n return type(t).__name__\n if t is Any:\n return \"Any\"\n if t is ...:\n return \"...\"\n\n if sys.version_info < (3, 7, 0): # pragma: no cover\n # Python 3.6\n if hasattr(t, \"__name__\"):\n name = str(t.__name__)\n else:\n if t.__origin__ is not None:\n name = type_str(t.__origin__)\n else:\n name = str(t)\n if name.startswith(\"typing.\"):\n name = name[len(\"typing.\") :]\n else: # pragma: no cover\n # Python >= 3.7\n if hasattr(t, \"__name__\"):\n name = str(t.__name__)\n else:\n if t._name is None:\n if t.__origin__ is not None:\n name = type_str(t.__origin__)\n else:\n name = str(t._name)\n\n args = getattr(t, \"__args__\", None)\n if args is not None:\n args = \", \".join(type_str(t) for t in t.__args__)\n ret = f\"{name}[{args}]\"\n else:\n ret = name\n if is_optional:\n return f\"Optional[{ret}]\"\n else:\n return ret\n\n\ndef is_tuple_annotation(type_: Any) -> bool:\n origin = getattr(type_, \"__origin__\", None)\n return origin is tuple\n\n\ndef convert_imports(imports: Set[Any], string_imports: Iterable[str]) -> List[str]:\n tmp = set()\n for imp in string_imports:\n tmp.add(imp)\n for t in imports:\n s = None\n origin = getattr(t, \"__origin__\", None)\n if t is Any:\n classname = \"Any\"\n elif t is Optional:\n classname = \"Optional\"\n else:\n if origin is list:\n classname = \"List\"\n elif origin is tuple:\n classname = \"Tuple\"\n elif origin is dict:\n classname = \"Dict\"\n else:\n classname = t.__name__\n\n if not is_primitive_type_annotation(t) or issubclass(t, Enum):\n s = f\"from {t.__module__} import {classname}\"\n\n if s is not None:\n tmp.add(s)\n return sorted(list(tmp))\n\n\ndef collect_imports(imports: Set[Any], type_: Any) -> None:\n if is_list_annotation(type_):\n collect_imports(imports, get_list_element_type(type_))\n type_ = List\n elif is_dict_annotation(type_):\n kvt = get_dict_key_value_types(type_)\n collect_imports(imports, kvt[0])\n collect_imports(imports, kvt[1])\n type_ = Dict\n else:\n is_optional = _resolve_optional(type_)[0]\n if is_optional and type_ is not Any:\n type_ = Optional\n imports.add(type_)\n", "path": "tools/configen/configen/utils.py" } ]
[ { "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\nimport sys\nfrom enum import Enum\nfrom typing import Any, Dict, Iterable, List, Optional, Set\n\nfrom omegaconf._utils import (\n _resolve_optional,\n get_dict_key_value_types,\n get_list_element_type,\n is_dict_annotation,\n is_list_annotation,\n is_primitive_type_annotation,\n)\n\n\n# borrowed from OmegaConf\ndef type_str(t: Any) -> str:\n is_optional, t = _resolve_optional(t)\n if t is None:\n return type(t).__name__\n if t is Any:\n return \"Any\"\n if t is ...:\n return \"...\"\n\n if sys.version_info < (3, 7, 0): # pragma: no cover\n # Python 3.6\n if hasattr(t, \"__name__\"):\n name = str(t.__name__)\n else:\n if t.__origin__ is not None:\n name = type_str(t.__origin__)\n else:\n name = str(t)\n if name.startswith(\"typing.\"):\n name = name[len(\"typing.\") :]\n else: # pragma: no cover\n # Python >= 3.7\n if hasattr(t, \"__name__\"):\n name = str(t.__name__)\n else:\n if t._name is None:\n if t.__origin__ is not None:\n name = type_str(t.__origin__)\n else:\n name = str(t._name)\n\n args = getattr(t, \"__args__\", None)\n if args is not None:\n args = \", \".join(type_str(t) for t in t.__args__)\n ret = f\"{name}[{args}]\"\n else:\n ret = name\n if is_optional:\n return f\"Optional[{ret}]\"\n else:\n return ret\n\n\ndef is_tuple_annotation(type_: Any) -> bool:\n origin = getattr(type_, \"__origin__\", None)\n return origin is tuple\n\n\ndef convert_imports(imports: Set[Any], string_imports: Iterable[str]) -> List[str]:\n tmp = set()\n for imp in string_imports:\n tmp.add(imp)\n for t in imports:\n s = None\n origin = getattr(t, \"__origin__\", None)\n if t is Any:\n classname = \"Any\"\n elif t is Optional:\n classname = \"Optional\"\n else:\n if origin is list:\n classname = \"List\"\n elif origin is tuple:\n classname = \"Tuple\"\n elif origin is dict:\n classname = \"Dict\"\n else:\n classname = t.__name__\n\n if not is_primitive_type_annotation(t) or issubclass(t, Enum):\n s = f\"from {t.__module__} import {classname}\"\n\n if s is not None:\n tmp.add(s)\n return sorted(list(tmp))\n\n\ndef collect_imports(imports: Set[Any], type_: Any) -> None:\n if is_list_annotation(type_):\n collect_imports(imports, get_list_element_type(type_))\n type_ = List\n elif is_dict_annotation(type_):\n kvt = get_dict_key_value_types(type_)\n collect_imports(imports, kvt[0])\n collect_imports(imports, kvt[1])\n type_ = Dict\n else:\n is_optional = _resolve_optional(type_)[0]\n if is_optional and type_ is not Any:\n type_ = Optional\n imports.add(type_)\n", "path": "tools/configen/configen/utils.py" } ]
diff --git a/tools/configen/configen/utils.py b/tools/configen/configen/utils.py
index 4faf42fb26..546ebec797 100644
--- a/tools/configen/configen/utils.py
+++ b/tools/configen/configen/utils.py
@@ -1,7 +1,7 @@
 # Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
 import sys
 from enum import Enum
-from typing import Any, Dict, Iterable, List, Optional, Set, Tuple
+from typing import Any, Dict, Iterable, List, Optional, Set
 
 from omegaconf._utils import (
     _resolve_optional,
docker__docker-py-683
client.py - exec_create - broken API

`exec_create` is missing the 'Id' extraction from a container's dictionary representation. Line #296:

`url = self._url('/containers/{0}/exec'.format(container))`

The following should be added above it:

```
if isinstance(container, dict):
    container = container.get('Id')
```
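For context, the other exec helpers in the same file (`exec_inspect`, `exec_resize`, `exec_start`) already unwrap a dict argument before building their URLs, while `exec_create` interpolates `container` directly. A minimal sketch of the guard the report asks for, placed just before the URL is built inside `Client.exec_create` (the merged fix could equally reuse the `@check_resource` decorator that other container methods in this file apply for the same purpose):

```python
# Inside Client.exec_create, immediately before the request URL is built:
# accept either a container id string or the dict returned by
# create_container(), and reduce the dict to its 'Id' value.
if isinstance(container, dict):
    container = container.get('Id')

url = self._url('/containers/{0}/exec'.format(container))
res = self._post_json(url, data=data)
return self._result(res, True)
```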
[ { "content": "# Copyright 2013 dotCloud inc.\n\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n\n# http://www.apache.org/licenses/LICENSE-2.0\n\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport os\nimport re\nimport shlex\nimport warnings\nfrom datetime import datetime\n\nimport six\n\nfrom . import clientbase\nfrom . import constants\nfrom . import errors\nfrom .auth import auth\nfrom .utils import utils, check_resource\nfrom .constants import INSECURE_REGISTRY_DEPRECATION_WARNING\n\n\nclass Client(clientbase.ClientBase):\n @check_resource\n def attach(self, container, stdout=True, stderr=True,\n stream=False, logs=False):\n params = {\n 'logs': logs and 1 or 0,\n 'stdout': stdout and 1 or 0,\n 'stderr': stderr and 1 or 0,\n 'stream': stream and 1 or 0,\n }\n u = self._url(\"/containers/{0}/attach\".format(container))\n response = self._post(u, params=params, stream=stream)\n\n return self._get_result(container, stream, response)\n\n @check_resource\n def attach_socket(self, container, params=None, ws=False):\n if params is None:\n params = {\n 'stdout': 1,\n 'stderr': 1,\n 'stream': 1\n }\n\n if ws:\n return self._attach_websocket(container, params)\n\n u = self._url(\"/containers/{0}/attach\".format(container))\n return self._get_raw_response_socket(self.post(\n u, None, params=self._attach_params(params), stream=True))\n\n def build(self, path=None, tag=None, quiet=False, fileobj=None,\n nocache=False, rm=False, stream=False, timeout=None,\n custom_context=False, encoding=None, pull=False,\n forcerm=False, dockerfile=None, container_limits=None,\n decode=False):\n remote = context = headers = None\n container_limits = container_limits or {}\n if path is None and fileobj is None:\n raise TypeError(\"Either path or fileobj needs to be provided.\")\n\n for key in container_limits.keys():\n if key not in constants.CONTAINER_LIMITS_KEYS:\n raise errors.DockerException(\n 'Invalid container_limits key {0}'.format(key)\n )\n\n if custom_context:\n if not fileobj:\n raise TypeError(\"You must specify fileobj with custom_context\")\n context = fileobj\n elif fileobj is not None:\n context = utils.mkbuildcontext(fileobj)\n elif path.startswith(('http://', 'https://',\n 'git://', 'github.com/', 'git@')):\n remote = path\n elif not os.path.isdir(path):\n raise TypeError(\"You must specify a directory to build in path\")\n else:\n dockerignore = os.path.join(path, '.dockerignore')\n exclude = None\n if os.path.exists(dockerignore):\n with open(dockerignore, 'r') as f:\n exclude = list(filter(bool, f.read().splitlines()))\n # These are handled by the docker daemon and should not be\n # excluded on the client\n if 'Dockerfile' in exclude:\n exclude.remove('Dockerfile')\n if '.dockerignore' in exclude:\n exclude.remove(\".dockerignore\")\n context = utils.tar(path, exclude=exclude)\n\n if utils.compare_version('1.8', self._version) >= 0:\n stream = True\n\n if dockerfile and utils.compare_version('1.17', self._version) < 0:\n raise errors.InvalidVersion(\n 'dockerfile was only introduced in API version 1.17'\n )\n\n if utils.compare_version('1.19', self._version) < 0:\n pull = 1 if pull else 
0\n\n u = self._url('/build')\n params = {\n 't': tag,\n 'remote': remote,\n 'q': quiet,\n 'nocache': nocache,\n 'rm': rm,\n 'forcerm': forcerm,\n 'pull': pull,\n 'dockerfile': dockerfile,\n }\n params.update(container_limits)\n\n if context is not None:\n headers = {'Content-Type': 'application/tar'}\n if encoding:\n headers['Content-Encoding'] = encoding\n\n if utils.compare_version('1.9', self._version) >= 0:\n # If we don't have any auth data so far, try reloading the config\n # file one more time in case anything showed up in there.\n if not self._auth_configs:\n self._auth_configs = auth.load_config()\n\n # Send the full auth configuration (if any exists), since the build\n # could use any (or all) of the registries.\n if self._auth_configs:\n if headers is None:\n headers = {}\n headers['X-Registry-Config'] = auth.encode_full_header(\n self._auth_configs\n )\n\n response = self._post(\n u,\n data=context,\n params=params,\n headers=headers,\n stream=stream,\n timeout=timeout,\n )\n\n if context is not None and not custom_context:\n context.close()\n\n if stream:\n return self._stream_helper(response, decode=decode)\n else:\n output = self._result(response)\n srch = r'Successfully built ([0-9a-f]+)'\n match = re.search(srch, output)\n if not match:\n return None, output\n return match.group(1), output\n\n @check_resource\n def commit(self, container, repository=None, tag=None, message=None,\n author=None, conf=None):\n params = {\n 'container': container,\n 'repo': repository,\n 'tag': tag,\n 'comment': message,\n 'author': author\n }\n u = self._url(\"/commit\")\n return self._result(self._post_json(u, data=conf, params=params),\n json=True)\n\n def containers(self, quiet=False, all=False, trunc=False, latest=False,\n since=None, before=None, limit=-1, size=False,\n filters=None):\n params = {\n 'limit': 1 if latest else limit,\n 'all': 1 if all else 0,\n 'size': 1 if size else 0,\n 'trunc_cmd': 1 if trunc else 0,\n 'since': since,\n 'before': before\n }\n if filters:\n params['filters'] = utils.convert_filters(filters)\n u = self._url(\"/containers/json\")\n res = self._result(self._get(u, params=params), True)\n\n if quiet:\n return [{'Id': x['Id']} for x in res]\n if trunc:\n for x in res:\n x['Id'] = x['Id'][:12]\n return res\n\n @check_resource\n def copy(self, container, resource):\n res = self._post_json(\n self._url(\"/containers/{0}/copy\".format(container)),\n data={\"Resource\": resource},\n stream=True\n )\n self._raise_for_status(res)\n return res.raw\n\n def create_container(self, image, command=None, hostname=None, user=None,\n detach=False, stdin_open=False, tty=False,\n mem_limit=None, ports=None, environment=None,\n dns=None, volumes=None, volumes_from=None,\n network_disabled=False, name=None, entrypoint=None,\n cpu_shares=None, working_dir=None, domainname=None,\n memswap_limit=None, cpuset=None, host_config=None,\n mac_address=None, labels=None, volume_driver=None):\n\n if isinstance(volumes, six.string_types):\n volumes = [volumes, ]\n\n if host_config and utils.compare_version('1.15', self._version) < 0:\n raise errors.InvalidVersion(\n 'host_config is not supported in API < 1.15'\n )\n\n config = utils.create_container_config(\n self._version, image, command, hostname, user, detach, stdin_open,\n tty, mem_limit, ports, environment, dns, volumes, volumes_from,\n network_disabled, entrypoint, cpu_shares, working_dir, domainname,\n memswap_limit, cpuset, host_config, mac_address, labels,\n volume_driver\n )\n return self.create_container_from_config(config, 
name)\n\n def create_container_from_config(self, config, name=None):\n u = self._url(\"/containers/create\")\n params = {\n 'name': name\n }\n res = self._post_json(u, data=config, params=params)\n return self._result(res, True)\n\n @check_resource\n def diff(self, container):\n return self._result(self._get(self._url(\"/containers/{0}/changes\".\n format(container))), True)\n\n def events(self, since=None, until=None, filters=None, decode=None):\n if isinstance(since, datetime):\n since = utils.datetime_to_timestamp(since)\n\n if isinstance(until, datetime):\n until = utils.datetime_to_timestamp(until)\n\n if filters:\n filters = utils.convert_filters(filters)\n\n params = {\n 'since': since,\n 'until': until,\n 'filters': filters\n }\n\n return self._stream_helper(\n self.get(self._url('/events'), params=params, stream=True),\n decode=decode\n )\n\n def exec_create(self, container, cmd, stdout=True, stderr=True, tty=False,\n privileged=False):\n if utils.compare_version('1.15', self._version) < 0:\n raise errors.InvalidVersion('Exec is not supported in API < 1.15')\n if privileged and utils.compare_version('1.19', self._version) < 0:\n raise errors.InvalidVersion(\n 'Privileged exec is not supported in API < 1.19'\n )\n if isinstance(cmd, six.string_types):\n cmd = shlex.split(str(cmd))\n\n data = {\n 'Container': container,\n 'User': '',\n 'Privileged': privileged,\n 'Tty': tty,\n 'AttachStdin': False,\n 'AttachStdout': stdout,\n 'AttachStderr': stderr,\n 'Cmd': cmd\n }\n\n url = self._url('/containers/{0}/exec'.format(container))\n res = self._post_json(url, data=data)\n return self._result(res, True)\n\n def exec_inspect(self, exec_id):\n if utils.compare_version('1.15', self._version) < 0:\n raise errors.InvalidVersion('Exec is not supported in API < 1.15')\n if isinstance(exec_id, dict):\n exec_id = exec_id.get('Id')\n res = self._get(self._url(\"/exec/{0}/json\".format(exec_id)))\n return self._result(res, True)\n\n def exec_resize(self, exec_id, height=None, width=None):\n if utils.compare_version('1.15', self._version) < 0:\n raise errors.InvalidVersion('Exec is not supported in API < 1.15')\n if isinstance(exec_id, dict):\n exec_id = exec_id.get('Id')\n\n params = {'h': height, 'w': width}\n url = self._url(\"/exec/{0}/resize\".format(exec_id))\n res = self._post(url, params=params)\n self._raise_for_status(res)\n\n def exec_start(self, exec_id, detach=False, tty=False, stream=False):\n if utils.compare_version('1.15', self._version) < 0:\n raise errors.InvalidVersion('Exec is not supported in API < 1.15')\n if isinstance(exec_id, dict):\n exec_id = exec_id.get('Id')\n\n data = {\n 'Tty': tty,\n 'Detach': detach\n }\n\n res = self._post_json(self._url('/exec/{0}/start'.format(exec_id)),\n data=data, stream=stream)\n return self._get_result_tty(stream, res, tty)\n\n @check_resource\n def export(self, container):\n res = self._get(self._url(\"/containers/{0}/export\".format(container)),\n stream=True)\n self._raise_for_status(res)\n return res.raw\n\n @check_resource\n def get_image(self, image):\n res = self._get(self._url(\"/images/{0}/get\".format(image)),\n stream=True)\n self._raise_for_status(res)\n return res.raw\n\n @check_resource\n def history(self, image):\n res = self._get(self._url(\"/images/{0}/history\".format(image)))\n return self._result(res, True)\n\n def images(self, name=None, quiet=False, all=False, viz=False,\n filters=None):\n if viz:\n if utils.compare_version('1.7', self._version) >= 0:\n raise Exception('Viz output is not supported in API >= 1.7!')\n 
return self._result(self._get(self._url(\"images/viz\")))\n params = {\n 'filter': name,\n 'only_ids': 1 if quiet else 0,\n 'all': 1 if all else 0,\n }\n if filters:\n params['filters'] = utils.convert_filters(filters)\n res = self._result(self._get(self._url(\"/images/json\"), params=params),\n True)\n if quiet:\n return [x['Id'] for x in res]\n return res\n\n def import_image(self, src=None, repository=None, tag=None, image=None):\n if src:\n if isinstance(src, six.string_types):\n try:\n result = self.import_image_from_file(\n src, repository=repository, tag=tag)\n except IOError:\n result = self.import_image_from_url(\n src, repository=repository, tag=tag)\n else:\n result = self.import_image_from_data(\n src, repository=repository, tag=tag)\n elif image:\n result = self.import_image_from_image(\n image, repository=repository, tag=tag)\n else:\n raise Exception(\"Must specify a src or image\")\n\n return result\n\n def import_image_from_data(self, data, repository=None, tag=None):\n u = self._url(\"/images/create\")\n params = {\n 'fromSrc': '-',\n 'repo': repository,\n 'tag': tag\n }\n headers = {\n 'Content-Type': 'application/tar',\n }\n return self._result(\n self._post(u, data=data, params=params, headers=headers))\n\n def import_image_from_file(self, filename, repository=None, tag=None):\n u = self._url(\"/images/create\")\n params = {\n 'fromSrc': '-',\n 'repo': repository,\n 'tag': tag\n }\n headers = {\n 'Content-Type': 'application/tar',\n }\n with open(filename, 'rb') as f:\n return self._result(\n self._post(u, data=f, params=params, headers=headers,\n timeout=None))\n\n def import_image_from_stream(self, stream, repository=None, tag=None):\n u = self._url(\"/images/create\")\n params = {\n 'fromSrc': '-',\n 'repo': repository,\n 'tag': tag\n }\n headers = {\n 'Content-Type': 'application/tar',\n 'Transfer-Encoding': 'chunked',\n }\n return self._result(\n self._post(u, data=stream, params=params, headers=headers))\n\n def import_image_from_url(self, url, repository=None, tag=None):\n u = self._url(\"/images/create\")\n params = {\n 'fromSrc': url,\n 'repo': repository,\n 'tag': tag\n }\n return self._result(\n self._post(u, data=None, params=params))\n\n def import_image_from_image(self, image, repository=None, tag=None):\n u = self._url(\"/images/create\")\n params = {\n 'fromImage': image,\n 'repo': repository,\n 'tag': tag\n }\n return self._result(\n self._post(u, data=None, params=params))\n\n def info(self):\n return self._result(self._get(self._url(\"/info\")),\n True)\n\n @check_resource\n def insert(self, image, url, path):\n if utils.compare_version('1.12', self._version) >= 0:\n raise errors.DeprecatedMethod(\n 'insert is not available for API version >=1.12'\n )\n api_url = self._url(\"/images/{0}/insert\".format(image))\n params = {\n 'url': url,\n 'path': path\n }\n return self._result(self._post(api_url, params=params))\n\n @check_resource\n def inspect_container(self, container):\n return self._result(\n self._get(self._url(\"/containers/{0}/json\".format(container))),\n True)\n\n @check_resource\n def inspect_image(self, image):\n return self._result(\n self._get(\n self._url(\"/images/{0}/json\".format(image.replace('/', '%2F')))\n ),\n True\n )\n\n @check_resource\n def kill(self, container, signal=None):\n url = self._url(\"/containers/{0}/kill\".format(container))\n params = {}\n if signal is not None:\n params['signal'] = signal\n res = self._post(url, params=params)\n\n self._raise_for_status(res)\n\n def load_image(self, data):\n res = 
self._post(self._url(\"/images/load\"), data=data)\n self._raise_for_status(res)\n\n def login(self, username, password=None, email=None, registry=None,\n reauth=False, insecure_registry=False, dockercfg_path=None):\n if insecure_registry:\n warnings.warn(\n INSECURE_REGISTRY_DEPRECATION_WARNING.format('login()'),\n DeprecationWarning\n )\n\n # If we don't have any auth data so far, try reloading the config file\n # one more time in case anything showed up in there.\n # If dockercfg_path is passed check to see if the config file exists,\n # if so load that config.\n if dockercfg_path and os.path.exists(dockercfg_path):\n self._auth_configs = auth.load_config(dockercfg_path)\n elif not self._auth_configs:\n self._auth_configs = auth.load_config()\n\n registry = registry or auth.INDEX_URL\n\n authcfg = auth.resolve_authconfig(self._auth_configs, registry)\n # If we found an existing auth config for this registry and username\n # combination, we can return it immediately unless reauth is requested.\n if authcfg and authcfg.get('username', None) == username \\\n and not reauth:\n return authcfg\n\n req_data = {\n 'username': username,\n 'password': password,\n 'email': email,\n 'serveraddress': registry,\n }\n\n response = self._post_json(self._url('/auth'), data=req_data)\n if response.status_code == 200:\n self._auth_configs[registry] = req_data\n return self._result(response, json=True)\n\n @check_resource\n def logs(self, container, stdout=True, stderr=True, stream=False,\n timestamps=False, tail='all'):\n if utils.compare_version('1.11', self._version) >= 0:\n params = {'stderr': stderr and 1 or 0,\n 'stdout': stdout and 1 or 0,\n 'timestamps': timestamps and 1 or 0,\n 'follow': stream and 1 or 0,\n }\n if utils.compare_version('1.13', self._version) >= 0:\n if tail != 'all' and (not isinstance(tail, int) or tail <= 0):\n tail = 'all'\n params['tail'] = tail\n url = self._url(\"/containers/{0}/logs\".format(container))\n res = self._get(url, params=params, stream=stream)\n return self._get_result(container, stream, res)\n return self.attach(\n container,\n stdout=stdout,\n stderr=stderr,\n stream=stream,\n logs=True\n )\n\n @check_resource\n def pause(self, container):\n url = self._url('/containers/{0}/pause'.format(container))\n res = self._post(url)\n self._raise_for_status(res)\n\n def ping(self):\n return self._result(self._get(self._url('/_ping')))\n\n @check_resource\n def port(self, container, private_port):\n res = self._get(self._url(\"/containers/{0}/json\".format(container)))\n self._raise_for_status(res)\n json_ = res.json()\n s_port = str(private_port)\n h_ports = None\n\n # Port settings is None when the container is running with\n # network_mode=host.\n port_settings = json_.get('NetworkSettings', {}).get('Ports')\n if port_settings is None:\n return None\n\n h_ports = port_settings.get(s_port + '/udp')\n if h_ports is None:\n h_ports = port_settings.get(s_port + '/tcp')\n\n return h_ports\n\n def pull(self, repository, tag=None, stream=False,\n insecure_registry=False, auth_config=None):\n if insecure_registry:\n warnings.warn(\n INSECURE_REGISTRY_DEPRECATION_WARNING.format('pull()'),\n DeprecationWarning\n )\n\n if not tag:\n repository, tag = utils.parse_repository_tag(repository)\n registry, repo_name = auth.resolve_repository_name(repository)\n if repo_name.count(\":\") == 1:\n repository, tag = repository.rsplit(\":\", 1)\n\n params = {\n 'tag': tag,\n 'fromImage': repository\n }\n headers = {}\n\n if utils.compare_version('1.5', self._version) >= 0:\n # If we don't 
have any auth data so far, try reloading the config\n # file one more time in case anything showed up in there.\n if auth_config is None:\n if not self._auth_configs:\n self._auth_configs = auth.load_config()\n authcfg = auth.resolve_authconfig(self._auth_configs, registry)\n # Do not fail here if no authentication exists for this\n # specific registry as we can have a readonly pull. Just\n # put the header if we can.\n if authcfg:\n # auth_config needs to be a dict in the format used by\n # auth.py username , password, serveraddress, email\n headers['X-Registry-Auth'] = auth.encode_header(\n authcfg\n )\n else:\n headers['X-Registry-Auth'] = auth.encode_header(auth_config)\n\n response = self._post(\n self._url('/images/create'), params=params, headers=headers,\n stream=stream, timeout=None\n )\n\n self._raise_for_status(response)\n\n if stream:\n return self._stream_helper(response)\n\n return self._result(response)\n\n def push(self, repository, tag=None, stream=False,\n insecure_registry=False):\n if insecure_registry:\n warnings.warn(\n INSECURE_REGISTRY_DEPRECATION_WARNING.format('push()'),\n DeprecationWarning\n )\n\n if not tag:\n repository, tag = utils.parse_repository_tag(repository)\n registry, repo_name = auth.resolve_repository_name(repository)\n u = self._url(\"/images/{0}/push\".format(repository))\n params = {\n 'tag': tag\n }\n headers = {}\n\n if utils.compare_version('1.5', self._version) >= 0:\n # If we don't have any auth data so far, try reloading the config\n # file one more time in case anything showed up in there.\n if not self._auth_configs:\n self._auth_configs = auth.load_config()\n authcfg = auth.resolve_authconfig(self._auth_configs, registry)\n\n # Do not fail here if no authentication exists for this specific\n # registry as we can have a readonly pull. 
Just put the header if\n # we can.\n if authcfg:\n headers['X-Registry-Auth'] = auth.encode_header(authcfg)\n\n response = self._post_json(\n u, None, headers=headers, stream=stream, params=params\n )\n\n self._raise_for_status(response)\n\n if stream:\n return self._stream_helper(response)\n\n return self._result(response)\n\n @check_resource\n def remove_container(self, container, v=False, link=False, force=False):\n params = {'v': v, 'link': link, 'force': force}\n res = self._delete(self._url(\"/containers/\" + container),\n params=params)\n self._raise_for_status(res)\n\n @check_resource\n def remove_image(self, image, force=False, noprune=False):\n params = {'force': force, 'noprune': noprune}\n res = self._delete(self._url(\"/images/\" + image), params=params)\n self._raise_for_status(res)\n\n @check_resource\n def rename(self, container, name):\n if utils.compare_version('1.17', self._version) < 0:\n raise errors.InvalidVersion(\n 'rename was only introduced in API version 1.17'\n )\n url = self._url(\"/containers/{0}/rename\".format(container))\n params = {'name': name}\n res = self._post(url, params=params)\n self._raise_for_status(res)\n\n @check_resource\n def resize(self, container, height, width):\n params = {'h': height, 'w': width}\n url = self._url(\"/containers/{0}/resize\".format(container))\n res = self._post(url, params=params)\n self._raise_for_status(res)\n\n @check_resource\n def restart(self, container, timeout=10):\n params = {'t': timeout}\n url = self._url(\"/containers/{0}/restart\".format(container))\n res = self._post(url, params=params)\n self._raise_for_status(res)\n\n def search(self, term):\n return self._result(self._get(self._url(\"/images/search\"),\n params={'term': term}),\n True)\n\n @check_resource\n def start(self, container, binds=None, port_bindings=None, lxc_conf=None,\n publish_all_ports=False, links=None, privileged=False,\n dns=None, dns_search=None, volumes_from=None, network_mode=None,\n restart_policy=None, cap_add=None, cap_drop=None, devices=None,\n extra_hosts=None, read_only=None, pid_mode=None, ipc_mode=None,\n security_opt=None, ulimits=None):\n\n if utils.compare_version('1.10', self._version) < 0:\n if dns is not None:\n raise errors.InvalidVersion(\n 'dns is only supported for API version >= 1.10'\n )\n if volumes_from is not None:\n raise errors.InvalidVersion(\n 'volumes_from is only supported for API version >= 1.10'\n )\n\n if utils.compare_version('1.15', self._version) < 0:\n if security_opt is not None:\n raise errors.InvalidVersion(\n 'security_opt is only supported for API version >= 1.15'\n )\n if ipc_mode:\n raise errors.InvalidVersion(\n 'ipc_mode is only supported for API version >= 1.15'\n )\n\n if utils.compare_version('1.17', self._version) < 0:\n if read_only is not None:\n raise errors.InvalidVersion(\n 'read_only is only supported for API version >= 1.17'\n )\n if pid_mode is not None:\n raise errors.InvalidVersion(\n 'pid_mode is only supported for API version >= 1.17'\n )\n\n if utils.compare_version('1.18', self._version) < 0:\n if ulimits is not None:\n raise errors.InvalidVersion(\n 'ulimits is only supported for API version >= 1.18'\n )\n\n start_config = utils.create_host_config(\n binds=binds, port_bindings=port_bindings, lxc_conf=lxc_conf,\n publish_all_ports=publish_all_ports, links=links, dns=dns,\n privileged=privileged, dns_search=dns_search, cap_add=cap_add,\n cap_drop=cap_drop, volumes_from=volumes_from, devices=devices,\n network_mode=network_mode, restart_policy=restart_policy,\n 
extra_hosts=extra_hosts, read_only=read_only, pid_mode=pid_mode,\n ipc_mode=ipc_mode, security_opt=security_opt, ulimits=ulimits\n )\n\n url = self._url(\"/containers/{0}/start\".format(container))\n if not start_config:\n start_config = None\n elif utils.compare_version('1.15', self._version) > 0:\n warnings.warn(\n 'Passing host config parameters in start() is deprecated. '\n 'Please use host_config in create_container instead!',\n DeprecationWarning\n )\n res = self._post_json(url, data=start_config)\n self._raise_for_status(res)\n\n @check_resource\n def stats(self, container, decode=None):\n if utils.compare_version('1.17', self._version) < 0:\n raise errors.InvalidVersion(\n 'Stats retrieval is not supported in API < 1.17!')\n\n url = self._url(\"/containers/{0}/stats\".format(container))\n return self._stream_helper(self._get(url, stream=True), decode=decode)\n\n @check_resource\n def stop(self, container, timeout=10):\n params = {'t': timeout}\n url = self._url(\"/containers/{0}/stop\".format(container))\n\n res = self._post(url, params=params,\n timeout=(timeout + (self.timeout or 0)))\n self._raise_for_status(res)\n\n @check_resource\n def tag(self, image, repository, tag=None, force=False):\n params = {\n 'tag': tag,\n 'repo': repository,\n 'force': 1 if force else 0\n }\n url = self._url(\"/images/{0}/tag\".format(image))\n res = self._post(url, params=params)\n self._raise_for_status(res)\n return res.status_code == 201\n\n @check_resource\n def top(self, container):\n u = self._url(\"/containers/{0}/top\".format(container))\n return self._result(self._get(u), True)\n\n def version(self, api_version=True):\n url = self._url(\"/version\", versioned_api=api_version)\n return self._result(self._get(url), json=True)\n\n @check_resource\n def unpause(self, container):\n url = self._url('/containers/{0}/unpause'.format(container))\n res = self._post(url)\n self._raise_for_status(res)\n\n @check_resource\n def wait(self, container, timeout=None):\n url = self._url(\"/containers/{0}/wait\".format(container))\n res = self._post(url, timeout=timeout)\n self._raise_for_status(res)\n json_ = res.json()\n if 'StatusCode' in json_:\n return json_['StatusCode']\n return -1\n\n\nclass AutoVersionClient(Client):\n def __init__(self, *args, **kwargs):\n if 'version' in kwargs and kwargs['version']:\n raise errors.DockerException(\n 'Can not specify version for AutoVersionClient'\n )\n kwargs['version'] = 'auto'\n super(AutoVersionClient, self).__init__(*args, **kwargs)\n", "path": "docker/client.py" } ]
[ { "content": "# Copyright 2013 dotCloud inc.\n\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n\n# http://www.apache.org/licenses/LICENSE-2.0\n\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport os\nimport re\nimport shlex\nimport warnings\nfrom datetime import datetime\n\nimport six\n\nfrom . import clientbase\nfrom . import constants\nfrom . import errors\nfrom .auth import auth\nfrom .utils import utils, check_resource\nfrom .constants import INSECURE_REGISTRY_DEPRECATION_WARNING\n\n\nclass Client(clientbase.ClientBase):\n @check_resource\n def attach(self, container, stdout=True, stderr=True,\n stream=False, logs=False):\n params = {\n 'logs': logs and 1 or 0,\n 'stdout': stdout and 1 or 0,\n 'stderr': stderr and 1 or 0,\n 'stream': stream and 1 or 0,\n }\n u = self._url(\"/containers/{0}/attach\".format(container))\n response = self._post(u, params=params, stream=stream)\n\n return self._get_result(container, stream, response)\n\n @check_resource\n def attach_socket(self, container, params=None, ws=False):\n if params is None:\n params = {\n 'stdout': 1,\n 'stderr': 1,\n 'stream': 1\n }\n\n if ws:\n return self._attach_websocket(container, params)\n\n u = self._url(\"/containers/{0}/attach\".format(container))\n return self._get_raw_response_socket(self.post(\n u, None, params=self._attach_params(params), stream=True))\n\n def build(self, path=None, tag=None, quiet=False, fileobj=None,\n nocache=False, rm=False, stream=False, timeout=None,\n custom_context=False, encoding=None, pull=False,\n forcerm=False, dockerfile=None, container_limits=None,\n decode=False):\n remote = context = headers = None\n container_limits = container_limits or {}\n if path is None and fileobj is None:\n raise TypeError(\"Either path or fileobj needs to be provided.\")\n\n for key in container_limits.keys():\n if key not in constants.CONTAINER_LIMITS_KEYS:\n raise errors.DockerException(\n 'Invalid container_limits key {0}'.format(key)\n )\n\n if custom_context:\n if not fileobj:\n raise TypeError(\"You must specify fileobj with custom_context\")\n context = fileobj\n elif fileobj is not None:\n context = utils.mkbuildcontext(fileobj)\n elif path.startswith(('http://', 'https://',\n 'git://', 'github.com/', 'git@')):\n remote = path\n elif not os.path.isdir(path):\n raise TypeError(\"You must specify a directory to build in path\")\n else:\n dockerignore = os.path.join(path, '.dockerignore')\n exclude = None\n if os.path.exists(dockerignore):\n with open(dockerignore, 'r') as f:\n exclude = list(filter(bool, f.read().splitlines()))\n # These are handled by the docker daemon and should not be\n # excluded on the client\n if 'Dockerfile' in exclude:\n exclude.remove('Dockerfile')\n if '.dockerignore' in exclude:\n exclude.remove(\".dockerignore\")\n context = utils.tar(path, exclude=exclude)\n\n if utils.compare_version('1.8', self._version) >= 0:\n stream = True\n\n if dockerfile and utils.compare_version('1.17', self._version) < 0:\n raise errors.InvalidVersion(\n 'dockerfile was only introduced in API version 1.17'\n )\n\n if utils.compare_version('1.19', self._version) < 0:\n pull = 1 if pull else 
0\n\n u = self._url('/build')\n params = {\n 't': tag,\n 'remote': remote,\n 'q': quiet,\n 'nocache': nocache,\n 'rm': rm,\n 'forcerm': forcerm,\n 'pull': pull,\n 'dockerfile': dockerfile,\n }\n params.update(container_limits)\n\n if context is not None:\n headers = {'Content-Type': 'application/tar'}\n if encoding:\n headers['Content-Encoding'] = encoding\n\n if utils.compare_version('1.9', self._version) >= 0:\n # If we don't have any auth data so far, try reloading the config\n # file one more time in case anything showed up in there.\n if not self._auth_configs:\n self._auth_configs = auth.load_config()\n\n # Send the full auth configuration (if any exists), since the build\n # could use any (or all) of the registries.\n if self._auth_configs:\n if headers is None:\n headers = {}\n headers['X-Registry-Config'] = auth.encode_full_header(\n self._auth_configs\n )\n\n response = self._post(\n u,\n data=context,\n params=params,\n headers=headers,\n stream=stream,\n timeout=timeout,\n )\n\n if context is not None and not custom_context:\n context.close()\n\n if stream:\n return self._stream_helper(response, decode=decode)\n else:\n output = self._result(response)\n srch = r'Successfully built ([0-9a-f]+)'\n match = re.search(srch, output)\n if not match:\n return None, output\n return match.group(1), output\n\n @check_resource\n def commit(self, container, repository=None, tag=None, message=None,\n author=None, conf=None):\n params = {\n 'container': container,\n 'repo': repository,\n 'tag': tag,\n 'comment': message,\n 'author': author\n }\n u = self._url(\"/commit\")\n return self._result(self._post_json(u, data=conf, params=params),\n json=True)\n\n def containers(self, quiet=False, all=False, trunc=False, latest=False,\n since=None, before=None, limit=-1, size=False,\n filters=None):\n params = {\n 'limit': 1 if latest else limit,\n 'all': 1 if all else 0,\n 'size': 1 if size else 0,\n 'trunc_cmd': 1 if trunc else 0,\n 'since': since,\n 'before': before\n }\n if filters:\n params['filters'] = utils.convert_filters(filters)\n u = self._url(\"/containers/json\")\n res = self._result(self._get(u, params=params), True)\n\n if quiet:\n return [{'Id': x['Id']} for x in res]\n if trunc:\n for x in res:\n x['Id'] = x['Id'][:12]\n return res\n\n @check_resource\n def copy(self, container, resource):\n res = self._post_json(\n self._url(\"/containers/{0}/copy\".format(container)),\n data={\"Resource\": resource},\n stream=True\n )\n self._raise_for_status(res)\n return res.raw\n\n def create_container(self, image, command=None, hostname=None, user=None,\n detach=False, stdin_open=False, tty=False,\n mem_limit=None, ports=None, environment=None,\n dns=None, volumes=None, volumes_from=None,\n network_disabled=False, name=None, entrypoint=None,\n cpu_shares=None, working_dir=None, domainname=None,\n memswap_limit=None, cpuset=None, host_config=None,\n mac_address=None, labels=None, volume_driver=None):\n\n if isinstance(volumes, six.string_types):\n volumes = [volumes, ]\n\n if host_config and utils.compare_version('1.15', self._version) < 0:\n raise errors.InvalidVersion(\n 'host_config is not supported in API < 1.15'\n )\n\n config = utils.create_container_config(\n self._version, image, command, hostname, user, detach, stdin_open,\n tty, mem_limit, ports, environment, dns, volumes, volumes_from,\n network_disabled, entrypoint, cpu_shares, working_dir, domainname,\n memswap_limit, cpuset, host_config, mac_address, labels,\n volume_driver\n )\n return self.create_container_from_config(config, 
name)\n\n def create_container_from_config(self, config, name=None):\n u = self._url(\"/containers/create\")\n params = {\n 'name': name\n }\n res = self._post_json(u, data=config, params=params)\n return self._result(res, True)\n\n @check_resource\n def diff(self, container):\n return self._result(self._get(self._url(\"/containers/{0}/changes\".\n format(container))), True)\n\n def events(self, since=None, until=None, filters=None, decode=None):\n if isinstance(since, datetime):\n since = utils.datetime_to_timestamp(since)\n\n if isinstance(until, datetime):\n until = utils.datetime_to_timestamp(until)\n\n if filters:\n filters = utils.convert_filters(filters)\n\n params = {\n 'since': since,\n 'until': until,\n 'filters': filters\n }\n\n return self._stream_helper(\n self.get(self._url('/events'), params=params, stream=True),\n decode=decode\n )\n\n @check_resource\n def exec_create(self, container, cmd, stdout=True, stderr=True, tty=False,\n privileged=False):\n if utils.compare_version('1.15', self._version) < 0:\n raise errors.InvalidVersion('Exec is not supported in API < 1.15')\n if privileged and utils.compare_version('1.19', self._version) < 0:\n raise errors.InvalidVersion(\n 'Privileged exec is not supported in API < 1.19'\n )\n if isinstance(cmd, six.string_types):\n cmd = shlex.split(str(cmd))\n\n data = {\n 'Container': container,\n 'User': '',\n 'Privileged': privileged,\n 'Tty': tty,\n 'AttachStdin': False,\n 'AttachStdout': stdout,\n 'AttachStderr': stderr,\n 'Cmd': cmd\n }\n\n url = self._url('/containers/{0}/exec'.format(container))\n res = self._post_json(url, data=data)\n return self._result(res, True)\n\n def exec_inspect(self, exec_id):\n if utils.compare_version('1.15', self._version) < 0:\n raise errors.InvalidVersion('Exec is not supported in API < 1.15')\n if isinstance(exec_id, dict):\n exec_id = exec_id.get('Id')\n res = self._get(self._url(\"/exec/{0}/json\".format(exec_id)))\n return self._result(res, True)\n\n def exec_resize(self, exec_id, height=None, width=None):\n if utils.compare_version('1.15', self._version) < 0:\n raise errors.InvalidVersion('Exec is not supported in API < 1.15')\n if isinstance(exec_id, dict):\n exec_id = exec_id.get('Id')\n\n params = {'h': height, 'w': width}\n url = self._url(\"/exec/{0}/resize\".format(exec_id))\n res = self._post(url, params=params)\n self._raise_for_status(res)\n\n def exec_start(self, exec_id, detach=False, tty=False, stream=False):\n if utils.compare_version('1.15', self._version) < 0:\n raise errors.InvalidVersion('Exec is not supported in API < 1.15')\n if isinstance(exec_id, dict):\n exec_id = exec_id.get('Id')\n\n data = {\n 'Tty': tty,\n 'Detach': detach\n }\n\n res = self._post_json(self._url('/exec/{0}/start'.format(exec_id)),\n data=data, stream=stream)\n return self._get_result_tty(stream, res, tty)\n\n @check_resource\n def export(self, container):\n res = self._get(self._url(\"/containers/{0}/export\".format(container)),\n stream=True)\n self._raise_for_status(res)\n return res.raw\n\n @check_resource\n def get_image(self, image):\n res = self._get(self._url(\"/images/{0}/get\".format(image)),\n stream=True)\n self._raise_for_status(res)\n return res.raw\n\n @check_resource\n def history(self, image):\n res = self._get(self._url(\"/images/{0}/history\".format(image)))\n return self._result(res, True)\n\n def images(self, name=None, quiet=False, all=False, viz=False,\n filters=None):\n if viz:\n if utils.compare_version('1.7', self._version) >= 0:\n raise Exception('Viz output is not supported in 
API >= 1.7!')\n return self._result(self._get(self._url(\"images/viz\")))\n params = {\n 'filter': name,\n 'only_ids': 1 if quiet else 0,\n 'all': 1 if all else 0,\n }\n if filters:\n params['filters'] = utils.convert_filters(filters)\n res = self._result(self._get(self._url(\"/images/json\"), params=params),\n True)\n if quiet:\n return [x['Id'] for x in res]\n return res\n\n def import_image(self, src=None, repository=None, tag=None, image=None):\n if src:\n if isinstance(src, six.string_types):\n try:\n result = self.import_image_from_file(\n src, repository=repository, tag=tag)\n except IOError:\n result = self.import_image_from_url(\n src, repository=repository, tag=tag)\n else:\n result = self.import_image_from_data(\n src, repository=repository, tag=tag)\n elif image:\n result = self.import_image_from_image(\n image, repository=repository, tag=tag)\n else:\n raise Exception(\"Must specify a src or image\")\n\n return result\n\n def import_image_from_data(self, data, repository=None, tag=None):\n u = self._url(\"/images/create\")\n params = {\n 'fromSrc': '-',\n 'repo': repository,\n 'tag': tag\n }\n headers = {\n 'Content-Type': 'application/tar',\n }\n return self._result(\n self._post(u, data=data, params=params, headers=headers))\n\n def import_image_from_file(self, filename, repository=None, tag=None):\n u = self._url(\"/images/create\")\n params = {\n 'fromSrc': '-',\n 'repo': repository,\n 'tag': tag\n }\n headers = {\n 'Content-Type': 'application/tar',\n }\n with open(filename, 'rb') as f:\n return self._result(\n self._post(u, data=f, params=params, headers=headers,\n timeout=None))\n\n def import_image_from_stream(self, stream, repository=None, tag=None):\n u = self._url(\"/images/create\")\n params = {\n 'fromSrc': '-',\n 'repo': repository,\n 'tag': tag\n }\n headers = {\n 'Content-Type': 'application/tar',\n 'Transfer-Encoding': 'chunked',\n }\n return self._result(\n self._post(u, data=stream, params=params, headers=headers))\n\n def import_image_from_url(self, url, repository=None, tag=None):\n u = self._url(\"/images/create\")\n params = {\n 'fromSrc': url,\n 'repo': repository,\n 'tag': tag\n }\n return self._result(\n self._post(u, data=None, params=params))\n\n def import_image_from_image(self, image, repository=None, tag=None):\n u = self._url(\"/images/create\")\n params = {\n 'fromImage': image,\n 'repo': repository,\n 'tag': tag\n }\n return self._result(\n self._post(u, data=None, params=params))\n\n def info(self):\n return self._result(self._get(self._url(\"/info\")),\n True)\n\n @check_resource\n def insert(self, image, url, path):\n if utils.compare_version('1.12', self._version) >= 0:\n raise errors.DeprecatedMethod(\n 'insert is not available for API version >=1.12'\n )\n api_url = self._url(\"/images/{0}/insert\".format(image))\n params = {\n 'url': url,\n 'path': path\n }\n return self._result(self._post(api_url, params=params))\n\n @check_resource\n def inspect_container(self, container):\n return self._result(\n self._get(self._url(\"/containers/{0}/json\".format(container))),\n True)\n\n @check_resource\n def inspect_image(self, image):\n return self._result(\n self._get(\n self._url(\"/images/{0}/json\".format(image.replace('/', '%2F')))\n ),\n True\n )\n\n @check_resource\n def kill(self, container, signal=None):\n url = self._url(\"/containers/{0}/kill\".format(container))\n params = {}\n if signal is not None:\n params['signal'] = signal\n res = self._post(url, params=params)\n\n self._raise_for_status(res)\n\n def load_image(self, data):\n res 
= self._post(self._url(\"/images/load\"), data=data)\n self._raise_for_status(res)\n\n def login(self, username, password=None, email=None, registry=None,\n reauth=False, insecure_registry=False, dockercfg_path=None):\n if insecure_registry:\n warnings.warn(\n INSECURE_REGISTRY_DEPRECATION_WARNING.format('login()'),\n DeprecationWarning\n )\n\n # If we don't have any auth data so far, try reloading the config file\n # one more time in case anything showed up in there.\n # If dockercfg_path is passed check to see if the config file exists,\n # if so load that config.\n if dockercfg_path and os.path.exists(dockercfg_path):\n self._auth_configs = auth.load_config(dockercfg_path)\n elif not self._auth_configs:\n self._auth_configs = auth.load_config()\n\n registry = registry or auth.INDEX_URL\n\n authcfg = auth.resolve_authconfig(self._auth_configs, registry)\n # If we found an existing auth config for this registry and username\n # combination, we can return it immediately unless reauth is requested.\n if authcfg and authcfg.get('username', None) == username \\\n and not reauth:\n return authcfg\n\n req_data = {\n 'username': username,\n 'password': password,\n 'email': email,\n 'serveraddress': registry,\n }\n\n response = self._post_json(self._url('/auth'), data=req_data)\n if response.status_code == 200:\n self._auth_configs[registry] = req_data\n return self._result(response, json=True)\n\n @check_resource\n def logs(self, container, stdout=True, stderr=True, stream=False,\n timestamps=False, tail='all'):\n if utils.compare_version('1.11', self._version) >= 0:\n params = {'stderr': stderr and 1 or 0,\n 'stdout': stdout and 1 or 0,\n 'timestamps': timestamps and 1 or 0,\n 'follow': stream and 1 or 0,\n }\n if utils.compare_version('1.13', self._version) >= 0:\n if tail != 'all' and (not isinstance(tail, int) or tail <= 0):\n tail = 'all'\n params['tail'] = tail\n url = self._url(\"/containers/{0}/logs\".format(container))\n res = self._get(url, params=params, stream=stream)\n return self._get_result(container, stream, res)\n return self.attach(\n container,\n stdout=stdout,\n stderr=stderr,\n stream=stream,\n logs=True\n )\n\n @check_resource\n def pause(self, container):\n url = self._url('/containers/{0}/pause'.format(container))\n res = self._post(url)\n self._raise_for_status(res)\n\n def ping(self):\n return self._result(self._get(self._url('/_ping')))\n\n @check_resource\n def port(self, container, private_port):\n res = self._get(self._url(\"/containers/{0}/json\".format(container)))\n self._raise_for_status(res)\n json_ = res.json()\n s_port = str(private_port)\n h_ports = None\n\n # Port settings is None when the container is running with\n # network_mode=host.\n port_settings = json_.get('NetworkSettings', {}).get('Ports')\n if port_settings is None:\n return None\n\n h_ports = port_settings.get(s_port + '/udp')\n if h_ports is None:\n h_ports = port_settings.get(s_port + '/tcp')\n\n return h_ports\n\n def pull(self, repository, tag=None, stream=False,\n insecure_registry=False, auth_config=None):\n if insecure_registry:\n warnings.warn(\n INSECURE_REGISTRY_DEPRECATION_WARNING.format('pull()'),\n DeprecationWarning\n )\n\n if not tag:\n repository, tag = utils.parse_repository_tag(repository)\n registry, repo_name = auth.resolve_repository_name(repository)\n if repo_name.count(\":\") == 1:\n repository, tag = repository.rsplit(\":\", 1)\n\n params = {\n 'tag': tag,\n 'fromImage': repository\n }\n headers = {}\n\n if utils.compare_version('1.5', self._version) >= 0:\n # If we 
don't have any auth data so far, try reloading the config\n # file one more time in case anything showed up in there.\n if auth_config is None:\n if not self._auth_configs:\n self._auth_configs = auth.load_config()\n authcfg = auth.resolve_authconfig(self._auth_configs, registry)\n # Do not fail here if no authentication exists for this\n # specific registry as we can have a readonly pull. Just\n # put the header if we can.\n if authcfg:\n # auth_config needs to be a dict in the format used by\n # auth.py username , password, serveraddress, email\n headers['X-Registry-Auth'] = auth.encode_header(\n authcfg\n )\n else:\n headers['X-Registry-Auth'] = auth.encode_header(auth_config)\n\n response = self._post(\n self._url('/images/create'), params=params, headers=headers,\n stream=stream, timeout=None\n )\n\n self._raise_for_status(response)\n\n if stream:\n return self._stream_helper(response)\n\n return self._result(response)\n\n def push(self, repository, tag=None, stream=False,\n insecure_registry=False):\n if insecure_registry:\n warnings.warn(\n INSECURE_REGISTRY_DEPRECATION_WARNING.format('push()'),\n DeprecationWarning\n )\n\n if not tag:\n repository, tag = utils.parse_repository_tag(repository)\n registry, repo_name = auth.resolve_repository_name(repository)\n u = self._url(\"/images/{0}/push\".format(repository))\n params = {\n 'tag': tag\n }\n headers = {}\n\n if utils.compare_version('1.5', self._version) >= 0:\n # If we don't have any auth data so far, try reloading the config\n # file one more time in case anything showed up in there.\n if not self._auth_configs:\n self._auth_configs = auth.load_config()\n authcfg = auth.resolve_authconfig(self._auth_configs, registry)\n\n # Do not fail here if no authentication exists for this specific\n # registry as we can have a readonly pull. 
Just put the header if\n # we can.\n if authcfg:\n headers['X-Registry-Auth'] = auth.encode_header(authcfg)\n\n response = self._post_json(\n u, None, headers=headers, stream=stream, params=params\n )\n\n self._raise_for_status(response)\n\n if stream:\n return self._stream_helper(response)\n\n return self._result(response)\n\n @check_resource\n def remove_container(self, container, v=False, link=False, force=False):\n params = {'v': v, 'link': link, 'force': force}\n res = self._delete(self._url(\"/containers/\" + container),\n params=params)\n self._raise_for_status(res)\n\n @check_resource\n def remove_image(self, image, force=False, noprune=False):\n params = {'force': force, 'noprune': noprune}\n res = self._delete(self._url(\"/images/\" + image), params=params)\n self._raise_for_status(res)\n\n @check_resource\n def rename(self, container, name):\n if utils.compare_version('1.17', self._version) < 0:\n raise errors.InvalidVersion(\n 'rename was only introduced in API version 1.17'\n )\n url = self._url(\"/containers/{0}/rename\".format(container))\n params = {'name': name}\n res = self._post(url, params=params)\n self._raise_for_status(res)\n\n @check_resource\n def resize(self, container, height, width):\n params = {'h': height, 'w': width}\n url = self._url(\"/containers/{0}/resize\".format(container))\n res = self._post(url, params=params)\n self._raise_for_status(res)\n\n @check_resource\n def restart(self, container, timeout=10):\n params = {'t': timeout}\n url = self._url(\"/containers/{0}/restart\".format(container))\n res = self._post(url, params=params)\n self._raise_for_status(res)\n\n def search(self, term):\n return self._result(self._get(self._url(\"/images/search\"),\n params={'term': term}),\n True)\n\n @check_resource\n def start(self, container, binds=None, port_bindings=None, lxc_conf=None,\n publish_all_ports=False, links=None, privileged=False,\n dns=None, dns_search=None, volumes_from=None, network_mode=None,\n restart_policy=None, cap_add=None, cap_drop=None, devices=None,\n extra_hosts=None, read_only=None, pid_mode=None, ipc_mode=None,\n security_opt=None, ulimits=None):\n\n if utils.compare_version('1.10', self._version) < 0:\n if dns is not None:\n raise errors.InvalidVersion(\n 'dns is only supported for API version >= 1.10'\n )\n if volumes_from is not None:\n raise errors.InvalidVersion(\n 'volumes_from is only supported for API version >= 1.10'\n )\n\n if utils.compare_version('1.15', self._version) < 0:\n if security_opt is not None:\n raise errors.InvalidVersion(\n 'security_opt is only supported for API version >= 1.15'\n )\n if ipc_mode:\n raise errors.InvalidVersion(\n 'ipc_mode is only supported for API version >= 1.15'\n )\n\n if utils.compare_version('1.17', self._version) < 0:\n if read_only is not None:\n raise errors.InvalidVersion(\n 'read_only is only supported for API version >= 1.17'\n )\n if pid_mode is not None:\n raise errors.InvalidVersion(\n 'pid_mode is only supported for API version >= 1.17'\n )\n\n if utils.compare_version('1.18', self._version) < 0:\n if ulimits is not None:\n raise errors.InvalidVersion(\n 'ulimits is only supported for API version >= 1.18'\n )\n\n start_config = utils.create_host_config(\n binds=binds, port_bindings=port_bindings, lxc_conf=lxc_conf,\n publish_all_ports=publish_all_ports, links=links, dns=dns,\n privileged=privileged, dns_search=dns_search, cap_add=cap_add,\n cap_drop=cap_drop, volumes_from=volumes_from, devices=devices,\n network_mode=network_mode, restart_policy=restart_policy,\n 
extra_hosts=extra_hosts, read_only=read_only, pid_mode=pid_mode,\n ipc_mode=ipc_mode, security_opt=security_opt, ulimits=ulimits\n )\n\n url = self._url(\"/containers/{0}/start\".format(container))\n if not start_config:\n start_config = None\n elif utils.compare_version('1.15', self._version) > 0:\n warnings.warn(\n 'Passing host config parameters in start() is deprecated. '\n 'Please use host_config in create_container instead!',\n DeprecationWarning\n )\n res = self._post_json(url, data=start_config)\n self._raise_for_status(res)\n\n @check_resource\n def stats(self, container, decode=None):\n if utils.compare_version('1.17', self._version) < 0:\n raise errors.InvalidVersion(\n 'Stats retrieval is not supported in API < 1.17!')\n\n url = self._url(\"/containers/{0}/stats\".format(container))\n return self._stream_helper(self._get(url, stream=True), decode=decode)\n\n @check_resource\n def stop(self, container, timeout=10):\n params = {'t': timeout}\n url = self._url(\"/containers/{0}/stop\".format(container))\n\n res = self._post(url, params=params,\n timeout=(timeout + (self.timeout or 0)))\n self._raise_for_status(res)\n\n @check_resource\n def tag(self, image, repository, tag=None, force=False):\n params = {\n 'tag': tag,\n 'repo': repository,\n 'force': 1 if force else 0\n }\n url = self._url(\"/images/{0}/tag\".format(image))\n res = self._post(url, params=params)\n self._raise_for_status(res)\n return res.status_code == 201\n\n @check_resource\n def top(self, container):\n u = self._url(\"/containers/{0}/top\".format(container))\n return self._result(self._get(u), True)\n\n def version(self, api_version=True):\n url = self._url(\"/version\", versioned_api=api_version)\n return self._result(self._get(url), json=True)\n\n @check_resource\n def unpause(self, container):\n url = self._url('/containers/{0}/unpause'.format(container))\n res = self._post(url)\n self._raise_for_status(res)\n\n @check_resource\n def wait(self, container, timeout=None):\n url = self._url(\"/containers/{0}/wait\".format(container))\n res = self._post(url, timeout=timeout)\n self._raise_for_status(res)\n json_ = res.json()\n if 'StatusCode' in json_:\n return json_['StatusCode']\n return -1\n\n\nclass AutoVersionClient(Client):\n def __init__(self, *args, **kwargs):\n if 'version' in kwargs and kwargs['version']:\n raise errors.DockerException(\n 'Can not specify version for AutoVersionClient'\n )\n kwargs['version'] = 'auto'\n super(AutoVersionClient, self).__init__(*args, **kwargs)\n", "path": "docker/client.py" } ]
diff --git a/docker/client.py b/docker/client.py index 908468959..af4b635bb 100644 --- a/docker/client.py +++ b/docker/client.py @@ -273,6 +273,7 @@ def events(self, since=None, until=None, filters=None, decode=None): decode=decode ) + @check_resource def exec_create(self, container, cmd, stdout=True, stderr=True, tty=False, privileged=False): if utils.compare_version('1.15', self._version) < 0:
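For context, the patch above only adds the `@check_resource` decorator to `exec_create`. Judging from how that decorator is used on the other container methods in this file, the effect should be that callers can pass the dict returned by `create_container` instead of a bare ID string. The snippet below is a hypothetical usage sketch of the decorated method, not part of the patch; the daemon URL and image name are assumptions.

```py
# Hypothetical usage sketch: with @check_resource applied, exec_create should
# accept either a container ID string or the dict returned by create_container,
# matching the behaviour of start(), logs(), kill(), etc. in the same file.
from docker import Client

client = Client(base_url='unix://var/run/docker.sock')      # assumed local daemon
container = client.create_container('busybox', 'sleep 60')  # dict with an 'Id' key
client.start(container)

exec_ctx = client.exec_create(container, 'echo hello')      # dict resolved to its Id
print(client.exec_start(exec_ctx))
```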
django-oscar__django-oscar-4257
There is no link from the dashboard to the Attribute Option Group list page
I found the page `/dashboard/catalogue/attribute-option-group/` useful for managing option groups (inserting/deleting options from groups), but I see no link from the dashboard to this page. Is it missing?
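For reference, the dashboard menu is driven by the `OSCAR_DASHBOARD_NAVIGATION` setting shown in the files below, so exposing the page is a matter of adding one more child entry to the Catalogue section. Below is a minimal settings-level sketch; the `url_name` is taken from the patch at the end of this record, and the `from oscar.defaults import *` convention is an assumption about the project settings.

```py
# Sketch for a project's settings.py (assumes it already does `from oscar.defaults import *`):
# append a link to /dashboard/catalogue/attribute-option-group/ to the "Catalogue" section,
# located here via its existing Options entry so no lazy labels need to be resolved.
from django.utils.translation import gettext_lazy as _
from oscar.defaults import *  # noqa: F401,F403  (provides OSCAR_DASHBOARD_NAVIGATION)

for section in OSCAR_DASHBOARD_NAVIGATION:
    children = section.get("children", [])
    if any(child.get("url_name") == "dashboard:catalogue-option-list" for child in children):
        children.append(
            {
                "label": _("Attribute Option Groups"),
                "url_name": "dashboard:catalogue-attribute-option-group-list",
            }
        )
        break
```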
[ { "content": "from django.urls import reverse_lazy\nfrom django.utils.translation import gettext_lazy as _\n\nOSCAR_SHOP_NAME = \"Oscar\"\nOSCAR_SHOP_TAGLINE = \"\"\nOSCAR_HOMEPAGE = reverse_lazy(\"catalogue:index\")\n\n# Dynamic class loading\nOSCAR_DYNAMIC_CLASS_LOADER = \"oscar.core.loading.default_class_loader\"\n\n# Basket settings\nOSCAR_BASKET_COOKIE_LIFETIME = 7 * 24 * 60 * 60\nOSCAR_BASKET_COOKIE_OPEN = \"oscar_open_basket\"\nOSCAR_BASKET_COOKIE_SECURE = False\nOSCAR_MAX_BASKET_QUANTITY_THRESHOLD = 10000\n\n# Recently-viewed products\nOSCAR_RECENTLY_VIEWED_COOKIE_LIFETIME = 7 * 24 * 60 * 60\nOSCAR_RECENTLY_VIEWED_COOKIE_NAME = \"oscar_history\"\nOSCAR_RECENTLY_VIEWED_COOKIE_SECURE = False\nOSCAR_RECENTLY_VIEWED_PRODUCTS = 20\n\n# Currency\nOSCAR_DEFAULT_CURRENCY = \"GBP\"\n\n# Paths\nOSCAR_IMAGE_FOLDER = \"images/products/%Y/%m/\"\nOSCAR_DELETE_IMAGE_FILES = True\n\n# Copy this image from oscar/static/img to your MEDIA_ROOT folder.\n# It needs to be there so Sorl can resize it.\nOSCAR_MISSING_IMAGE_URL = \"image_not_found.jpg\"\n\n# Address settings\nOSCAR_REQUIRED_ADDRESS_FIELDS = (\n \"first_name\",\n \"last_name\",\n \"line1\",\n \"line4\",\n \"postcode\",\n \"country\",\n)\n\n# Pagination settings\n\nOSCAR_OFFERS_PER_PAGE = 20\nOSCAR_PRODUCTS_PER_PAGE = 20\nOSCAR_REVIEWS_PER_PAGE = 20\nOSCAR_NOTIFICATIONS_PER_PAGE = 20\nOSCAR_EMAILS_PER_PAGE = 20\nOSCAR_ORDERS_PER_PAGE = 20\nOSCAR_ADDRESSES_PER_PAGE = 20\nOSCAR_STOCK_ALERTS_PER_PAGE = 20\nOSCAR_DASHBOARD_ITEMS_PER_PAGE = 20\n\n# Checkout\nOSCAR_ALLOW_ANON_CHECKOUT = False\n\n# Reviews\nOSCAR_ALLOW_ANON_REVIEWS = True\nOSCAR_MODERATE_REVIEWS = False\n\n# Accounts\nOSCAR_ACCOUNTS_REDIRECT_URL = \"customer:profile-view\"\n\n# This enables sending alert notifications/emails instantly when products get\n# back in stock by listening to stock record update signals.\n# This might impact performance for large numbers of stock record updates.\n# Alternatively, the management command ``oscar_send_alerts`` can be used to\n# run periodically, e.g. as a cron job. In this case eager alerts should be\n# disabled.\nOSCAR_EAGER_ALERTS = True\n\n# Registration\nOSCAR_SEND_REGISTRATION_EMAIL = True\nOSCAR_FROM_EMAIL = \"[email protected]\"\n\n# Slug handling\nOSCAR_SLUG_FUNCTION = \"oscar.core.utils.default_slugifier\"\nOSCAR_SLUG_MAP = {}\nOSCAR_SLUG_BLACKLIST = []\nOSCAR_SLUG_ALLOW_UNICODE = False\n\n# Cookies\nOSCAR_COOKIES_DELETE_ON_LOGOUT = [\n \"oscar_recently_viewed_products\",\n]\n\n# Offers\nOSCAR_OFFERS_INCL_TAX = False\n# Values (using the names of the model constants) from\n# \"offer.ConditionalOffer.TYPE_CHOICES\"\nOSCAR_OFFERS_IMPLEMENTED_TYPES = [\n \"SITE\",\n \"VOUCHER\",\n]\n\n# Hidden Oscar features, e.g. 
wishlists or reviews\nOSCAR_HIDDEN_FEATURES = []\n\n# Menu structure of the dashboard navigation\nOSCAR_DASHBOARD_NAVIGATION = [\n {\n \"label\": _(\"Dashboard\"),\n \"icon\": \"fas fa-list\",\n \"url_name\": \"dashboard:index\",\n },\n {\n \"label\": _(\"Catalogue\"),\n \"icon\": \"fas fa-sitemap\",\n \"children\": [\n {\n \"label\": _(\"Products\"),\n \"url_name\": \"dashboard:catalogue-product-list\",\n },\n {\n \"label\": _(\"Product Types\"),\n \"url_name\": \"dashboard:catalogue-class-list\",\n },\n {\n \"label\": _(\"Categories\"),\n \"url_name\": \"dashboard:catalogue-category-list\",\n },\n {\n \"label\": _(\"Ranges\"),\n \"url_name\": \"dashboard:range-list\",\n },\n {\n \"label\": _(\"Low stock alerts\"),\n \"url_name\": \"dashboard:stock-alert-list\",\n },\n {\n \"label\": _(\"Options\"),\n \"url_name\": \"dashboard:catalogue-option-list\",\n },\n ],\n },\n {\n \"label\": _(\"Fulfilment\"),\n \"icon\": \"fas fa-shopping-cart\",\n \"children\": [\n {\n \"label\": _(\"Orders\"),\n \"url_name\": \"dashboard:order-list\",\n },\n {\n \"label\": _(\"Statistics\"),\n \"url_name\": \"dashboard:order-stats\",\n },\n {\n \"label\": _(\"Partners\"),\n \"url_name\": \"dashboard:partner-list\",\n },\n # The shipping method dashboard is disabled by default as it might\n # be confusing. Weight-based shipping methods aren't hooked into\n # the shipping repository by default (as it would make\n # customising the repository slightly more difficult).\n # {\n # 'label': _('Shipping charges'),\n # 'url_name': 'dashboard:shipping-method-list',\n # },\n ],\n },\n {\n \"label\": _(\"Customers\"),\n \"icon\": \"fas fa-users\",\n \"children\": [\n {\n \"label\": _(\"Customers\"),\n \"url_name\": \"dashboard:users-index\",\n },\n {\n \"label\": _(\"Stock alert requests\"),\n \"url_name\": \"dashboard:user-alert-list\",\n },\n ],\n },\n {\n \"label\": _(\"Offers\"),\n \"icon\": \"fas fa-bullhorn\",\n \"children\": [\n {\n \"label\": _(\"Offers\"),\n \"url_name\": \"dashboard:offer-list\",\n },\n {\n \"label\": _(\"Vouchers\"),\n \"url_name\": \"dashboard:voucher-list\",\n },\n {\n \"label\": _(\"Voucher Sets\"),\n \"url_name\": \"dashboard:voucher-set-list\",\n },\n ],\n },\n {\n \"label\": _(\"Content\"),\n \"icon\": \"fas fa-folder\",\n \"children\": [\n {\n \"label\": _(\"Pages\"),\n \"url_name\": \"dashboard:page-list\",\n },\n {\n \"label\": _(\"Email templates\"),\n \"url_name\": \"dashboard:comms-list\",\n },\n {\n \"label\": _(\"Reviews\"),\n \"url_name\": \"dashboard:reviews-list\",\n },\n ],\n },\n {\n \"label\": _(\"Reports\"),\n \"icon\": \"fas fa-chart-bar\",\n \"url_name\": \"dashboard:reports-index\",\n },\n]\nOSCAR_DASHBOARD_DEFAULT_ACCESS_FUNCTION = \"oscar.apps.dashboard.nav.default_access_fn\"\n\n# Search facets\nOSCAR_SEARCH_FACETS = {\n \"fields\": {\n # The key for these dicts will be used when passing facet data\n # to the template. 
Same for the 'queries' dict below.\n \"product_class\": {\"name\": _(\"Type\"), \"field\": \"product_class\"},\n \"rating\": {\"name\": _(\"Rating\"), \"field\": \"rating\"},\n # You can specify an 'options' element that will be passed to the\n # SearchQuerySet.facet() call.\n # For instance, with Elasticsearch backend, 'options': {'order': 'term'}\n # will sort items in a facet by title instead of number of items.\n # It's hard to get 'missing' to work\n # correctly though as of Solr's hilarious syntax for selecting\n # items without a specific facet:\n # http://wiki.apache.org/solr/SimpleFacetParameters#facet.method\n # 'options': {'missing': 'true'}\n },\n \"queries\": {\n \"price_range\": {\n \"name\": _(\"Price range\"),\n \"field\": \"price\",\n \"queries\": [\n # This is a list of (name, query) tuples where the name will\n # be displayed on the front-end.\n (_(\"0 to 20\"), \"[0 TO 20]\"),\n (_(\"20 to 40\"), \"[20 TO 40]\"),\n (_(\"40 to 60\"), \"[40 TO 60]\"),\n (_(\"60+\"), \"[60 TO *]\"),\n ],\n },\n },\n}\n\nOSCAR_THUMBNAILER = \"oscar.core.thumbnails.SorlThumbnail\"\n\nOSCAR_URL_SCHEMA = \"http\"\n\nOSCAR_SAVE_SENT_EMAILS_TO_DB = True\n\nHAYSTACK_SIGNAL_PROCESSOR = \"haystack.signals.RealtimeSignalProcessor\"\n", "path": "src/oscar/defaults.py" } ]
[ { "content": "from django.urls import reverse_lazy\nfrom django.utils.translation import gettext_lazy as _\n\nOSCAR_SHOP_NAME = \"Oscar\"\nOSCAR_SHOP_TAGLINE = \"\"\nOSCAR_HOMEPAGE = reverse_lazy(\"catalogue:index\")\n\n# Dynamic class loading\nOSCAR_DYNAMIC_CLASS_LOADER = \"oscar.core.loading.default_class_loader\"\n\n# Basket settings\nOSCAR_BASKET_COOKIE_LIFETIME = 7 * 24 * 60 * 60\nOSCAR_BASKET_COOKIE_OPEN = \"oscar_open_basket\"\nOSCAR_BASKET_COOKIE_SECURE = False\nOSCAR_MAX_BASKET_QUANTITY_THRESHOLD = 10000\n\n# Recently-viewed products\nOSCAR_RECENTLY_VIEWED_COOKIE_LIFETIME = 7 * 24 * 60 * 60\nOSCAR_RECENTLY_VIEWED_COOKIE_NAME = \"oscar_history\"\nOSCAR_RECENTLY_VIEWED_COOKIE_SECURE = False\nOSCAR_RECENTLY_VIEWED_PRODUCTS = 20\n\n# Currency\nOSCAR_DEFAULT_CURRENCY = \"GBP\"\n\n# Paths\nOSCAR_IMAGE_FOLDER = \"images/products/%Y/%m/\"\nOSCAR_DELETE_IMAGE_FILES = True\n\n# Copy this image from oscar/static/img to your MEDIA_ROOT folder.\n# It needs to be there so Sorl can resize it.\nOSCAR_MISSING_IMAGE_URL = \"image_not_found.jpg\"\n\n# Address settings\nOSCAR_REQUIRED_ADDRESS_FIELDS = (\n \"first_name\",\n \"last_name\",\n \"line1\",\n \"line4\",\n \"postcode\",\n \"country\",\n)\n\n# Pagination settings\n\nOSCAR_OFFERS_PER_PAGE = 20\nOSCAR_PRODUCTS_PER_PAGE = 20\nOSCAR_REVIEWS_PER_PAGE = 20\nOSCAR_NOTIFICATIONS_PER_PAGE = 20\nOSCAR_EMAILS_PER_PAGE = 20\nOSCAR_ORDERS_PER_PAGE = 20\nOSCAR_ADDRESSES_PER_PAGE = 20\nOSCAR_STOCK_ALERTS_PER_PAGE = 20\nOSCAR_DASHBOARD_ITEMS_PER_PAGE = 20\n\n# Checkout\nOSCAR_ALLOW_ANON_CHECKOUT = False\n\n# Reviews\nOSCAR_ALLOW_ANON_REVIEWS = True\nOSCAR_MODERATE_REVIEWS = False\n\n# Accounts\nOSCAR_ACCOUNTS_REDIRECT_URL = \"customer:profile-view\"\n\n# This enables sending alert notifications/emails instantly when products get\n# back in stock by listening to stock record update signals.\n# This might impact performance for large numbers of stock record updates.\n# Alternatively, the management command ``oscar_send_alerts`` can be used to\n# run periodically, e.g. as a cron job. In this case eager alerts should be\n# disabled.\nOSCAR_EAGER_ALERTS = True\n\n# Registration\nOSCAR_SEND_REGISTRATION_EMAIL = True\nOSCAR_FROM_EMAIL = \"[email protected]\"\n\n# Slug handling\nOSCAR_SLUG_FUNCTION = \"oscar.core.utils.default_slugifier\"\nOSCAR_SLUG_MAP = {}\nOSCAR_SLUG_BLACKLIST = []\nOSCAR_SLUG_ALLOW_UNICODE = False\n\n# Cookies\nOSCAR_COOKIES_DELETE_ON_LOGOUT = [\n \"oscar_recently_viewed_products\",\n]\n\n# Offers\nOSCAR_OFFERS_INCL_TAX = False\n# Values (using the names of the model constants) from\n# \"offer.ConditionalOffer.TYPE_CHOICES\"\nOSCAR_OFFERS_IMPLEMENTED_TYPES = [\n \"SITE\",\n \"VOUCHER\",\n]\n\n# Hidden Oscar features, e.g. 
wishlists or reviews\nOSCAR_HIDDEN_FEATURES = []\n\n# Menu structure of the dashboard navigation\nOSCAR_DASHBOARD_NAVIGATION = [\n {\n \"label\": _(\"Dashboard\"),\n \"icon\": \"fas fa-list\",\n \"url_name\": \"dashboard:index\",\n },\n {\n \"label\": _(\"Catalogue\"),\n \"icon\": \"fas fa-sitemap\",\n \"children\": [\n {\n \"label\": _(\"Products\"),\n \"url_name\": \"dashboard:catalogue-product-list\",\n },\n {\n \"label\": _(\"Product Types\"),\n \"url_name\": \"dashboard:catalogue-class-list\",\n },\n {\n \"label\": _(\"Categories\"),\n \"url_name\": \"dashboard:catalogue-category-list\",\n },\n {\n \"label\": _(\"Ranges\"),\n \"url_name\": \"dashboard:range-list\",\n },\n {\n \"label\": _(\"Low stock alerts\"),\n \"url_name\": \"dashboard:stock-alert-list\",\n },\n {\n \"label\": _(\"Options\"),\n \"url_name\": \"dashboard:catalogue-option-list\",\n },\n {\n \"label\": _(\"Attribute Option Groups\"),\n \"url_name\": \"dashboard:catalogue-attribute-option-group-list\",\n },\n ],\n },\n {\n \"label\": _(\"Fulfilment\"),\n \"icon\": \"fas fa-shopping-cart\",\n \"children\": [\n {\n \"label\": _(\"Orders\"),\n \"url_name\": \"dashboard:order-list\",\n },\n {\n \"label\": _(\"Statistics\"),\n \"url_name\": \"dashboard:order-stats\",\n },\n {\n \"label\": _(\"Partners\"),\n \"url_name\": \"dashboard:partner-list\",\n },\n # The shipping method dashboard is disabled by default as it might\n # be confusing. Weight-based shipping methods aren't hooked into\n # the shipping repository by default (as it would make\n # customising the repository slightly more difficult).\n # {\n # 'label': _('Shipping charges'),\n # 'url_name': 'dashboard:shipping-method-list',\n # },\n ],\n },\n {\n \"label\": _(\"Customers\"),\n \"icon\": \"fas fa-users\",\n \"children\": [\n {\n \"label\": _(\"Customers\"),\n \"url_name\": \"dashboard:users-index\",\n },\n {\n \"label\": _(\"Stock alert requests\"),\n \"url_name\": \"dashboard:user-alert-list\",\n },\n ],\n },\n {\n \"label\": _(\"Offers\"),\n \"icon\": \"fas fa-bullhorn\",\n \"children\": [\n {\n \"label\": _(\"Offers\"),\n \"url_name\": \"dashboard:offer-list\",\n },\n {\n \"label\": _(\"Vouchers\"),\n \"url_name\": \"dashboard:voucher-list\",\n },\n {\n \"label\": _(\"Voucher Sets\"),\n \"url_name\": \"dashboard:voucher-set-list\",\n },\n ],\n },\n {\n \"label\": _(\"Content\"),\n \"icon\": \"fas fa-folder\",\n \"children\": [\n {\n \"label\": _(\"Pages\"),\n \"url_name\": \"dashboard:page-list\",\n },\n {\n \"label\": _(\"Email templates\"),\n \"url_name\": \"dashboard:comms-list\",\n },\n {\n \"label\": _(\"Reviews\"),\n \"url_name\": \"dashboard:reviews-list\",\n },\n ],\n },\n {\n \"label\": _(\"Reports\"),\n \"icon\": \"fas fa-chart-bar\",\n \"url_name\": \"dashboard:reports-index\",\n },\n]\nOSCAR_DASHBOARD_DEFAULT_ACCESS_FUNCTION = \"oscar.apps.dashboard.nav.default_access_fn\"\n\n# Search facets\nOSCAR_SEARCH_FACETS = {\n \"fields\": {\n # The key for these dicts will be used when passing facet data\n # to the template. 
Same for the 'queries' dict below.\n \"product_class\": {\"name\": _(\"Type\"), \"field\": \"product_class\"},\n \"rating\": {\"name\": _(\"Rating\"), \"field\": \"rating\"},\n # You can specify an 'options' element that will be passed to the\n # SearchQuerySet.facet() call.\n # For instance, with Elasticsearch backend, 'options': {'order': 'term'}\n # will sort items in a facet by title instead of number of items.\n # It's hard to get 'missing' to work\n # correctly though as of Solr's hilarious syntax for selecting\n # items without a specific facet:\n # http://wiki.apache.org/solr/SimpleFacetParameters#facet.method\n # 'options': {'missing': 'true'}\n },\n \"queries\": {\n \"price_range\": {\n \"name\": _(\"Price range\"),\n \"field\": \"price\",\n \"queries\": [\n # This is a list of (name, query) tuples where the name will\n # be displayed on the front-end.\n (_(\"0 to 20\"), \"[0 TO 20]\"),\n (_(\"20 to 40\"), \"[20 TO 40]\"),\n (_(\"40 to 60\"), \"[40 TO 60]\"),\n (_(\"60+\"), \"[60 TO *]\"),\n ],\n },\n },\n}\n\nOSCAR_THUMBNAILER = \"oscar.core.thumbnails.SorlThumbnail\"\n\nOSCAR_URL_SCHEMA = \"http\"\n\nOSCAR_SAVE_SENT_EMAILS_TO_DB = True\n\nHAYSTACK_SIGNAL_PROCESSOR = \"haystack.signals.RealtimeSignalProcessor\"\n", "path": "src/oscar/defaults.py" } ]
diff --git a/src/oscar/defaults.py b/src/oscar/defaults.py index f231afbf449..53639d6e8f3 100644 --- a/src/oscar/defaults.py +++ b/src/oscar/defaults.py @@ -133,6 +133,10 @@ "label": _("Options"), "url_name": "dashboard:catalogue-option-list", }, + { + "label": _("Attribute Option Groups"), + "url_name": "dashboard:catalogue-attribute-option-group-list", + }, ], }, {
PokemonGoF__PokemonGo-Bot-4547
No usable pokeballs found
When "No usable pokeballs found" happens, the bot just hangs there and does not try to move on and spin the fort.
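The catch worker shown below is where ball selection happens (`_do_catch`), and the report suggests that once every usable ball is gone the bot should give up on the encounter and carry on with its route (moving, spinning forts). The helper below is only an illustrative sketch of that "pick a ball or give up" decision under the same `min_ultraball_to_keep` reserve idea; it is not the project's actual fix, and the constants simply mirror the item IDs used in the file.

```py
# Illustrative sketch only (not the actual patch): choose the lowest-tier usable ball,
# honouring the ultra-ball reserve, and return None when nothing can be thrown so the
# caller can end the encounter instead of hanging on it.
ITEM_POKEBALL, ITEM_GREATBALL, ITEM_ULTRABALL = 1, 2, 3

def pick_ball(ball_count, is_vip=False, min_ultraball_to_keep=10):
    usable = [ITEM_POKEBALL, ITEM_GREATBALL]
    # Ultra balls are always allowed for VIPs, otherwise only above the reserve.
    if is_vip or ball_count.get(ITEM_ULTRABALL, 0) > min_ultraball_to_keep:
        usable.append(ITEM_ULTRABALL)
    for ball_id in usable:
        if ball_count.get(ball_id, 0) > 0:
            return ball_id
    return None  # signal "give up and move on" to the caller

print(pick_ball({ITEM_POKEBALL: 0, ITEM_GREATBALL: 0, ITEM_ULTRABALL: 8}))              # None
print(pick_ball({ITEM_POKEBALL: 0, ITEM_GREATBALL: 0, ITEM_ULTRABALL: 8}, is_vip=True)) # 3
```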
[ { "content": "# -*- coding: utf-8 -*-\n\nimport os\nimport time\nimport json\nimport logging\nimport time\nfrom random import random, randrange\nfrom pokemongo_bot import inventory\nfrom pokemongo_bot.base_task import BaseTask\nfrom pokemongo_bot.human_behaviour import sleep, action_delay\nfrom pokemongo_bot.inventory import Pokemon\nfrom pokemongo_bot.worker_result import WorkerResult\nfrom pokemongo_bot.datastore import Datastore\nfrom pokemongo_bot.base_dir import _base_dir\nfrom datetime import datetime, timedelta\n\nCATCH_STATUS_SUCCESS = 1\nCATCH_STATUS_FAILED = 2\nCATCH_STATUS_VANISHED = 3\nCATCH_STATUS_MISSED = 4\n\nENCOUNTER_STATUS_SUCCESS = 1\nENCOUNTER_STATUS_NOT_IN_RANGE = 5\nENCOUNTER_STATUS_POKEMON_INVENTORY_FULL = 7\n\nITEM_POKEBALL = 1\nITEM_GREATBALL = 2\nITEM_ULTRABALL = 3\nITEM_RAZZBERRY = 701\n\nLOGIC_TO_FUNCTION = {\n 'or': lambda x, y, z: x or y or z,\n 'and': lambda x, y, z: x and y and z,\n 'orand': lambda x, y, z: x or y and z,\n 'andor': lambda x, y, z: x and y or z\n}\n\n\nclass PokemonCatchWorker(Datastore, BaseTask):\n\n def __init__(self, pokemon, bot, config):\n self.pokemon = pokemon\n super(PokemonCatchWorker, self).__init__(bot, config)\n\n def initialize(self):\n self.api = self.bot.api\n self.position = self.bot.position\n self.pokemon_list = self.bot.pokemon_list\n self.inventory = inventory.items()\n self.spawn_point_guid = ''\n self.response_key = ''\n self.response_status_key = ''\n\n #Config\n self.min_ultraball_to_keep = self.config.get('min_ultraball_to_keep', 10)\n self.berry_threshold = self.config.get('berry_threshold', 0.35)\n self.vip_berry_threshold = self.config.get('vip_berry_threshold', 0.9)\n\n self.catch_throw_parameters = self.config.get('catch_throw_parameters', {})\n self.catch_throw_parameters_spin_success_rate = self.catch_throw_parameters.get('spin_success_rate', 0.6)\n self.catch_throw_parameters_excellent_rate = self.catch_throw_parameters.get('excellent_rate', 0.1)\n self.catch_throw_parameters_great_rate = self.catch_throw_parameters.get('great_rate', 0.5)\n self.catch_throw_parameters_nice_rate = self.catch_throw_parameters.get('nice_rate', 0.3)\n self.catch_throw_parameters_normal_rate = self.catch_throw_parameters.get('normal_rate', 0.1)\n self.catch_throw_parameters_hit_rate = self.catch_throw_parameters.get('hit_rate', 0.8)\n\n self.catchsim_config = self.config.get('catch_simulation', {})\n self.catchsim_catch_wait_min = self.catchsim_config.get('catch_wait_min', 2)\n self.catchsim_catch_wait_max = self.catchsim_config.get('catch_wait_max', 6)\n self.catchsim_flee_count = int(self.catchsim_config.get('flee_count', 3))\n self.catchsim_flee_duration = self.catchsim_config.get('flee_duration', 2)\n self.catchsim_berry_wait_min = self.catchsim_config.get('berry_wait_min', 2)\n self.catchsim_berry_wait_max = self.catchsim_config.get('berry_wait_max', 3)\n self.catchsim_changeball_wait_min = self.catchsim_config.get('changeball_wait_min', 2)\n self.catchsim_changeball_wait_max = self.catchsim_config.get('changeball_wait_max', 3)\n\n\n ############################################################################\n # public methods\n ############################################################################\n\n def work(self, response_dict=None):\n response_dict = response_dict or self.create_encounter_api_call()\n\n # validate response\n if not response_dict:\n return WorkerResult.ERROR\n\n try:\n responses = response_dict['responses']\n response = responses[self.response_key]\n if response[self.response_status_key] != 
ENCOUNTER_STATUS_SUCCESS:\n if response[self.response_status_key] == ENCOUNTER_STATUS_NOT_IN_RANGE:\n self.emit_event('pokemon_not_in_range', formatted='Pokemon went out of range!')\n elif response[self.response_status_key] == ENCOUNTER_STATUS_POKEMON_INVENTORY_FULL:\n self.emit_event('pokemon_inventory_full', formatted='Your Pokemon inventory is full! Could not catch!')\n return WorkerResult.ERROR\n except KeyError:\n return WorkerResult.ERROR\n\n # get pokemon data\n pokemon_data = response['wild_pokemon']['pokemon_data'] if 'wild_pokemon' in response else response['pokemon_data']\n pokemon = Pokemon(pokemon_data)\n\n # skip ignored pokemon\n if not self._should_catch_pokemon(pokemon):\n return WorkerResult.SUCCESS\n\n is_vip = self._is_vip_pokemon(pokemon)\n if inventory.items().get(ITEM_POKEBALL).count < 1:\n if inventory.items().get(ITEM_GREATBALL).count < 1:\n if inventory.items().get(ITEM_ULTRABALL).count < 1:\n return WorkerResult.SUCCESS\n elif (not is_vip) and inventory.items().get(ITEM_ULTRABALL).count <= self.min_ultraball_to_keep:\n return WorkerResult.SUCCESS\n\n # log encounter\n self.emit_event(\n 'pokemon_appeared',\n formatted='A wild {pokemon} appeared! [CP {cp}] [NCP {ncp}] [Potential {iv}] [A/D/S {iv_display}]',\n data={\n 'pokemon': pokemon.name,\n 'ncp': round(pokemon.cp_percent, 2),\n 'cp': pokemon.cp,\n 'iv': pokemon.iv,\n 'iv_display': pokemon.iv_display,\n 'encounter_id': self.pokemon['encounter_id'],\n 'latitude': self.pokemon['latitude'],\n 'longitude': self.pokemon['longitude'],\n 'pokemon_id': pokemon.pokemon_id\n }\n )\n\n # simulate app\n time.sleep(3)\n\n # check for VIP pokemon\n if is_vip:\n self.emit_event('vip_pokemon', formatted='This is a VIP pokemon. Catch!!!')\n\n # check catch limits before catch\n with self.bot.database as conn:\n c = conn.cursor()\n c.execute(\"SELECT DISTINCT COUNT(encounter_id) FROM catch_log WHERE dated >= datetime('now','-1 day')\")\n\n result = c.fetchone()\n\n while True:\n max_catch = self.bot.config.daily_catch_limit\n if result[0] < max_catch:\n # catch that pokemon!\n encounter_id = self.pokemon['encounter_id']\n catch_rate_by_ball = [0] + response['capture_probability']['capture_probability'] # offset so item ids match indces\n self._do_catch(pokemon, encounter_id, catch_rate_by_ball, is_vip=is_vip)\n break\n else:\n self.emit_event('catch_limit', formatted='WARNING! 
You have reached your daily catch limit')\n break\n\n # simulate app\n time.sleep(5)\n\n def create_encounter_api_call(self):\n encounter_id = self.pokemon['encounter_id']\n player_latitude = self.pokemon['latitude']\n player_longitude = self.pokemon['longitude']\n\n request = self.api.create_request()\n if 'spawn_point_id' in self.pokemon:\n spawn_point_id = self.pokemon['spawn_point_id']\n self.spawn_point_guid = spawn_point_id\n self.response_key = 'ENCOUNTER'\n self.response_status_key = 'status'\n request.encounter(\n encounter_id=encounter_id,\n spawn_point_id=spawn_point_id,\n player_latitude=player_latitude,\n player_longitude=player_longitude\n )\n else:\n fort_id = self.pokemon['fort_id']\n self.spawn_point_guid = fort_id\n self.response_key = 'DISK_ENCOUNTER'\n self.response_status_key = 'result'\n request.disk_encounter(\n encounter_id=encounter_id,\n fort_id=fort_id,\n player_latitude=player_latitude,\n player_longitude=player_longitude\n )\n return request.call()\n\n ############################################################################\n # helpers\n ############################################################################\n\n def _pokemon_matches_config(self, config, pokemon, default_logic='and'):\n pokemon_config = config.get(pokemon.name, config.get('any'))\n\n if not pokemon_config:\n return False\n\n catch_results = {\n 'ncp': False,\n 'cp': False,\n 'iv': False,\n }\n\n if pokemon_config.get('never_catch', False):\n return False\n\n if pokemon_config.get('always_catch', False):\n return True\n\n catch_ncp = pokemon_config.get('catch_above_ncp', 0.8)\n if pokemon.cp_percent > catch_ncp:\n catch_results['ncp'] = True\n\n catch_cp = pokemon_config.get('catch_above_cp', 1200)\n if pokemon.cp > catch_cp:\n catch_results['cp'] = True\n\n catch_iv = pokemon_config.get('catch_above_iv', 0.8)\n if pokemon.iv > catch_iv:\n catch_results['iv'] = True\n\n return LOGIC_TO_FUNCTION[pokemon_config.get('logic', default_logic)](*catch_results.values())\n\n def _should_catch_pokemon(self, pokemon):\n return self._pokemon_matches_config(self.bot.config.catch, pokemon)\n\n def _is_vip_pokemon(self, pokemon):\n # having just a name present in the list makes them vip\n if self.bot.config.vips.get(pokemon.name) == {}:\n return True\n return self._pokemon_matches_config(self.bot.config.vips, pokemon, default_logic='or')\n\n def _pct(self, rate_by_ball):\n return '{0:.2f}'.format(rate_by_ball * 100)\n\n def _use_berry(self, berry_id, berry_count, encounter_id, catch_rate_by_ball, current_ball):\n # Delay to simulate selecting berry\n action_delay(self.catchsim_berry_wait_min, self.catchsim_berry_wait_max)\n new_catch_rate_by_ball = []\n self.emit_event(\n 'pokemon_catch_rate',\n level='debug',\n formatted='Catch rate of {catch_rate} with {ball_name} is low. 
Throwing {berry_name} (have {berry_count})',\n data={\n 'catch_rate': self._pct(catch_rate_by_ball[current_ball]),\n 'ball_name': self.inventory.get(current_ball).name,\n 'berry_name': self.inventory.get(berry_id).name,\n 'berry_count': berry_count\n }\n )\n\n response_dict = self.api.use_item_capture(\n item_id=berry_id,\n encounter_id=encounter_id,\n spawn_point_id=self.spawn_point_guid\n )\n responses = response_dict['responses']\n\n if response_dict and response_dict['status_code'] == 1:\n\n # update catch rates using multiplier\n if 'item_capture_mult' in responses['USE_ITEM_CAPTURE']:\n for rate in catch_rate_by_ball:\n new_catch_rate_by_ball.append(rate * responses['USE_ITEM_CAPTURE']['item_capture_mult'])\n self.emit_event(\n 'threw_berry',\n formatted=\"Threw a {berry_name}! Catch rate with {ball_name} is now: {new_catch_rate}\",\n data={\n 'berry_name': self.inventory.get(berry_id).name,\n 'ball_name': self.inventory.get(current_ball).name,\n 'new_catch_rate': self._pct(new_catch_rate_by_ball[current_ball])\n }\n )\n\n # softban?\n else:\n new_catch_rate_by_ball = catch_rate_by_ball\n self.bot.softban = True\n self.emit_event(\n 'softban',\n level='warning',\n formatted='Failed to use berry. You may be softbanned.'\n )\n with self.bot.database as conn:\n c = conn.cursor()\n c.execute(\"SELECT COUNT(name) FROM sqlite_master WHERE type='table' AND name='softban_log'\")\n result = c.fetchone()\n\n while True:\n if result[0] == 1:\n source = str(\"PokemonCatchWorker\")\n status = str(\"Possible Softban\")\n conn.execute('''INSERT INTO softban_log (status, source) VALUES (?, ?)''', (status, source))\n break\n else:\n self.emit_event(\n 'softban_log',\n sender=self,\n level='info',\n formatted=\"softban_log table not found, skipping log\"\n )\n\n # unknown status code\n else:\n new_catch_rate_by_ball = catch_rate_by_ball\n self.emit_event(\n 'threw_berry_failed',\n formatted='Unknown response when throwing berry: {status_code}.',\n data={\n 'status_code': response_dict['status_code']\n }\n )\n\n return new_catch_rate_by_ball\n\n def _do_catch(self, pokemon, encounter_id, catch_rate_by_ball, is_vip=False):\n # settings that may be exposed at some point\n \"\"\"\n\n :type pokemon: Pokemon\n \"\"\"\n berry_id = ITEM_RAZZBERRY\n maximum_ball = ITEM_ULTRABALL if is_vip else ITEM_GREATBALL\n ideal_catch_rate_before_throw = self.vip_berry_threshold if is_vip else self.berry_threshold\n\n berry_count = self.inventory.get(ITEM_RAZZBERRY).count\n ball_count = {}\n for ball_id in [ITEM_POKEBALL, ITEM_GREATBALL, ITEM_ULTRABALL]:\n ball_count[ball_id] = self.inventory.get(ball_id).count\n\n # use `min_ultraball_to_keep` from config if is not None\n min_ultraball_to_keep = ball_count[ITEM_ULTRABALL]\n if self.min_ultraball_to_keep is not None:\n if self.min_ultraball_to_keep >= 0 and self.min_ultraball_to_keep < min_ultraball_to_keep:\n min_ultraball_to_keep = self.min_ultraball_to_keep\n\n used_berry = False\n while True:\n\n # find lowest available ball\n current_ball = ITEM_POKEBALL\n while ball_count[current_ball] == 0 and current_ball < maximum_ball:\n current_ball += 1\n if ball_count[current_ball] == 0:\n self.emit_event('no_pokeballs', formatted='No usable pokeballs found!')\n\n # use untraball if there is no other balls with constraint to `min_ultraball_to_keep`\n if maximum_ball != ITEM_ULTRABALL and ball_count[ITEM_ULTRABALL] > min_ultraball_to_keep:\n maximum_ball = ITEM_ULTRABALL\n continue\n else:\n break\n\n # check future ball count\n num_next_balls = 0\n next_ball = current_ball\n 
while next_ball < maximum_ball:\n next_ball += 1\n num_next_balls += ball_count[next_ball]\n\n # check if we've got berries to spare\n berries_to_spare = berry_count > 0 if is_vip else berry_count > num_next_balls + 30\n\n # use a berry if we are under our ideal rate and have berries to spare\n changed_ball = False\n if catch_rate_by_ball[current_ball] < ideal_catch_rate_before_throw and berries_to_spare and not used_berry:\n new_catch_rate_by_ball = self._use_berry(berry_id, berry_count, encounter_id, catch_rate_by_ball, current_ball)\n if new_catch_rate_by_ball != catch_rate_by_ball:\n catch_rate_by_ball = new_catch_rate_by_ball\n self.inventory.get(ITEM_RAZZBERRY).remove(1)\n berry_count -= 1\n used_berry = True\n\n # pick the best ball to catch with\n best_ball = current_ball\n while best_ball < maximum_ball:\n best_ball += 1\n if catch_rate_by_ball[current_ball] < ideal_catch_rate_before_throw and ball_count[best_ball] > 0:\n # if current ball chance to catch is under our ideal rate, and player has better ball - then use it\n current_ball = best_ball\n changed_ball = True\n\n # if the rate is still low and we didn't throw a berry before, throw one\n if catch_rate_by_ball[current_ball] < ideal_catch_rate_before_throw and berry_count > 0 and not used_berry:\n new_catch_rate_by_ball = self._use_berry(berry_id, berry_count, encounter_id, catch_rate_by_ball, current_ball)\n if new_catch_rate_by_ball != catch_rate_by_ball:\n catch_rate_by_ball = new_catch_rate_by_ball\n self.inventory.get(ITEM_RAZZBERRY).remove(1)\n berry_count -= 1\n used_berry = True\n\n # If we change ball then wait to simulate user selecting it\n if changed_ball:\n action_delay(self.catchsim_changeball_wait_min, self.catchsim_changeball_wait_max)\n\n # Randomize the quality of the throw\n # Default structure\n throw_parameters = {'normalized_reticle_size': 1.950,\n 'spin_modifier': 1.0,\n 'normalized_hit_position': 1.0,\n 'throw_type_label': 'Excellent'}\n self.generate_spin_parameter(throw_parameters)\n self.generate_throw_quality_parameters(throw_parameters)\n\n # try to catch pokemon!\n ball_count[current_ball] -= 1\n self.inventory.get(current_ball).remove(1)\n # Take some time to throw the ball from config options\n action_delay(self.catchsim_catch_wait_min, self.catchsim_catch_wait_max)\n self.emit_event(\n 'threw_pokeball',\n formatted='{throw_type}{spin_label} throw! Used {ball_name}, with chance {success_percentage} ({count_left} left)',\n data={\n 'throw_type': throw_parameters['throw_type_label'],\n 'spin_label': throw_parameters['spin_label'],\n 'ball_name': self.inventory.get(current_ball).name,\n 'success_percentage': self._pct(catch_rate_by_ball[current_ball]),\n 'count_left': ball_count[current_ball]\n }\n )\n\n hit_pokemon = 1\n if random() >= self.catch_throw_parameters_hit_rate and not is_vip:\n hit_pokemon = 0\n\n response_dict = self.api.catch_pokemon(\n encounter_id=encounter_id,\n pokeball=current_ball,\n normalized_reticle_size=throw_parameters['normalized_reticle_size'],\n spawn_point_id=self.spawn_point_guid,\n hit_pokemon=hit_pokemon,\n spin_modifier=throw_parameters['spin_modifier'],\n normalized_hit_position=throw_parameters['normalized_hit_position']\n )\n\n try:\n catch_pokemon_status = response_dict['responses']['CATCH_POKEMON']['status']\n except KeyError:\n break\n\n # retry failed pokemon\n if catch_pokemon_status == CATCH_STATUS_FAILED:\n self.emit_event(\n 'pokemon_capture_failed',\n formatted='{pokemon} capture failed.. 
trying again!',\n data={'pokemon': pokemon.name}\n )\n used_berry = False\n\n # sleep according to flee_count and flee_duration config settings\n # randomly chooses a number of times to 'show' wobble animation between 1 and flee_count\n # multiplies this by flee_duration to get total sleep\n if self.catchsim_flee_count:\n sleep((randrange(self.catchsim_flee_count)+1) * self.catchsim_flee_duration)\n\n continue\n\n # abandon if pokemon vanished\n elif catch_pokemon_status == CATCH_STATUS_VANISHED:\n self.emit_event(\n 'pokemon_vanished',\n formatted='{pokemon} vanished!',\n data={\n 'pokemon': pokemon.name,\n 'encounter_id': self.pokemon['encounter_id'],\n 'latitude': self.pokemon['latitude'],\n 'longitude': self.pokemon['longitude'],\n 'pokemon_id': pokemon.pokemon_id\n }\n )\n if self._pct(catch_rate_by_ball[current_ball]) == 100:\n self.bot.softban = True\n\n # pokemon caught!\n elif catch_pokemon_status == CATCH_STATUS_SUCCESS:\n pokemon.unique_id = response_dict['responses']['CATCH_POKEMON']['captured_pokemon_id']\n self.bot.metrics.captured_pokemon(pokemon.name, pokemon.cp, pokemon.iv_display, pokemon.iv)\n\n try:\n inventory.pokemons().add(pokemon)\n self.emit_event(\n 'pokemon_caught',\n formatted='Captured {pokemon}! [CP {cp}] [NCP {ncp}] [Potential {iv}] [{iv_display}] [+{exp} exp]',\n data={\n 'pokemon': pokemon.name,\n 'ncp': round(pokemon.cp_percent, 2),\n 'cp': pokemon.cp,\n 'iv': pokemon.iv,\n 'iv_display': pokemon.iv_display,\n 'exp': sum(response_dict['responses']['CATCH_POKEMON']['capture_award']['xp']),\n 'encounter_id': self.pokemon['encounter_id'],\n 'latitude': self.pokemon['latitude'],\n 'longitude': self.pokemon['longitude'],\n 'pokemon_id': pokemon.pokemon_id\n }\n\n )\n with self.bot.database as conn:\n c = conn.cursor()\n c.execute(\"SELECT COUNT(name) FROM sqlite_master WHERE type='table' AND name='catch_log'\")\n result = c.fetchone()\n\n while True:\n if result[0] == 1:\n conn.execute('''INSERT INTO catch_log (pokemon, cp, iv, encounter_id, pokemon_id) VALUES (?, ?, ?, ?, ?)''', (pokemon.name, pokemon.cp, pokemon.iv, str(encounter_id), pokemon.pokemon_id))\n break\n else:\n self.emit_event(\n 'catch_log',\n sender=self,\n level='info',\n formatted=\"catch_log table not found, skipping log\"\n )\n break\n user_data_caught = os.path.join(_base_dir, 'data', 'caught-%s.json' % self.bot.config.username)\n with open(user_data_caught, 'ab') as outfile:\n outfile.write(str(datetime.now()))\n json.dump({\n 'pokemon': pokemon.name,\n 'cp': pokemon.cp,\n 'iv': pokemon.iv,\n 'encounter_id': self.pokemon['encounter_id'],\n 'pokemon_id': pokemon.pokemon_id\n }, outfile)\n outfile.write('\\n')\n\n except IOError as e:\n self.logger.info('[x] Error while opening location file: %s' % e)\n\n candy = inventory.candies().get(pokemon.pokemon_id)\n candy.add(self.get_candy_gained_count(response_dict))\n\n self.emit_event(\n 'gained_candy',\n formatted='You now have {quantity} {type} candy!',\n data = {\n 'quantity': candy.quantity,\n 'type': candy.type,\n },\n )\n\n self.bot.softban = False\n\n elif catch_pokemon_status == CATCH_STATUS_MISSED:\n self.emit_event(\n 'pokemon_capture_failed',\n formatted='Pokeball thrown to {pokemon} missed.. 
trying again!',\n data={'pokemon': pokemon.name}\n )\n # Take some time to throw the ball from config options\n action_delay(self.catchsim_catch_wait_min, self.catchsim_catch_wait_max)\n continue\n\n break\n\n def get_candy_gained_count(self, response_dict):\n total_candy_gained = 0\n for candy_gained in response_dict['responses']['CATCH_POKEMON']['capture_award']['candy']:\n total_candy_gained += candy_gained\n return total_candy_gained\n\n def generate_spin_parameter(self, throw_parameters):\n spin_success_rate = self.catch_throw_parameters_spin_success_rate\n if random() <= spin_success_rate:\n throw_parameters['spin_modifier'] = 0.5 + 0.5 * random()\n throw_parameters['spin_label'] = ' Curveball'\n else:\n throw_parameters['spin_modifier'] = 0.499 * random()\n throw_parameters['spin_label'] = ''\n\n def generate_throw_quality_parameters(self, throw_parameters):\n throw_excellent_chance = self.catch_throw_parameters_excellent_rate\n throw_great_chance = self.catch_throw_parameters_great_rate\n throw_nice_chance = self.catch_throw_parameters_nice_rate\n throw_normal_throw_chance = self.catch_throw_parameters_normal_rate\n\n # Total every chance types, pick a random number in the range and check what type of throw we got\n total_chances = throw_excellent_chance + throw_great_chance \\\n + throw_nice_chance + throw_normal_throw_chance\n\n random_throw = random() * total_chances\n\n if random_throw <= throw_excellent_chance:\n throw_parameters['normalized_reticle_size'] = 1.70 + 0.25 * random()\n throw_parameters['normalized_hit_position'] = 1.0\n throw_parameters['throw_type_label'] = 'Excellent'\n return\n\n random_throw -= throw_excellent_chance\n if random_throw <= throw_great_chance:\n throw_parameters['normalized_reticle_size'] = 1.30 + 0.399 * random()\n throw_parameters['normalized_hit_position'] = 1.0\n throw_parameters['throw_type_label'] = 'Great'\n return\n\n random_throw -= throw_great_chance\n if random_throw <= throw_nice_chance:\n throw_parameters['normalized_reticle_size'] = 1.00 + 0.299 * random()\n throw_parameters['normalized_hit_position'] = 1.0\n throw_parameters['throw_type_label'] = 'Nice'\n return\n\n # Not a any kind of special throw, let's throw a normal one\n # Here the reticle size doesn't matter, we scored out of it\n throw_parameters['normalized_reticle_size'] = 1.25 + 0.70 * random()\n throw_parameters['normalized_hit_position'] = 0.0\n throw_parameters['throw_type_label'] = 'OK'\n", "path": "pokemongo_bot/cell_workers/pokemon_catch_worker.py" } ]
[ { "content": "# -*- coding: utf-8 -*-\n\nimport os\nimport time\nimport json\nimport logging\nimport time\nfrom random import random, randrange\nfrom pokemongo_bot import inventory\nfrom pokemongo_bot.base_task import BaseTask\nfrom pokemongo_bot.human_behaviour import sleep, action_delay\nfrom pokemongo_bot.inventory import Pokemon\nfrom pokemongo_bot.worker_result import WorkerResult\nfrom pokemongo_bot.datastore import Datastore\nfrom pokemongo_bot.base_dir import _base_dir\nfrom datetime import datetime, timedelta\n\nCATCH_STATUS_SUCCESS = 1\nCATCH_STATUS_FAILED = 2\nCATCH_STATUS_VANISHED = 3\nCATCH_STATUS_MISSED = 4\n\nENCOUNTER_STATUS_SUCCESS = 1\nENCOUNTER_STATUS_NOT_IN_RANGE = 5\nENCOUNTER_STATUS_POKEMON_INVENTORY_FULL = 7\n\nITEM_POKEBALL = 1\nITEM_GREATBALL = 2\nITEM_ULTRABALL = 3\nITEM_RAZZBERRY = 701\n\nLOGIC_TO_FUNCTION = {\n 'or': lambda x, y, z: x or y or z,\n 'and': lambda x, y, z: x and y and z,\n 'orand': lambda x, y, z: x or y and z,\n 'andor': lambda x, y, z: x and y or z\n}\n\n\nclass PokemonCatchWorker(Datastore, BaseTask):\n\n def __init__(self, pokemon, bot, config):\n self.pokemon = pokemon\n super(PokemonCatchWorker, self).__init__(bot, config)\n\n def initialize(self):\n self.api = self.bot.api\n self.position = self.bot.position\n self.pokemon_list = self.bot.pokemon_list\n self.inventory = inventory.items()\n self.spawn_point_guid = ''\n self.response_key = ''\n self.response_status_key = ''\n\n #Config\n self.min_ultraball_to_keep = self.config.get('min_ultraball_to_keep', 10)\n self.berry_threshold = self.config.get('berry_threshold', 0.35)\n self.vip_berry_threshold = self.config.get('vip_berry_threshold', 0.9)\n\n self.catch_throw_parameters = self.config.get('catch_throw_parameters', {})\n self.catch_throw_parameters_spin_success_rate = self.catch_throw_parameters.get('spin_success_rate', 0.6)\n self.catch_throw_parameters_excellent_rate = self.catch_throw_parameters.get('excellent_rate', 0.1)\n self.catch_throw_parameters_great_rate = self.catch_throw_parameters.get('great_rate', 0.5)\n self.catch_throw_parameters_nice_rate = self.catch_throw_parameters.get('nice_rate', 0.3)\n self.catch_throw_parameters_normal_rate = self.catch_throw_parameters.get('normal_rate', 0.1)\n self.catch_throw_parameters_hit_rate = self.catch_throw_parameters.get('hit_rate', 0.8)\n\n self.catchsim_config = self.config.get('catch_simulation', {})\n self.catchsim_catch_wait_min = self.catchsim_config.get('catch_wait_min', 2)\n self.catchsim_catch_wait_max = self.catchsim_config.get('catch_wait_max', 6)\n self.catchsim_flee_count = int(self.catchsim_config.get('flee_count', 3))\n self.catchsim_flee_duration = self.catchsim_config.get('flee_duration', 2)\n self.catchsim_berry_wait_min = self.catchsim_config.get('berry_wait_min', 2)\n self.catchsim_berry_wait_max = self.catchsim_config.get('berry_wait_max', 3)\n self.catchsim_changeball_wait_min = self.catchsim_config.get('changeball_wait_min', 2)\n self.catchsim_changeball_wait_max = self.catchsim_config.get('changeball_wait_max', 3)\n\n\n ############################################################################\n # public methods\n ############################################################################\n\n def work(self, response_dict=None):\n response_dict = response_dict or self.create_encounter_api_call()\n\n # validate response\n if not response_dict:\n return WorkerResult.ERROR\n\n try:\n responses = response_dict['responses']\n response = responses[self.response_key]\n if response[self.response_status_key] != 
ENCOUNTER_STATUS_SUCCESS:\n if response[self.response_status_key] == ENCOUNTER_STATUS_NOT_IN_RANGE:\n self.emit_event('pokemon_not_in_range', formatted='Pokemon went out of range!')\n elif response[self.response_status_key] == ENCOUNTER_STATUS_POKEMON_INVENTORY_FULL:\n self.emit_event('pokemon_inventory_full', formatted='Your Pokemon inventory is full! Could not catch!')\n return WorkerResult.ERROR\n except KeyError:\n return WorkerResult.ERROR\n\n # get pokemon data\n pokemon_data = response['wild_pokemon']['pokemon_data'] if 'wild_pokemon' in response else response['pokemon_data']\n pokemon = Pokemon(pokemon_data)\n\n # skip ignored pokemon\n if not self._should_catch_pokemon(pokemon):\n return WorkerResult.SUCCESS\n\n is_vip = self._is_vip_pokemon(pokemon)\n if inventory.items().get(ITEM_POKEBALL).count < 1:\n if inventory.items().get(ITEM_GREATBALL).count < 1:\n if inventory.items().get(ITEM_ULTRABALL).count < 1:\n return WorkerResult.SUCCESS\n elif (not is_vip) and inventory.items().get(ITEM_ULTRABALL).count <= self.min_ultraball_to_keep:\n return WorkerResult.SUCCESS\n\n # log encounter\n self.emit_event(\n 'pokemon_appeared',\n formatted='A wild {pokemon} appeared! [CP {cp}] [NCP {ncp}] [Potential {iv}] [A/D/S {iv_display}]',\n data={\n 'pokemon': pokemon.name,\n 'ncp': round(pokemon.cp_percent, 2),\n 'cp': pokemon.cp,\n 'iv': pokemon.iv,\n 'iv_display': pokemon.iv_display,\n 'encounter_id': self.pokemon['encounter_id'],\n 'latitude': self.pokemon['latitude'],\n 'longitude': self.pokemon['longitude'],\n 'pokemon_id': pokemon.pokemon_id\n }\n )\n\n # simulate app\n time.sleep(3)\n\n # check for VIP pokemon\n if is_vip:\n self.emit_event('vip_pokemon', formatted='This is a VIP pokemon. Catch!!!')\n\n # check catch limits before catch\n with self.bot.database as conn:\n c = conn.cursor()\n c.execute(\"SELECT DISTINCT COUNT(encounter_id) FROM catch_log WHERE dated >= datetime('now','-1 day')\")\n\n result = c.fetchone()\n\n while True:\n max_catch = self.bot.config.daily_catch_limit\n if result[0] < max_catch:\n # catch that pokemon!\n encounter_id = self.pokemon['encounter_id']\n catch_rate_by_ball = [0] + response['capture_probability']['capture_probability'] # offset so item ids match indces\n self._do_catch(pokemon, encounter_id, catch_rate_by_ball, is_vip=is_vip)\n break\n else:\n self.emit_event('catch_limit', formatted='WARNING! 
You have reached your daily catch limit')\n break\n\n # simulate app\n time.sleep(5)\n\n def create_encounter_api_call(self):\n encounter_id = self.pokemon['encounter_id']\n player_latitude = self.pokemon['latitude']\n player_longitude = self.pokemon['longitude']\n\n request = self.api.create_request()\n if 'spawn_point_id' in self.pokemon:\n spawn_point_id = self.pokemon['spawn_point_id']\n self.spawn_point_guid = spawn_point_id\n self.response_key = 'ENCOUNTER'\n self.response_status_key = 'status'\n request.encounter(\n encounter_id=encounter_id,\n spawn_point_id=spawn_point_id,\n player_latitude=player_latitude,\n player_longitude=player_longitude\n )\n else:\n fort_id = self.pokemon['fort_id']\n self.spawn_point_guid = fort_id\n self.response_key = 'DISK_ENCOUNTER'\n self.response_status_key = 'result'\n request.disk_encounter(\n encounter_id=encounter_id,\n fort_id=fort_id,\n player_latitude=player_latitude,\n player_longitude=player_longitude\n )\n return request.call()\n\n ############################################################################\n # helpers\n ############################################################################\n\n def _pokemon_matches_config(self, config, pokemon, default_logic='and'):\n pokemon_config = config.get(pokemon.name, config.get('any'))\n\n if not pokemon_config:\n return False\n\n catch_results = {\n 'ncp': False,\n 'cp': False,\n 'iv': False,\n }\n\n if pokemon_config.get('never_catch', False):\n return False\n\n if pokemon_config.get('always_catch', False):\n return True\n\n catch_ncp = pokemon_config.get('catch_above_ncp', 0.8)\n if pokemon.cp_percent > catch_ncp:\n catch_results['ncp'] = True\n\n catch_cp = pokemon_config.get('catch_above_cp', 1200)\n if pokemon.cp > catch_cp:\n catch_results['cp'] = True\n\n catch_iv = pokemon_config.get('catch_above_iv', 0.8)\n if pokemon.iv > catch_iv:\n catch_results['iv'] = True\n\n return LOGIC_TO_FUNCTION[pokemon_config.get('logic', default_logic)](*catch_results.values())\n\n def _should_catch_pokemon(self, pokemon):\n return self._pokemon_matches_config(self.bot.config.catch, pokemon)\n\n def _is_vip_pokemon(self, pokemon):\n # having just a name present in the list makes them vip\n if self.bot.config.vips.get(pokemon.name) == {}:\n return True\n return self._pokemon_matches_config(self.bot.config.vips, pokemon, default_logic='or')\n\n def _pct(self, rate_by_ball):\n return '{0:.2f}'.format(rate_by_ball * 100)\n\n def _use_berry(self, berry_id, berry_count, encounter_id, catch_rate_by_ball, current_ball):\n # Delay to simulate selecting berry\n action_delay(self.catchsim_berry_wait_min, self.catchsim_berry_wait_max)\n new_catch_rate_by_ball = []\n self.emit_event(\n 'pokemon_catch_rate',\n level='debug',\n formatted='Catch rate of {catch_rate} with {ball_name} is low. 
Throwing {berry_name} (have {berry_count})',\n data={\n 'catch_rate': self._pct(catch_rate_by_ball[current_ball]),\n 'ball_name': self.inventory.get(current_ball).name,\n 'berry_name': self.inventory.get(berry_id).name,\n 'berry_count': berry_count\n }\n )\n\n response_dict = self.api.use_item_capture(\n item_id=berry_id,\n encounter_id=encounter_id,\n spawn_point_id=self.spawn_point_guid\n )\n responses = response_dict['responses']\n\n if response_dict and response_dict['status_code'] == 1:\n\n # update catch rates using multiplier\n if 'item_capture_mult' in responses['USE_ITEM_CAPTURE']:\n for rate in catch_rate_by_ball:\n new_catch_rate_by_ball.append(rate * responses['USE_ITEM_CAPTURE']['item_capture_mult'])\n self.emit_event(\n 'threw_berry',\n formatted=\"Threw a {berry_name}! Catch rate with {ball_name} is now: {new_catch_rate}\",\n data={\n 'berry_name': self.inventory.get(berry_id).name,\n 'ball_name': self.inventory.get(current_ball).name,\n 'new_catch_rate': self._pct(new_catch_rate_by_ball[current_ball])\n }\n )\n\n # softban?\n else:\n new_catch_rate_by_ball = catch_rate_by_ball\n self.bot.softban = True\n self.emit_event(\n 'softban',\n level='warning',\n formatted='Failed to use berry. You may be softbanned.'\n )\n with self.bot.database as conn:\n c = conn.cursor()\n c.execute(\"SELECT COUNT(name) FROM sqlite_master WHERE type='table' AND name='softban_log'\")\n result = c.fetchone()\n\n while True:\n if result[0] == 1:\n source = str(\"PokemonCatchWorker\")\n status = str(\"Possible Softban\")\n conn.execute('''INSERT INTO softban_log (status, source) VALUES (?, ?)''', (status, source))\n break\n else:\n self.emit_event(\n 'softban_log',\n sender=self,\n level='info',\n formatted=\"softban_log table not found, skipping log\"\n )\n\n # unknown status code\n else:\n new_catch_rate_by_ball = catch_rate_by_ball\n self.emit_event(\n 'threw_berry_failed',\n formatted='Unknown response when throwing berry: {status_code}.',\n data={\n 'status_code': response_dict['status_code']\n }\n )\n\n return new_catch_rate_by_ball\n\n def _do_catch(self, pokemon, encounter_id, catch_rate_by_ball, is_vip=False):\n # settings that may be exposed at some point\n \"\"\"\n\n :type pokemon: Pokemon\n \"\"\"\n berry_id = ITEM_RAZZBERRY\n maximum_ball = ITEM_ULTRABALL if is_vip else ITEM_GREATBALL\n ideal_catch_rate_before_throw = self.vip_berry_threshold if is_vip else self.berry_threshold\n\n berry_count = self.inventory.get(ITEM_RAZZBERRY).count\n ball_count = {}\n for ball_id in [ITEM_POKEBALL, ITEM_GREATBALL, ITEM_ULTRABALL]:\n ball_count[ball_id] = self.inventory.get(ball_id).count\n\n # use `min_ultraball_to_keep` from config if is not None\n min_ultraball_to_keep = ball_count[ITEM_ULTRABALL]\n if self.min_ultraball_to_keep is not None:\n if self.min_ultraball_to_keep >= 0 and self.min_ultraball_to_keep < min_ultraball_to_keep:\n min_ultraball_to_keep = self.min_ultraball_to_keep\n\n used_berry = False\n while True:\n\n # find lowest available ball\n current_ball = ITEM_POKEBALL\n while ball_count[current_ball] == 0 and current_ball < maximum_ball:\n current_ball += 1\n if ball_count[current_ball] == 0:\n self.emit_event('no_pokeballs', formatted='No usable pokeballs found!')\n\n # use untraball if there is no other balls with constraint to `min_ultraball_to_keep`\n if maximum_ball != ITEM_ULTRABALL and ball_count[ITEM_ULTRABALL] > min_ultraball_to_keep:\n maximum_ball = ITEM_ULTRABALL\n continue\n else:\n return WorkerResult.ERROR\n\n # check future ball count\n num_next_balls = 0\n 
next_ball = current_ball\n while next_ball < maximum_ball:\n next_ball += 1\n num_next_balls += ball_count[next_ball]\n\n # check if we've got berries to spare\n berries_to_spare = berry_count > 0 if is_vip else berry_count > num_next_balls + 30\n\n # use a berry if we are under our ideal rate and have berries to spare\n changed_ball = False\n if catch_rate_by_ball[current_ball] < ideal_catch_rate_before_throw and berries_to_spare and not used_berry:\n new_catch_rate_by_ball = self._use_berry(berry_id, berry_count, encounter_id, catch_rate_by_ball, current_ball)\n if new_catch_rate_by_ball != catch_rate_by_ball:\n catch_rate_by_ball = new_catch_rate_by_ball\n self.inventory.get(ITEM_RAZZBERRY).remove(1)\n berry_count -= 1\n used_berry = True\n\n # pick the best ball to catch with\n best_ball = current_ball\n while best_ball < maximum_ball:\n best_ball += 1\n if catch_rate_by_ball[current_ball] < ideal_catch_rate_before_throw and ball_count[best_ball] > 0:\n # if current ball chance to catch is under our ideal rate, and player has better ball - then use it\n current_ball = best_ball\n changed_ball = True\n\n # if the rate is still low and we didn't throw a berry before, throw one\n if catch_rate_by_ball[current_ball] < ideal_catch_rate_before_throw and berry_count > 0 and not used_berry:\n new_catch_rate_by_ball = self._use_berry(berry_id, berry_count, encounter_id, catch_rate_by_ball, current_ball)\n if new_catch_rate_by_ball != catch_rate_by_ball:\n catch_rate_by_ball = new_catch_rate_by_ball\n self.inventory.get(ITEM_RAZZBERRY).remove(1)\n berry_count -= 1\n used_berry = True\n\n # If we change ball then wait to simulate user selecting it\n if changed_ball:\n action_delay(self.catchsim_changeball_wait_min, self.catchsim_changeball_wait_max)\n\n # Randomize the quality of the throw\n # Default structure\n throw_parameters = {'normalized_reticle_size': 1.950,\n 'spin_modifier': 1.0,\n 'normalized_hit_position': 1.0,\n 'throw_type_label': 'Excellent'}\n self.generate_spin_parameter(throw_parameters)\n self.generate_throw_quality_parameters(throw_parameters)\n\n # try to catch pokemon!\n ball_count[current_ball] -= 1\n self.inventory.get(current_ball).remove(1)\n # Take some time to throw the ball from config options\n action_delay(self.catchsim_catch_wait_min, self.catchsim_catch_wait_max)\n self.emit_event(\n 'threw_pokeball',\n formatted='{throw_type}{spin_label} throw! Used {ball_name}, with chance {success_percentage} ({count_left} left)',\n data={\n 'throw_type': throw_parameters['throw_type_label'],\n 'spin_label': throw_parameters['spin_label'],\n 'ball_name': self.inventory.get(current_ball).name,\n 'success_percentage': self._pct(catch_rate_by_ball[current_ball]),\n 'count_left': ball_count[current_ball]\n }\n )\n\n hit_pokemon = 1\n if random() >= self.catch_throw_parameters_hit_rate and not is_vip:\n hit_pokemon = 0\n\n response_dict = self.api.catch_pokemon(\n encounter_id=encounter_id,\n pokeball=current_ball,\n normalized_reticle_size=throw_parameters['normalized_reticle_size'],\n spawn_point_id=self.spawn_point_guid,\n hit_pokemon=hit_pokemon,\n spin_modifier=throw_parameters['spin_modifier'],\n normalized_hit_position=throw_parameters['normalized_hit_position']\n )\n\n try:\n catch_pokemon_status = response_dict['responses']['CATCH_POKEMON']['status']\n except KeyError:\n break\n\n # retry failed pokemon\n if catch_pokemon_status == CATCH_STATUS_FAILED:\n self.emit_event(\n 'pokemon_capture_failed',\n formatted='{pokemon} capture failed.. 
trying again!',\n data={'pokemon': pokemon.name}\n )\n used_berry = False\n\n # sleep according to flee_count and flee_duration config settings\n # randomly chooses a number of times to 'show' wobble animation between 1 and flee_count\n # multiplies this by flee_duration to get total sleep\n if self.catchsim_flee_count:\n sleep((randrange(self.catchsim_flee_count)+1) * self.catchsim_flee_duration)\n\n continue\n\n # abandon if pokemon vanished\n elif catch_pokemon_status == CATCH_STATUS_VANISHED:\n self.emit_event(\n 'pokemon_vanished',\n formatted='{pokemon} vanished!',\n data={\n 'pokemon': pokemon.name,\n 'encounter_id': self.pokemon['encounter_id'],\n 'latitude': self.pokemon['latitude'],\n 'longitude': self.pokemon['longitude'],\n 'pokemon_id': pokemon.pokemon_id\n }\n )\n if self._pct(catch_rate_by_ball[current_ball]) == 100:\n self.bot.softban = True\n\n # pokemon caught!\n elif catch_pokemon_status == CATCH_STATUS_SUCCESS:\n pokemon.unique_id = response_dict['responses']['CATCH_POKEMON']['captured_pokemon_id']\n self.bot.metrics.captured_pokemon(pokemon.name, pokemon.cp, pokemon.iv_display, pokemon.iv)\n\n try:\n inventory.pokemons().add(pokemon)\n self.emit_event(\n 'pokemon_caught',\n formatted='Captured {pokemon}! [CP {cp}] [NCP {ncp}] [Potential {iv}] [{iv_display}] [+{exp} exp]',\n data={\n 'pokemon': pokemon.name,\n 'ncp': round(pokemon.cp_percent, 2),\n 'cp': pokemon.cp,\n 'iv': pokemon.iv,\n 'iv_display': pokemon.iv_display,\n 'exp': sum(response_dict['responses']['CATCH_POKEMON']['capture_award']['xp']),\n 'encounter_id': self.pokemon['encounter_id'],\n 'latitude': self.pokemon['latitude'],\n 'longitude': self.pokemon['longitude'],\n 'pokemon_id': pokemon.pokemon_id\n }\n\n )\n with self.bot.database as conn:\n c = conn.cursor()\n c.execute(\"SELECT COUNT(name) FROM sqlite_master WHERE type='table' AND name='catch_log'\")\n result = c.fetchone()\n\n while True:\n if result[0] == 1:\n conn.execute('''INSERT INTO catch_log (pokemon, cp, iv, encounter_id, pokemon_id) VALUES (?, ?, ?, ?, ?)''', (pokemon.name, pokemon.cp, pokemon.iv, str(encounter_id), pokemon.pokemon_id))\n break\n else:\n self.emit_event(\n 'catch_log',\n sender=self,\n level='info',\n formatted=\"catch_log table not found, skipping log\"\n )\n break\n user_data_caught = os.path.join(_base_dir, 'data', 'caught-%s.json' % self.bot.config.username)\n with open(user_data_caught, 'ab') as outfile:\n outfile.write(str(datetime.now()))\n json.dump({\n 'pokemon': pokemon.name,\n 'cp': pokemon.cp,\n 'iv': pokemon.iv,\n 'encounter_id': self.pokemon['encounter_id'],\n 'pokemon_id': pokemon.pokemon_id\n }, outfile)\n outfile.write('\\n')\n\n except IOError as e:\n self.logger.info('[x] Error while opening location file: %s' % e)\n\n candy = inventory.candies().get(pokemon.pokemon_id)\n candy.add(self.get_candy_gained_count(response_dict))\n\n self.emit_event(\n 'gained_candy',\n formatted='You now have {quantity} {type} candy!',\n data = {\n 'quantity': candy.quantity,\n 'type': candy.type,\n },\n )\n\n self.bot.softban = False\n\n elif catch_pokemon_status == CATCH_STATUS_MISSED:\n self.emit_event(\n 'pokemon_capture_failed',\n formatted='Pokeball thrown to {pokemon} missed.. 
trying again!',\n data={'pokemon': pokemon.name}\n )\n # Take some time to throw the ball from config options\n action_delay(self.catchsim_catch_wait_min, self.catchsim_catch_wait_max)\n continue\n\n break\n\n def get_candy_gained_count(self, response_dict):\n total_candy_gained = 0\n for candy_gained in response_dict['responses']['CATCH_POKEMON']['capture_award']['candy']:\n total_candy_gained += candy_gained\n return total_candy_gained\n\n def generate_spin_parameter(self, throw_parameters):\n spin_success_rate = self.catch_throw_parameters_spin_success_rate\n if random() <= spin_success_rate:\n throw_parameters['spin_modifier'] = 0.5 + 0.5 * random()\n throw_parameters['spin_label'] = ' Curveball'\n else:\n throw_parameters['spin_modifier'] = 0.499 * random()\n throw_parameters['spin_label'] = ''\n\n def generate_throw_quality_parameters(self, throw_parameters):\n throw_excellent_chance = self.catch_throw_parameters_excellent_rate\n throw_great_chance = self.catch_throw_parameters_great_rate\n throw_nice_chance = self.catch_throw_parameters_nice_rate\n throw_normal_throw_chance = self.catch_throw_parameters_normal_rate\n\n # Total every chance types, pick a random number in the range and check what type of throw we got\n total_chances = throw_excellent_chance + throw_great_chance \\\n + throw_nice_chance + throw_normal_throw_chance\n\n random_throw = random() * total_chances\n\n if random_throw <= throw_excellent_chance:\n throw_parameters['normalized_reticle_size'] = 1.70 + 0.25 * random()\n throw_parameters['normalized_hit_position'] = 1.0\n throw_parameters['throw_type_label'] = 'Excellent'\n return\n\n random_throw -= throw_excellent_chance\n if random_throw <= throw_great_chance:\n throw_parameters['normalized_reticle_size'] = 1.30 + 0.399 * random()\n throw_parameters['normalized_hit_position'] = 1.0\n throw_parameters['throw_type_label'] = 'Great'\n return\n\n random_throw -= throw_great_chance\n if random_throw <= throw_nice_chance:\n throw_parameters['normalized_reticle_size'] = 1.00 + 0.299 * random()\n throw_parameters['normalized_hit_position'] = 1.0\n throw_parameters['throw_type_label'] = 'Nice'\n return\n\n # Not a any kind of special throw, let's throw a normal one\n # Here the reticle size doesn't matter, we scored out of it\n throw_parameters['normalized_reticle_size'] = 1.25 + 0.70 * random()\n throw_parameters['normalized_hit_position'] = 0.0\n throw_parameters['throw_type_label'] = 'OK'\n", "path": "pokemongo_bot/cell_workers/pokemon_catch_worker.py" } ]
diff --git a/pokemongo_bot/cell_workers/pokemon_catch_worker.py b/pokemongo_bot/cell_workers/pokemon_catch_worker.py index aa060b46be..a494783382 100644 --- a/pokemongo_bot/cell_workers/pokemon_catch_worker.py +++ b/pokemongo_bot/cell_workers/pokemon_catch_worker.py @@ -355,7 +355,7 @@ def _do_catch(self, pokemon, encounter_id, catch_rate_by_ball, is_vip=False): maximum_ball = ITEM_ULTRABALL continue else: - break + return WorkerResult.ERROR # check future ball count num_next_balls = 0
learningequality__kolibri-4689
Shows "Sorry! Something went wrong."
### Observed behavior
When a coach opens the Recent tab to see exercise and video progress, an error page is shown instead.

### Expected behavior
The progress report should be shown instead of an error.

### Steps to reproduce
1. Log in as a coach.
2. Go to the Recent tab.
3. Open an exercise or video report and observe the error.

### Context
* Kolibri version: 0.11.0
* Operating system: Ubuntu 14.04
* Browser: Chrome

### Screenshot
![1](https://user-images.githubusercontent.com/12776071/50138341-4d958180-02c4-11e9-92b7-01a9fb28acc2.png)
![2](https://user-images.githubusercontent.com/12776071/50138342-4d958180-02c4-11e9-9426-fe0709d16751.png)
![3](https://user-images.githubusercontent.com/12776071/50138343-4e2e1800-02c4-11e9-9ac4-e520796024ed.png)
[ { "content": "import datetime\n\nfrom dateutil.parser import parse\nfrom django.db import connection\nfrom django.db.models import Min\nfrom django.db.models import Q\nfrom django.utils import timezone\nfrom rest_framework import mixins\nfrom rest_framework import pagination\nfrom rest_framework import permissions\nfrom rest_framework import viewsets\n\nfrom .serializers import ContentReportSerializer\nfrom .serializers import ContentSummarySerializer\nfrom .serializers import LessonReportSerializer\nfrom .serializers import UserReportSerializer\nfrom .utils.return_users import get_members_or_user\nfrom kolibri.core.auth.constants import collection_kinds\nfrom kolibri.core.auth.constants import role_kinds\nfrom kolibri.core.auth.models import Collection\nfrom kolibri.core.auth.models import FacilityUser\nfrom kolibri.core.content.models import ContentNode\nfrom kolibri.core.decorators import query_params_required\nfrom kolibri.core.lessons.models import Lesson\nfrom kolibri.core.logger.models import ContentSummaryLog\nfrom kolibri.core.logger.models import MasteryLog\n\n\ncollection_kind_choices = tuple([choice[0] for choice in collection_kinds.choices] + ['user'])\n\n\nclass OptionalPageNumberPagination(pagination.PageNumberPagination):\n \"\"\"\n Pagination class that allows for page number-style pagination, when requested.\n To activate, the `page_size` argument must be set. For example, to request the first 20 records:\n `?page_size=20&page=1`\n \"\"\"\n page_size = None\n page_size_query_param = \"page_size\"\n\n\nclass KolibriReportPermissions(permissions.BasePermission):\n\n # check if requesting user has permission for collection or user\n def has_permission(self, request, view):\n if isinstance(view, LessonReportViewset):\n report_pk = view.kwargs.get('pk', None)\n if report_pk is None:\n # If requesting list view, check if requester has coach/admin permissions on whole facility\n collection_kind = 'facility'\n collection_or_user_pk = request.user.facility_id\n else:\n # If requesting detail view, only check if requester has permissions on the Classroom\n collection_kind = 'classroom'\n collection_or_user_pk = Lesson.objects.get(pk=report_pk).collection.id\n\n else:\n collection_kind = view.kwargs.get('collection_kind', 'user')\n collection_or_user_pk = view.kwargs.get('collection_id', view.kwargs.get('pk'))\n\n allowed_roles = [role_kinds.ADMIN, role_kinds.COACH]\n try:\n if 'user' == collection_kind:\n return request.user.has_role_for(allowed_roles, FacilityUser.objects.get(pk=collection_or_user_pk))\n else:\n return request.user.has_role_for(allowed_roles, Collection.objects.get(pk=collection_or_user_pk))\n except (FacilityUser.DoesNotExist, Collection.DoesNotExist):\n return False\n\n\n@query_params_required(channel_id=str, content_node_id=str, collection_kind=collection_kind_choices, collection_id=str)\nclass ReportBaseViewSet(mixins.ListModelMixin, viewsets.GenericViewSet):\n\n permission_classes = (KolibriReportPermissions,)\n\n\nclass UserReportViewSet(ReportBaseViewSet):\n\n pagination_class = OptionalPageNumberPagination\n serializer_class = UserReportSerializer\n\n def get_queryset(self):\n assert 'user' != self.kwargs['collection_kind'], 'only a `collection` should be passed to this endpoint'\n return get_members_or_user(self.kwargs['collection_kind'], self.kwargs['collection_id'])\n\n\nclass ContentReportViewSet(ReportBaseViewSet):\n\n pagination_class = OptionalPageNumberPagination\n serializer_class = ContentReportSerializer\n\n def get_queryset(self):\n 
content_node_id = self.kwargs['content_node_id']\n return ContentNode.objects.filter(Q(parent=content_node_id) & Q(available=True)).order_by('lft')\n\n\n@query_params_required(channel_id=str, collection_kind=collection_kind_choices, collection_id=str)\nclass ContentSummaryViewSet(viewsets.ReadOnlyModelViewSet):\n\n permission_classes = (KolibriReportPermissions,)\n serializer_class = ContentSummarySerializer\n\n def get_queryset(self):\n channel_id = self.kwargs['channel_id']\n return ContentNode.objects.filter(Q(channel_id=channel_id) & Q(available=True)).order_by('lft')\n\n\nclass RecentReportViewSet(ReportBaseViewSet):\n\n pagination_class = OptionalPageNumberPagination\n serializer_class = ContentReportSerializer\n\n def get_queryset(self):\n channel_id = self.kwargs['channel_id']\n attempted_mastery_logs = MasteryLog.objects.filter(attemptlogs__isnull=False)\n query_node = ContentNode.objects.get(pk=self.kwargs['content_node_id'])\n if self.request.query_params.get('last_active_time'):\n # Last active time specified\n datetime_cutoff = parse(self.request.query_params.get('last_active_time'))\n else:\n datetime_cutoff = timezone.now() - datetime.timedelta(7)\n # Set on the kwargs to pass into the serializer\n self.kwargs['last_active_time'] = datetime_cutoff.isoformat()\n recent_content_items = ContentSummaryLog.objects.filter_by_topic(query_node).filter(\n Q(progress__gt=0) | Q(masterylogs__in=attempted_mastery_logs),\n user__in=list(get_members_or_user(self.kwargs['collection_kind'], self.kwargs['collection_id'])),\n end_timestamp__gte=datetime_cutoff).values_list('content_id', flat=True)\n if connection.vendor == 'postgresql':\n pks_with_unique_content_ids = ContentNode.objects.order_by('content_id').distinct('content_id').filter(\n channel_id=channel_id, content_id__in=recent_content_items).values_list('pk', flat=True)\n else:\n # note from rtibbles:\n # As good as either I or jamalex could come up with to ensure that we only return\n # unique content_id'ed ContentNodes from the coach recent report endpoint.\n # Would have loved to use distinct('content_id'), but unfortunately DISTINCT ON is Postgresql only\n pks_with_unique_content_ids = ContentNode.objects.filter(\n channel_id=channel_id, content_id__in=recent_content_items).values('content_id').order_by('lft').annotate(\n pk=Min('pk')).values_list('pk', flat=True)\n return ContentNode.objects.filter(pk__in=pks_with_unique_content_ids).order_by('lft')\n\n\nclass LessonReportViewset(viewsets.ReadOnlyModelViewSet):\n permission_classes = (permissions.IsAuthenticated, KolibriReportPermissions,)\n serializer_class = LessonReportSerializer\n queryset = Lesson.objects.all()\n", "path": "kolibri/plugins/coach/api.py" } ]
[ { "content": "import datetime\n\nfrom dateutil.parser import parse\nfrom django.db import connection\nfrom django.db.models import Min\nfrom django.db.models import Q\nfrom django.utils import timezone\nfrom rest_framework import mixins\nfrom rest_framework import pagination\nfrom rest_framework import permissions\nfrom rest_framework import viewsets\n\nfrom .serializers import ContentReportSerializer\nfrom .serializers import ContentSummarySerializer\nfrom .serializers import LessonReportSerializer\nfrom .serializers import UserReportSerializer\nfrom .utils.return_users import get_members_or_user\nfrom kolibri.core.auth.constants import collection_kinds\nfrom kolibri.core.auth.constants import role_kinds\nfrom kolibri.core.auth.models import Collection\nfrom kolibri.core.auth.models import FacilityUser\nfrom kolibri.core.content.models import ContentNode\nfrom kolibri.core.decorators import query_params_required\nfrom kolibri.core.lessons.models import Lesson\nfrom kolibri.core.logger.models import ContentSummaryLog\nfrom kolibri.core.logger.models import MasteryLog\n\n\ncollection_kind_choices = tuple([choice[0] for choice in collection_kinds.choices] + ['user'])\n\n\nclass OptionalPageNumberPagination(pagination.PageNumberPagination):\n \"\"\"\n Pagination class that allows for page number-style pagination, when requested.\n To activate, the `page_size` argument must be set. For example, to request the first 20 records:\n `?page_size=20&page=1`\n \"\"\"\n page_size = None\n page_size_query_param = \"page_size\"\n\n\nclass KolibriReportPermissions(permissions.BasePermission):\n\n # check if requesting user has permission for collection or user\n def has_permission(self, request, view):\n if isinstance(view, LessonReportViewset):\n report_pk = view.kwargs.get('pk', None)\n if report_pk is None:\n # If requesting list view, check if requester has coach/admin permissions on whole facility\n collection_kind = 'facility'\n collection_or_user_pk = request.user.facility_id\n else:\n # If requesting detail view, only check if requester has permissions on the Classroom\n collection_kind = 'classroom'\n collection_or_user_pk = Lesson.objects.get(pk=report_pk).collection.id\n\n else:\n collection_kind = view.kwargs.get('collection_kind', 'user')\n collection_or_user_pk = view.kwargs.get('collection_id', view.kwargs.get('pk'))\n\n allowed_roles = [role_kinds.ADMIN, role_kinds.COACH]\n try:\n if 'user' == collection_kind:\n return request.user.has_role_for(allowed_roles, FacilityUser.objects.get(pk=collection_or_user_pk))\n else:\n return request.user.has_role_for(allowed_roles, Collection.objects.get(pk=collection_or_user_pk))\n except (FacilityUser.DoesNotExist, Collection.DoesNotExist):\n return False\n\n\n@query_params_required(channel_id=str, content_node_id=str, collection_kind=collection_kind_choices, collection_id=str)\nclass ReportBaseViewSet(mixins.ListModelMixin, viewsets.GenericViewSet):\n\n permission_classes = (KolibriReportPermissions,)\n\n\nclass UserReportViewSet(ReportBaseViewSet):\n\n pagination_class = OptionalPageNumberPagination\n serializer_class = UserReportSerializer\n\n def get_queryset(self):\n assert 'user' != self.kwargs['collection_kind'], 'only a `collection` should be passed to this endpoint'\n return get_members_or_user(self.kwargs['collection_kind'], self.kwargs['collection_id'])\n\n\nclass ContentReportViewSet(ReportBaseViewSet):\n\n pagination_class = OptionalPageNumberPagination\n serializer_class = ContentReportSerializer\n\n def get_queryset(self):\n 
content_node_id = self.kwargs['content_node_id']\n return ContentNode.objects.filter(Q(parent=content_node_id) & Q(available=True)).order_by('lft')\n\n\n@query_params_required(channel_id=str, collection_kind=collection_kind_choices, collection_id=str)\nclass ContentSummaryViewSet(viewsets.ReadOnlyModelViewSet):\n\n permission_classes = (KolibriReportPermissions,)\n serializer_class = ContentSummarySerializer\n\n def get_queryset(self):\n channel_id = self.kwargs['channel_id']\n return ContentNode.objects.filter(Q(channel_id=channel_id)).order_by('lft')\n\n\nclass RecentReportViewSet(ReportBaseViewSet):\n\n pagination_class = OptionalPageNumberPagination\n serializer_class = ContentReportSerializer\n\n def get_queryset(self):\n channel_id = self.kwargs['channel_id']\n attempted_mastery_logs = MasteryLog.objects.filter(attemptlogs__isnull=False)\n query_node = ContentNode.objects.get(pk=self.kwargs['content_node_id'])\n if self.request.query_params.get('last_active_time'):\n # Last active time specified\n datetime_cutoff = parse(self.request.query_params.get('last_active_time'))\n else:\n datetime_cutoff = timezone.now() - datetime.timedelta(7)\n # Set on the kwargs to pass into the serializer\n self.kwargs['last_active_time'] = datetime_cutoff.isoformat()\n recent_content_items = ContentSummaryLog.objects.filter_by_topic(query_node).filter(\n Q(progress__gt=0) | Q(masterylogs__in=attempted_mastery_logs),\n user__in=list(get_members_or_user(self.kwargs['collection_kind'], self.kwargs['collection_id'])),\n end_timestamp__gte=datetime_cutoff).values_list('content_id', flat=True)\n if connection.vendor == 'postgresql':\n pks_with_unique_content_ids = ContentNode.objects.order_by('content_id').distinct('content_id').filter(\n channel_id=channel_id, content_id__in=recent_content_items).values_list('pk', flat=True)\n else:\n # note from rtibbles:\n # As good as either I or jamalex could come up with to ensure that we only return\n # unique content_id'ed ContentNodes from the coach recent report endpoint.\n # Would have loved to use distinct('content_id'), but unfortunately DISTINCT ON is Postgresql only\n pks_with_unique_content_ids = ContentNode.objects.filter(\n channel_id=channel_id, content_id__in=recent_content_items).values('content_id').order_by('lft').annotate(\n pk=Min('pk')).values_list('pk', flat=True)\n return ContentNode.objects.filter(pk__in=pks_with_unique_content_ids).order_by('lft')\n\n\nclass LessonReportViewset(viewsets.ReadOnlyModelViewSet):\n permission_classes = (permissions.IsAuthenticated, KolibriReportPermissions,)\n serializer_class = LessonReportSerializer\n queryset = Lesson.objects.all()\n", "path": "kolibri/plugins/coach/api.py" } ]
diff --git a/kolibri/plugins/coach/api.py b/kolibri/plugins/coach/api.py index 8310fa13339..3fa7ae1568e 100644 --- a/kolibri/plugins/coach/api.py +++ b/kolibri/plugins/coach/api.py @@ -102,7 +102,7 @@ class ContentSummaryViewSet(viewsets.ReadOnlyModelViewSet): def get_queryset(self): channel_id = self.kwargs['channel_id'] - return ContentNode.objects.filter(Q(channel_id=channel_id) & Q(available=True)).order_by('lft') + return ContentNode.objects.filter(Q(channel_id=channel_id)).order_by('lft') class RecentReportViewSet(ReportBaseViewSet):
ansible__ansible-modules-core-4285
cron state is changed when using multiline job
##### ISSUE TYPE
- Bug Report

##### COMPONENT NAME
cron

##### ANSIBLE VERSION
2.0.2.0

##### CONFIGURATION

##### OS / ENVIRONMENT
Darwin craneworks 15.5.0 Darwin Kernel Version 15.5.0: Tue Apr 19 18:36:36 PDT 2016; root:xnu-3248.50.21~8/RELEASE_X86_64 x86_64

##### SUMMARY
When the cron module is given a multiline job it cannot determine the correct state of the task: it reports "changed" on every run.

##### STEPS TO REPRODUCE

```
- name: returns a changed every time
  cron:
    name: renewal cron
    job: >
      bash -l -c "mkdir -p /tmp/certbot-auto && /opt/certbot/certbot-auto certonly -d www.mydomain --agree-tos --renew-by-default -a webroot --webroot-path=\"/tmp/certbot-auto\" && service nginx reload"
    special_time: "monthly"

- name: works as expected
  cron:
    name: debug cron
    job: bash -l -c "mkdir -p /tmp/certbot-auto && /opt/certbot/certbot-auto certonly -d www.mydomain --agree-tos --renew-by-default -a webroot --webroot-path=\"/tmp/certbot-auto\" && service nginx reload"
    special_time: "monthly"
```

##### EXPECTED RESULTS
A "changed" the first time it is run, and then just "ok" on subsequent runs.

##### ACTUAL RESULTS
"changed" is returned every time.
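The only difference between the two tasks is the folded block scalar (`>`) on the first `job`. With YAML's default "clip" chomping, a folded scalar keeps a single trailing newline, so the first job string ends in `\n` while the second does not. A plausible reading of the symptom is that this trailing newline makes the job the module renders never match the line already present in the crontab, so the equality check reports "changed" on every run. Below is a minimal sketch of the scalar difference only; it assumes PyYAML and uses a shortened `echo` command purely for illustration, not the actual certbot job.

```python
import yaml  # PyYAML, used here only to illustrate standard YAML scalar semantics

# Folded block scalar (`>`): interior newlines fold into spaces and, with the
# default "clip" chomping, a single trailing newline is kept.
folded = yaml.safe_load('job: >\n  echo hello\n')
plain = yaml.safe_load('job: echo hello\n')

print(repr(folded['job']))  # 'echo hello\n'  <- trailing newline retained
print(repr(plain['job']))   # 'echo hello'

# A job value carrying a trailing newline never compares equal to the plain
# string, so an idempotence check based on exact string equality would see a
# "different" job on every run.
assert folded['job'] != plain['job']
```

If that is indeed the cause, stripping trailing whitespace from the job before comparing and writing it would make the two tasks behave identically.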
[ { "content": "#!/usr/bin/python\n# -*- coding: utf-8 -*-\n#\n# (c) 2012, Dane Summers <[email protected]>\n# (c) 2013, Mike Grozak <[email protected]>\n# (c) 2013, Patrick Callahan <[email protected]>\n# (c) 2015, Evan Kaufman <[email protected]>\n# (c) 2015, Luca Berruti <[email protected]>\n#\n# This file is part of Ansible\n#\n# Ansible is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# Ansible is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with Ansible. If not, see <http://www.gnu.org/licenses/>.\n#\n# Cron Plugin: The goal of this plugin is to provide an indempotent method for\n# setting up cron jobs on a host. The script will play well with other manually\n# entered crons. Each cron job entered will be preceded with a comment\n# describing the job so that it can be found later, which is required to be\n# present in order for this plugin to find/modify the job.\n#\n# This module is based on python-crontab by Martin Owens.\n#\n\nDOCUMENTATION = \"\"\"\n---\nmodule: cron\nshort_description: Manage cron.d and crontab entries.\ndescription:\n - Use this module to manage crontab and environment variables entries. This module allows\n you to create environment variables and named crontab entries, update, or delete them.\n - 'When crontab jobs are managed: the module includes one line with the description of the\n crontab entry C(\"#Ansible: <name>\") corresponding to the \"name\" passed to the module,\n which is used by future ansible/module calls to find/check the state. The \"name\"\n parameter should be unique, and changing the \"name\" value will result in a new cron\n task being created (or a different one being removed).'\n - 'When environment variables are managed: no comment line is added, but, when the module\n needs to find/check the state, it uses the \"name\" parameter to find the environment\n variable definition line.'\n - 'When using symbols such as %, they must be properly escaped.'\nversion_added: \"0.9\"\noptions:\n name:\n description:\n - Description of a crontab entry or, if env is set, the name of environment variable.\n Required if state=absent. Note that if name is not set and state=present, then a\n new crontab entry will always be created, regardless of existing ones.\n default: null\n required: false\n user:\n description:\n - The specific user whose crontab should be modified.\n required: false\n default: root\n job:\n description:\n - The command to execute or, if env is set, the value of environment variable.\n Required if state=present.\n required: false\n aliases: ['value']\n default: null\n state:\n description:\n - Whether to ensure the job or environment variable is present or absent.\n required: false\n default: present\n choices: [ \"present\", \"absent\" ]\n cron_file:\n description:\n - If specified, uses this file instead of an individual user's crontab.\n If this is a relative path, it is interpreted with respect to\n /etc/cron.d. 
(If it is absolute, it will typically be /etc/crontab).\n To use the C(cron_file) parameter you must specify the C(user) as well.\n required: false\n default: null\n backup:\n description:\n - If set, create a backup of the crontab before it is modified.\n The location of the backup is returned in the C(backup_file) variable by this module.\n required: false\n choices: [ \"yes\", \"no\" ]\n default: no\n minute:\n description:\n - Minute when the job should run ( 0-59, *, */2, etc )\n required: false\n default: \"*\"\n hour:\n description:\n - Hour when the job should run ( 0-23, *, */2, etc )\n required: false\n default: \"*\"\n day:\n description:\n - Day of the month the job should run ( 1-31, *, */2, etc )\n required: false\n default: \"*\"\n aliases: [ \"dom\" ]\n month:\n description:\n - Month of the year the job should run ( 1-12, *, */2, etc )\n required: false\n default: \"*\"\n weekday:\n description:\n - Day of the week that the job should run ( 0-6 for Sunday-Saturday, *, etc )\n required: false\n default: \"*\"\n aliases: [ \"dow\" ]\n reboot:\n description:\n - If the job should be run at reboot. This option is deprecated. Users should use special_time.\n version_added: \"1.0\"\n required: false\n default: \"no\"\n choices: [ \"yes\", \"no\" ]\n special_time:\n description:\n - Special time specification nickname.\n version_added: \"1.3\"\n required: false\n default: null\n choices: [ \"reboot\", \"yearly\", \"annually\", \"monthly\", \"weekly\", \"daily\", \"hourly\" ]\n disabled:\n description:\n - If the job should be disabled (commented out) in the crontab. Only has effect if state=present\n version_added: \"2.0\"\n required: false\n default: false\n env:\n description:\n - If set, manages a crontab's environment variable. New variables are added on top of crontab.\n \"name\" and \"value\" paramenters are the name and the value of environment variable.\n version_added: \"2.1\"\n required: false\n default: \"no\"\n choices: [ \"yes\", \"no\" ]\n insertafter:\n description:\n - Used with C(state=present) and C(env). If specified, the environment variable will be\n inserted after the declaration of specified environment variable.\n version_added: \"2.1\"\n required: false\n default: null\n insertbefore:\n description:\n - Used with C(state=present) and C(env). If specified, the environment variable will be\n inserted before the declaration of specified environment variable.\n version_added: \"2.1\"\n required: false\n default: null\nrequirements:\n - cron\nauthor:\n - \"Dane Summers (@dsummersl)\"\n - 'Mike Grozak'\n - 'Patrick Callahan'\n - 'Evan Kaufman (@EvanK)'\n - 'Luca Berruti (@lberruti)'\n\"\"\"\n\nEXAMPLES = '''\n# Ensure a job that runs at 2 and 5 exists.\n# Creates an entry like \"0 5,2 * * ls -alh > /dev/null\"\n- cron: name=\"check dirs\" minute=\"0\" hour=\"5,2\" job=\"ls -alh > /dev/null\"\n\n# Ensure an old job is no longer present. 
Removes any job that is prefixed\n# by \"#Ansible: an old job\" from the crontab\n- cron: name=\"an old job\" state=absent\n\n# Creates an entry like \"@reboot /some/job.sh\"\n- cron: name=\"a job for reboot\" special_time=reboot job=\"/some/job.sh\"\n\n# Creates an entry like \"PATH=/opt/bin\" on top of crontab\n- cron: name=PATH env=yes value=/opt/bin\n\n# Creates an entry like \"APP_HOME=/srv/app\" and insert it after PATH\n# declaration\n- cron: name=APP_HOME env=yes value=/srv/app insertafter=PATH\n\n# Creates a cron file under /etc/cron.d\n- cron: name=\"yum autoupdate\" weekday=\"2\" minute=0 hour=12\n user=\"root\" job=\"YUMINTERACTIVE=0 /usr/sbin/yum-autoupdate\"\n cron_file=ansible_yum-autoupdate\n\n# Removes a cron file from under /etc/cron.d\n- cron: name=\"yum autoupdate\" cron_file=ansible_yum-autoupdate state=absent\n\n# Removes \"APP_HOME\" environment variable from crontab\n- cron: name=APP_HOME env=yes state=absent\n'''\n\nimport os\nimport re\nimport tempfile\nimport platform\nimport pipes\n\nCRONCMD = \"/usr/bin/crontab\"\n\nclass CronTabError(Exception):\n pass\n\nclass CronTab(object):\n \"\"\"\n CronTab object to write time based crontab file\n\n user - the user of the crontab (defaults to root)\n cron_file - a cron file under /etc/cron.d, or an absolute path\n \"\"\"\n def __init__(self, module, user=None, cron_file=None):\n self.module = module\n self.user = user\n self.root = (os.getuid() == 0)\n self.lines = None\n self.ansible = \"#Ansible: \"\n\n if cron_file:\n if os.path.isabs(cron_file):\n self.cron_file = cron_file\n else:\n self.cron_file = os.path.join('/etc/cron.d', cron_file)\n else:\n self.cron_file = None\n\n self.read()\n\n def read(self):\n # Read in the crontab from the system\n self.lines = []\n if self.cron_file:\n # read the cronfile\n try:\n f = open(self.cron_file, 'r')\n self.lines = f.read().splitlines()\n f.close()\n except IOError:\n # cron file does not exist\n return\n except:\n raise CronTabError(\"Unexpected error:\", sys.exc_info()[0])\n else:\n # using safely quoted shell for now, but this really should be two non-shell calls instead. FIXME\n (rc, out, err) = self.module.run_command(self._read_user_execute(), use_unsafe_shell=True)\n\n if rc != 0 and rc != 1: # 1 can mean that there are no jobs.\n raise CronTabError(\"Unable to read crontab\")\n\n lines = out.splitlines()\n count = 0\n for l in lines:\n if count > 2 or (not re.match( r'# DO NOT EDIT THIS FILE - edit the master and reinstall.', l) and\n not re.match( r'# \\(/tmp/.*installed on.*\\)', l) and\n not re.match( r'# \\(.*version.*\\)', l)):\n self.lines.append(l)\n count += 1\n\n def is_empty(self):\n if len(self.lines) == 0:\n return True\n else:\n return False\n\n def write(self, backup_file=None):\n \"\"\"\n Write the crontab to the system. Saves all information.\n \"\"\"\n if backup_file:\n fileh = open(backup_file, 'w')\n elif self.cron_file:\n fileh = open(self.cron_file, 'w')\n else:\n filed, path = tempfile.mkstemp(prefix='crontab')\n os.chmod(path, int('0644', 8))\n fileh = os.fdopen(filed, 'w')\n\n fileh.write(self.render())\n fileh.close()\n\n # return if making a backup\n if backup_file:\n return\n\n # Add the entire crontab back to the user crontab\n if not self.cron_file:\n # quoting shell args for now but really this should be two non-shell calls. 
FIXME\n (rc, out, err) = self.module.run_command(self._write_execute(path), use_unsafe_shell=True)\n os.unlink(path)\n\n if rc != 0:\n self.module.fail_json(msg=err)\n\n def add_job(self, name, job):\n # Add the comment\n self.lines.append(\"%s%s\" % (self.ansible, name))\n\n # Add the job\n self.lines.append(\"%s\" % (job))\n\n def update_job(self, name, job):\n return self._update_job(name, job, self.do_add_job)\n\n def do_add_job(self, lines, comment, job):\n lines.append(comment)\n\n lines.append(\"%s\" % (job))\n\n def remove_job(self, name):\n return self._update_job(name, \"\", self.do_remove_job)\n\n def do_remove_job(self, lines, comment, job):\n return None\n\n def add_env(self, decl, insertafter=None, insertbefore=None):\n if not (insertafter or insertbefore):\n self.lines.insert(0, decl)\n return\n\n if insertafter:\n other_name = insertafter\n elif insertbefore:\n other_name = insertbefore\n other_decl = self.find_env(other_name)\n if len(other_decl) > 0:\n if insertafter:\n index = other_decl[0]+1\n elif insertbefore:\n index = other_decl[0]\n self.lines.insert(index, decl)\n return\n\n self.module.fail_json(msg=\"Variable named '%s' not found.\" % other_name)\n\n def update_env(self, name, decl):\n return self._update_env(name, decl, self.do_add_env)\n\n def do_add_env(self, lines, decl):\n lines.append(decl)\n\n def remove_env(self, name):\n return self._update_env(name, '', self.do_remove_env)\n\n def do_remove_env(self, lines, decl):\n return None\n\n def remove_job_file(self):\n try:\n os.unlink(self.cron_file)\n return True\n except OSError:\n # cron file does not exist\n return False\n except:\n raise CronTabError(\"Unexpected error:\", sys.exc_info()[0])\n\n def find_job(self, name):\n comment = None\n for l in self.lines:\n if comment is not None:\n if comment == name:\n return [comment, l]\n else:\n comment = None\n elif re.match( r'%s' % self.ansible, l):\n comment = re.sub( r'%s' % self.ansible, '', l)\n\n return []\n\n def find_env(self, name):\n for index, l in enumerate(self.lines):\n if re.match( r'^%s=' % name, l):\n return [index, l]\n\n return []\n\n def get_cron_job(self,minute,hour,day,month,weekday,job,special,disabled):\n if disabled:\n disable_prefix = '#'\n else:\n disable_prefix = ''\n\n if special:\n if self.cron_file:\n return \"%s@%s %s %s\" % (disable_prefix, special, self.user, job)\n else:\n return \"%s@%s %s\" % (disable_prefix, special, job)\n else:\n if self.cron_file:\n return \"%s%s %s %s %s %s %s %s\" % (disable_prefix,minute,hour,day,month,weekday,self.user,job)\n else:\n return \"%s%s %s %s %s %s %s\" % (disable_prefix,minute,hour,day,month,weekday,job)\n\n return None\n\n def get_jobnames(self):\n jobnames = []\n\n for l in self.lines:\n if re.match( r'%s' % self.ansible, l):\n jobnames.append(re.sub( r'%s' % self.ansible, '', l))\n\n return jobnames\n\n def get_envnames(self):\n envnames = []\n\n for l in self.lines:\n if re.match( r'^\\S+=' , l):\n envnames.append(l.split('=')[0])\n\n return envnames\n\n def _update_job(self, name, job, addlinesfunction):\n ansiblename = \"%s%s\" % (self.ansible, name)\n newlines = []\n comment = None\n\n for l in self.lines:\n if comment is not None:\n addlinesfunction(newlines, comment, job)\n comment = None\n elif l == ansiblename:\n comment = l\n else:\n newlines.append(l)\n\n self.lines = newlines\n\n if len(newlines) == 0:\n return True\n else:\n return False # TODO add some more error testing\n\n def _update_env(self, name, decl, addenvfunction):\n newlines = []\n\n for l in self.lines:\n if 
re.match( r'^%s=' % name, l):\n addenvfunction(newlines, decl)\n else:\n newlines.append(l)\n\n self.lines = newlines\n\n def render(self):\n \"\"\"\n Render this crontab as it would be in the crontab.\n \"\"\"\n crons = []\n for cron in self.lines:\n crons.append(cron)\n\n result = '\\n'.join(crons)\n if result and result[-1] not in ['\\n', '\\r']:\n result += '\\n'\n return result\n\n def _read_user_execute(self):\n \"\"\"\n Returns the command line for reading a crontab\n \"\"\"\n user = ''\n if self.user:\n if platform.system() == 'SunOS':\n return \"su %s -c '%s -l'\" % (pipes.quote(self.user), pipes.quote(CRONCMD))\n elif platform.system() == 'AIX':\n return \"%s -l %s\" % (pipes.quote(CRONCMD), pipes.quote(self.user))\n elif platform.system() == 'HP-UX':\n return \"%s %s %s\" % (CRONCMD , '-l', pipes.quote(self.user))\n else:\n user = '-u %s' % pipes.quote(self.user)\n return \"%s %s %s\" % (CRONCMD , user, '-l')\n\n def _write_execute(self, path):\n \"\"\"\n Return the command line for writing a crontab\n \"\"\"\n user = ''\n if self.user:\n if platform.system() in ['SunOS', 'HP-UX', 'AIX']:\n return \"chown %s %s ; su '%s' -c '%s %s'\" % (pipes.quote(self.user), pipes.quote(path), pipes.quote(self.user), CRONCMD, pipes.quote(path))\n else:\n user = '-u %s' % pipes.quote(self.user)\n return \"%s %s %s\" % (CRONCMD , user, pipes.quote(path))\n\n\n\n#==================================================\n\ndef main():\n # The following example playbooks:\n #\n # - cron: name=\"check dirs\" hour=\"5,2\" job=\"ls -alh > /dev/null\"\n #\n # - name: do the job\n # cron: name=\"do the job\" hour=\"5,2\" job=\"/some/dir/job.sh\"\n #\n # - name: no job\n # cron: name=\"an old job\" state=absent\n #\n # - name: sets env\n # cron: name=\"PATH\" env=yes value=\"/bin:/usr/bin\"\n #\n # Would produce:\n # PATH=/bin:/usr/bin\n # # Ansible: check dirs\n # * * 5,2 * * ls -alh > /dev/null\n # # Ansible: do the job\n # * * 5,2 * * /some/dir/job.sh\n\n module = AnsibleModule(\n argument_spec = dict(\n name=dict(required=False),\n user=dict(required=False),\n job=dict(required=False, aliases=['value']),\n cron_file=dict(required=False),\n state=dict(default='present', choices=['present', 'absent']),\n backup=dict(default=False, type='bool'),\n minute=dict(default='*'),\n hour=dict(default='*'),\n day=dict(aliases=['dom'], default='*'),\n month=dict(default='*'),\n weekday=dict(aliases=['dow'], default='*'),\n reboot=dict(required=False, default=False, type='bool'),\n special_time=dict(required=False,\n default=None,\n choices=[\"reboot\", \"yearly\", \"annually\", \"monthly\", \"weekly\", \"daily\", \"hourly\"],\n type='str'),\n disabled=dict(default=False, type='bool'),\n env=dict(required=False, type='bool'),\n insertafter=dict(required=False),\n insertbefore=dict(required=False),\n ),\n supports_check_mode = True,\n mutually_exclusive=[\n ['reboot', 'special_time'],\n ['insertafter', 'insertbefore'],\n ]\n )\n\n name = module.params['name']\n user = module.params['user']\n job = module.params['job']\n cron_file = module.params['cron_file']\n state = module.params['state']\n backup = module.params['backup']\n minute = module.params['minute']\n hour = module.params['hour']\n day = module.params['day']\n month = module.params['month']\n weekday = module.params['weekday']\n reboot = module.params['reboot']\n special_time = module.params['special_time']\n disabled = module.params['disabled']\n env = module.params['env']\n insertafter = module.params['insertafter']\n insertbefore = 
module.params['insertbefore']\n do_install = state == 'present'\n\n changed = False\n res_args = dict()\n\n # Ensure all files generated are only writable by the owning user. Primarily relevant for the cron_file option.\n os.umask(int('022', 8))\n crontab = CronTab(module, user, cron_file)\n\n module.debug('cron instantiated - name: \"%s\"' % name)\n\n if module._diff:\n diff = dict()\n diff['before'] = crontab.render()\n if crontab.cron_file:\n diff['before_header'] = crontab.cron_file\n else:\n if crontab.user:\n diff['before_header'] = 'crontab for user \"%s\"' % crontab.user\n else:\n diff['before_header'] = 'crontab'\n\n # --- user input validation ---\n\n if (special_time or reboot) and \\\n (True in [(x != '*') for x in [minute, hour, day, month, weekday]]):\n module.fail_json(msg=\"You must specify time and date fields or special time.\")\n\n if cron_file and do_install:\n if not user:\n module.fail_json(msg=\"To use cron_file=... parameter you must specify user=... as well\")\n\n if job is None and do_install:\n module.fail_json(msg=\"You must specify 'job' to install a new cron job or variable\")\n\n if (insertafter or insertbefore) and not env and do_install:\n module.fail_json(msg=\"Insertafter and insertbefore parameters are valid only with env=yes\")\n\n if reboot:\n special_time = \"reboot\"\n\n # if requested make a backup before making a change\n if backup and not module.check_mode:\n (backuph, backup_file) = tempfile.mkstemp(prefix='crontab')\n crontab.write(backup_file)\n\n\n if crontab.cron_file and not name and not do_install:\n if module._diff:\n diff['after'] = ''\n diff['after_header'] = '/dev/null'\n else:\n diff = dict()\n if module.check_mode:\n changed = os.path.isfile(crontab.cron_file)\n else:\n changed = crontab.remove_job_file()\n module.exit_json(changed=changed,cron_file=cron_file,state=state,diff=diff)\n\n if env:\n if ' ' in name:\n module.fail_json(msg=\"Invalid name for environment variable\")\n decl = '%s=\"%s\"' % (name, job)\n old_decl = crontab.find_env(name)\n\n if do_install:\n if len(old_decl) == 0:\n crontab.add_env(decl, insertafter, insertbefore)\n changed = True\n if len(old_decl) > 0 and old_decl[1] != decl:\n crontab.update_env(name, decl)\n changed = True\n else:\n if len(old_decl) > 0:\n crontab.remove_env(name)\n changed = True\n else:\n job = crontab.get_cron_job(minute, hour, day, month, weekday, job, special_time, disabled)\n old_job = crontab.find_job(name)\n\n if do_install:\n if len(old_job) == 0:\n crontab.add_job(name, job)\n changed = True\n if len(old_job) > 0 and old_job[1] != job:\n crontab.update_job(name, job)\n changed = True\n else:\n if len(old_job) > 0:\n crontab.remove_job(name)\n changed = True\n\n res_args = dict(\n jobs = crontab.get_jobnames(),\n envs = crontab.get_envnames(),\n changed = changed\n )\n\n if changed:\n if not module.check_mode:\n crontab.write()\n if module._diff:\n diff['after'] = crontab.render()\n if crontab.cron_file:\n diff['after_header'] = crontab.cron_file\n else:\n if crontab.user:\n diff['after_header'] = 'crontab for user \"%s\"' % crontab.user\n else:\n diff['after_header'] = 'crontab'\n\n res_args['diff'] = diff\n\n # retain the backup only if crontab or cron file have changed\n if backup:\n if changed:\n res_args['backup_file'] = backup_file\n else:\n if not module.check_mode:\n os.unlink(backup_file)\n\n if cron_file:\n res_args['cron_file'] = cron_file\n\n module.exit_json(**res_args)\n\n # --- should never get here\n module.exit_json(msg=\"Unable to execute cron task.\")\n\n# 
import module snippets\nfrom ansible.module_utils.basic import *\n\nmain()\n\n", "path": "system/cron.py" } ]
[ { "content": "#!/usr/bin/python\n# -*- coding: utf-8 -*-\n#\n# (c) 2012, Dane Summers <[email protected]>\n# (c) 2013, Mike Grozak <[email protected]>\n# (c) 2013, Patrick Callahan <[email protected]>\n# (c) 2015, Evan Kaufman <[email protected]>\n# (c) 2015, Luca Berruti <[email protected]>\n#\n# This file is part of Ansible\n#\n# Ansible is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# Ansible is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with Ansible. If not, see <http://www.gnu.org/licenses/>.\n#\n# Cron Plugin: The goal of this plugin is to provide an indempotent method for\n# setting up cron jobs on a host. The script will play well with other manually\n# entered crons. Each cron job entered will be preceded with a comment\n# describing the job so that it can be found later, which is required to be\n# present in order for this plugin to find/modify the job.\n#\n# This module is based on python-crontab by Martin Owens.\n#\n\nDOCUMENTATION = \"\"\"\n---\nmodule: cron\nshort_description: Manage cron.d and crontab entries.\ndescription:\n - Use this module to manage crontab and environment variables entries. This module allows\n you to create environment variables and named crontab entries, update, or delete them.\n - 'When crontab jobs are managed: the module includes one line with the description of the\n crontab entry C(\"#Ansible: <name>\") corresponding to the \"name\" passed to the module,\n which is used by future ansible/module calls to find/check the state. The \"name\"\n parameter should be unique, and changing the \"name\" value will result in a new cron\n task being created (or a different one being removed).'\n - 'When environment variables are managed: no comment line is added, but, when the module\n needs to find/check the state, it uses the \"name\" parameter to find the environment\n variable definition line.'\n - 'When using symbols such as %, they must be properly escaped.'\nversion_added: \"0.9\"\noptions:\n name:\n description:\n - Description of a crontab entry or, if env is set, the name of environment variable.\n Required if state=absent. Note that if name is not set and state=present, then a\n new crontab entry will always be created, regardless of existing ones.\n default: null\n required: false\n user:\n description:\n - The specific user whose crontab should be modified.\n required: false\n default: root\n job:\n description:\n - The command to execute or, if env is set, the value of environment variable.\n Required if state=present.\n required: false\n aliases: ['value']\n default: null\n state:\n description:\n - Whether to ensure the job or environment variable is present or absent.\n required: false\n default: present\n choices: [ \"present\", \"absent\" ]\n cron_file:\n description:\n - If specified, uses this file instead of an individual user's crontab.\n If this is a relative path, it is interpreted with respect to\n /etc/cron.d. 
(If it is absolute, it will typically be /etc/crontab).\n To use the C(cron_file) parameter you must specify the C(user) as well.\n required: false\n default: null\n backup:\n description:\n - If set, create a backup of the crontab before it is modified.\n The location of the backup is returned in the C(backup_file) variable by this module.\n required: false\n choices: [ \"yes\", \"no\" ]\n default: no\n minute:\n description:\n - Minute when the job should run ( 0-59, *, */2, etc )\n required: false\n default: \"*\"\n hour:\n description:\n - Hour when the job should run ( 0-23, *, */2, etc )\n required: false\n default: \"*\"\n day:\n description:\n - Day of the month the job should run ( 1-31, *, */2, etc )\n required: false\n default: \"*\"\n aliases: [ \"dom\" ]\n month:\n description:\n - Month of the year the job should run ( 1-12, *, */2, etc )\n required: false\n default: \"*\"\n weekday:\n description:\n - Day of the week that the job should run ( 0-6 for Sunday-Saturday, *, etc )\n required: false\n default: \"*\"\n aliases: [ \"dow\" ]\n reboot:\n description:\n - If the job should be run at reboot. This option is deprecated. Users should use special_time.\n version_added: \"1.0\"\n required: false\n default: \"no\"\n choices: [ \"yes\", \"no\" ]\n special_time:\n description:\n - Special time specification nickname.\n version_added: \"1.3\"\n required: false\n default: null\n choices: [ \"reboot\", \"yearly\", \"annually\", \"monthly\", \"weekly\", \"daily\", \"hourly\" ]\n disabled:\n description:\n - If the job should be disabled (commented out) in the crontab. Only has effect if state=present\n version_added: \"2.0\"\n required: false\n default: false\n env:\n description:\n - If set, manages a crontab's environment variable. New variables are added on top of crontab.\n \"name\" and \"value\" paramenters are the name and the value of environment variable.\n version_added: \"2.1\"\n required: false\n default: \"no\"\n choices: [ \"yes\", \"no\" ]\n insertafter:\n description:\n - Used with C(state=present) and C(env). If specified, the environment variable will be\n inserted after the declaration of specified environment variable.\n version_added: \"2.1\"\n required: false\n default: null\n insertbefore:\n description:\n - Used with C(state=present) and C(env). If specified, the environment variable will be\n inserted before the declaration of specified environment variable.\n version_added: \"2.1\"\n required: false\n default: null\nrequirements:\n - cron\nauthor:\n - \"Dane Summers (@dsummersl)\"\n - 'Mike Grozak'\n - 'Patrick Callahan'\n - 'Evan Kaufman (@EvanK)'\n - 'Luca Berruti (@lberruti)'\n\"\"\"\n\nEXAMPLES = '''\n# Ensure a job that runs at 2 and 5 exists.\n# Creates an entry like \"0 5,2 * * ls -alh > /dev/null\"\n- cron: name=\"check dirs\" minute=\"0\" hour=\"5,2\" job=\"ls -alh > /dev/null\"\n\n# Ensure an old job is no longer present. 
Removes any job that is prefixed\n# by \"#Ansible: an old job\" from the crontab\n- cron: name=\"an old job\" state=absent\n\n# Creates an entry like \"@reboot /some/job.sh\"\n- cron: name=\"a job for reboot\" special_time=reboot job=\"/some/job.sh\"\n\n# Creates an entry like \"PATH=/opt/bin\" on top of crontab\n- cron: name=PATH env=yes value=/opt/bin\n\n# Creates an entry like \"APP_HOME=/srv/app\" and insert it after PATH\n# declaration\n- cron: name=APP_HOME env=yes value=/srv/app insertafter=PATH\n\n# Creates a cron file under /etc/cron.d\n- cron: name=\"yum autoupdate\" weekday=\"2\" minute=0 hour=12\n user=\"root\" job=\"YUMINTERACTIVE=0 /usr/sbin/yum-autoupdate\"\n cron_file=ansible_yum-autoupdate\n\n# Removes a cron file from under /etc/cron.d\n- cron: name=\"yum autoupdate\" cron_file=ansible_yum-autoupdate state=absent\n\n# Removes \"APP_HOME\" environment variable from crontab\n- cron: name=APP_HOME env=yes state=absent\n'''\n\nimport os\nimport re\nimport tempfile\nimport platform\nimport pipes\n\nCRONCMD = \"/usr/bin/crontab\"\n\nclass CronTabError(Exception):\n pass\n\nclass CronTab(object):\n \"\"\"\n CronTab object to write time based crontab file\n\n user - the user of the crontab (defaults to root)\n cron_file - a cron file under /etc/cron.d, or an absolute path\n \"\"\"\n def __init__(self, module, user=None, cron_file=None):\n self.module = module\n self.user = user\n self.root = (os.getuid() == 0)\n self.lines = None\n self.ansible = \"#Ansible: \"\n\n if cron_file:\n if os.path.isabs(cron_file):\n self.cron_file = cron_file\n else:\n self.cron_file = os.path.join('/etc/cron.d', cron_file)\n else:\n self.cron_file = None\n\n self.read()\n\n def read(self):\n # Read in the crontab from the system\n self.lines = []\n if self.cron_file:\n # read the cronfile\n try:\n f = open(self.cron_file, 'r')\n self.lines = f.read().splitlines()\n f.close()\n except IOError:\n # cron file does not exist\n return\n except:\n raise CronTabError(\"Unexpected error:\", sys.exc_info()[0])\n else:\n # using safely quoted shell for now, but this really should be two non-shell calls instead. FIXME\n (rc, out, err) = self.module.run_command(self._read_user_execute(), use_unsafe_shell=True)\n\n if rc != 0 and rc != 1: # 1 can mean that there are no jobs.\n raise CronTabError(\"Unable to read crontab\")\n\n lines = out.splitlines()\n count = 0\n for l in lines:\n if count > 2 or (not re.match( r'# DO NOT EDIT THIS FILE - edit the master and reinstall.', l) and\n not re.match( r'# \\(/tmp/.*installed on.*\\)', l) and\n not re.match( r'# \\(.*version.*\\)', l)):\n self.lines.append(l)\n count += 1\n\n def is_empty(self):\n if len(self.lines) == 0:\n return True\n else:\n return False\n\n def write(self, backup_file=None):\n \"\"\"\n Write the crontab to the system. Saves all information.\n \"\"\"\n if backup_file:\n fileh = open(backup_file, 'w')\n elif self.cron_file:\n fileh = open(self.cron_file, 'w')\n else:\n filed, path = tempfile.mkstemp(prefix='crontab')\n os.chmod(path, int('0644', 8))\n fileh = os.fdopen(filed, 'w')\n\n fileh.write(self.render())\n fileh.close()\n\n # return if making a backup\n if backup_file:\n return\n\n # Add the entire crontab back to the user crontab\n if not self.cron_file:\n # quoting shell args for now but really this should be two non-shell calls. 
FIXME\n (rc, out, err) = self.module.run_command(self._write_execute(path), use_unsafe_shell=True)\n os.unlink(path)\n\n if rc != 0:\n self.module.fail_json(msg=err)\n\n def add_job(self, name, job):\n # Add the comment\n self.lines.append(\"%s%s\" % (self.ansible, name))\n\n # Add the job\n self.lines.append(\"%s\" % (job))\n\n def update_job(self, name, job):\n return self._update_job(name, job, self.do_add_job)\n\n def do_add_job(self, lines, comment, job):\n lines.append(comment)\n\n lines.append(\"%s\" % (job))\n\n def remove_job(self, name):\n return self._update_job(name, \"\", self.do_remove_job)\n\n def do_remove_job(self, lines, comment, job):\n return None\n\n def add_env(self, decl, insertafter=None, insertbefore=None):\n if not (insertafter or insertbefore):\n self.lines.insert(0, decl)\n return\n\n if insertafter:\n other_name = insertafter\n elif insertbefore:\n other_name = insertbefore\n other_decl = self.find_env(other_name)\n if len(other_decl) > 0:\n if insertafter:\n index = other_decl[0]+1\n elif insertbefore:\n index = other_decl[0]\n self.lines.insert(index, decl)\n return\n\n self.module.fail_json(msg=\"Variable named '%s' not found.\" % other_name)\n\n def update_env(self, name, decl):\n return self._update_env(name, decl, self.do_add_env)\n\n def do_add_env(self, lines, decl):\n lines.append(decl)\n\n def remove_env(self, name):\n return self._update_env(name, '', self.do_remove_env)\n\n def do_remove_env(self, lines, decl):\n return None\n\n def remove_job_file(self):\n try:\n os.unlink(self.cron_file)\n return True\n except OSError:\n # cron file does not exist\n return False\n except:\n raise CronTabError(\"Unexpected error:\", sys.exc_info()[0])\n\n def find_job(self, name):\n comment = None\n for l in self.lines:\n if comment is not None:\n if comment == name:\n return [comment, l]\n else:\n comment = None\n elif re.match( r'%s' % self.ansible, l):\n comment = re.sub( r'%s' % self.ansible, '', l)\n\n return []\n\n def find_env(self, name):\n for index, l in enumerate(self.lines):\n if re.match( r'^%s=' % name, l):\n return [index, l]\n\n return []\n\n def get_cron_job(self,minute,hour,day,month,weekday,job,special,disabled):\n # normalize any leading/trailing newlines (ansible/ansible-modules-core#3791)\n job = job.strip('\\r\\n')\n\n if disabled:\n disable_prefix = '#'\n else:\n disable_prefix = ''\n\n if special:\n if self.cron_file:\n return \"%s@%s %s %s\" % (disable_prefix, special, self.user, job)\n else:\n return \"%s@%s %s\" % (disable_prefix, special, job)\n else:\n if self.cron_file:\n return \"%s%s %s %s %s %s %s %s\" % (disable_prefix,minute,hour,day,month,weekday,self.user,job)\n else:\n return \"%s%s %s %s %s %s %s\" % (disable_prefix,minute,hour,day,month,weekday,job)\n\n return None\n\n def get_jobnames(self):\n jobnames = []\n\n for l in self.lines:\n if re.match( r'%s' % self.ansible, l):\n jobnames.append(re.sub( r'%s' % self.ansible, '', l))\n\n return jobnames\n\n def get_envnames(self):\n envnames = []\n\n for l in self.lines:\n if re.match( r'^\\S+=' , l):\n envnames.append(l.split('=')[0])\n\n return envnames\n\n def _update_job(self, name, job, addlinesfunction):\n ansiblename = \"%s%s\" % (self.ansible, name)\n newlines = []\n comment = None\n\n for l in self.lines:\n if comment is not None:\n addlinesfunction(newlines, comment, job)\n comment = None\n elif l == ansiblename:\n comment = l\n else:\n newlines.append(l)\n\n self.lines = newlines\n\n if len(newlines) == 0:\n return True\n else:\n return False # TODO add some more error 
testing\n\n def _update_env(self, name, decl, addenvfunction):\n newlines = []\n\n for l in self.lines:\n if re.match( r'^%s=' % name, l):\n addenvfunction(newlines, decl)\n else:\n newlines.append(l)\n\n self.lines = newlines\n\n def render(self):\n \"\"\"\n Render this crontab as it would be in the crontab.\n \"\"\"\n crons = []\n for cron in self.lines:\n crons.append(cron)\n\n result = '\\n'.join(crons)\n if result and result[-1] not in ['\\n', '\\r']:\n result += '\\n'\n return result\n\n def _read_user_execute(self):\n \"\"\"\n Returns the command line for reading a crontab\n \"\"\"\n user = ''\n if self.user:\n if platform.system() == 'SunOS':\n return \"su %s -c '%s -l'\" % (pipes.quote(self.user), pipes.quote(CRONCMD))\n elif platform.system() == 'AIX':\n return \"%s -l %s\" % (pipes.quote(CRONCMD), pipes.quote(self.user))\n elif platform.system() == 'HP-UX':\n return \"%s %s %s\" % (CRONCMD , '-l', pipes.quote(self.user))\n else:\n user = '-u %s' % pipes.quote(self.user)\n return \"%s %s %s\" % (CRONCMD , user, '-l')\n\n def _write_execute(self, path):\n \"\"\"\n Return the command line for writing a crontab\n \"\"\"\n user = ''\n if self.user:\n if platform.system() in ['SunOS', 'HP-UX', 'AIX']:\n return \"chown %s %s ; su '%s' -c '%s %s'\" % (pipes.quote(self.user), pipes.quote(path), pipes.quote(self.user), CRONCMD, pipes.quote(path))\n else:\n user = '-u %s' % pipes.quote(self.user)\n return \"%s %s %s\" % (CRONCMD , user, pipes.quote(path))\n\n\n\n#==================================================\n\ndef main():\n # The following example playbooks:\n #\n # - cron: name=\"check dirs\" hour=\"5,2\" job=\"ls -alh > /dev/null\"\n #\n # - name: do the job\n # cron: name=\"do the job\" hour=\"5,2\" job=\"/some/dir/job.sh\"\n #\n # - name: no job\n # cron: name=\"an old job\" state=absent\n #\n # - name: sets env\n # cron: name=\"PATH\" env=yes value=\"/bin:/usr/bin\"\n #\n # Would produce:\n # PATH=/bin:/usr/bin\n # # Ansible: check dirs\n # * * 5,2 * * ls -alh > /dev/null\n # # Ansible: do the job\n # * * 5,2 * * /some/dir/job.sh\n\n module = AnsibleModule(\n argument_spec = dict(\n name=dict(required=False),\n user=dict(required=False),\n job=dict(required=False, aliases=['value']),\n cron_file=dict(required=False),\n state=dict(default='present', choices=['present', 'absent']),\n backup=dict(default=False, type='bool'),\n minute=dict(default='*'),\n hour=dict(default='*'),\n day=dict(aliases=['dom'], default='*'),\n month=dict(default='*'),\n weekday=dict(aliases=['dow'], default='*'),\n reboot=dict(required=False, default=False, type='bool'),\n special_time=dict(required=False,\n default=None,\n choices=[\"reboot\", \"yearly\", \"annually\", \"monthly\", \"weekly\", \"daily\", \"hourly\"],\n type='str'),\n disabled=dict(default=False, type='bool'),\n env=dict(required=False, type='bool'),\n insertafter=dict(required=False),\n insertbefore=dict(required=False),\n ),\n supports_check_mode = True,\n mutually_exclusive=[\n ['reboot', 'special_time'],\n ['insertafter', 'insertbefore'],\n ]\n )\n\n name = module.params['name']\n user = module.params['user']\n job = module.params['job']\n cron_file = module.params['cron_file']\n state = module.params['state']\n backup = module.params['backup']\n minute = module.params['minute']\n hour = module.params['hour']\n day = module.params['day']\n month = module.params['month']\n weekday = module.params['weekday']\n reboot = module.params['reboot']\n special_time = module.params['special_time']\n disabled = module.params['disabled']\n env = 
module.params['env']\n insertafter = module.params['insertafter']\n insertbefore = module.params['insertbefore']\n do_install = state == 'present'\n\n changed = False\n res_args = dict()\n\n # Ensure all files generated are only writable by the owning user. Primarily relevant for the cron_file option.\n os.umask(int('022', 8))\n crontab = CronTab(module, user, cron_file)\n\n module.debug('cron instantiated - name: \"%s\"' % name)\n\n if module._diff:\n diff = dict()\n diff['before'] = crontab.render()\n if crontab.cron_file:\n diff['before_header'] = crontab.cron_file\n else:\n if crontab.user:\n diff['before_header'] = 'crontab for user \"%s\"' % crontab.user\n else:\n diff['before_header'] = 'crontab'\n\n # --- user input validation ---\n\n if (special_time or reboot) and \\\n (True in [(x != '*') for x in [minute, hour, day, month, weekday]]):\n module.fail_json(msg=\"You must specify time and date fields or special time.\")\n\n if cron_file and do_install:\n if not user:\n module.fail_json(msg=\"To use cron_file=... parameter you must specify user=... as well\")\n\n if job is None and do_install:\n module.fail_json(msg=\"You must specify 'job' to install a new cron job or variable\")\n\n if (insertafter or insertbefore) and not env and do_install:\n module.fail_json(msg=\"Insertafter and insertbefore parameters are valid only with env=yes\")\n\n if reboot:\n special_time = \"reboot\"\n\n # if requested make a backup before making a change\n if backup and not module.check_mode:\n (backuph, backup_file) = tempfile.mkstemp(prefix='crontab')\n crontab.write(backup_file)\n\n\n if crontab.cron_file and not name and not do_install:\n if module._diff:\n diff['after'] = ''\n diff['after_header'] = '/dev/null'\n else:\n diff = dict()\n if module.check_mode:\n changed = os.path.isfile(crontab.cron_file)\n else:\n changed = crontab.remove_job_file()\n module.exit_json(changed=changed,cron_file=cron_file,state=state,diff=diff)\n\n if env:\n if ' ' in name:\n module.fail_json(msg=\"Invalid name for environment variable\")\n decl = '%s=\"%s\"' % (name, job)\n old_decl = crontab.find_env(name)\n\n if do_install:\n if len(old_decl) == 0:\n crontab.add_env(decl, insertafter, insertbefore)\n changed = True\n if len(old_decl) > 0 and old_decl[1] != decl:\n crontab.update_env(name, decl)\n changed = True\n else:\n if len(old_decl) > 0:\n crontab.remove_env(name)\n changed = True\n else:\n job = crontab.get_cron_job(minute, hour, day, month, weekday, job, special_time, disabled)\n old_job = crontab.find_job(name)\n\n if do_install:\n if len(old_job) == 0:\n crontab.add_job(name, job)\n changed = True\n if len(old_job) > 0 and old_job[1] != job:\n crontab.update_job(name, job)\n changed = True\n else:\n if len(old_job) > 0:\n crontab.remove_job(name)\n changed = True\n\n res_args = dict(\n jobs = crontab.get_jobnames(),\n envs = crontab.get_envnames(),\n changed = changed\n )\n\n if changed:\n if not module.check_mode:\n crontab.write()\n if module._diff:\n diff['after'] = crontab.render()\n if crontab.cron_file:\n diff['after_header'] = crontab.cron_file\n else:\n if crontab.user:\n diff['after_header'] = 'crontab for user \"%s\"' % crontab.user\n else:\n diff['after_header'] = 'crontab'\n\n res_args['diff'] = diff\n\n # retain the backup only if crontab or cron file have changed\n if backup:\n if changed:\n res_args['backup_file'] = backup_file\n else:\n if not module.check_mode:\n os.unlink(backup_file)\n\n if cron_file:\n res_args['cron_file'] = cron_file\n\n module.exit_json(**res_args)\n\n # --- should 
never get here\n module.exit_json(msg=\"Unable to execute cron task.\")\n\n# import module snippets\nfrom ansible.module_utils.basic import *\n\nmain()\n\n", "path": "system/cron.py" } ]
diff --git a/system/cron.py b/system/cron.py
index aed25d3feab..f7308fda9d4 100644
--- a/system/cron.py
+++ b/system/cron.py
@@ -383,6 +383,9 @@ def find_env(self, name):
         return []
 
     def get_cron_job(self,minute,hour,day,month,weekday,job,special,disabled):
+        # normalize any leading/trailing newlines (ansible/ansible-modules-core#3791)
+        job = job.strip('\r\n')
+
         if disabled:
             disable_prefix = '#'
         else:
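For readers skimming the diff above, the following is a standalone sketch of the behaviour the one-line normalization guards against. The trimmed-down `get_cron_job()` and the `raw_job` value are illustrative assumptions, not the module's actual code; the scenario (a job string arriving with a trailing newline) is inferred from the comment referencing ansible/ansible-modules-core#3791.

```py
# Standalone sketch: a job value that arrives with a trailing newline would
# otherwise be embedded verbatim in the rendered crontab line, so it would
# never compare equal to the line read back later and the entry would be
# rewritten on every run.
def get_cron_job(minute, hour, day, month, weekday, job):
    job = job.strip('\r\n')   # the normalization added by the patch
    return "%s %s %s %s %s %s" % (minute, hour, day, month, weekday, job)

raw_job = "ls -alh > /dev/null\n"   # e.g. pasted with an accidental newline
assert get_cron_job("0", "5,2", "*", "*", "*", raw_job) == \
    "0 5,2 * * * ls -alh > /dev/null"
```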
carpentries__amy-1622
500 Server error when 'manual attendance' field is not set
Leaving the 'Manual attendance' field unset causes a 500 Server Error. I tried leaving it unset for workshops where the attendance number is not known yet. It defaults to '0' - not sure if this is the desired behaviour (to have attendance at 0). If so, disregard the issue.
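Below is a minimal sketch of one way a form could tolerate a blank attendance value, assuming the 500 error comes from an unset `manual_attendance` reaching code that expects an integer. The standalone `EventAttendanceSketch` form is illustrative only and is not presented as the project's actual fix; only the field name mirrors the `EventForm` shown further down.

```py
from django import forms

class EventAttendanceSketch(forms.Form):
    # required=False lets the field be submitted blank; the clean_* hook then
    # coerces the missing value to 0 so downstream code always gets an integer.
    manual_attendance = forms.IntegerField(
        required=False, min_value=0, widget=forms.TextInput)

    def clean_manual_attendance(self):
        value = self.cleaned_data.get('manual_attendance')
        return 0 if value is None else value

# Within a configured Django project:
#   form = EventAttendanceSketch(data={'manual_attendance': ''})
#   form.is_valid()                         -> True
#   form.cleaned_data['manual_attendance']  -> 0
```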
[ { "content": "from datetime import datetime, timezone\nimport re\n\nfrom crispy_forms.helper import FormHelper\nfrom crispy_forms.layout import Layout, Div, HTML, Submit, Button, Field\nfrom django import forms\nfrom django.contrib.auth.models import Permission\nfrom django.contrib.sites.models import Site\nfrom django.dispatch import receiver\nfrom django.forms import (\n SelectMultiple,\n CheckboxSelectMultiple,\n TextInput,\n RadioSelect,\n)\nfrom django_comments.models import Comment\nfrom django_countries import Countries\nfrom django_countries.fields import CountryField\nfrom markdownx.fields import MarkdownxFormField\n\nfrom dashboard.models import Continent\nfrom workshops.models import (\n Award,\n Event,\n Lesson,\n GenderMixin,\n Person,\n Task,\n Airport,\n Organization,\n Membership,\n Tag,\n Language,\n Badge,\n)\n# this is used instead of Django Autocomplete Light widgets\n# see issue #1330: https://github.com/swcarpentry/amy/issues/1330\nfrom workshops.fields import (\n Select2Widget,\n Select2MultipleWidget,\n ModelSelect2Widget,\n ModelSelect2MultipleWidget,\n RadioSelectWithOther,\n)\nfrom workshops.signals import create_comment_signal\n\n\n#### settings for Select2\n# this makes it possible for autocomplete widget to fit in low-width sidebar\nSELECT2_SIDEBAR = {\n 'data-width': '100%',\n 'width': 'style',\n}\n\n\nclass BootstrapHelper(FormHelper):\n \"\"\"Layout and behavior for crispy-displayed forms.\"\"\"\n html5_required = True\n form_id = 'main-form'\n\n def __init__(self,\n form=None,\n duplicate_buttons_on_top=False,\n submit_label='Submit',\n submit_name='submit',\n use_get_method=False,\n wider_labels=False,\n add_submit_button=True,\n add_delete_button=False,\n add_cancel_button=True,\n additional_form_class='',\n form_tag=True,\n display_labels=True,\n form_action=None,\n form_id=None,\n include_media=True):\n \"\"\"\n `duplicate_buttons_on_top` -- Whether submit buttons should be\n displayed on both top and bottom of the form.\n\n `use_get_method` -- Force form to use GET instead of default POST.\n\n `wider_labels` -- SWCEventRequestForm and DCEventRequestForm have\n long labels, so this flag (set to True) is used to address that issue.\n\n `add_delete_button` -- displays additional red \"delete\" button.\n If you want to use it, you need to include in your template the\n following code:\n\n <form action=\"delete?next={{ request.GET.next|urlencode }}\" method=\"POST\" id=\"delete-form\">\n {% csrf_token %}\n </form>\n\n This is necessary, because delete button must be reassigned from the\n form using this helper to \"delete-form\". 
This reassignment is done\n via HTML5 \"form\" attribute on the \"delete\" button.\n\n `display_labels` -- Set to False, when your form has only submit\n buttons and you want these buttons to be aligned to left.\n \"\"\"\n\n super().__init__(form)\n\n self.attrs['role'] = 'form'\n\n self.duplicate_buttons_on_top = duplicate_buttons_on_top\n\n self.submit_label = submit_label\n\n if use_get_method:\n self.form_method = 'get'\n\n if wider_labels:\n assert display_labels\n self.label_class = 'col-12 col-lg-3'\n self.field_class = 'col-12 col-lg-9'\n elif display_labels:\n self.label_class = 'col-12 col-lg-2'\n self.field_class = 'col-12 col-lg-10'\n else:\n self.label_class = ''\n self.field_class = 'col-lg-12'\n\n if add_submit_button:\n self.add_input(Submit(submit_name, submit_label))\n\n if add_delete_button:\n self.add_input(Submit(\n 'delete', 'Delete',\n onclick='return '\n 'confirm(\"Are you sure you want to delete it?\");',\n form='delete-form',\n css_class='btn-danger float-right'))\n\n if add_cancel_button:\n self.add_input(Button(\n 'cancel', 'Cancel',\n css_class='btn-secondary float-right',\n onclick='window.history.back()'))\n\n # offset here adds horizontal centering for all these forms\n self.form_class = 'form-horizontal ' + additional_form_class\n\n self.form_tag = form_tag\n\n if form_action is not None:\n self.form_action = form_action\n\n if form_id is not None:\n self.form_id = form_id\n\n # don't prevent from loading media by default\n self.include_media = include_media\n\n def hr(self):\n \"\"\"Horizontal line as a separator in forms is used very often. But\n since from time to time the forms are changed (in terms of columns\n width), we should rather use one global <hr>...\"\"\"\n return '<hr class=\"col-12 mx-0 px-0\">'\n\n\nclass BootstrapHelperFilter(FormHelper):\n \"\"\"A differently shaped forms (more space-efficient) for use in sidebar as\n filter forms.\"\"\"\n form_method = 'get'\n form_id = 'filter-form'\n\n def __init__(self, form=None):\n super().__init__(form)\n self.attrs['role'] = 'form'\n self.inputs.append(Submit('', 'Submit'))\n\n\nclass BootstrapHelperFormsetInline(BootstrapHelper):\n \"\"\"For use in inline formsets.\"\"\"\n template = 'bootstrap/table_inline_formset.html'\n\n\nbootstrap_helper_filter = BootstrapHelperFilter()\nbootstrap_helper_inline_formsets = BootstrapHelperFormsetInline()\n\n\n# ----------------------------------------------------------\n# MixIns\n\nclass PrivacyConsentMixin(forms.Form):\n privacy_consent = forms.BooleanField(\n label='*I have read and agree to <a href='\n '\"https://docs.carpentries.org/topic_folders/policies/'\n 'privacy.html\" target=\"_blank\">'\n 'the data privacy policy of The Carpentries</a>.',\n required=True)\n\n\nclass WidgetOverrideMixin:\n def __init__(self, *args, **kwargs):\n widgets = kwargs.pop('widgets', {})\n super().__init__(*args, **kwargs)\n for field, widget in widgets.items():\n self.fields[field].widget = widget\n\n\n# ----------------------------------------------------------\n# Forms\n\n\ndef continent_list():\n \"\"\"This has to be as a callable, because otherwise Django evaluates this\n query and, if the database doesn't exist yet (e.g. 
during Travis-CI\n tests).\"\"\"\n return [('', '')] + list(Continent.objects.values_list('pk', 'name'))\n\n\nclass WorkshopStaffForm(forms.Form):\n '''Represent instructor matching form.'''\n\n latitude = forms.FloatField(label='Latitude',\n min_value=-90.0,\n max_value=90.0,\n required=False)\n longitude = forms.FloatField(label='Longitude',\n min_value=-180.0,\n max_value=180.0,\n required=False)\n airport = forms.ModelChoiceField(\n label='Airport',\n required=False,\n queryset=Airport.objects.all(),\n widget=ModelSelect2Widget(\n data_view='airport-lookup',\n attrs=SELECT2_SIDEBAR,\n )\n )\n languages = forms.ModelMultipleChoiceField(\n label='Languages',\n required=False,\n queryset=Language.objects.all(),\n widget=ModelSelect2MultipleWidget(\n data_view='language-lookup',\n attrs=SELECT2_SIDEBAR,\n )\n )\n\n country = forms.MultipleChoiceField(\n choices=list(Countries()), required=False,\n widget=Select2MultipleWidget,\n )\n\n continent = forms.ChoiceField(\n choices=continent_list, required=False, widget=Select2Widget,\n )\n\n lessons = forms.ModelMultipleChoiceField(\n queryset=Lesson.objects.all(),\n widget=SelectMultiple(),\n required=False,\n )\n\n badges = forms.ModelMultipleChoiceField(\n queryset=Badge.objects.instructor_badges(),\n widget=CheckboxSelectMultiple(),\n required=False,\n )\n\n is_trainer = forms.BooleanField(\n required=False,\n label='Has Trainer badge')\n\n GENDER_CHOICES = ((None, '---------'), ) + Person.GENDER_CHOICES\n gender = forms.ChoiceField(choices=GENDER_CHOICES, required=False)\n\n was_helper = forms.BooleanField(\n required=False, label='Was helper at least once before')\n was_organizer = forms.BooleanField(\n required=False, label='Was organizer at least once before')\n is_in_progress_trainee = forms.BooleanField(\n required=False, label='Is an in-progress instructor trainee')\n\n def __init__(self, *args, **kwargs):\n '''Build form layout dynamically.'''\n super().__init__(*args, **kwargs)\n\n self.helper = FormHelper(self)\n self.helper.form_method = 'get'\n self.helper.layout = Layout(\n Div(\n Div(\n HTML('<h5 class=\"card-title\">Location</h5>'),\n 'airport',\n HTML('<hr>'),\n 'country',\n HTML('<hr>'),\n 'continent',\n HTML('<hr>'),\n 'latitude',\n 'longitude',\n css_class='card-body'\n ),\n css_class='card',\n ),\n 'badges',\n 'is_trainer',\n HTML('<hr>'),\n 'was_helper',\n 'was_organizer',\n 'is_in_progress_trainee',\n 'languages',\n 'gender',\n 'lessons',\n Submit('', 'Submit'),\n )\n\n def clean(self):\n cleaned_data = super().clean()\n lat = bool(cleaned_data.get('latitude'))\n lng = bool(cleaned_data.get('longitude'))\n airport = bool(cleaned_data.get('airport'))\n country = bool(cleaned_data.get('country'))\n latlng = lat and lng\n\n # if searching by coordinates, then there must be both lat & lng\n # present\n if lat ^ lng:\n raise forms.ValidationError(\n 'Must specify both latitude and longitude if searching by '\n 'coordinates')\n\n # User must search by airport, or country, or coordinates, or none\n # of them. 
Sum of boolean elements must be equal 0 (if general search)\n # or 1 (if searching by airport OR country OR lat/lng).\n if sum([airport, country, latlng]) not in [0, 1]:\n raise forms.ValidationError(\n 'Must specify an airport OR a country, OR use coordinates, OR '\n 'none of them.')\n return cleaned_data\n\n\nclass BulkUploadCSVForm(forms.Form):\n \"\"\"This form allows to upload a single file; it's used by person bulk\n upload and training request manual score bulk upload.\"\"\"\n file = forms.FileField()\n\n\nclass SearchForm(forms.Form):\n '''Represent general searching form.'''\n\n term = forms.CharField(label='Term',\n max_length=100)\n in_organizations = forms.BooleanField(label='in organizations',\n required=False,\n initial=True)\n in_events = forms.BooleanField(label='in events',\n required=False,\n initial=True)\n in_persons = forms.BooleanField(label='in persons',\n required=False,\n initial=True)\n in_airports = forms.BooleanField(label='in airports',\n required=False,\n initial=True)\n in_training_requests = forms.BooleanField(label='in training requests',\n required=False,\n initial=True)\n\n in_comments = forms.BooleanField(label='in comments',\n required=False,\n initial=True)\n\n helper = BootstrapHelper(\n add_cancel_button=False,\n use_get_method=True,\n )\n\n\nclass EventForm(forms.ModelForm):\n host = forms.ModelChoiceField(\n label='Host',\n required=True,\n help_text=Event._meta.get_field('host').help_text,\n queryset=Organization.objects.all(),\n widget=ModelSelect2Widget(data_view='organization-lookup')\n )\n\n administrator = forms.ModelChoiceField(\n label='Administrator',\n required=False,\n help_text=Event._meta.get_field('administrator').help_text,\n queryset=Organization.objects.administrators(),\n widget=ModelSelect2Widget(data_view='administrator-org-lookup'),\n )\n\n assigned_to = forms.ModelChoiceField(\n label='Assigned to',\n required=False,\n queryset=Person.objects.all(),\n widget=ModelSelect2Widget(data_view='admin-lookup')\n )\n\n language = forms.ModelChoiceField(\n label='Language',\n required=False,\n queryset=Language.objects.all(),\n widget=ModelSelect2Widget(data_view='language-lookup')\n )\n\n country = CountryField().formfield(\n required=False,\n help_text=Event._meta.get_field('country').help_text,\n widget=Select2Widget,\n )\n\n comment = MarkdownxFormField(\n label='Comment',\n help_text='Any content in here will be added to comments after this '\n 'event is saved.',\n widget=forms.Textarea,\n required=False,\n )\n\n helper = BootstrapHelper(add_cancel_button=False,\n duplicate_buttons_on_top=True)\n\n class Meta:\n model = Event\n fields = [\n 'slug',\n 'completed',\n 'start',\n 'end',\n 'host',\n 'administrator',\n 'assigned_to',\n 'tags',\n 'url',\n 'language',\n 'reg_key',\n 'venue',\n 'manual_attendance',\n 'contact',\n 'country',\n 'address',\n 'latitude',\n 'longitude',\n 'open_TTT_applications',\n 'curricula',\n 'lessons',\n 'comment',\n ]\n widgets = {\n 'manual_attendance': TextInput,\n 'latitude': TextInput,\n 'longitude': TextInput,\n 'invoice_status': RadioSelect,\n 'tags': SelectMultiple(attrs={\n 'size': Tag.ITEMS_VISIBLE_IN_SELECT_WIDGET\n }),\n 'curricula': CheckboxSelectMultiple(),\n 'lessons': CheckboxSelectMultiple(),\n }\n\n class Media:\n # thanks to this, {{ form.media }} in the template will generate\n # a <link href=\"\"> (for CSS files) or <script src=\"\"> (for JS files)\n js = (\n 'date_yyyymmdd.js',\n 'edit_from_url.js',\n 'online_country.js',\n )\n\n def __init__(self, *args, **kwargs):\n show_lessons = 
kwargs.pop('show_lessons', False)\n super().__init__(*args, **kwargs)\n\n self.helper.layout = Layout(\n Field('slug', placeholder='YYYY-MM-DD-location'),\n 'completed',\n Field('start', placeholder='YYYY-MM-DD'),\n Field('end', placeholder='YYYY-MM-DD'),\n 'host',\n 'administrator',\n 'assigned_to',\n 'tags',\n 'open_TTT_applications',\n 'curricula',\n 'url',\n 'language',\n 'reg_key',\n 'manual_attendance',\n 'contact',\n Div(\n Div(HTML('Location details'), css_class='card-header'),\n Div('country',\n 'venue',\n 'address',\n 'latitude',\n 'longitude',\n css_class='card-body'),\n css_class='card mb-2'\n ),\n 'comment',\n )\n\n # if we want to show lessons, we need to alter existing layout\n # otherwise we should remove the field so it doesn't break validation\n if show_lessons:\n self.helper.layout.insert(\n # insert AFTER the curricula\n self.helper.layout.fields.index('curricula') + 1,\n 'lessons',\n )\n else:\n del self.fields['lessons']\n\n def clean_slug(self):\n # Ensure slug is in \"YYYY-MM-DD-location\" format\n data = self.cleaned_data['slug']\n match = re.match(r'(\\d{4}|x{4})-(\\d{2}|x{2})-(\\d{2}|x{2})-.+', data)\n if not match:\n raise forms.ValidationError('Slug must be in \"YYYY-MM-DD-location\"'\n ' format, where \"YYYY\", \"MM\", \"DD\" can'\n ' be unspecified (ie. \"xx\").')\n return data\n\n def clean_end(self):\n \"\"\"Ensure end >= start.\"\"\"\n start = self.cleaned_data['start']\n end = self.cleaned_data['end']\n\n if start and end and end < start:\n raise forms.ValidationError('Must not be earlier than start date.')\n return end\n\n def clean_open_TTT_applications(self):\n \"\"\"Ensure there's a TTT tag applied to the event, if the\n `open_TTT_applications` is True.\"\"\"\n open_TTT_applications = self.cleaned_data['open_TTT_applications']\n tags = self.cleaned_data.get('tags', None)\n error_msg = 'You cannot open applications on a non-TTT event.'\n\n if open_TTT_applications and tags:\n # find TTT tag\n TTT_tag = False\n for tag in tags:\n if tag.name == 'TTT':\n TTT_tag = True\n break\n\n if not TTT_tag:\n raise forms.ValidationError(error_msg)\n\n elif open_TTT_applications:\n raise forms.ValidationError(error_msg)\n\n return open_TTT_applications\n\n def clean_curricula(self):\n \"\"\"Validate tags when some curricula are selected.\"\"\"\n curricula = self.cleaned_data['curricula']\n tags = self.cleaned_data['tags']\n\n try:\n expected_tags = []\n for c in curricula:\n if c.active and c.carpentry:\n expected_tags.append(c.carpentry)\n elif c.active and c.mix_match:\n expected_tags.append('Circuits')\n except (ValueError, TypeError):\n expected_tags = []\n\n for tag in expected_tags:\n if not tags.filter(name=tag):\n raise forms.ValidationError(\n \"You must add tags corresponding to these curricula.\")\n\n return curricula\n\n def save(self, *args, **kwargs):\n res = super().save(*args, **kwargs)\n\n create_comment_signal.send(sender=self.__class__,\n content_object=res,\n comment=self.cleaned_data['comment'],\n timestamp=None)\n\n return res\n\n\nclass EventCreateForm(EventForm):\n comment = MarkdownxFormField(\n label='Comment',\n help_text='This will be added to comments after the event is created.',\n widget=forms.Textarea,\n required=False,\n )\n\n\nclass TaskForm(WidgetOverrideMixin, forms.ModelForm):\n\n helper = BootstrapHelper(add_cancel_button=False)\n\n SEAT_MEMBERSHIP_HELP_TEXT = (\n '{}<br><b>Hint:</b> you can use input format YYYY-MM-DD to display '\n 'memberships available on that date.'.format(\n 
Task._meta.get_field('seat_membership').help_text\n )\n )\n seat_membership = forms.ModelChoiceField(\n label=Task._meta.get_field('seat_membership').verbose_name,\n help_text=SEAT_MEMBERSHIP_HELP_TEXT,\n required=False,\n queryset=Membership.objects.all(),\n widget=ModelSelect2Widget(\n data_view='membership-lookup',\n attrs=SELECT2_SIDEBAR,\n )\n )\n\n class Meta:\n model = Task\n fields = [\n 'event', 'person', 'role', 'title', 'url',\n 'seat_membership', 'seat_open_training',\n ]\n widgets = {\n 'person': ModelSelect2Widget(data_view='person-lookup',\n attrs=SELECT2_SIDEBAR),\n 'event': ModelSelect2Widget(data_view='event-lookup',\n attrs=SELECT2_SIDEBAR),\n }\n\n\nclass PersonForm(forms.ModelForm):\n airport = forms.ModelChoiceField(\n label='Airport',\n required=False,\n queryset=Airport.objects.all(),\n widget=ModelSelect2Widget(data_view='airport-lookup')\n )\n languages = forms.ModelMultipleChoiceField(\n label='Languages',\n required=False,\n queryset=Language.objects.all(),\n widget=ModelSelect2MultipleWidget(data_view='language-lookup')\n )\n\n helper = BootstrapHelper(add_cancel_button=False,\n duplicate_buttons_on_top=True)\n\n class Meta:\n model = Person\n # don't display the 'password', 'user_permissions',\n # 'groups' or 'is_superuser' fields\n # + reorder fields\n fields = [\n 'username',\n 'personal',\n 'middle',\n 'family',\n 'may_contact',\n 'publish_profile',\n 'lesson_publication_consent',\n 'data_privacy_agreement',\n 'email',\n 'secondary_email',\n 'gender',\n 'gender_other',\n 'country',\n 'airport',\n 'affiliation',\n 'github',\n 'twitter',\n 'url',\n 'occupation',\n 'orcid',\n 'user_notes',\n 'lessons',\n 'domains',\n 'languages',\n ]\n\n widgets = {\n 'country': Select2Widget,\n 'gender': RadioSelectWithOther('gender_other'),\n }\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n\n # set up a layout object for the helper\n self.helper.layout = self.helper.build_default_layout(self)\n\n # set up `*WithOther` widgets so that they can display additional\n # fields inline\n self['gender'].field.widget.other_field = self['gender_other']\n\n # remove additional fields\n self.helper.layout.fields.remove('gender_other')\n\n def clean(self):\n super().clean()\n errors = dict()\n\n # 1: require \"other gender\" field if \"other\" was selected in\n # \"gender\" field\n gender = self.cleaned_data.get('gender', '')\n gender_other = self.cleaned_data.get('gender_other', '')\n if gender == GenderMixin.OTHER and not gender_other:\n errors['gender'] = ValidationError(\"This field is required.\")\n elif gender != GenderMixin.OTHER and gender_other:\n errors['gender'] = ValidationError(\n 'If you entered data in \"Other\" field, please select that '\n \"option.\")\n\n # raise errors if any present\n if errors:\n raise ValidationError(errors)\n\n\nclass PersonCreateForm(PersonForm):\n comment = MarkdownxFormField(\n label='Comment',\n help_text='This will be added to comments after the person is '\n 'created.',\n widget=forms.Textarea,\n required=False,\n )\n\n class Meta(PersonForm.Meta):\n # remove 'username' field as it's being populated after form save\n # in the `views.PersonCreate.form_valid`\n fields = PersonForm.Meta.fields.copy()\n fields.remove('username')\n fields.append('comment')\n\n\nclass PersonPermissionsForm(forms.ModelForm):\n helper = BootstrapHelper(add_cancel_button=False)\n\n user_permissions = forms.ModelMultipleChoiceField(\n label=Person._meta.get_field('user_permissions').verbose_name,\n 
help_text=Person._meta.get_field('user_permissions').help_text,\n required=False,\n queryset=Permission.objects.select_related('content_type'),\n )\n user_permissions.widget.attrs.update({'class': 'resizable-vertical',\n 'size': '20'})\n\n class Meta:\n model = Person\n # only display administration-related fields: groups, permissions,\n # being a superuser or being active (== ability to log in)\n fields = [\n 'is_active',\n 'is_superuser',\n 'user_permissions',\n 'groups',\n ]\n\n\nclass PersonsSelectionForm(forms.Form):\n person_a = forms.ModelChoiceField(\n label='Person From',\n required=True,\n queryset=Person.objects.all(),\n widget=ModelSelect2Widget(data_view='person-lookup')\n )\n\n person_b = forms.ModelChoiceField(\n label='Person To',\n required=True,\n queryset=Person.objects.all(),\n widget=ModelSelect2Widget(data_view='person-lookup')\n )\n\n helper = BootstrapHelper(use_get_method=True, add_cancel_button=False)\n\n\nclass PersonsMergeForm(forms.Form):\n TWO = (\n ('obj_a', 'Use A'),\n ('obj_b', 'Use B'),\n )\n THREE = TWO + (('combine', 'Combine'), )\n DEFAULT = 'obj_a'\n\n person_a = forms.ModelChoiceField(queryset=Person.objects.all(),\n widget=forms.HiddenInput)\n\n person_b = forms.ModelChoiceField(queryset=Person.objects.all(),\n widget=forms.HiddenInput)\n\n id = forms.ChoiceField(\n choices=TWO, initial=DEFAULT, widget=forms.RadioSelect,\n )\n username = forms.ChoiceField(\n choices=TWO, initial=DEFAULT, widget=forms.RadioSelect,\n )\n personal = forms.ChoiceField(\n choices=TWO, initial=DEFAULT, widget=forms.RadioSelect,\n )\n middle = forms.ChoiceField(\n choices=TWO, initial=DEFAULT, widget=forms.RadioSelect,\n )\n family = forms.ChoiceField(\n choices=TWO, initial=DEFAULT, widget=forms.RadioSelect,\n )\n email = forms.ChoiceField(\n choices=TWO, initial=DEFAULT, widget=forms.RadioSelect,\n )\n secondary_email = forms.ChoiceField(\n choices=TWO, initial=DEFAULT, widget=forms.RadioSelect,\n )\n may_contact = forms.ChoiceField(\n choices=TWO, initial=DEFAULT, widget=forms.RadioSelect,\n )\n publish_profile = forms.ChoiceField(\n choices=TWO, initial=DEFAULT, widget=forms.RadioSelect,\n )\n data_privacy_agreement = forms.ChoiceField(\n choices=TWO, initial=DEFAULT, widget=forms.RadioSelect,\n )\n gender = forms.ChoiceField(\n choices=TWO, initial=DEFAULT, widget=forms.RadioSelect,\n )\n gender_other = forms.ChoiceField(\n choices=TWO, initial=DEFAULT, widget=forms.RadioSelect,\n )\n airport = forms.ChoiceField(\n choices=TWO, initial=DEFAULT, widget=forms.RadioSelect,\n )\n github = forms.ChoiceField(\n choices=TWO, initial=DEFAULT, widget=forms.RadioSelect,\n )\n twitter = forms.ChoiceField(\n choices=TWO, initial=DEFAULT, widget=forms.RadioSelect,\n )\n url = forms.ChoiceField(\n choices=TWO, initial=DEFAULT, widget=forms.RadioSelect,\n )\n affiliation = forms.ChoiceField(\n choices=TWO, initial=DEFAULT, widget=forms.RadioSelect,\n )\n occupation = forms.ChoiceField(\n choices=TWO, initial=DEFAULT, widget=forms.RadioSelect,\n )\n orcid = forms.ChoiceField(\n choices=TWO, initial=DEFAULT, widget=forms.RadioSelect,\n )\n award_set = forms.ChoiceField(\n choices=THREE, initial=DEFAULT, widget=forms.RadioSelect,\n )\n qualification_set = forms.ChoiceField(\n choices=THREE, initial=DEFAULT, widget=forms.RadioSelect,\n label='Lessons',\n )\n domains = forms.ChoiceField(\n choices=THREE, initial=DEFAULT, widget=forms.RadioSelect,\n )\n languages = forms.ChoiceField(\n choices=THREE, initial=DEFAULT, widget=forms.RadioSelect,\n )\n task_set = forms.ChoiceField(\n 
choices=THREE, initial=DEFAULT, widget=forms.RadioSelect,\n )\n is_active = forms.ChoiceField(\n choices=TWO, initial=DEFAULT, widget=forms.RadioSelect,\n )\n trainingprogress_set = forms.ChoiceField(\n choices=THREE, initial=DEFAULT, widget=forms.RadioSelect,\n )\n comment_comments = forms.ChoiceField(\n choices=THREE, initial=DEFAULT, widget=forms.RadioSelect,\n )\n comments = forms.ChoiceField(\n choices=THREE, initial=DEFAULT, widget=forms.RadioSelect,\n )\n\n\nclass AwardForm(WidgetOverrideMixin, forms.ModelForm):\n\n helper = BootstrapHelper(add_cancel_button=False)\n\n class Meta:\n model = Award\n fields = '__all__'\n widgets = {\n 'person': ModelSelect2Widget(data_view='person-lookup',\n attrs=SELECT2_SIDEBAR),\n 'event': ModelSelect2Widget(data_view='event-lookup',\n attrs=SELECT2_SIDEBAR),\n 'awarded_by': ModelSelect2Widget(data_view='admin-lookup',\n attrs=SELECT2_SIDEBAR),\n }\n\n\nclass EventLookupForm(forms.Form):\n event = forms.ModelChoiceField(\n label='Event',\n required=True,\n queryset=Event.objects.all(),\n widget=ModelSelect2Widget(data_view='event-lookup')\n )\n\n helper = BootstrapHelper(add_cancel_button=False)\n\n\nclass PersonLookupForm(forms.Form):\n person = forms.ModelChoiceField(\n label='Person',\n required=True,\n queryset=Person.objects.all(),\n widget=ModelSelect2Widget(data_view='person-lookup')\n )\n\n helper = BootstrapHelper(use_get_method=True, add_cancel_button=False)\n\n\nclass AdminLookupForm(forms.Form):\n person = forms.ModelChoiceField(\n label='Administrator',\n required=True,\n queryset=Person.objects.all(),\n widget=ModelSelect2Widget(\n data_view='admin-lookup',\n attrs=SELECT2_SIDEBAR,\n ),\n )\n\n helper = BootstrapHelper(add_cancel_button=False)\n\n\nclass EventsSelectionForm(forms.Form):\n event_a = forms.ModelChoiceField(\n label='Event A',\n required=True,\n queryset=Event.objects.all(),\n widget=ModelSelect2Widget(data_view='event-lookup')\n )\n\n event_b = forms.ModelChoiceField(\n label='Event B',\n required=True,\n queryset=Event.objects.all(),\n widget=ModelSelect2Widget(data_view='event-lookup')\n )\n\n helper = BootstrapHelper(use_get_method=True, add_cancel_button=False)\n\n\nclass EventsMergeForm(forms.Form):\n TWO = (\n ('obj_a', 'Use A'),\n ('obj_b', 'Use B'),\n )\n THREE = TWO + (('combine', 'Combine'), )\n DEFAULT = 'obj_a'\n\n event_a = forms.ModelChoiceField(queryset=Event.objects.all(),\n widget=forms.HiddenInput)\n\n event_b = forms.ModelChoiceField(queryset=Event.objects.all(),\n widget=forms.HiddenInput)\n\n id = forms.ChoiceField(\n choices=TWO, initial=DEFAULT, widget=forms.RadioSelect,\n )\n slug = forms.ChoiceField(\n choices=TWO, initial=DEFAULT, widget=forms.RadioSelect,\n )\n completed = forms.ChoiceField(\n choices=TWO, initial=DEFAULT, widget=forms.RadioSelect,\n )\n assigned_to = forms.ChoiceField(\n choices=TWO, initial=DEFAULT, widget=forms.RadioSelect,\n )\n start = forms.ChoiceField(\n choices=TWO, initial=DEFAULT, widget=forms.RadioSelect,\n )\n end = forms.ChoiceField(\n choices=TWO, initial=DEFAULT, widget=forms.RadioSelect,\n )\n host = forms.ChoiceField(\n choices=TWO, initial=DEFAULT, widget=forms.RadioSelect,\n )\n administrator = forms.ChoiceField(\n choices=TWO, initial=DEFAULT, widget=forms.RadioSelect,\n )\n tags = forms.ChoiceField(\n choices=THREE, initial=DEFAULT, widget=forms.RadioSelect,\n )\n url = forms.ChoiceField(\n choices=TWO, initial=DEFAULT, widget=forms.RadioSelect,\n )\n language = forms.ChoiceField(\n choices=TWO, initial=DEFAULT, widget=forms.RadioSelect,\n )\n reg_key = 
forms.ChoiceField(\n choices=TWO, initial=DEFAULT, widget=forms.RadioSelect,\n )\n admin_fee = forms.ChoiceField(\n choices=TWO, initial=DEFAULT, widget=forms.RadioSelect,\n )\n invoice_status = forms.ChoiceField(\n choices=TWO, initial=DEFAULT, widget=forms.RadioSelect,\n )\n manual_attendance = forms.ChoiceField(\n choices=TWO, initial=DEFAULT, widget=forms.RadioSelect,\n )\n contact = forms.ChoiceField(\n choices=THREE, initial=DEFAULT, widget=forms.RadioSelect,\n )\n country = forms.ChoiceField(\n choices=TWO, initial=DEFAULT, widget=forms.RadioSelect,\n )\n venue = forms.ChoiceField(\n choices=THREE, initial=DEFAULT, widget=forms.RadioSelect,\n )\n address = forms.ChoiceField(\n choices=THREE, initial=DEFAULT, widget=forms.RadioSelect,\n )\n latitude = forms.ChoiceField(\n choices=TWO, initial=DEFAULT, widget=forms.RadioSelect,\n )\n longitude = forms.ChoiceField(\n choices=TWO, initial=DEFAULT, widget=forms.RadioSelect,\n )\n learners_pre = forms.ChoiceField(\n choices=TWO, initial=DEFAULT, widget=forms.RadioSelect,\n )\n learners_post = forms.ChoiceField(\n choices=TWO, initial=DEFAULT, widget=forms.RadioSelect,\n )\n instructors_pre = forms.ChoiceField(\n choices=TWO, initial=DEFAULT, widget=forms.RadioSelect,\n )\n instructors_post = forms.ChoiceField(\n choices=TWO, initial=DEFAULT, widget=forms.RadioSelect,\n )\n learners_longterm = forms.ChoiceField(\n choices=TWO, initial=DEFAULT, widget=forms.RadioSelect,\n )\n task_set = forms.ChoiceField(\n choices=THREE, initial=DEFAULT, widget=forms.RadioSelect,\n )\n comments = forms.ChoiceField(\n choices=THREE, initial=DEFAULT, widget=forms.RadioSelect,\n )\n\n\n# ----------------------------------------------------------\n# Action required forms\n\nclass ActionRequiredPrivacyForm(forms.ModelForm):\n data_privacy_agreement = forms.BooleanField(\n label='*I have read and agree to <a href='\n '\"https://docs.carpentries.org/topic_folders/policies/'\n 'privacy.html\" target=\"_blank\">'\n 'the data privacy policy of The Carpentries</a>.',\n required=True)\n\n helper = BootstrapHelper(add_cancel_button=False)\n\n class Meta:\n model = Person\n fields = [\n 'data_privacy_agreement',\n 'may_contact',\n 'publish_profile',\n ]\n\n\n# ----------------------------------------------------------\n# Signals\n\n@receiver(create_comment_signal, sender=EventForm)\n@receiver(create_comment_signal, sender=EventCreateForm)\n@receiver(create_comment_signal, sender=PersonCreateForm)\ndef form_saved_add_comment(sender, **kwargs):\n \"\"\"A receiver for custom form.save() signal. This is intended to save\n comment, entered as a form field, when creating a new object, and present\n it as automatic system Comment (from django_comments app).\"\"\"\n content_object = kwargs.get('content_object', None)\n comment = kwargs.get('comment', None)\n timestamp = kwargs.get('timestamp', datetime.now(timezone.utc))\n\n # only proceed if we have an actual object (that exists in DB), and\n # comment contents\n if content_object and comment and content_object.pk:\n site = Site.objects.get_current()\n Comment.objects.create(\n content_object=content_object,\n site=site,\n user=None,\n user_name='Automatic comment',\n submit_date=timestamp,\n comment=comment,\n )\n", "path": "amy/workshops/forms.py" } ]
[ { "content": "from datetime import datetime, timezone\nimport re\n\nfrom crispy_forms.helper import FormHelper\nfrom crispy_forms.layout import Layout, Div, HTML, Submit, Button, Field\nfrom django import forms\nfrom django.contrib.auth.models import Permission\nfrom django.contrib.sites.models import Site\nfrom django.dispatch import receiver\nfrom django.forms import (\n SelectMultiple,\n CheckboxSelectMultiple,\n TextInput,\n RadioSelect,\n)\nfrom django_comments.models import Comment\nfrom django_countries import Countries\nfrom django_countries.fields import CountryField\nfrom markdownx.fields import MarkdownxFormField\n\nfrom dashboard.models import Continent\nfrom workshops.models import (\n Award,\n Event,\n Lesson,\n GenderMixin,\n Person,\n Task,\n Airport,\n Organization,\n Membership,\n Tag,\n Language,\n Badge,\n)\n# this is used instead of Django Autocomplete Light widgets\n# see issue #1330: https://github.com/swcarpentry/amy/issues/1330\nfrom workshops.fields import (\n Select2Widget,\n Select2MultipleWidget,\n ModelSelect2Widget,\n ModelSelect2MultipleWidget,\n RadioSelectWithOther,\n)\nfrom workshops.signals import create_comment_signal\n\n\n#### settings for Select2\n# this makes it possible for autocomplete widget to fit in low-width sidebar\nSELECT2_SIDEBAR = {\n 'data-width': '100%',\n 'width': 'style',\n}\n\n\nclass BootstrapHelper(FormHelper):\n \"\"\"Layout and behavior for crispy-displayed forms.\"\"\"\n html5_required = True\n form_id = 'main-form'\n\n def __init__(self,\n form=None,\n duplicate_buttons_on_top=False,\n submit_label='Submit',\n submit_name='submit',\n use_get_method=False,\n wider_labels=False,\n add_submit_button=True,\n add_delete_button=False,\n add_cancel_button=True,\n additional_form_class='',\n form_tag=True,\n display_labels=True,\n form_action=None,\n form_id=None,\n include_media=True):\n \"\"\"\n `duplicate_buttons_on_top` -- Whether submit buttons should be\n displayed on both top and bottom of the form.\n\n `use_get_method` -- Force form to use GET instead of default POST.\n\n `wider_labels` -- SWCEventRequestForm and DCEventRequestForm have\n long labels, so this flag (set to True) is used to address that issue.\n\n `add_delete_button` -- displays additional red \"delete\" button.\n If you want to use it, you need to include in your template the\n following code:\n\n <form action=\"delete?next={{ request.GET.next|urlencode }}\" method=\"POST\" id=\"delete-form\">\n {% csrf_token %}\n </form>\n\n This is necessary, because delete button must be reassigned from the\n form using this helper to \"delete-form\". 
This reassignment is done\n via HTML5 \"form\" attribute on the \"delete\" button.\n\n `display_labels` -- Set to False, when your form has only submit\n buttons and you want these buttons to be aligned to left.\n \"\"\"\n\n super().__init__(form)\n\n self.attrs['role'] = 'form'\n\n self.duplicate_buttons_on_top = duplicate_buttons_on_top\n\n self.submit_label = submit_label\n\n if use_get_method:\n self.form_method = 'get'\n\n if wider_labels:\n assert display_labels\n self.label_class = 'col-12 col-lg-3'\n self.field_class = 'col-12 col-lg-9'\n elif display_labels:\n self.label_class = 'col-12 col-lg-2'\n self.field_class = 'col-12 col-lg-10'\n else:\n self.label_class = ''\n self.field_class = 'col-lg-12'\n\n if add_submit_button:\n self.add_input(Submit(submit_name, submit_label))\n\n if add_delete_button:\n self.add_input(Submit(\n 'delete', 'Delete',\n onclick='return '\n 'confirm(\"Are you sure you want to delete it?\");',\n form='delete-form',\n css_class='btn-danger float-right'))\n\n if add_cancel_button:\n self.add_input(Button(\n 'cancel', 'Cancel',\n css_class='btn-secondary float-right',\n onclick='window.history.back()'))\n\n # offset here adds horizontal centering for all these forms\n self.form_class = 'form-horizontal ' + additional_form_class\n\n self.form_tag = form_tag\n\n if form_action is not None:\n self.form_action = form_action\n\n if form_id is not None:\n self.form_id = form_id\n\n # don't prevent from loading media by default\n self.include_media = include_media\n\n def hr(self):\n \"\"\"Horizontal line as a separator in forms is used very often. But\n since from time to time the forms are changed (in terms of columns\n width), we should rather use one global <hr>...\"\"\"\n return '<hr class=\"col-12 mx-0 px-0\">'\n\n\nclass BootstrapHelperFilter(FormHelper):\n \"\"\"A differently shaped forms (more space-efficient) for use in sidebar as\n filter forms.\"\"\"\n form_method = 'get'\n form_id = 'filter-form'\n\n def __init__(self, form=None):\n super().__init__(form)\n self.attrs['role'] = 'form'\n self.inputs.append(Submit('', 'Submit'))\n\n\nclass BootstrapHelperFormsetInline(BootstrapHelper):\n \"\"\"For use in inline formsets.\"\"\"\n template = 'bootstrap/table_inline_formset.html'\n\n\nbootstrap_helper_filter = BootstrapHelperFilter()\nbootstrap_helper_inline_formsets = BootstrapHelperFormsetInline()\n\n\n# ----------------------------------------------------------\n# MixIns\n\nclass PrivacyConsentMixin(forms.Form):\n privacy_consent = forms.BooleanField(\n label='*I have read and agree to <a href='\n '\"https://docs.carpentries.org/topic_folders/policies/'\n 'privacy.html\" target=\"_blank\">'\n 'the data privacy policy of The Carpentries</a>.',\n required=True)\n\n\nclass WidgetOverrideMixin:\n def __init__(self, *args, **kwargs):\n widgets = kwargs.pop('widgets', {})\n super().__init__(*args, **kwargs)\n for field, widget in widgets.items():\n self.fields[field].widget = widget\n\n\n# ----------------------------------------------------------\n# Forms\n\n\ndef continent_list():\n \"\"\"This has to be as a callable, because otherwise Django evaluates this\n query and, if the database doesn't exist yet (e.g. 
during Travis-CI\n tests).\"\"\"\n return [('', '')] + list(Continent.objects.values_list('pk', 'name'))\n\n\nclass WorkshopStaffForm(forms.Form):\n '''Represent instructor matching form.'''\n\n latitude = forms.FloatField(label='Latitude',\n min_value=-90.0,\n max_value=90.0,\n required=False)\n longitude = forms.FloatField(label='Longitude',\n min_value=-180.0,\n max_value=180.0,\n required=False)\n airport = forms.ModelChoiceField(\n label='Airport',\n required=False,\n queryset=Airport.objects.all(),\n widget=ModelSelect2Widget(\n data_view='airport-lookup',\n attrs=SELECT2_SIDEBAR,\n )\n )\n languages = forms.ModelMultipleChoiceField(\n label='Languages',\n required=False,\n queryset=Language.objects.all(),\n widget=ModelSelect2MultipleWidget(\n data_view='language-lookup',\n attrs=SELECT2_SIDEBAR,\n )\n )\n\n country = forms.MultipleChoiceField(\n choices=list(Countries()), required=False,\n widget=Select2MultipleWidget,\n )\n\n continent = forms.ChoiceField(\n choices=continent_list, required=False, widget=Select2Widget,\n )\n\n lessons = forms.ModelMultipleChoiceField(\n queryset=Lesson.objects.all(),\n widget=SelectMultiple(),\n required=False,\n )\n\n badges = forms.ModelMultipleChoiceField(\n queryset=Badge.objects.instructor_badges(),\n widget=CheckboxSelectMultiple(),\n required=False,\n )\n\n is_trainer = forms.BooleanField(\n required=False,\n label='Has Trainer badge')\n\n GENDER_CHOICES = ((None, '---------'), ) + Person.GENDER_CHOICES\n gender = forms.ChoiceField(choices=GENDER_CHOICES, required=False)\n\n was_helper = forms.BooleanField(\n required=False, label='Was helper at least once before')\n was_organizer = forms.BooleanField(\n required=False, label='Was organizer at least once before')\n is_in_progress_trainee = forms.BooleanField(\n required=False, label='Is an in-progress instructor trainee')\n\n def __init__(self, *args, **kwargs):\n '''Build form layout dynamically.'''\n super().__init__(*args, **kwargs)\n\n self.helper = FormHelper(self)\n self.helper.form_method = 'get'\n self.helper.layout = Layout(\n Div(\n Div(\n HTML('<h5 class=\"card-title\">Location</h5>'),\n 'airport',\n HTML('<hr>'),\n 'country',\n HTML('<hr>'),\n 'continent',\n HTML('<hr>'),\n 'latitude',\n 'longitude',\n css_class='card-body'\n ),\n css_class='card',\n ),\n 'badges',\n 'is_trainer',\n HTML('<hr>'),\n 'was_helper',\n 'was_organizer',\n 'is_in_progress_trainee',\n 'languages',\n 'gender',\n 'lessons',\n Submit('', 'Submit'),\n )\n\n def clean(self):\n cleaned_data = super().clean()\n lat = bool(cleaned_data.get('latitude'))\n lng = bool(cleaned_data.get('longitude'))\n airport = bool(cleaned_data.get('airport'))\n country = bool(cleaned_data.get('country'))\n latlng = lat and lng\n\n # if searching by coordinates, then there must be both lat & lng\n # present\n if lat ^ lng:\n raise forms.ValidationError(\n 'Must specify both latitude and longitude if searching by '\n 'coordinates')\n\n # User must search by airport, or country, or coordinates, or none\n # of them. 
Sum of boolean elements must be equal 0 (if general search)\n # or 1 (if searching by airport OR country OR lat/lng).\n if sum([airport, country, latlng]) not in [0, 1]:\n raise forms.ValidationError(\n 'Must specify an airport OR a country, OR use coordinates, OR '\n 'none of them.')\n return cleaned_data\n\n\nclass BulkUploadCSVForm(forms.Form):\n \"\"\"This form allows to upload a single file; it's used by person bulk\n upload and training request manual score bulk upload.\"\"\"\n file = forms.FileField()\n\n\nclass SearchForm(forms.Form):\n '''Represent general searching form.'''\n\n term = forms.CharField(label='Term',\n max_length=100)\n in_organizations = forms.BooleanField(label='in organizations',\n required=False,\n initial=True)\n in_events = forms.BooleanField(label='in events',\n required=False,\n initial=True)\n in_persons = forms.BooleanField(label='in persons',\n required=False,\n initial=True)\n in_airports = forms.BooleanField(label='in airports',\n required=False,\n initial=True)\n in_training_requests = forms.BooleanField(label='in training requests',\n required=False,\n initial=True)\n\n in_comments = forms.BooleanField(label='in comments',\n required=False,\n initial=True)\n\n helper = BootstrapHelper(\n add_cancel_button=False,\n use_get_method=True,\n )\n\n\nclass EventForm(forms.ModelForm):\n host = forms.ModelChoiceField(\n label='Host',\n required=True,\n help_text=Event._meta.get_field('host').help_text,\n queryset=Organization.objects.all(),\n widget=ModelSelect2Widget(data_view='organization-lookup')\n )\n\n administrator = forms.ModelChoiceField(\n label='Administrator',\n required=False,\n help_text=Event._meta.get_field('administrator').help_text,\n queryset=Organization.objects.administrators(),\n widget=ModelSelect2Widget(data_view='administrator-org-lookup'),\n )\n\n assigned_to = forms.ModelChoiceField(\n label='Assigned to',\n required=False,\n queryset=Person.objects.all(),\n widget=ModelSelect2Widget(data_view='admin-lookup')\n )\n\n language = forms.ModelChoiceField(\n label='Language',\n required=False,\n queryset=Language.objects.all(),\n widget=ModelSelect2Widget(data_view='language-lookup')\n )\n\n country = CountryField().formfield(\n required=False,\n help_text=Event._meta.get_field('country').help_text,\n widget=Select2Widget,\n )\n\n comment = MarkdownxFormField(\n label='Comment',\n help_text='Any content in here will be added to comments after this '\n 'event is saved.',\n widget=forms.Textarea,\n required=False,\n )\n\n helper = BootstrapHelper(add_cancel_button=False,\n duplicate_buttons_on_top=True)\n\n class Meta:\n model = Event\n fields = [\n 'slug',\n 'completed',\n 'start',\n 'end',\n 'host',\n 'administrator',\n 'assigned_to',\n 'tags',\n 'url',\n 'language',\n 'reg_key',\n 'venue',\n 'manual_attendance',\n 'contact',\n 'country',\n 'address',\n 'latitude',\n 'longitude',\n 'open_TTT_applications',\n 'curricula',\n 'lessons',\n 'comment',\n ]\n widgets = {\n 'manual_attendance': TextInput,\n 'latitude': TextInput,\n 'longitude': TextInput,\n 'invoice_status': RadioSelect,\n 'tags': SelectMultiple(attrs={\n 'size': Tag.ITEMS_VISIBLE_IN_SELECT_WIDGET\n }),\n 'curricula': CheckboxSelectMultiple(),\n 'lessons': CheckboxSelectMultiple(),\n }\n\n class Media:\n # thanks to this, {{ form.media }} in the template will generate\n # a <link href=\"\"> (for CSS files) or <script src=\"\"> (for JS files)\n js = (\n 'date_yyyymmdd.js',\n 'edit_from_url.js',\n 'online_country.js',\n )\n\n def __init__(self, *args, **kwargs):\n show_lessons = 
kwargs.pop('show_lessons', False)\n super().__init__(*args, **kwargs)\n\n self.helper.layout = Layout(\n Field('slug', placeholder='YYYY-MM-DD-location'),\n 'completed',\n Field('start', placeholder='YYYY-MM-DD'),\n Field('end', placeholder='YYYY-MM-DD'),\n 'host',\n 'administrator',\n 'assigned_to',\n 'tags',\n 'open_TTT_applications',\n 'curricula',\n 'url',\n 'language',\n 'reg_key',\n 'manual_attendance',\n 'contact',\n Div(\n Div(HTML('Location details'), css_class='card-header'),\n Div('country',\n 'venue',\n 'address',\n 'latitude',\n 'longitude',\n css_class='card-body'),\n css_class='card mb-2'\n ),\n 'comment',\n )\n\n # if we want to show lessons, we need to alter existing layout\n # otherwise we should remove the field so it doesn't break validation\n if show_lessons:\n self.helper.layout.insert(\n # insert AFTER the curricula\n self.helper.layout.fields.index('curricula') + 1,\n 'lessons',\n )\n else:\n del self.fields['lessons']\n\n def clean_slug(self):\n # Ensure slug is in \"YYYY-MM-DD-location\" format\n data = self.cleaned_data['slug']\n match = re.match(r'(\\d{4}|x{4})-(\\d{2}|x{2})-(\\d{2}|x{2})-.+', data)\n if not match:\n raise forms.ValidationError('Slug must be in \"YYYY-MM-DD-location\"'\n ' format, where \"YYYY\", \"MM\", \"DD\" can'\n ' be unspecified (ie. \"xx\").')\n return data\n\n def clean_end(self):\n \"\"\"Ensure end >= start.\"\"\"\n start = self.cleaned_data['start']\n end = self.cleaned_data['end']\n\n if start and end and end < start:\n raise forms.ValidationError('Must not be earlier than start date.')\n return end\n\n def clean_open_TTT_applications(self):\n \"\"\"Ensure there's a TTT tag applied to the event, if the\n `open_TTT_applications` is True.\"\"\"\n open_TTT_applications = self.cleaned_data['open_TTT_applications']\n tags = self.cleaned_data.get('tags', None)\n error_msg = 'You cannot open applications on a non-TTT event.'\n\n if open_TTT_applications and tags:\n # find TTT tag\n TTT_tag = False\n for tag in tags:\n if tag.name == 'TTT':\n TTT_tag = True\n break\n\n if not TTT_tag:\n raise forms.ValidationError(error_msg)\n\n elif open_TTT_applications:\n raise forms.ValidationError(error_msg)\n\n return open_TTT_applications\n\n def clean_curricula(self):\n \"\"\"Validate tags when some curricula are selected.\"\"\"\n curricula = self.cleaned_data['curricula']\n tags = self.cleaned_data['tags']\n\n try:\n expected_tags = []\n for c in curricula:\n if c.active and c.carpentry:\n expected_tags.append(c.carpentry)\n elif c.active and c.mix_match:\n expected_tags.append('Circuits')\n except (ValueError, TypeError):\n expected_tags = []\n\n for tag in expected_tags:\n if not tags.filter(name=tag):\n raise forms.ValidationError(\n \"You must add tags corresponding to these curricula.\")\n\n return curricula\n\n def clean_manual_attendance(self):\n \"\"\"Regression: #1608 - fix 500 server error when field is cleared.\"\"\"\n manual_attendance = self.cleaned_data['manual_attendance'] or 0\n return manual_attendance\n\n def save(self, *args, **kwargs):\n res = super().save(*args, **kwargs)\n\n create_comment_signal.send(sender=self.__class__,\n content_object=res,\n comment=self.cleaned_data['comment'],\n timestamp=None)\n\n return res\n\n\nclass EventCreateForm(EventForm):\n comment = MarkdownxFormField(\n label='Comment',\n help_text='This will be added to comments after the event is created.',\n widget=forms.Textarea,\n required=False,\n )\n\n\nclass TaskForm(WidgetOverrideMixin, forms.ModelForm):\n\n helper = 
BootstrapHelper(add_cancel_button=False)\n\n SEAT_MEMBERSHIP_HELP_TEXT = (\n '{}<br><b>Hint:</b> you can use input format YYYY-MM-DD to display '\n 'memberships available on that date.'.format(\n Task._meta.get_field('seat_membership').help_text\n )\n )\n seat_membership = forms.ModelChoiceField(\n label=Task._meta.get_field('seat_membership').verbose_name,\n help_text=SEAT_MEMBERSHIP_HELP_TEXT,\n required=False,\n queryset=Membership.objects.all(),\n widget=ModelSelect2Widget(\n data_view='membership-lookup',\n attrs=SELECT2_SIDEBAR,\n )\n )\n\n class Meta:\n model = Task\n fields = [\n 'event', 'person', 'role', 'title', 'url',\n 'seat_membership', 'seat_open_training',\n ]\n widgets = {\n 'person': ModelSelect2Widget(data_view='person-lookup',\n attrs=SELECT2_SIDEBAR),\n 'event': ModelSelect2Widget(data_view='event-lookup',\n attrs=SELECT2_SIDEBAR),\n }\n\n\nclass PersonForm(forms.ModelForm):\n airport = forms.ModelChoiceField(\n label='Airport',\n required=False,\n queryset=Airport.objects.all(),\n widget=ModelSelect2Widget(data_view='airport-lookup')\n )\n languages = forms.ModelMultipleChoiceField(\n label='Languages',\n required=False,\n queryset=Language.objects.all(),\n widget=ModelSelect2MultipleWidget(data_view='language-lookup')\n )\n\n helper = BootstrapHelper(add_cancel_button=False,\n duplicate_buttons_on_top=True)\n\n class Meta:\n model = Person\n # don't display the 'password', 'user_permissions',\n # 'groups' or 'is_superuser' fields\n # + reorder fields\n fields = [\n 'username',\n 'personal',\n 'middle',\n 'family',\n 'may_contact',\n 'publish_profile',\n 'lesson_publication_consent',\n 'data_privacy_agreement',\n 'email',\n 'secondary_email',\n 'gender',\n 'gender_other',\n 'country',\n 'airport',\n 'affiliation',\n 'github',\n 'twitter',\n 'url',\n 'occupation',\n 'orcid',\n 'user_notes',\n 'lessons',\n 'domains',\n 'languages',\n ]\n\n widgets = {\n 'country': Select2Widget,\n 'gender': RadioSelectWithOther('gender_other'),\n }\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n\n # set up a layout object for the helper\n self.helper.layout = self.helper.build_default_layout(self)\n\n # set up `*WithOther` widgets so that they can display additional\n # fields inline\n self['gender'].field.widget.other_field = self['gender_other']\n\n # remove additional fields\n self.helper.layout.fields.remove('gender_other')\n\n def clean(self):\n super().clean()\n errors = dict()\n\n # 1: require \"other gender\" field if \"other\" was selected in\n # \"gender\" field\n gender = self.cleaned_data.get('gender', '')\n gender_other = self.cleaned_data.get('gender_other', '')\n if gender == GenderMixin.OTHER and not gender_other:\n errors['gender'] = ValidationError(\"This field is required.\")\n elif gender != GenderMixin.OTHER and gender_other:\n errors['gender'] = ValidationError(\n 'If you entered data in \"Other\" field, please select that '\n \"option.\")\n\n # raise errors if any present\n if errors:\n raise ValidationError(errors)\n\n\nclass PersonCreateForm(PersonForm):\n comment = MarkdownxFormField(\n label='Comment',\n help_text='This will be added to comments after the person is '\n 'created.',\n widget=forms.Textarea,\n required=False,\n )\n\n class Meta(PersonForm.Meta):\n # remove 'username' field as it's being populated after form save\n # in the `views.PersonCreate.form_valid`\n fields = PersonForm.Meta.fields.copy()\n fields.remove('username')\n fields.append('comment')\n\n\nclass PersonPermissionsForm(forms.ModelForm):\n helper = 
BootstrapHelper(add_cancel_button=False)\n\n user_permissions = forms.ModelMultipleChoiceField(\n label=Person._meta.get_field('user_permissions').verbose_name,\n help_text=Person._meta.get_field('user_permissions').help_text,\n required=False,\n queryset=Permission.objects.select_related('content_type'),\n )\n user_permissions.widget.attrs.update({'class': 'resizable-vertical',\n 'size': '20'})\n\n class Meta:\n model = Person\n # only display administration-related fields: groups, permissions,\n # being a superuser or being active (== ability to log in)\n fields = [\n 'is_active',\n 'is_superuser',\n 'user_permissions',\n 'groups',\n ]\n\n\nclass PersonsSelectionForm(forms.Form):\n person_a = forms.ModelChoiceField(\n label='Person From',\n required=True,\n queryset=Person.objects.all(),\n widget=ModelSelect2Widget(data_view='person-lookup')\n )\n\n person_b = forms.ModelChoiceField(\n label='Person To',\n required=True,\n queryset=Person.objects.all(),\n widget=ModelSelect2Widget(data_view='person-lookup')\n )\n\n helper = BootstrapHelper(use_get_method=True, add_cancel_button=False)\n\n\nclass PersonsMergeForm(forms.Form):\n TWO = (\n ('obj_a', 'Use A'),\n ('obj_b', 'Use B'),\n )\n THREE = TWO + (('combine', 'Combine'), )\n DEFAULT = 'obj_a'\n\n person_a = forms.ModelChoiceField(queryset=Person.objects.all(),\n widget=forms.HiddenInput)\n\n person_b = forms.ModelChoiceField(queryset=Person.objects.all(),\n widget=forms.HiddenInput)\n\n id = forms.ChoiceField(\n choices=TWO, initial=DEFAULT, widget=forms.RadioSelect,\n )\n username = forms.ChoiceField(\n choices=TWO, initial=DEFAULT, widget=forms.RadioSelect,\n )\n personal = forms.ChoiceField(\n choices=TWO, initial=DEFAULT, widget=forms.RadioSelect,\n )\n middle = forms.ChoiceField(\n choices=TWO, initial=DEFAULT, widget=forms.RadioSelect,\n )\n family = forms.ChoiceField(\n choices=TWO, initial=DEFAULT, widget=forms.RadioSelect,\n )\n email = forms.ChoiceField(\n choices=TWO, initial=DEFAULT, widget=forms.RadioSelect,\n )\n secondary_email = forms.ChoiceField(\n choices=TWO, initial=DEFAULT, widget=forms.RadioSelect,\n )\n may_contact = forms.ChoiceField(\n choices=TWO, initial=DEFAULT, widget=forms.RadioSelect,\n )\n publish_profile = forms.ChoiceField(\n choices=TWO, initial=DEFAULT, widget=forms.RadioSelect,\n )\n data_privacy_agreement = forms.ChoiceField(\n choices=TWO, initial=DEFAULT, widget=forms.RadioSelect,\n )\n gender = forms.ChoiceField(\n choices=TWO, initial=DEFAULT, widget=forms.RadioSelect,\n )\n gender_other = forms.ChoiceField(\n choices=TWO, initial=DEFAULT, widget=forms.RadioSelect,\n )\n airport = forms.ChoiceField(\n choices=TWO, initial=DEFAULT, widget=forms.RadioSelect,\n )\n github = forms.ChoiceField(\n choices=TWO, initial=DEFAULT, widget=forms.RadioSelect,\n )\n twitter = forms.ChoiceField(\n choices=TWO, initial=DEFAULT, widget=forms.RadioSelect,\n )\n url = forms.ChoiceField(\n choices=TWO, initial=DEFAULT, widget=forms.RadioSelect,\n )\n affiliation = forms.ChoiceField(\n choices=TWO, initial=DEFAULT, widget=forms.RadioSelect,\n )\n occupation = forms.ChoiceField(\n choices=TWO, initial=DEFAULT, widget=forms.RadioSelect,\n )\n orcid = forms.ChoiceField(\n choices=TWO, initial=DEFAULT, widget=forms.RadioSelect,\n )\n award_set = forms.ChoiceField(\n choices=THREE, initial=DEFAULT, widget=forms.RadioSelect,\n )\n qualification_set = forms.ChoiceField(\n choices=THREE, initial=DEFAULT, widget=forms.RadioSelect,\n label='Lessons',\n )\n domains = forms.ChoiceField(\n choices=THREE, initial=DEFAULT, 
widget=forms.RadioSelect,\n )\n languages = forms.ChoiceField(\n choices=THREE, initial=DEFAULT, widget=forms.RadioSelect,\n )\n task_set = forms.ChoiceField(\n choices=THREE, initial=DEFAULT, widget=forms.RadioSelect,\n )\n is_active = forms.ChoiceField(\n choices=TWO, initial=DEFAULT, widget=forms.RadioSelect,\n )\n trainingprogress_set = forms.ChoiceField(\n choices=THREE, initial=DEFAULT, widget=forms.RadioSelect,\n )\n comment_comments = forms.ChoiceField(\n choices=THREE, initial=DEFAULT, widget=forms.RadioSelect,\n )\n comments = forms.ChoiceField(\n choices=THREE, initial=DEFAULT, widget=forms.RadioSelect,\n )\n\n\nclass AwardForm(WidgetOverrideMixin, forms.ModelForm):\n\n helper = BootstrapHelper(add_cancel_button=False)\n\n class Meta:\n model = Award\n fields = '__all__'\n widgets = {\n 'person': ModelSelect2Widget(data_view='person-lookup',\n attrs=SELECT2_SIDEBAR),\n 'event': ModelSelect2Widget(data_view='event-lookup',\n attrs=SELECT2_SIDEBAR),\n 'awarded_by': ModelSelect2Widget(data_view='admin-lookup',\n attrs=SELECT2_SIDEBAR),\n }\n\n\nclass EventLookupForm(forms.Form):\n event = forms.ModelChoiceField(\n label='Event',\n required=True,\n queryset=Event.objects.all(),\n widget=ModelSelect2Widget(data_view='event-lookup')\n )\n\n helper = BootstrapHelper(add_cancel_button=False)\n\n\nclass PersonLookupForm(forms.Form):\n person = forms.ModelChoiceField(\n label='Person',\n required=True,\n queryset=Person.objects.all(),\n widget=ModelSelect2Widget(data_view='person-lookup')\n )\n\n helper = BootstrapHelper(use_get_method=True, add_cancel_button=False)\n\n\nclass AdminLookupForm(forms.Form):\n person = forms.ModelChoiceField(\n label='Administrator',\n required=True,\n queryset=Person.objects.all(),\n widget=ModelSelect2Widget(\n data_view='admin-lookup',\n attrs=SELECT2_SIDEBAR,\n ),\n )\n\n helper = BootstrapHelper(add_cancel_button=False)\n\n\nclass EventsSelectionForm(forms.Form):\n event_a = forms.ModelChoiceField(\n label='Event A',\n required=True,\n queryset=Event.objects.all(),\n widget=ModelSelect2Widget(data_view='event-lookup')\n )\n\n event_b = forms.ModelChoiceField(\n label='Event B',\n required=True,\n queryset=Event.objects.all(),\n widget=ModelSelect2Widget(data_view='event-lookup')\n )\n\n helper = BootstrapHelper(use_get_method=True, add_cancel_button=False)\n\n\nclass EventsMergeForm(forms.Form):\n TWO = (\n ('obj_a', 'Use A'),\n ('obj_b', 'Use B'),\n )\n THREE = TWO + (('combine', 'Combine'), )\n DEFAULT = 'obj_a'\n\n event_a = forms.ModelChoiceField(queryset=Event.objects.all(),\n widget=forms.HiddenInput)\n\n event_b = forms.ModelChoiceField(queryset=Event.objects.all(),\n widget=forms.HiddenInput)\n\n id = forms.ChoiceField(\n choices=TWO, initial=DEFAULT, widget=forms.RadioSelect,\n )\n slug = forms.ChoiceField(\n choices=TWO, initial=DEFAULT, widget=forms.RadioSelect,\n )\n completed = forms.ChoiceField(\n choices=TWO, initial=DEFAULT, widget=forms.RadioSelect,\n )\n assigned_to = forms.ChoiceField(\n choices=TWO, initial=DEFAULT, widget=forms.RadioSelect,\n )\n start = forms.ChoiceField(\n choices=TWO, initial=DEFAULT, widget=forms.RadioSelect,\n )\n end = forms.ChoiceField(\n choices=TWO, initial=DEFAULT, widget=forms.RadioSelect,\n )\n host = forms.ChoiceField(\n choices=TWO, initial=DEFAULT, widget=forms.RadioSelect,\n )\n administrator = forms.ChoiceField(\n choices=TWO, initial=DEFAULT, widget=forms.RadioSelect,\n )\n tags = forms.ChoiceField(\n choices=THREE, initial=DEFAULT, widget=forms.RadioSelect,\n )\n url = forms.ChoiceField(\n 
choices=TWO, initial=DEFAULT, widget=forms.RadioSelect,\n )\n language = forms.ChoiceField(\n choices=TWO, initial=DEFAULT, widget=forms.RadioSelect,\n )\n reg_key = forms.ChoiceField(\n choices=TWO, initial=DEFAULT, widget=forms.RadioSelect,\n )\n admin_fee = forms.ChoiceField(\n choices=TWO, initial=DEFAULT, widget=forms.RadioSelect,\n )\n invoice_status = forms.ChoiceField(\n choices=TWO, initial=DEFAULT, widget=forms.RadioSelect,\n )\n manual_attendance = forms.ChoiceField(\n choices=TWO, initial=DEFAULT, widget=forms.RadioSelect,\n )\n contact = forms.ChoiceField(\n choices=THREE, initial=DEFAULT, widget=forms.RadioSelect,\n )\n country = forms.ChoiceField(\n choices=TWO, initial=DEFAULT, widget=forms.RadioSelect,\n )\n venue = forms.ChoiceField(\n choices=THREE, initial=DEFAULT, widget=forms.RadioSelect,\n )\n address = forms.ChoiceField(\n choices=THREE, initial=DEFAULT, widget=forms.RadioSelect,\n )\n latitude = forms.ChoiceField(\n choices=TWO, initial=DEFAULT, widget=forms.RadioSelect,\n )\n longitude = forms.ChoiceField(\n choices=TWO, initial=DEFAULT, widget=forms.RadioSelect,\n )\n learners_pre = forms.ChoiceField(\n choices=TWO, initial=DEFAULT, widget=forms.RadioSelect,\n )\n learners_post = forms.ChoiceField(\n choices=TWO, initial=DEFAULT, widget=forms.RadioSelect,\n )\n instructors_pre = forms.ChoiceField(\n choices=TWO, initial=DEFAULT, widget=forms.RadioSelect,\n )\n instructors_post = forms.ChoiceField(\n choices=TWO, initial=DEFAULT, widget=forms.RadioSelect,\n )\n learners_longterm = forms.ChoiceField(\n choices=TWO, initial=DEFAULT, widget=forms.RadioSelect,\n )\n task_set = forms.ChoiceField(\n choices=THREE, initial=DEFAULT, widget=forms.RadioSelect,\n )\n comments = forms.ChoiceField(\n choices=THREE, initial=DEFAULT, widget=forms.RadioSelect,\n )\n\n\n# ----------------------------------------------------------\n# Action required forms\n\nclass ActionRequiredPrivacyForm(forms.ModelForm):\n data_privacy_agreement = forms.BooleanField(\n label='*I have read and agree to <a href='\n '\"https://docs.carpentries.org/topic_folders/policies/'\n 'privacy.html\" target=\"_blank\">'\n 'the data privacy policy of The Carpentries</a>.',\n required=True)\n\n helper = BootstrapHelper(add_cancel_button=False)\n\n class Meta:\n model = Person\n fields = [\n 'data_privacy_agreement',\n 'may_contact',\n 'publish_profile',\n ]\n\n\n# ----------------------------------------------------------\n# Signals\n\n@receiver(create_comment_signal, sender=EventForm)\n@receiver(create_comment_signal, sender=EventCreateForm)\n@receiver(create_comment_signal, sender=PersonCreateForm)\ndef form_saved_add_comment(sender, **kwargs):\n \"\"\"A receiver for custom form.save() signal. This is intended to save\n comment, entered as a form field, when creating a new object, and present\n it as automatic system Comment (from django_comments app).\"\"\"\n content_object = kwargs.get('content_object', None)\n comment = kwargs.get('comment', None)\n timestamp = kwargs.get('timestamp', datetime.now(timezone.utc))\n\n # only proceed if we have an actual object (that exists in DB), and\n # comment contents\n if content_object and comment and content_object.pk:\n site = Site.objects.get_current()\n Comment.objects.create(\n content_object=content_object,\n site=site,\n user=None,\n user_name='Automatic comment',\n submit_date=timestamp,\n comment=comment,\n )\n", "path": "amy/workshops/forms.py" } ]
diff --git a/amy/workshops/forms.py b/amy/workshops/forms.py
index a1da9863d..03a4051e2 100644
--- a/amy/workshops/forms.py
+++ b/amy/workshops/forms.py
@@ -575,6 +575,11 @@ def clean_curricula(self):

         return curricula

+    def clean_manual_attendance(self):
+        """Regression: #1608 - fix 500 server error when field is cleared."""
+        manual_attendance = self.cleaned_data['manual_attendance'] or 0
+        return manual_attendance
+
     def save(self, *args, **kwargs):
         res = super().save(*args, **kwargs)

diff --git a/amy/workshops/tests/test_event.py b/amy/workshops/tests/test_event.py
index 41af7fdf2..25fb4eb4c 100644
--- a/amy/workshops/tests/test_event.py
+++ b/amy/workshops/tests/test_event.py
@@ -567,6 +567,24 @@ def test_negative_manual_attendance(self):
         f = EventForm(data)
         self.assertTrue(f.is_valid())

+    def test_empty_manual_attendance(self):
+        """Ensure we don't get 500 server error when field is left with empty
+        value.
+
+        This is a regression test for
+        https://github.com/swcarpentry/amy/issues/1608."""
+
+        data = {
+            'slug': '2016-06-30-test-event',
+            'host': self.test_host.id,
+            'tags': [self.test_tag.id],
+            'manual_attendance': '',
+        }
+        f = EventForm(data)
+        self.assertTrue(f.is_valid())
+        event = f.save()
+        self.assertEqual(event.manual_attendance, 0)
+
     def test_number_of_attendees_increasing(self):
         """Ensure event.attendance gets bigger after adding new learners."""
         event = Event.objects.get(slug='test_event_0')
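The patch above fixes the 500 error by coalescing a cleared `manual_attendance` value to `0` inside a `clean_<field>` hook, the standard Django place for per-field normalisation. Below is a rough standalone sketch of the same pattern; the `AttendanceForm` name and the minimal `settings.configure()` call are illustrative assumptions, not part of the AMY codebase:

```python
import django
from django.conf import settings

# Minimal configuration so the form machinery runs outside a project;
# a real project would already have settings loaded.
settings.configure()
django.setup()

from django import forms


class AttendanceForm(forms.Form):
    # required=False means a cleared input cleans to None, which is what
    # used to propagate to the model and blow up on save.
    manual_attendance = forms.IntegerField(required=False, min_value=0)

    def clean_manual_attendance(self):
        # Mirror the fix in the diff: treat an empty submission as 0.
        return self.cleaned_data.get('manual_attendance') or 0


form = AttendanceForm(data={'manual_attendance': ''})
assert form.is_valid()
assert form.cleaned_data['manual_attendance'] == 0
```

The regression test added alongside the fix submits an empty string for `manual_attendance` and asserts the saved event ends up with `0`, which is the behaviour this sketch mimics.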
mdn__kuma-5638
Why does SSR rendering /en-US/docs/Web require 54 SQL queries?

<img width="1500" alt="Screen Shot 2019-08-13 at 3 05 10 PM" src="https://user-images.githubusercontent.com/26739/62969706-da54f880-bddb-11e9-8405-dd1ecd25b657.png">
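One generic way to see where a query count like that comes from (ordinary Django tooling, nothing specific to this repository) is to capture the SQL issued while the document payload is built and look for near-identical statements repeated once per translation. In the hypothetical shell or test session below, `doc` is assumed to be a `Document` instance that has already been fetched:

```python
from django.db import connection
from django.test.utils import CaptureQueriesContext

from kuma.api.v1.views import document_api_data

# Assumes a Django shell/test against the kuma database and a `doc`
# instance loaded beforehand, e.g. the /en-US/docs/Web document.
with CaptureQueriesContext(connection) as ctx:
    document_api_data(doc)

print(len(ctx.captured_queries), "queries")
for q in ctx.captured_queries:
    print(q["sql"][:120])  # repeated per-row SELECTs usually point at an N+1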
[ { "content": "from django.conf import settings\nfrom django.http import HttpResponsePermanentRedirect, JsonResponse\nfrom django.shortcuts import get_object_or_404\nfrom django.utils.translation import activate, ugettext as _\nfrom django.views.decorators.cache import never_cache\nfrom django.views.decorators.http import require_GET\nfrom elasticsearch_dsl import Q, query\nfrom rest_framework import status\nfrom rest_framework.decorators import api_view\nfrom rest_framework.response import Response\nfrom waffle.decorators import waffle_flag\nfrom waffle.models import Flag, Sample, Switch\n\nfrom kuma.api.v1.serializers import BCSignalSerializer\nfrom kuma.core.urlresolvers import reverse\nfrom kuma.users.templatetags.jinja_helpers import gravatar_url\nfrom kuma.wiki.jobs import DocumentContributorsJob\nfrom kuma.wiki.models import Document\nfrom kuma.wiki.search import WikiDocumentType\nfrom kuma.wiki.templatetags.jinja_helpers import absolutify\n\n\n@never_cache\n@require_GET\ndef doc(request, locale, slug):\n \"\"\"\n Return a JSON object that includes document content and metadata\n for the document specified by the locale and path. Raises a 404\n error if no such document exists. This is an API with URL\n /api/v1/doc/<locale>/<path>\n \"\"\"\n # TODO: This API endpoint probably needs to handle redirect documents\n # and documents that fall back to the en-US locale. See\n # the document() function in wiki/views/document.py for a model to follow.\n\n # Since we don't have the locale at the start of the path, our\n # locale middleware can't set the translation language correctly\n # and we need to do it explicitly. (We need to know the language\n # so that we can provide translated language names for the\n # translations menu.)\n activate(locale)\n document = get_object_or_404(Document, locale=locale, slug=slug)\n\n redirect = get_content_based_redirect(document)\n if redirect:\n redirect_url, is_redirect_to_document = redirect\n if is_redirect_to_document:\n return HttpResponsePermanentRedirect(redirect_url)\n return JsonResponse(document_api_data(redirect_url=redirect_url))\n\n return JsonResponse(document_api_data(document))\n\n\ndef get_s3_key(doc=None, locale=None, slug=None,\n prefix_with_forward_slash=False):\n if doc:\n locale, slug = doc.locale, doc.slug\n key = reverse('api.v1.doc', args=(locale, slug))\n if prefix_with_forward_slash:\n # Redirects within an S3 bucket must be prefixed with \"/\".\n return key\n return key.lstrip('/')\n\n\ndef get_cdn_key(locale, slug):\n \"\"\"Given a document's locale and slug, return the \"key\" for the CDN.\"\"\"\n return get_s3_key(locale=locale, slug=slug, prefix_with_forward_slash=True)\n\n\ndef get_content_based_redirect(document):\n \"\"\"\n Returns None if the document is not a content-based redirect, otherwise a\n tuple pair comprising the redirect URL as well as a boolean value. The\n boolean value will be True if this is a redirect to another document,\n otherwise False. If the document is a redirect to another document or a\n redirect to the homepage, a relative URL will be returned, otherwise it\n will be a full URL to the wiki site.\n \"\"\"\n redirect_url = document.get_redirect_url()\n if redirect_url and (redirect_url != document.get_absolute_url()):\n redirect_document = document.get_redirect_document(id_only=False)\n if redirect_document:\n # This is a redirect to another document.\n return (\n get_s3_key(redirect_document, prefix_with_forward_slash=True),\n True\n )\n # This is a redirect to non-document page. 
For now, if it's the home\n # page, return a relative path (so we stay on the read-only domain),\n # otherwise return the full URL for the wiki site.\n locale = document.locale\n is_home_page = (redirect_url in\n ('/', '/' + locale, '/{}/'.format(locale)))\n if is_home_page:\n # Let's return a relative URL to the home page for this locale.\n return ('/{}/'.format(locale), False)\n # Otherwise, let's return a full URL to the Wiki site.\n return (absolutify(redirect_url, for_wiki_site=True), False)\n return None\n\n\ndef document_api_data(doc=None, ensure_contributors=False, redirect_url=None):\n \"\"\"\n Returns the JSON data for the document for the document API.\n \"\"\"\n if redirect_url:\n return {\n 'documentData': None,\n 'redirectURL': redirect_url,\n }\n\n job = DocumentContributorsJob()\n # If \"ensure_contributors\" is True, we need the contributors since the\n # result will likely be cached, so we'll set \"fetch_on_miss\" and wait\n # for the result if it's not already available or stale.\n job.fetch_on_miss = ensure_contributors\n contributors = [c['username'] for c in job.get(doc.pk)]\n\n # The original english slug for this document, for google analytics\n if doc.locale == 'en-US':\n en_slug = doc.slug\n elif doc.parent_id and doc.parent.locale == 'en-US':\n en_slug = doc.parent.slug\n else:\n en_slug = ''\n\n other_translations = doc.get_other_translations(\n fields=('locale', 'slug', 'title'))\n available_locales = (\n set([doc.locale]) | set(t.locale for t in other_translations))\n\n return {\n 'documentData': {\n 'locale': doc.locale,\n 'slug': doc.slug,\n 'enSlug': en_slug,\n 'id': doc.id,\n 'title': doc.title,\n 'summary': doc.get_summary_html(),\n 'language': doc.language,\n 'hrefLang': doc.get_hreflang(available_locales),\n 'absoluteURL': doc.get_absolute_url(),\n 'editURL': absolutify(doc.get_edit_url(), for_wiki_site=True),\n 'translateURL': (\n absolutify(\n reverse(\n 'wiki.select_locale',\n args=(doc.slug,),\n locale=doc.locale,\n ),\n for_wiki_site=True\n )\n if doc.is_localizable else\n None\n ),\n 'bodyHTML': doc.get_body_html(),\n 'quickLinksHTML': doc.get_quick_links_html(),\n 'tocHTML': doc.get_toc_html(),\n 'parents': [\n {\n 'url': d.get_absolute_url(),\n 'title': d.title\n } for d in doc.parents\n ],\n 'translations': [\n {\n 'language': t.language,\n 'hrefLang': t.get_hreflang(available_locales),\n 'localizedLanguage': _(settings.LOCALES[t.locale].english),\n 'locale': t.locale,\n 'url': t.get_absolute_url(),\n 'title': t.title\n } for t in other_translations\n ],\n 'contributors': contributors,\n 'lastModified': (doc.current_revision and\n doc.current_revision.created.isoformat()),\n 'lastModifiedBy': (doc.current_revision and\n str(doc.current_revision.creator))\n },\n 'redirectURL': None,\n }\n\n\n@never_cache\n@require_GET\ndef whoami(request):\n \"\"\"\n Return a JSON object representing the current user, either\n authenticated or anonymous.\n \"\"\"\n user = request.user\n if user.is_authenticated:\n data = {\n 'username': user.username,\n 'timezone': user.timezone,\n 'is_authenticated': True,\n 'is_staff': user.is_staff,\n 'is_superuser': user.is_superuser,\n 'is_beta_tester': user.is_beta_tester,\n 'gravatar_url': {\n 'small': gravatar_url(user.email, size=50),\n 'large': gravatar_url(user.email, size=200),\n }\n }\n else:\n data = {\n 'username': None,\n 'timezone': settings.TIME_ZONE,\n 'is_authenticated': False,\n 'is_staff': False,\n 'is_superuser': False,\n 'is_beta_tester': False,\n 'gravatar_url': {\n 'small': None,\n 'large': None,\n }\n 
}\n\n # Add waffle data to the dict we're going to be returning.\n # This is what the waffle.wafflejs() template tag does, but we're\n # doing it via an API instead of hardcoding the settings into\n # the HTML page. See also from waffle.views._generate_waffle_js.\n #\n # Note that if we upgrade django-waffle, version 15 introduces a\n # pluggable flag model, and the approved way to get all flag\n # objects will then become:\n # get_waffle_flag_model().get_all()\n #\n data['waffle'] = {\n 'flags': {f.name: f.is_active(request) for f in Flag.get_all()},\n 'switches': {s.name: s.is_active() for s in Switch.get_all()},\n 'samples': {s.name: s.is_active() for s in Sample.get_all()},\n }\n\n return JsonResponse(data)\n\n\n@never_cache\n@require_GET\ndef search(request, locale):\n \"\"\" An API endpoint to return search results as a JSON blob.\n This endpoint makes a relatively simple ElasticSearch query\n for documents matching the value of the q parameter.\n \"\"\"\n # TODO: I'm betting that a simple search like this will be faster\n # and just as good as the more complex searches implemented by the\n # code in kuma/search/. Peter disagrees and thinks that we might\n # eventually want to make this endpoint use code from kuma/search/.\n # An alternative is to just abandon this API endpoint and have\n # the frontend call wiki.d.m.o/locale/search.json?q=query. On the\n # other hand, if we're ever going to implement any kind of\n # search-as-you-type interface, we'll need a super-fast custom\n # endpoint like this one.\n query_string = request.GET.get('q')\n if locale == 'en-US':\n search = (WikiDocumentType.search()\n .filter('term', locale=locale)\n .source(['slug', 'title', 'summary'])\n .query('multi_match', query=query_string,\n fields=['title^7', 'summary^2', 'content']))\n else:\n search = (WikiDocumentType.search()\n .filter('terms', locale=[locale, 'en-US'])\n .source(['slug', 'title', 'summary', 'locale'])\n .query(query.Bool(\n must=Q('multi_match', query=query_string,\n fields=['title^7', 'summary^2', 'content']),\n should=[\n # boost the score if the document is translated\n Q('term', locale={'value': locale, 'boost': 8}),\n ])))\n\n # Add excerpts with search results highlighted\n search = search.highlight('content')\n search = search.highlight_options(order='score',\n pre_tags=['<mark>'],\n post_tags=['</mark>'])\n\n # Return as many as 40 matches, since we're not implementing pagination yet\n response = search[0:40].execute()\n return JsonResponse(response.to_dict())\n\n\n@waffle_flag('bc-signals')\n@api_view(['POST'])\ndef bc_signal(request):\n serializer = BCSignalSerializer(data=request.data)\n if serializer.is_valid():\n serializer.save()\n return Response(serializer.validated_data, status=status.HTTP_201_CREATED)\n return Response(serializer.errors, status=status.HTTP_400_BAD_REQUEST)\n", "path": "kuma/api/v1/views.py" } ]
[ { "content": "from django.conf import settings\nfrom django.http import HttpResponsePermanentRedirect, JsonResponse\nfrom django.shortcuts import get_object_or_404\nfrom django.utils.translation import activate, ugettext as _\nfrom django.views.decorators.cache import never_cache\nfrom django.views.decorators.http import require_GET\nfrom elasticsearch_dsl import Q, query\nfrom rest_framework import status\nfrom rest_framework.decorators import api_view\nfrom rest_framework.response import Response\nfrom waffle.decorators import waffle_flag\nfrom waffle.models import Flag, Sample, Switch\n\nfrom kuma.api.v1.serializers import BCSignalSerializer\nfrom kuma.core.urlresolvers import reverse\nfrom kuma.users.templatetags.jinja_helpers import gravatar_url\nfrom kuma.wiki.jobs import DocumentContributorsJob\nfrom kuma.wiki.models import Document\nfrom kuma.wiki.search import WikiDocumentType\nfrom kuma.wiki.templatetags.jinja_helpers import absolutify\n\n\n@never_cache\n@require_GET\ndef doc(request, locale, slug):\n \"\"\"\n Return a JSON object that includes document content and metadata\n for the document specified by the locale and path. Raises a 404\n error if no such document exists. This is an API with URL\n /api/v1/doc/<locale>/<path>\n \"\"\"\n # TODO: This API endpoint probably needs to handle redirect documents\n # and documents that fall back to the en-US locale. See\n # the document() function in wiki/views/document.py for a model to follow.\n\n # Since we don't have the locale at the start of the path, our\n # locale middleware can't set the translation language correctly\n # and we need to do it explicitly. (We need to know the language\n # so that we can provide translated language names for the\n # translations menu.)\n activate(locale)\n document = get_object_or_404(Document, locale=locale, slug=slug)\n\n redirect = get_content_based_redirect(document)\n if redirect:\n redirect_url, is_redirect_to_document = redirect\n if is_redirect_to_document:\n return HttpResponsePermanentRedirect(redirect_url)\n return JsonResponse(document_api_data(redirect_url=redirect_url))\n\n return JsonResponse(document_api_data(document))\n\n\ndef get_s3_key(doc=None, locale=None, slug=None,\n prefix_with_forward_slash=False):\n if doc:\n locale, slug = doc.locale, doc.slug\n key = reverse('api.v1.doc', args=(locale, slug))\n if prefix_with_forward_slash:\n # Redirects within an S3 bucket must be prefixed with \"/\".\n return key\n return key.lstrip('/')\n\n\ndef get_cdn_key(locale, slug):\n \"\"\"Given a document's locale and slug, return the \"key\" for the CDN.\"\"\"\n return get_s3_key(locale=locale, slug=slug, prefix_with_forward_slash=True)\n\n\ndef get_content_based_redirect(document):\n \"\"\"\n Returns None if the document is not a content-based redirect, otherwise a\n tuple pair comprising the redirect URL as well as a boolean value. The\n boolean value will be True if this is a redirect to another document,\n otherwise False. If the document is a redirect to another document or a\n redirect to the homepage, a relative URL will be returned, otherwise it\n will be a full URL to the wiki site.\n \"\"\"\n redirect_url = document.get_redirect_url()\n if redirect_url and (redirect_url != document.get_absolute_url()):\n redirect_document = document.get_redirect_document(id_only=False)\n if redirect_document:\n # This is a redirect to another document.\n return (\n get_s3_key(redirect_document, prefix_with_forward_slash=True),\n True\n )\n # This is a redirect to non-document page. 
For now, if it's the home\n # page, return a relative path (so we stay on the read-only domain),\n # otherwise return the full URL for the wiki site.\n locale = document.locale\n is_home_page = (redirect_url in\n ('/', '/' + locale, '/{}/'.format(locale)))\n if is_home_page:\n # Let's return a relative URL to the home page for this locale.\n return ('/{}/'.format(locale), False)\n # Otherwise, let's return a full URL to the Wiki site.\n return (absolutify(redirect_url, for_wiki_site=True), False)\n return None\n\n\ndef document_api_data(doc=None, ensure_contributors=False, redirect_url=None):\n \"\"\"\n Returns the JSON data for the document for the document API.\n \"\"\"\n if redirect_url:\n return {\n 'documentData': None,\n 'redirectURL': redirect_url,\n }\n\n job = DocumentContributorsJob()\n # If \"ensure_contributors\" is True, we need the contributors since the\n # result will likely be cached, so we'll set \"fetch_on_miss\" and wait\n # for the result if it's not already available or stale.\n job.fetch_on_miss = ensure_contributors\n contributors = [c['username'] for c in job.get(doc.pk)]\n\n # The original english slug for this document, for google analytics\n if doc.locale == 'en-US':\n en_slug = doc.slug\n elif doc.parent_id and doc.parent.locale == 'en-US':\n en_slug = doc.parent.slug\n else:\n en_slug = ''\n\n other_translations = doc.get_other_translations(\n fields=('locale', 'slug', 'title', 'parent'))\n available_locales = (\n set([doc.locale]) | set(t.locale for t in other_translations))\n\n return {\n 'documentData': {\n 'locale': doc.locale,\n 'slug': doc.slug,\n 'enSlug': en_slug,\n 'id': doc.id,\n 'title': doc.title,\n 'summary': doc.get_summary_html(),\n 'language': doc.language,\n 'hrefLang': doc.get_hreflang(available_locales),\n 'absoluteURL': doc.get_absolute_url(),\n 'editURL': absolutify(doc.get_edit_url(), for_wiki_site=True),\n 'translateURL': (\n absolutify(\n reverse(\n 'wiki.select_locale',\n args=(doc.slug,),\n locale=doc.locale,\n ),\n for_wiki_site=True\n )\n if doc.is_localizable else\n None\n ),\n 'bodyHTML': doc.get_body_html(),\n 'quickLinksHTML': doc.get_quick_links_html(),\n 'tocHTML': doc.get_toc_html(),\n 'parents': [\n {\n 'url': d.get_absolute_url(),\n 'title': d.title\n } for d in doc.parents\n ],\n 'translations': [\n {\n 'language': t.language,\n 'hrefLang': t.get_hreflang(available_locales),\n 'localizedLanguage': _(settings.LOCALES[t.locale].english),\n 'locale': t.locale,\n 'url': t.get_absolute_url(),\n 'title': t.title\n } for t in other_translations\n ],\n 'contributors': contributors,\n 'lastModified': (doc.current_revision and\n doc.current_revision.created.isoformat()),\n 'lastModifiedBy': (doc.current_revision and\n str(doc.current_revision.creator))\n },\n 'redirectURL': None,\n }\n\n\n@never_cache\n@require_GET\ndef whoami(request):\n \"\"\"\n Return a JSON object representing the current user, either\n authenticated or anonymous.\n \"\"\"\n user = request.user\n if user.is_authenticated:\n data = {\n 'username': user.username,\n 'timezone': user.timezone,\n 'is_authenticated': True,\n 'is_staff': user.is_staff,\n 'is_superuser': user.is_superuser,\n 'is_beta_tester': user.is_beta_tester,\n 'gravatar_url': {\n 'small': gravatar_url(user.email, size=50),\n 'large': gravatar_url(user.email, size=200),\n }\n }\n else:\n data = {\n 'username': None,\n 'timezone': settings.TIME_ZONE,\n 'is_authenticated': False,\n 'is_staff': False,\n 'is_superuser': False,\n 'is_beta_tester': False,\n 'gravatar_url': {\n 'small': None,\n 'large': 
None,\n }\n }\n\n # Add waffle data to the dict we're going to be returning.\n # This is what the waffle.wafflejs() template tag does, but we're\n # doing it via an API instead of hardcoding the settings into\n # the HTML page. See also from waffle.views._generate_waffle_js.\n #\n # Note that if we upgrade django-waffle, version 15 introduces a\n # pluggable flag model, and the approved way to get all flag\n # objects will then become:\n # get_waffle_flag_model().get_all()\n #\n data['waffle'] = {\n 'flags': {f.name: f.is_active(request) for f in Flag.get_all()},\n 'switches': {s.name: s.is_active() for s in Switch.get_all()},\n 'samples': {s.name: s.is_active() for s in Sample.get_all()},\n }\n\n return JsonResponse(data)\n\n\n@never_cache\n@require_GET\ndef search(request, locale):\n \"\"\" An API endpoint to return search results as a JSON blob.\n This endpoint makes a relatively simple ElasticSearch query\n for documents matching the value of the q parameter.\n \"\"\"\n # TODO: I'm betting that a simple search like this will be faster\n # and just as good as the more complex searches implemented by the\n # code in kuma/search/. Peter disagrees and thinks that we might\n # eventually want to make this endpoint use code from kuma/search/.\n # An alternative is to just abandon this API endpoint and have\n # the frontend call wiki.d.m.o/locale/search.json?q=query. On the\n # other hand, if we're ever going to implement any kind of\n # search-as-you-type interface, we'll need a super-fast custom\n # endpoint like this one.\n query_string = request.GET.get('q')\n if locale == 'en-US':\n search = (WikiDocumentType.search()\n .filter('term', locale=locale)\n .source(['slug', 'title', 'summary'])\n .query('multi_match', query=query_string,\n fields=['title^7', 'summary^2', 'content']))\n else:\n search = (WikiDocumentType.search()\n .filter('terms', locale=[locale, 'en-US'])\n .source(['slug', 'title', 'summary', 'locale'])\n .query(query.Bool(\n must=Q('multi_match', query=query_string,\n fields=['title^7', 'summary^2', 'content']),\n should=[\n # boost the score if the document is translated\n Q('term', locale={'value': locale, 'boost': 8}),\n ])))\n\n # Add excerpts with search results highlighted\n search = search.highlight('content')\n search = search.highlight_options(order='score',\n pre_tags=['<mark>'],\n post_tags=['</mark>'])\n\n # Return as many as 40 matches, since we're not implementing pagination yet\n response = search[0:40].execute()\n return JsonResponse(response.to_dict())\n\n\n@waffle_flag('bc-signals')\n@api_view(['POST'])\ndef bc_signal(request):\n serializer = BCSignalSerializer(data=request.data)\n if serializer.is_valid():\n serializer.save()\n return Response(serializer.validated_data, status=status.HTTP_201_CREATED)\n return Response(serializer.errors, status=status.HTTP_400_BAD_REQUEST)\n", "path": "kuma/api/v1/views.py" } ]
diff --git a/kuma/api/v1/views.py b/kuma/api/v1/views.py index ddabd8ac64f..48a6c2f90c0 100644 --- a/kuma/api/v1/views.py +++ b/kuma/api/v1/views.py @@ -125,7 +125,7 @@ def document_api_data(doc=None, ensure_contributors=False, redirect_url=None): en_slug = '' other_translations = doc.get_other_translations( - fields=('locale', 'slug', 'title')) + fields=('locale', 'slug', 'title', 'parent')) available_locales = ( set([doc.locale]) | set(t.locale for t in other_translations))
ManimCommunity__manim-3200
issue when using opengl as renderer ## Description of bug / unexpected behavior <!--No module named 'moderngl.program_members' --> When I tried to run the command: `manim --renderer=opengl interactive.py`, an error occurred saying that no such module. But I suppose this is something in the source code, does this mean there's something going on with moderngl? ## Expected behavior <!-- Add a clear and concise description of what you expected to happen. --> I think the program should run, and it does when I unflagged the renderer command. ## How to reproduce the issue <!-- Provide a piece of code illustrating the undesired behavior. --> <details><summary>Code for reproducing the problem</summary> ``` from manim import * class InteractiveRadius(Scene): def construct(self): plane = NumberPlane() cursor_dot = Dot().move_to(3 * RIGHT + 2 * UP) red_circle = Circle( radius=5, color=RED ) red_circle.add_updater(lambda mob: mob.become( Circle( radius=3, color=RED ) )) self.play(Create(plane), Create(red_circle), FadeIn(cursor_dot)) self.cursor_dot = cursor_dot self.interactive_embed() def on_key_press(self, symbol, modifiers): from pyglet.window import key as pyglet_key if symbol == pyglet_key.G: self.play( self.cursor_dot.animate.move_to(self.mouse_point.get_center()) ) super().on_key_press(symbol, modifiers) ``` </details> ## Additional media files <!-- Paste in the files manim produced on rendering the code above. --> <details><summary>Images/GIFs</summary> <!-- PASTE MEDIA HERE --> </details> ## Logs <details><summary>Terminal output</summary> <!-- Add "-v DEBUG" when calling manim to generate more detailed logs --> ``` $ manim --renderer=opengl interactive.py Manim Community v0.17.2 ╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮ │ C:\Users\baichuanzhou\anaconda3\envs\manim\lib\site-packages\manim\cli\render\commands.py:97 in │ │ render │ │ │ │ 94 │ │ │ │ for SceneClass in scene_classes_from_file(file): │ │ 95 │ │ │ │ │ with tempconfig({}): │ │ 96 │ │ │ │ │ │ scene = SceneClass(renderer) │ │ ❱ 97 │ │ │ │ │ │ rerun = scene.render() │ │ 98 │ │ │ │ │ if rerun or config["write_all"]: │ │ 99 │ │ │ │ │ │ renderer.num_plays = 0 │ │ 100 │ │ │ │ │ │ continue │ │ │ │ C:\Users\baichuanzhou\anaconda3\envs\manim\lib\site-packages\manim\scene\scene.py:223 in render │ │ │ │ 220 │ │ """ │ │ 221 │ │ self.setup() │ │ 222 │ │ try: │ │ ❱ 223 │ │ │ self.construct() │ │ 224 │ │ except EndSceneEarlyException: │ │ 225 │ │ │ pass │ │ 226 │ │ except RerunSceneException as e: │ │ │ │ C:\Users\baichuanzhou\Desktop\ManimDL\interactive.py:20 in construct │ │ │ │ 17 │ │ │ ) │ │ 18 │ │ )) │ │ 19 │ │ │ │ ❱ 20 │ │ self.play(Create(plane), Create(red_circle), FadeIn(cursor_dot)) │ │ 21 │ │ self.cursor_dot = cursor_dot │ │ 22 │ │ self.interactive_embed() │ │ 23 │ │ │ │ C:\Users\baichuanzhou\anaconda3\envs\manim\lib\site-packages\manim\scene\scene.py:1033 in play │ │ │ │ 1030 │ │ │ return │ │ 1031 │ │ │ │ 1032 │ │ start_time = self.renderer.time │ │ ❱ 1033 │ │ self.renderer.play(self, *args, **kwargs) │ │ 1034 │ │ run_time = self.renderer.time - start_time │ │ 1035 │ │ if subcaption: │ │ 1036 │ │ │ if subcaption_duration is None: │ │ │ │ C:\Users\baichuanzhou\anaconda3\envs\manim\lib\site-packages\manim\utils\caching.py:65 in │ │ wrapper │ │ │ │ 62 │ │ │ "List of the first few animation hashes of the scene: %(h)s", │ │ 63 │ │ │ {"h": str(self.animations_hashes[:5])}, │ │ 64 │ │ ) │ │ ❱ 65 │ │ func(self, scene, *args, **kwargs) │ │ 66 │ │ │ 67 │ return wrapper │ │ 68 │ │ │ │ 
C:\Users\baichuanzhou\anaconda3\envs\manim\lib\site-packages\manim\renderer\opengl_renderer.py:4 │ │ 39 in play │ │ │ │ 436 │ │ │ self.animation_elapsed_time = scene.duration │ │ 437 │ │ │ │ 438 │ │ else: │ │ ❱ 439 │ │ │ scene.play_internal() │ │ 440 │ │ │ │ 441 │ │ self.file_writer.end_animation(not self.skip_animations) │ │ 442 │ │ self.time += scene.duration │ │ │ │ C:\Users\baichuanzhou\anaconda3\envs\manim\lib\site-packages\manim\scene\scene.py:1200 in │ │ play_internal │ │ │ │ 1197 │ │ for t in self.time_progression: │ │ 1198 │ │ │ self.update_to_time(t) │ │ 1199 │ │ │ if not skip_rendering and not self.skip_animation_preview: │ │ ❱ 1200 │ │ │ │ self.renderer.render(self, t, self.moving_mobjects) │ │ 1201 │ │ │ if self.stop_condition is not None and self.stop_condition(): │ │ 1202 │ │ │ │ self.time_progression.close() │ │ 1203 │ │ │ │ break │ │ │ │ C:\Users\baichuanzhou\anaconda3\envs\manim\lib\site-packages\manim\renderer\opengl_renderer.py:4 │ │ 50 in render │ │ │ │ 447 │ │ self.window.swap_buffers() │ │ 448 │ │ │ 449 │ def render(self, scene, frame_offset, moving_mobjects): │ │ ❱ 450 │ │ self.update_frame(scene) │ │ 451 │ │ │ │ 452 │ │ if self.skip_animations: │ │ 453 │ │ │ return │ │ │ │ C:\Users\baichuanzhou\anaconda3\envs\manim\lib\site-packages\manim\renderer\opengl_renderer.py:4 │ │ 70 in update_frame │ │ │ │ 467 │ │ for mobject in scene.mobjects: │ │ 468 │ │ │ if not mobject.should_render: │ │ 469 │ │ │ │ continue │ │ ❱ 470 │ │ │ self.render_mobject(mobject) │ │ 471 │ │ │ │ 472 │ │ for obj in scene.meshes: │ │ 473 │ │ │ for mesh in obj.get_meshes(): │ │ │ │ C:\Users\baichuanzhou\anaconda3\envs\manim\lib\site-packages\manim\renderer\opengl_renderer.py:3 │ │ 75 in render_mobject │ │ │ │ 372 │ │ │ │ primitive=mobject.render_primitive, │ │ 373 │ │ │ ) │ │ 374 │ │ │ mesh.set_uniforms(self) │ │ ❱ 375 │ │ │ mesh.render() │ │ 376 │ │ │ 377 │ def get_texture_id(self, path): │ │ 378 │ │ if repr(path) not in self.path_to_texture_id: │ │ │ │ C:\Users\baichuanzhou\anaconda3\envs\manim\lib\site-packages\manim\renderer\shader.py:315 in │ │ render │ │ │ │ 312 │ │ else: │ │ 313 │ │ │ self.shader.context.disable(moderngl.DEPTH_TEST) │ │ 314 │ │ │ │ ❱ 315 │ │ from moderngl.program_members import Attribute │ │ 316 │ │ │ │ 317 │ │ shader_attributes = [] │ │ 318 │ │ for k, v in self.shader.shader_program._members.items(): │ ╰──────────────────────────────────────────────────────────────────────────────────────────────────╯ ModuleNotFoundError: No module named 'moderngl.program_members' ``` <!-- Insert screenshots here (only when absolutely necessary, we prefer copy/pasted output!) --> </details> ## System specifications <details><summary>System Details</summary> - OS (with version, e.g., Windows 10 v2004 or macOS 10.15 (Catalina)): - RAM: - Python version (`python/py/python3 --version`): - Installed modules (provide output from `pip list`): ``` PASTE HERE ``` </details> <details><summary>LaTeX details</summary> + LaTeX distribution (e.g. TeX Live 2020): + Installed LaTeX packages: <!-- output of `tlmgr list --only-installed` for TeX Live or a screenshot of the Packages page for MikTeX --> </details> <details><summary>FFMPEG</summary> Output of `ffmpeg -version`: ``` PASTE HERE ``` </details> ## Additional comments <!-- Add further context that you think might be relevant for this issue here. -->
[ { "content": "from __future__ import annotations\n\nimport re\nimport textwrap\nfrom pathlib import Path\n\nimport moderngl\nimport numpy as np\n\nfrom .. import config\nfrom ..utils import opengl\nfrom ..utils.simple_functions import get_parameters\n\nSHADER_FOLDER = Path(__file__).parent / \"shaders\"\nshader_program_cache: dict = {}\nfile_path_to_code_map: dict = {}\n\n__all__ = [\n \"Object3D\",\n \"Mesh\",\n \"Shader\",\n \"FullScreenQuad\",\n]\n\n\ndef get_shader_code_from_file(file_path: Path) -> str:\n if file_path in file_path_to_code_map:\n return file_path_to_code_map[file_path]\n source = file_path.read_text()\n include_lines = re.finditer(\n r\"^#include (?P<include_path>.*\\.glsl)$\",\n source,\n flags=re.MULTILINE,\n )\n for match in include_lines:\n include_path = match.group(\"include_path\")\n included_code = get_shader_code_from_file(\n file_path.parent / include_path,\n )\n source = source.replace(match.group(0), included_code)\n file_path_to_code_map[file_path] = source\n return source\n\n\ndef filter_attributes(unfiltered_attributes, attributes):\n # Construct attributes for only those needed by the shader.\n filtered_attributes_dtype = []\n for i, dtype_name in enumerate(unfiltered_attributes.dtype.names):\n if dtype_name in attributes:\n filtered_attributes_dtype.append(\n (\n dtype_name,\n unfiltered_attributes.dtype[i].subdtype[0].str,\n unfiltered_attributes.dtype[i].shape,\n ),\n )\n\n filtered_attributes = np.zeros(\n unfiltered_attributes[unfiltered_attributes.dtype.names[0]].shape[0],\n dtype=filtered_attributes_dtype,\n )\n\n for dtype_name in unfiltered_attributes.dtype.names:\n if dtype_name in attributes:\n filtered_attributes[dtype_name] = unfiltered_attributes[dtype_name]\n\n return filtered_attributes\n\n\nclass Object3D:\n def __init__(self, *children):\n self.model_matrix = np.eye(4)\n self.normal_matrix = np.eye(4)\n self.children = []\n self.parent = None\n self.add(*children)\n self.init_updaters()\n\n # TODO: Use path_func.\n def interpolate(self, start, end, alpha, _):\n self.model_matrix = (1 - alpha) * start.model_matrix + alpha * end.model_matrix\n self.normal_matrix = (\n 1 - alpha\n ) * start.normal_matrix + alpha * end.normal_matrix\n\n def single_copy(self):\n copy = Object3D()\n copy.model_matrix = self.model_matrix.copy()\n copy.normal_matrix = self.normal_matrix.copy()\n return copy\n\n def copy(self):\n node_to_copy = {}\n\n bfs = [self]\n while bfs:\n node = bfs.pop(0)\n bfs.extend(node.children)\n\n node_copy = node.single_copy()\n node_to_copy[node] = node_copy\n\n # Add the copy to the copy of the parent.\n if node.parent is not None and node is not self:\n node_to_copy[node.parent].add(node_copy)\n return node_to_copy[self]\n\n def add(self, *children):\n for child in children:\n if child.parent is not None:\n raise Exception(\n \"Attempt to add child that's already added to another Object3D\",\n )\n self.remove(*children, current_children_only=False)\n self.children.extend(children)\n for child in children:\n child.parent = self\n\n def remove(self, *children, current_children_only=True):\n if current_children_only:\n for child in children:\n if child.parent != self:\n raise Exception(\n \"Attempt to remove child that isn't added to this Object3D\",\n )\n self.children = list(filter(lambda child: child not in children, self.children))\n for child in children:\n child.parent = None\n\n def get_position(self):\n return self.model_matrix[:, 3][:3]\n\n def set_position(self, position):\n self.model_matrix[:, 3][:3] = position\n 
return self\n\n def get_meshes(self):\n dfs = [self]\n while dfs:\n parent = dfs.pop()\n if isinstance(parent, Mesh):\n yield parent\n dfs.extend(parent.children)\n\n def get_family(self):\n dfs = [self]\n while dfs:\n parent = dfs.pop()\n yield parent\n dfs.extend(parent.children)\n\n def align_data_and_family(self, _):\n pass\n\n def hierarchical_model_matrix(self):\n if self.parent is None:\n return self.model_matrix\n\n model_matrices = [self.model_matrix]\n current_object = self\n while current_object.parent is not None:\n model_matrices.append(current_object.parent.model_matrix)\n current_object = current_object.parent\n return np.linalg.multi_dot(list(reversed(model_matrices)))\n\n def hierarchical_normal_matrix(self):\n if self.parent is None:\n return self.normal_matrix[:3, :3]\n\n normal_matrices = [self.normal_matrix]\n current_object = self\n while current_object.parent is not None:\n normal_matrices.append(current_object.parent.model_matrix)\n current_object = current_object.parent\n return np.linalg.multi_dot(list(reversed(normal_matrices)))[:3, :3]\n\n def init_updaters(self):\n self.time_based_updaters = []\n self.non_time_updaters = []\n self.has_updaters = False\n self.updating_suspended = False\n\n def update(self, dt=0):\n if not self.has_updaters or self.updating_suspended:\n return self\n for updater in self.time_based_updaters:\n updater(self, dt)\n for updater in self.non_time_updaters:\n updater(self)\n return self\n\n def get_time_based_updaters(self):\n return self.time_based_updaters\n\n def has_time_based_updater(self):\n return len(self.time_based_updaters) > 0\n\n def get_updaters(self):\n return self.time_based_updaters + self.non_time_updaters\n\n def add_updater(self, update_function, index=None, call_updater=True):\n if \"dt\" in get_parameters(update_function):\n updater_list = self.time_based_updaters\n else:\n updater_list = self.non_time_updaters\n\n if index is None:\n updater_list.append(update_function)\n else:\n updater_list.insert(index, update_function)\n\n self.refresh_has_updater_status()\n if call_updater:\n self.update()\n return self\n\n def remove_updater(self, update_function):\n for updater_list in [self.time_based_updaters, self.non_time_updaters]:\n while update_function in updater_list:\n updater_list.remove(update_function)\n self.refresh_has_updater_status()\n return self\n\n def clear_updaters(self):\n self.time_based_updaters = []\n self.non_time_updaters = []\n self.refresh_has_updater_status()\n return self\n\n def match_updaters(self, mobject):\n self.clear_updaters()\n for updater in mobject.get_updaters():\n self.add_updater(updater)\n return self\n\n def suspend_updating(self):\n self.updating_suspended = True\n return self\n\n def resume_updating(self, call_updater=True):\n self.updating_suspended = False\n if call_updater:\n self.update(dt=0)\n return self\n\n def refresh_has_updater_status(self):\n self.has_updaters = len(self.get_updaters()) > 0\n return self\n\n\nclass Mesh(Object3D):\n def __init__(\n self,\n shader=None,\n attributes=None,\n geometry=None,\n material=None,\n indices=None,\n use_depth_test=True,\n primitive=moderngl.TRIANGLES,\n ):\n super().__init__()\n if shader is not None and attributes is not None:\n self.shader = shader\n self.attributes = attributes\n self.indices = indices\n elif geometry is not None and material is not None:\n self.shader = material\n self.attributes = geometry.attributes\n self.indices = geometry.index\n else:\n raise Exception(\n \"Mesh requires either attributes and a 
Shader or a Geometry and a \"\n \"Material\",\n )\n self.use_depth_test = use_depth_test\n self.primitive = primitive\n self.skip_render = False\n self.init_updaters()\n\n def single_copy(self):\n copy = Mesh(\n attributes=self.attributes.copy(),\n shader=self.shader,\n indices=self.indices.copy() if self.indices is not None else None,\n use_depth_test=self.use_depth_test,\n primitive=self.primitive,\n )\n copy.skip_render = self.skip_render\n copy.model_matrix = self.model_matrix.copy()\n copy.normal_matrix = self.normal_matrix.copy()\n # TODO: Copy updaters?\n return copy\n\n def set_uniforms(self, renderer):\n self.shader.set_uniform(\n \"u_model_matrix\",\n opengl.matrix_to_shader_input(self.model_matrix),\n )\n self.shader.set_uniform(\"u_view_matrix\", renderer.camera.formatted_view_matrix)\n self.shader.set_uniform(\n \"u_projection_matrix\",\n renderer.camera.projection_matrix,\n )\n\n def render(self):\n if self.skip_render:\n return\n\n if self.use_depth_test:\n self.shader.context.enable(moderngl.DEPTH_TEST)\n else:\n self.shader.context.disable(moderngl.DEPTH_TEST)\n\n from moderngl.program_members import Attribute\n\n shader_attributes = []\n for k, v in self.shader.shader_program._members.items():\n if isinstance(v, Attribute):\n shader_attributes.append(k)\n shader_attributes = filter_attributes(self.attributes, shader_attributes)\n\n vertex_buffer_object = self.shader.context.buffer(shader_attributes.tobytes())\n if self.indices is None:\n index_buffer_object = None\n else:\n vert_index_data = self.indices.astype(\"i4\").tobytes()\n if vert_index_data:\n index_buffer_object = self.shader.context.buffer(vert_index_data)\n else:\n index_buffer_object = None\n vertex_array_object = self.shader.context.simple_vertex_array(\n self.shader.shader_program,\n vertex_buffer_object,\n *shader_attributes.dtype.names,\n index_buffer=index_buffer_object,\n )\n vertex_array_object.render(self.primitive)\n vertex_buffer_object.release()\n vertex_array_object.release()\n if index_buffer_object is not None:\n index_buffer_object.release()\n\n\nclass Shader:\n def __init__(\n self,\n context,\n name=None,\n source=None,\n ):\n global shader_program_cache\n self.context = context\n self.name = name\n\n # See if the program is cached.\n if (\n self.name in shader_program_cache\n and shader_program_cache[self.name].ctx == self.context\n ):\n self.shader_program = shader_program_cache[self.name]\n elif source is not None:\n # Generate the shader from inline code if it was passed.\n self.shader_program = context.program(**source)\n else:\n # Search for a file containing the shader.\n source_dict = {}\n source_dict_key = {\n \"vert\": \"vertex_shader\",\n \"frag\": \"fragment_shader\",\n \"geom\": \"geometry_shader\",\n }\n shader_folder = SHADER_FOLDER / name\n for shader_file in shader_folder.iterdir():\n shader_file_path = shader_folder / shader_file\n shader_source = get_shader_code_from_file(shader_file_path)\n source_dict[source_dict_key[shader_file_path.stem]] = shader_source\n self.shader_program = context.program(**source_dict)\n\n # Cache the shader.\n if name is not None and name not in shader_program_cache:\n shader_program_cache[self.name] = self.shader_program\n\n def set_uniform(self, name, value):\n try:\n self.shader_program[name] = value\n except KeyError:\n pass\n\n\nclass FullScreenQuad(Mesh):\n def __init__(\n self,\n context,\n fragment_shader_source=None,\n fragment_shader_name=None,\n ):\n if fragment_shader_source is None and fragment_shader_name is None:\n raise 
Exception(\"Must either pass shader name or shader source.\")\n\n if fragment_shader_name is not None:\n # Use the name.\n shader_file_path = SHADER_FOLDER / f\"{fragment_shader_name}.frag\"\n fragment_shader_source = get_shader_code_from_file(shader_file_path)\n elif fragment_shader_source is not None:\n fragment_shader_source = textwrap.dedent(fragment_shader_source.lstrip())\n\n shader = Shader(\n context,\n source={\n \"vertex_shader\": \"\"\"\n #version 330\n in vec4 in_vert;\n uniform mat4 u_model_view_matrix;\n uniform mat4 u_projection_matrix;\n void main() {{\n vec4 camera_space_vertex = u_model_view_matrix * in_vert;\n vec4 clip_space_vertex = u_projection_matrix * camera_space_vertex;\n gl_Position = clip_space_vertex;\n }}\n \"\"\",\n \"fragment_shader\": fragment_shader_source,\n },\n )\n attributes = np.zeros(6, dtype=[(\"in_vert\", np.float32, (4,))])\n attributes[\"in_vert\"] = np.array(\n [\n [-config[\"frame_x_radius\"], -config[\"frame_y_radius\"], 0, 1],\n [-config[\"frame_x_radius\"], config[\"frame_y_radius\"], 0, 1],\n [config[\"frame_x_radius\"], config[\"frame_y_radius\"], 0, 1],\n [-config[\"frame_x_radius\"], -config[\"frame_y_radius\"], 0, 1],\n [config[\"frame_x_radius\"], -config[\"frame_y_radius\"], 0, 1],\n [config[\"frame_x_radius\"], config[\"frame_y_radius\"], 0, 1],\n ],\n )\n shader.set_uniform(\"u_model_view_matrix\", opengl.view_matrix())\n shader.set_uniform(\n \"u_projection_matrix\",\n opengl.orthographic_projection_matrix(),\n )\n super().__init__(shader, attributes)\n\n def render(self):\n super().render()\n", "path": "manim/renderer/shader.py" } ]
[ { "content": "from __future__ import annotations\n\nimport re\nimport textwrap\nfrom pathlib import Path\n\nimport moderngl\nimport numpy as np\n\nfrom .. import config\nfrom ..utils import opengl\nfrom ..utils.simple_functions import get_parameters\n\nSHADER_FOLDER = Path(__file__).parent / \"shaders\"\nshader_program_cache: dict = {}\nfile_path_to_code_map: dict = {}\n\n__all__ = [\n \"Object3D\",\n \"Mesh\",\n \"Shader\",\n \"FullScreenQuad\",\n]\n\n\ndef get_shader_code_from_file(file_path: Path) -> str:\n if file_path in file_path_to_code_map:\n return file_path_to_code_map[file_path]\n source = file_path.read_text()\n include_lines = re.finditer(\n r\"^#include (?P<include_path>.*\\.glsl)$\",\n source,\n flags=re.MULTILINE,\n )\n for match in include_lines:\n include_path = match.group(\"include_path\")\n included_code = get_shader_code_from_file(\n file_path.parent / include_path,\n )\n source = source.replace(match.group(0), included_code)\n file_path_to_code_map[file_path] = source\n return source\n\n\ndef filter_attributes(unfiltered_attributes, attributes):\n # Construct attributes for only those needed by the shader.\n filtered_attributes_dtype = []\n for i, dtype_name in enumerate(unfiltered_attributes.dtype.names):\n if dtype_name in attributes:\n filtered_attributes_dtype.append(\n (\n dtype_name,\n unfiltered_attributes.dtype[i].subdtype[0].str,\n unfiltered_attributes.dtype[i].shape,\n ),\n )\n\n filtered_attributes = np.zeros(\n unfiltered_attributes[unfiltered_attributes.dtype.names[0]].shape[0],\n dtype=filtered_attributes_dtype,\n )\n\n for dtype_name in unfiltered_attributes.dtype.names:\n if dtype_name in attributes:\n filtered_attributes[dtype_name] = unfiltered_attributes[dtype_name]\n\n return filtered_attributes\n\n\nclass Object3D:\n def __init__(self, *children):\n self.model_matrix = np.eye(4)\n self.normal_matrix = np.eye(4)\n self.children = []\n self.parent = None\n self.add(*children)\n self.init_updaters()\n\n # TODO: Use path_func.\n def interpolate(self, start, end, alpha, _):\n self.model_matrix = (1 - alpha) * start.model_matrix + alpha * end.model_matrix\n self.normal_matrix = (\n 1 - alpha\n ) * start.normal_matrix + alpha * end.normal_matrix\n\n def single_copy(self):\n copy = Object3D()\n copy.model_matrix = self.model_matrix.copy()\n copy.normal_matrix = self.normal_matrix.copy()\n return copy\n\n def copy(self):\n node_to_copy = {}\n\n bfs = [self]\n while bfs:\n node = bfs.pop(0)\n bfs.extend(node.children)\n\n node_copy = node.single_copy()\n node_to_copy[node] = node_copy\n\n # Add the copy to the copy of the parent.\n if node.parent is not None and node is not self:\n node_to_copy[node.parent].add(node_copy)\n return node_to_copy[self]\n\n def add(self, *children):\n for child in children:\n if child.parent is not None:\n raise Exception(\n \"Attempt to add child that's already added to another Object3D\",\n )\n self.remove(*children, current_children_only=False)\n self.children.extend(children)\n for child in children:\n child.parent = self\n\n def remove(self, *children, current_children_only=True):\n if current_children_only:\n for child in children:\n if child.parent != self:\n raise Exception(\n \"Attempt to remove child that isn't added to this Object3D\",\n )\n self.children = list(filter(lambda child: child not in children, self.children))\n for child in children:\n child.parent = None\n\n def get_position(self):\n return self.model_matrix[:, 3][:3]\n\n def set_position(self, position):\n self.model_matrix[:, 3][:3] = position\n 
return self\n\n def get_meshes(self):\n dfs = [self]\n while dfs:\n parent = dfs.pop()\n if isinstance(parent, Mesh):\n yield parent\n dfs.extend(parent.children)\n\n def get_family(self):\n dfs = [self]\n while dfs:\n parent = dfs.pop()\n yield parent\n dfs.extend(parent.children)\n\n def align_data_and_family(self, _):\n pass\n\n def hierarchical_model_matrix(self):\n if self.parent is None:\n return self.model_matrix\n\n model_matrices = [self.model_matrix]\n current_object = self\n while current_object.parent is not None:\n model_matrices.append(current_object.parent.model_matrix)\n current_object = current_object.parent\n return np.linalg.multi_dot(list(reversed(model_matrices)))\n\n def hierarchical_normal_matrix(self):\n if self.parent is None:\n return self.normal_matrix[:3, :3]\n\n normal_matrices = [self.normal_matrix]\n current_object = self\n while current_object.parent is not None:\n normal_matrices.append(current_object.parent.model_matrix)\n current_object = current_object.parent\n return np.linalg.multi_dot(list(reversed(normal_matrices)))[:3, :3]\n\n def init_updaters(self):\n self.time_based_updaters = []\n self.non_time_updaters = []\n self.has_updaters = False\n self.updating_suspended = False\n\n def update(self, dt=0):\n if not self.has_updaters or self.updating_suspended:\n return self\n for updater in self.time_based_updaters:\n updater(self, dt)\n for updater in self.non_time_updaters:\n updater(self)\n return self\n\n def get_time_based_updaters(self):\n return self.time_based_updaters\n\n def has_time_based_updater(self):\n return len(self.time_based_updaters) > 0\n\n def get_updaters(self):\n return self.time_based_updaters + self.non_time_updaters\n\n def add_updater(self, update_function, index=None, call_updater=True):\n if \"dt\" in get_parameters(update_function):\n updater_list = self.time_based_updaters\n else:\n updater_list = self.non_time_updaters\n\n if index is None:\n updater_list.append(update_function)\n else:\n updater_list.insert(index, update_function)\n\n self.refresh_has_updater_status()\n if call_updater:\n self.update()\n return self\n\n def remove_updater(self, update_function):\n for updater_list in [self.time_based_updaters, self.non_time_updaters]:\n while update_function in updater_list:\n updater_list.remove(update_function)\n self.refresh_has_updater_status()\n return self\n\n def clear_updaters(self):\n self.time_based_updaters = []\n self.non_time_updaters = []\n self.refresh_has_updater_status()\n return self\n\n def match_updaters(self, mobject):\n self.clear_updaters()\n for updater in mobject.get_updaters():\n self.add_updater(updater)\n return self\n\n def suspend_updating(self):\n self.updating_suspended = True\n return self\n\n def resume_updating(self, call_updater=True):\n self.updating_suspended = False\n if call_updater:\n self.update(dt=0)\n return self\n\n def refresh_has_updater_status(self):\n self.has_updaters = len(self.get_updaters()) > 0\n return self\n\n\nclass Mesh(Object3D):\n def __init__(\n self,\n shader=None,\n attributes=None,\n geometry=None,\n material=None,\n indices=None,\n use_depth_test=True,\n primitive=moderngl.TRIANGLES,\n ):\n super().__init__()\n if shader is not None and attributes is not None:\n self.shader = shader\n self.attributes = attributes\n self.indices = indices\n elif geometry is not None and material is not None:\n self.shader = material\n self.attributes = geometry.attributes\n self.indices = geometry.index\n else:\n raise Exception(\n \"Mesh requires either attributes and a 
Shader or a Geometry and a \"\n \"Material\",\n )\n self.use_depth_test = use_depth_test\n self.primitive = primitive\n self.skip_render = False\n self.init_updaters()\n\n def single_copy(self):\n copy = Mesh(\n attributes=self.attributes.copy(),\n shader=self.shader,\n indices=self.indices.copy() if self.indices is not None else None,\n use_depth_test=self.use_depth_test,\n primitive=self.primitive,\n )\n copy.skip_render = self.skip_render\n copy.model_matrix = self.model_matrix.copy()\n copy.normal_matrix = self.normal_matrix.copy()\n # TODO: Copy updaters?\n return copy\n\n def set_uniforms(self, renderer):\n self.shader.set_uniform(\n \"u_model_matrix\",\n opengl.matrix_to_shader_input(self.model_matrix),\n )\n self.shader.set_uniform(\"u_view_matrix\", renderer.camera.formatted_view_matrix)\n self.shader.set_uniform(\n \"u_projection_matrix\",\n renderer.camera.projection_matrix,\n )\n\n def render(self):\n if self.skip_render:\n return\n\n if self.use_depth_test:\n self.shader.context.enable(moderngl.DEPTH_TEST)\n else:\n self.shader.context.disable(moderngl.DEPTH_TEST)\n\n from moderngl import Attribute\n\n shader_attributes = []\n for k, v in self.shader.shader_program._members.items():\n if isinstance(v, Attribute):\n shader_attributes.append(k)\n shader_attributes = filter_attributes(self.attributes, shader_attributes)\n\n vertex_buffer_object = self.shader.context.buffer(shader_attributes.tobytes())\n if self.indices is None:\n index_buffer_object = None\n else:\n vert_index_data = self.indices.astype(\"i4\").tobytes()\n if vert_index_data:\n index_buffer_object = self.shader.context.buffer(vert_index_data)\n else:\n index_buffer_object = None\n vertex_array_object = self.shader.context.simple_vertex_array(\n self.shader.shader_program,\n vertex_buffer_object,\n *shader_attributes.dtype.names,\n index_buffer=index_buffer_object,\n )\n vertex_array_object.render(self.primitive)\n vertex_buffer_object.release()\n vertex_array_object.release()\n if index_buffer_object is not None:\n index_buffer_object.release()\n\n\nclass Shader:\n def __init__(\n self,\n context,\n name=None,\n source=None,\n ):\n global shader_program_cache\n self.context = context\n self.name = name\n\n # See if the program is cached.\n if (\n self.name in shader_program_cache\n and shader_program_cache[self.name].ctx == self.context\n ):\n self.shader_program = shader_program_cache[self.name]\n elif source is not None:\n # Generate the shader from inline code if it was passed.\n self.shader_program = context.program(**source)\n else:\n # Search for a file containing the shader.\n source_dict = {}\n source_dict_key = {\n \"vert\": \"vertex_shader\",\n \"frag\": \"fragment_shader\",\n \"geom\": \"geometry_shader\",\n }\n shader_folder = SHADER_FOLDER / name\n for shader_file in shader_folder.iterdir():\n shader_file_path = shader_folder / shader_file\n shader_source = get_shader_code_from_file(shader_file_path)\n source_dict[source_dict_key[shader_file_path.stem]] = shader_source\n self.shader_program = context.program(**source_dict)\n\n # Cache the shader.\n if name is not None and name not in shader_program_cache:\n shader_program_cache[self.name] = self.shader_program\n\n def set_uniform(self, name, value):\n try:\n self.shader_program[name] = value\n except KeyError:\n pass\n\n\nclass FullScreenQuad(Mesh):\n def __init__(\n self,\n context,\n fragment_shader_source=None,\n fragment_shader_name=None,\n ):\n if fragment_shader_source is None and fragment_shader_name is None:\n raise Exception(\"Must either 
pass shader name or shader source.\")\n\n if fragment_shader_name is not None:\n # Use the name.\n shader_file_path = SHADER_FOLDER / f\"{fragment_shader_name}.frag\"\n fragment_shader_source = get_shader_code_from_file(shader_file_path)\n elif fragment_shader_source is not None:\n fragment_shader_source = textwrap.dedent(fragment_shader_source.lstrip())\n\n shader = Shader(\n context,\n source={\n \"vertex_shader\": \"\"\"\n #version 330\n in vec4 in_vert;\n uniform mat4 u_model_view_matrix;\n uniform mat4 u_projection_matrix;\n void main() {{\n vec4 camera_space_vertex = u_model_view_matrix * in_vert;\n vec4 clip_space_vertex = u_projection_matrix * camera_space_vertex;\n gl_Position = clip_space_vertex;\n }}\n \"\"\",\n \"fragment_shader\": fragment_shader_source,\n },\n )\n attributes = np.zeros(6, dtype=[(\"in_vert\", np.float32, (4,))])\n attributes[\"in_vert\"] = np.array(\n [\n [-config[\"frame_x_radius\"], -config[\"frame_y_radius\"], 0, 1],\n [-config[\"frame_x_radius\"], config[\"frame_y_radius\"], 0, 1],\n [config[\"frame_x_radius\"], config[\"frame_y_radius\"], 0, 1],\n [-config[\"frame_x_radius\"], -config[\"frame_y_radius\"], 0, 1],\n [config[\"frame_x_radius\"], -config[\"frame_y_radius\"], 0, 1],\n [config[\"frame_x_radius\"], config[\"frame_y_radius\"], 0, 1],\n ],\n )\n shader.set_uniform(\"u_model_view_matrix\", opengl.view_matrix())\n shader.set_uniform(\n \"u_projection_matrix\",\n opengl.orthographic_projection_matrix(),\n )\n super().__init__(shader, attributes)\n\n def render(self):\n super().render()\n", "path": "manim/renderer/shader.py" } ]
diff --git a/manim/renderer/shader.py b/manim/renderer/shader.py index dc28222489..892ccb5892 100644 --- a/manim/renderer/shader.py +++ b/manim/renderer/shader.py @@ -312,7 +312,7 @@ def render(self): else: self.shader.context.disable(moderngl.DEPTH_TEST) - from moderngl.program_members import Attribute + from moderngl import Attribute shader_attributes = [] for k, v in self.shader.shader_program._members.items():
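The one-line patch above replaces the import because the installed moderngl no longer ships a `program_members` submodule; `Attribute` is importable from the package top level instead, which is exactly what the traceback in the issue reports. If both moderngl layouts had to be tolerated rather than simply requiring a newer release, a guarded import would be one plausible approach — this is a hypothetical sketch, not part of the actual fix:

```python
# Hedged sketch (not from the patch): support both moderngl layouts by trying
# the top-level name first and falling back to the legacy submodule.
try:
    from moderngl import Attribute  # newer moderngl exposes members at top level
except ImportError:
    from moderngl.program_members import Attribute  # older moderngl layout


def is_attribute(member) -> bool:
    """Return True if a shader program member is a vertex attribute."""
    return isinstance(member, Attribute)
```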
mlflow__mlflow-6024
[BUG] sqlite for backend store ### Willingness to contribute Yes. I would be willing to contribute a fix for this bug with guidance from the MLflow community. ### MLflow version 1.26 ### System information - **OS Platform and Distribution (e.g., Linux Ubuntu 16.04)**: 20.04 - **Python version**: 3.8 - **yarn version, if running the dev UI**: ### Describe the problem Upgraded from 1.23, setting sqlite as backend store an sqlalchemy.future library error is produced. A similar issue with this one https://stackoverflow.com/questions/72341647/mlflow-modulenotfounderror-no-module-named-sqlalchemy-future/72432684#72432684 ### Tracking information mlflow server --backend-store-uri sqlite:///mlflow.sqlite --default-artifact-root ./mlruns ### Code to reproduce issue ``` mlflow server --backend-store-uri sqlite:///mlflow.sqlite --default-artifact-root ./mlruns ``` ### Other info / logs ``` 2022/05/30 13:18:36 ERROR mlflow.cli: Error initializing backend store 2022/05/30 13:18:36 ERROR mlflow.cli: No module named 'sqlalchemy.future' Traceback (most recent call last): lib/python3.8/site-packages/mlflow/store/tracking/sqlalchemy_store.py", line 11, in <module> from sqlalchemy.future import select ModuleNotFoundError: No module named 'sqlalchemy.future' ``` ### What component(s) does this bug affect? - [X] `area/artifacts`: Artifact stores and artifact logging - [ ] `area/build`: Build and test infrastructure for MLflow - [X] `area/docs`: MLflow documentation pages - [ ] `area/examples`: Example code - [X] `area/model-registry`: Model Registry service, APIs, and the fluent client calls for Model Registry - [ ] `area/models`: MLmodel format, model serialization/deserialization, flavors - [ ] `area/projects`: MLproject format, project running backends - [ ] `area/scoring`: MLflow Model server, model deployment tools, Spark UDFs - [ ] `area/server-infra`: MLflow Tracking server backend - [X] `area/tracking`: Tracking Service, tracking client APIs, autologging ### What interface(s) does this bug affect? - [ ] `area/uiux`: Front-end, user experience, plotting, JavaScript, JavaScript dev server - [ ] `area/docker`: Docker use across MLflow's components, such as MLflow Projects and MLflow Models - [X] `area/sqlalchemy`: Use of SQLAlchemy in the Tracking Service or Model Registry - [ ] `area/windows`: Windows support ### What language(s) does this bug affect? - [ ] `language/r`: R APIs and clients - [ ] `language/java`: Java APIs and clients - [ ] `language/new`: Proposals for new client languages ### What integration(s) does this bug affect? - [ ] `integrations/azure`: Azure and Azure ML integrations - [ ] `integrations/sagemaker`: SageMaker integrations - [ ] `integrations/databricks`: Databricks integrations
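The failing line in the log, `from sqlalchemy.future import select`, only exists from SQLAlchemy 1.4 onward, so the error simply means an older SQLAlchemy install satisfied mlflow's unpinned requirement. A small diagnostic sketch (illustrative only, not from the mlflow codebase) that makes the mismatch explicit:

```python
# Hypothetical diagnostic: check whether the installed SQLAlchemy is new
# enough to provide sqlalchemy.future (added in the 1.4 series).
import sqlalchemy
from packaging.version import Version

if Version(sqlalchemy.__version__) < Version("1.4.0"):
    print(f"SQLAlchemy {sqlalchemy.__version__} is too old for mlflow's "
          "SqlAlchemyStore; upgrade with: pip install 'sqlalchemy>=1.4.0'")
else:
    from sqlalchemy.future import select  # available on 1.4+
    print("sqlalchemy.future is available:", select)
```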
[ { "content": "import os\nimport logging\nimport distutils\n\nfrom importlib.machinery import SourceFileLoader\nfrom setuptools import setup, find_packages\n\n_MLFLOW_SKINNY_ENV_VAR = \"MLFLOW_SKINNY\"\n\nversion = (\n SourceFileLoader(\"mlflow.version\", os.path.join(\"mlflow\", \"version.py\")).load_module().VERSION\n)\n\n\n# Get a list of all files in the JS directory to include in our module\ndef package_files(directory):\n paths = []\n for (path, _, filenames) in os.walk(directory):\n for filename in filenames:\n paths.append(os.path.join(\"..\", path, filename))\n return paths\n\n\n# Prints out a set of paths (relative to the mlflow/ directory) of files in mlflow/server/js/build\n# to include in the wheel, e.g. \"../mlflow/server/js/build/index.html\"\njs_files = package_files(\"mlflow/server/js/build\")\nmodels_container_server_files = package_files(\"mlflow/models/container\")\nalembic_files = [\n \"../mlflow/store/db_migrations/alembic.ini\",\n \"../mlflow/temporary_db_migrations_for_pre_1_users/alembic.ini\",\n]\nextra_files = [\n \"ml-package-versions.yml\",\n \"pypi_package_index.json\",\n \"pyspark/ml/log_model_allowlist.txt\",\n]\n\n\"\"\"\nMinimal requirements for the skinny MLflow client which provides a limited\nsubset of functionality such as: RESTful client functionality for Tracking and\nModel Registry, as well as support for Project execution against local backends\nand Databricks.\n\"\"\"\nSKINNY_REQUIREMENTS = [\n \"click>=7.0\",\n \"cloudpickle\",\n \"databricks-cli>=0.8.7\",\n \"entrypoints\",\n \"gitpython>=2.1.0\",\n \"pyyaml>=5.1\",\n \"protobuf>=3.12.0\",\n \"pytz\",\n \"requests>=2.17.3\",\n \"packaging\",\n # Automated dependency detection in MLflow Models relies on\n # `importlib_metadata.packages_distributions` to resolve a module name to its package name\n # (e.g. 'sklearn' -> 'scikit-learn'). importlib_metadata 3.7.0 or newer supports this function:\n # https://github.com/python/importlib_metadata/blob/main/CHANGES.rst#v370\n \"importlib_metadata>=3.7.0,!=4.7.0\",\n]\n\n\"\"\"\nThese are the core requirements for the complete MLflow platform, which augments\nthe skinny client functionality with support for running the MLflow Tracking\nServer & UI. 
It also adds project backends such as Docker and Kubernetes among\nother capabilities.\n\"\"\"\nCORE_REQUIREMENTS = SKINNY_REQUIREMENTS + [\n \"alembic\",\n # Required\n \"docker>=4.0.0\",\n \"Flask\",\n \"gunicorn; platform_system != 'Windows'\",\n \"numpy\",\n \"scipy\",\n \"pandas\",\n \"prometheus-flask-exporter\",\n \"querystring_parser\",\n # Pin sqlparse for: https://github.com/mlflow/mlflow/issues/3433\n \"sqlparse>=0.3.1\",\n # Required to run the MLflow server against SQL-backed storage\n \"sqlalchemy\",\n \"waitress; platform_system == 'Windows'\",\n]\n\n_is_mlflow_skinny = bool(os.environ.get(_MLFLOW_SKINNY_ENV_VAR))\nlogging.debug(\"{} env var is set: {}\".format(_MLFLOW_SKINNY_ENV_VAR, _is_mlflow_skinny))\n\n\nclass ListDependencies(distutils.cmd.Command):\n # `python setup.py <command name>` prints out \"running <command name>\" by default.\n # This logging message must be hidden by specifying `--quiet` (or `-q`) when piping the output\n # of this command to `pip install`.\n description = \"List mlflow dependencies\"\n user_options = [\n (\"skinny\", None, \"List mlflow-skinny dependencies\"),\n ]\n\n def initialize_options(self):\n self.skinny = False\n\n def finalize_options(self):\n pass\n\n def run(self):\n dependencies = SKINNY_REQUIREMENTS if self.skinny else CORE_REQUIREMENTS\n print(\"\\n\".join(dependencies))\n\n\nMINIMUM_SUPPORTED_PYTHON_VERSION = \"3.7\"\n\n\nclass MinPythonVersion(distutils.cmd.Command):\n description = \"Print out the minimum supported Python version\"\n user_options = []\n\n def initialize_options(self):\n pass\n\n def finalize_options(self):\n pass\n\n def run(self):\n print(MINIMUM_SUPPORTED_PYTHON_VERSION)\n\n\nsetup(\n name=\"mlflow\" if not _is_mlflow_skinny else \"mlflow-skinny\",\n version=version,\n packages=find_packages(exclude=[\"tests\", \"tests.*\"]),\n package_data={\"mlflow\": js_files + models_container_server_files + alembic_files + extra_files}\n if not _is_mlflow_skinny\n # include alembic files to enable usage of the skinny client with SQL databases\n # if users install sqlalchemy, alembic, and sqlparse independently\n else {\"mlflow\": alembic_files + extra_files},\n install_requires=CORE_REQUIREMENTS if not _is_mlflow_skinny else SKINNY_REQUIREMENTS,\n extras_require={\n \"extras\": [\n \"scikit-learn\",\n # Required to log artifacts and models to HDFS artifact locations\n \"pyarrow\",\n # Required to log artifacts and models to AWS S3 artifact locations\n \"boto3\",\n # Required to log artifacts and models to GCS artifact locations\n \"google-cloud-storage>=1.30.0\",\n \"azureml-core>=1.2.0\",\n # Required to log artifacts to SFTP artifact locations\n \"pysftp\",\n # Required by the mlflow.projects module, when running projects against\n # a remote Kubernetes cluster\n \"kubernetes\",\n # Required to serve models through MLServer\n \"mlserver>=0.5.3\",\n \"mlserver-mlflow>=0.5.3\",\n \"virtualenv\",\n ],\n \"sqlserver\": [\"mlflow-dbstore\"],\n \"aliyun-oss\": [\"aliyunstoreplugin\"],\n },\n entry_points=\"\"\"\n [console_scripts]\n mlflow=mlflow.cli:cli\n \"\"\",\n cmdclass={\n \"dependencies\": ListDependencies,\n \"min_python_version\": MinPythonVersion,\n },\n zip_safe=False,\n author=\"Databricks\",\n description=\"MLflow: A Platform for ML Development and Productionization\",\n long_description=open(\"README.rst\").read()\n if not _is_mlflow_skinny\n else open(\"README_SKINNY.rst\").read() + open(\"README.rst\").read(),\n long_description_content_type=\"text/x-rst\",\n license=\"Apache License 2.0\",\n 
classifiers=[\n \"Intended Audience :: Developers\",\n f\"Programming Language :: Python :: {MINIMUM_SUPPORTED_PYTHON_VERSION}\",\n ],\n keywords=\"ml ai databricks\",\n url=\"https://mlflow.org/\",\n python_requires=f\">={MINIMUM_SUPPORTED_PYTHON_VERSION}\",\n project_urls={\n \"Bug Tracker\": \"https://github.com/mlflow/mlflow/issues\",\n \"Documentation\": \"https://mlflow.org/docs/latest/index.html\",\n \"Source Code\": \"https://github.com/mlflow/mlflow\",\n },\n)\n", "path": "setup.py" } ]
[ { "content": "import os\nimport logging\nimport distutils\n\nfrom importlib.machinery import SourceFileLoader\nfrom setuptools import setup, find_packages\n\n_MLFLOW_SKINNY_ENV_VAR = \"MLFLOW_SKINNY\"\n\nversion = (\n SourceFileLoader(\"mlflow.version\", os.path.join(\"mlflow\", \"version.py\")).load_module().VERSION\n)\n\n\n# Get a list of all files in the JS directory to include in our module\ndef package_files(directory):\n paths = []\n for (path, _, filenames) in os.walk(directory):\n for filename in filenames:\n paths.append(os.path.join(\"..\", path, filename))\n return paths\n\n\n# Prints out a set of paths (relative to the mlflow/ directory) of files in mlflow/server/js/build\n# to include in the wheel, e.g. \"../mlflow/server/js/build/index.html\"\njs_files = package_files(\"mlflow/server/js/build\")\nmodels_container_server_files = package_files(\"mlflow/models/container\")\nalembic_files = [\n \"../mlflow/store/db_migrations/alembic.ini\",\n \"../mlflow/temporary_db_migrations_for_pre_1_users/alembic.ini\",\n]\nextra_files = [\n \"ml-package-versions.yml\",\n \"pypi_package_index.json\",\n \"pyspark/ml/log_model_allowlist.txt\",\n]\n\n\"\"\"\nMinimal requirements for the skinny MLflow client which provides a limited\nsubset of functionality such as: RESTful client functionality for Tracking and\nModel Registry, as well as support for Project execution against local backends\nand Databricks.\n\"\"\"\nSKINNY_REQUIREMENTS = [\n \"click>=7.0\",\n \"cloudpickle\",\n \"databricks-cli>=0.8.7\",\n \"entrypoints\",\n \"gitpython>=2.1.0\",\n \"pyyaml>=5.1\",\n \"protobuf>=3.12.0\",\n \"pytz\",\n \"requests>=2.17.3\",\n \"packaging\",\n # Automated dependency detection in MLflow Models relies on\n # `importlib_metadata.packages_distributions` to resolve a module name to its package name\n # (e.g. 'sklearn' -> 'scikit-learn'). importlib_metadata 3.7.0 or newer supports this function:\n # https://github.com/python/importlib_metadata/blob/main/CHANGES.rst#v370\n \"importlib_metadata>=3.7.0,!=4.7.0\",\n]\n\n\"\"\"\nThese are the core requirements for the complete MLflow platform, which augments\nthe skinny client functionality with support for running the MLflow Tracking\nServer & UI. 
It also adds project backends such as Docker and Kubernetes among\nother capabilities.\n\"\"\"\nCORE_REQUIREMENTS = SKINNY_REQUIREMENTS + [\n \"alembic\",\n # Required\n \"docker>=4.0.0\",\n \"Flask\",\n \"gunicorn; platform_system != 'Windows'\",\n \"numpy\",\n \"scipy\",\n \"pandas\",\n \"prometheus-flask-exporter\",\n \"querystring_parser\",\n # Pin sqlparse for: https://github.com/mlflow/mlflow/issues/3433\n \"sqlparse>=0.3.1\",\n # Required to run the MLflow server against SQL-backed storage\n \"sqlalchemy>=1.4.0\",\n \"waitress; platform_system == 'Windows'\",\n]\n\n_is_mlflow_skinny = bool(os.environ.get(_MLFLOW_SKINNY_ENV_VAR))\nlogging.debug(\"{} env var is set: {}\".format(_MLFLOW_SKINNY_ENV_VAR, _is_mlflow_skinny))\n\n\nclass ListDependencies(distutils.cmd.Command):\n # `python setup.py <command name>` prints out \"running <command name>\" by default.\n # This logging message must be hidden by specifying `--quiet` (or `-q`) when piping the output\n # of this command to `pip install`.\n description = \"List mlflow dependencies\"\n user_options = [\n (\"skinny\", None, \"List mlflow-skinny dependencies\"),\n ]\n\n def initialize_options(self):\n self.skinny = False\n\n def finalize_options(self):\n pass\n\n def run(self):\n dependencies = SKINNY_REQUIREMENTS if self.skinny else CORE_REQUIREMENTS\n print(\"\\n\".join(dependencies))\n\n\nMINIMUM_SUPPORTED_PYTHON_VERSION = \"3.7\"\n\n\nclass MinPythonVersion(distutils.cmd.Command):\n description = \"Print out the minimum supported Python version\"\n user_options = []\n\n def initialize_options(self):\n pass\n\n def finalize_options(self):\n pass\n\n def run(self):\n print(MINIMUM_SUPPORTED_PYTHON_VERSION)\n\n\nsetup(\n name=\"mlflow\" if not _is_mlflow_skinny else \"mlflow-skinny\",\n version=version,\n packages=find_packages(exclude=[\"tests\", \"tests.*\"]),\n package_data={\"mlflow\": js_files + models_container_server_files + alembic_files + extra_files}\n if not _is_mlflow_skinny\n # include alembic files to enable usage of the skinny client with SQL databases\n # if users install sqlalchemy, alembic, and sqlparse independently\n else {\"mlflow\": alembic_files + extra_files},\n install_requires=CORE_REQUIREMENTS if not _is_mlflow_skinny else SKINNY_REQUIREMENTS,\n extras_require={\n \"extras\": [\n \"scikit-learn\",\n # Required to log artifacts and models to HDFS artifact locations\n \"pyarrow\",\n # Required to log artifacts and models to AWS S3 artifact locations\n \"boto3\",\n # Required to log artifacts and models to GCS artifact locations\n \"google-cloud-storage>=1.30.0\",\n \"azureml-core>=1.2.0\",\n # Required to log artifacts to SFTP artifact locations\n \"pysftp\",\n # Required by the mlflow.projects module, when running projects against\n # a remote Kubernetes cluster\n \"kubernetes\",\n # Required to serve models through MLServer\n \"mlserver>=0.5.3\",\n \"mlserver-mlflow>=0.5.3\",\n \"virtualenv\",\n ],\n \"sqlserver\": [\"mlflow-dbstore\"],\n \"aliyun-oss\": [\"aliyunstoreplugin\"],\n },\n entry_points=\"\"\"\n [console_scripts]\n mlflow=mlflow.cli:cli\n \"\"\",\n cmdclass={\n \"dependencies\": ListDependencies,\n \"min_python_version\": MinPythonVersion,\n },\n zip_safe=False,\n author=\"Databricks\",\n description=\"MLflow: A Platform for ML Development and Productionization\",\n long_description=open(\"README.rst\").read()\n if not _is_mlflow_skinny\n else open(\"README_SKINNY.rst\").read() + open(\"README.rst\").read(),\n long_description_content_type=\"text/x-rst\",\n license=\"Apache License 2.0\",\n 
classifiers=[\n \"Intended Audience :: Developers\",\n f\"Programming Language :: Python :: {MINIMUM_SUPPORTED_PYTHON_VERSION}\",\n ],\n keywords=\"ml ai databricks\",\n url=\"https://mlflow.org/\",\n python_requires=f\">={MINIMUM_SUPPORTED_PYTHON_VERSION}\",\n project_urls={\n \"Bug Tracker\": \"https://github.com/mlflow/mlflow/issues\",\n \"Documentation\": \"https://mlflow.org/docs/latest/index.html\",\n \"Source Code\": \"https://github.com/mlflow/mlflow\",\n },\n)\n", "path": "setup.py" } ]
diff --git a/setup.py b/setup.py index adf20d1e6dcd5..8d73781b087ad 100644 --- a/setup.py +++ b/setup.py @@ -79,7 +79,7 @@ def package_files(directory): # Pin sqlparse for: https://github.com/mlflow/mlflow/issues/3433 "sqlparse>=0.3.1", # Required to run the MLflow server against SQL-backed storage - "sqlalchemy", + "sqlalchemy>=1.4.0", "waitress; platform_system == 'Windows'", ]
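The pin above (`sqlalchemy>=1.4.0` in `CORE_REQUIREMENTS`) is the whole fix: with a declared lower bound, dependency resolvers will not leave a 1.3.x SQLAlchemy next to mlflow. A hypothetical post-install check of the declared requirement could look like this sketch:

```python
# Hypothetical sketch: confirm the installed mlflow distribution declares the
# sqlalchemy lower bound introduced by the patch above.
from importlib.metadata import requires

sqlalchemy_reqs = [r for r in (requires("mlflow") or []) if r.startswith("sqlalchemy")]
print(sqlalchemy_reqs)  # expected to include something like 'sqlalchemy>=1.4.0'
```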
mitmproxy__mitmproxy-2142
mitmdump -nr does not exit automatically ##### Steps to reproduce the problem: 1. Use `mitmdump -nr foo.mitm` to print some flows. 2. Mitmdump should exit automatically after printing, but it doesn't. ##### System information Mitmproxy version: 3.0.0 (2.0.0dev0136-0x05e1154) Python version: 3.5.2 Platform: Linux-3.4.0+-x86_64-with-Ubuntu-14.04-trusty SSL version: OpenSSL 1.0.2g-fips 1 Mar 2016 Linux distro: Ubuntu 14.04 trusty
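Judging from the `options.py` copies included below, the change flips the default of the `keepserving` option from True to False, so `mitmdump -n -r foo.mitm` can terminate once the flow file has been printed. A minimal, hypothetical illustration of that option through the Python API (names as they appear in the file below):

```python
# Hypothetical sketch: keepserving decides whether mitmdump keeps running
# after client replay / flow-file reading has finished.
from mitmproxy import options

opts = options.Options()
print(opts.keepserving)        # False once the default is changed
opts.update(keepserving=True)  # opt back in to the old always-on behaviour
print(opts.keepserving)
```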
[ { "content": "from typing import Optional, Sequence\n\nfrom mitmproxy import optmanager\nfrom mitmproxy.net import tcp\n\n# We redefine these here for now to avoid importing Urwid-related guff on\n# platforms that don't support it, and circular imports. We can do better using\n# a lazy checker down the track.\nconsole_palettes = [\n \"lowlight\",\n \"lowdark\",\n \"light\",\n \"dark\",\n \"solarized_light\",\n \"solarized_dark\"\n]\nview_orders = [\n \"time\",\n \"method\",\n \"url\",\n \"size\",\n]\n\nAPP_HOST = \"mitm.it\"\nAPP_PORT = 80\nCA_DIR = \"~/.mitmproxy\"\nLISTEN_PORT = 8080\n\n# We manually need to specify this, otherwise OpenSSL may select a non-HTTP2 cipher by default.\n# https://mozilla.github.io/server-side-tls/ssl-config-generator/?server=apache-2.2.15&openssl=1.0.2&hsts=yes&profile=old\nDEFAULT_CLIENT_CIPHERS = (\n \"ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:\"\n \"ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:\"\n \"ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:\"\n \"ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:\"\n \"DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:\"\n \"DHE-RSA-AES256-SHA:ECDHE-RSA-DES-CBC3-SHA:ECDHE-ECDSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:\"\n \"AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:DES-CBC3-SHA:\"\n \"HIGH:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:\"\n \"!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA\"\n)\n\n\nclass Options(optmanager.OptManager):\n def __init__(self, **kwargs) -> None:\n super().__init__()\n self.add_option(\n \"onboarding\", bool, True,\n \"Toggle the mitmproxy onboarding app.\"\n )\n self.add_option(\n \"onboarding_host\", str, APP_HOST,\n \"\"\"\n Domain to serve the onboarding app from. For transparent mode, use\n an IP when a DNS entry for the app domain is not present. \"\"\"\n )\n self.add_option(\n \"onboarding_port\", int, APP_PORT,\n \"Port to serve the onboarding app from.\"\n )\n self.add_option(\n \"anticache\", bool, False,\n \"\"\"\n Strip out request headers that might cause the server to return\n 304-not-modified.\n \"\"\"\n )\n self.add_option(\n \"anticomp\", bool, False,\n \"Try to convince servers to send us un-compressed data.\"\n )\n self.add_option(\n \"client_replay\", Sequence[str], [],\n \"Replay client requests from a saved file.\"\n )\n self.add_option(\n \"replay_kill_extra\", bool, False,\n \"Kill extra requests during replay.\"\n )\n self.add_option(\n \"keepserving\", bool, True,\n \"Continue serving after client playback or file read.\"\n )\n self.add_option(\n \"server\", bool, True,\n \"Start a proxy server.\"\n )\n self.add_option(\n \"server_replay_nopop\", bool, False,\n \"\"\"\n Disable response pop from response flow. 
This makes it possible to\n replay same response multiple times.\n \"\"\"\n )\n self.add_option(\n \"refresh_server_playback\", bool, True,\n \"\"\"\n Refresh server replay responses by adjusting date, expires and\n last-modified headers, as well as adjusting cookie expiration.\n \"\"\"\n )\n self.add_option(\n \"rfile\", Optional[str], None,\n \"Read flows from file.\"\n )\n self.add_option(\n \"scripts\", Sequence[str], [],\n \"\"\"\n Execute a script.\n \"\"\"\n )\n self.add_option(\n \"showhost\", bool, False,\n \"Use the Host header to construct URLs for display.\"\n )\n self.add_option(\n \"replacements\", Sequence[str], [],\n \"\"\"\n Replacement patterns of the form \"/pattern/regex/replacement\", where\n the separator can be any character.\n \"\"\"\n )\n self.add_option(\n \"replacement_files\", Sequence[str], [],\n \"\"\"\n Replacement pattern, where the replacement clause is a path to a\n file.\n \"\"\"\n )\n self.add_option(\n \"server_replay_use_headers\", Sequence[str], [],\n \"Request headers to be considered during replay.\"\n )\n self.add_option(\n \"setheaders\", Sequence[str], [],\n \"\"\"\n Header set pattern of the form \"/pattern/header/value\", where the\n separator can be any character.\n \"\"\"\n )\n self.add_option(\n \"server_replay\", Sequence[str], [],\n \"Replay server responses from a saved file.\"\n )\n self.add_option(\n \"stickycookie\", Optional[str], None,\n \"Set sticky cookie filter. Matched against requests.\"\n )\n self.add_option(\n \"stickyauth\", Optional[str], None,\n \"Set sticky auth filter. Matched against requests.\"\n )\n self.add_option(\n \"stream_large_bodies\", Optional[str], None,\n \"\"\"\n Stream data to the client if response body exceeds the given\n threshold. If streamed, the body will not be stored in any way.\n Understands k/m/g suffixes, i.e. 3m for 3 megabytes.\n \"\"\"\n )\n self.add_option(\n \"verbosity\", int, 2,\n \"Log verbosity.\"\n )\n self.add_option(\n \"default_contentview\", str, \"auto\",\n \"The default content view mode.\"\n )\n self.add_option(\n \"streamfile\", Optional[str], None,\n \"Write flows to file. Prefix path with + to append.\"\n )\n self.add_option(\n \"server_replay_ignore_content\", bool, False,\n \"Ignore request's content while searching for a saved flow to replay.\"\n )\n self.add_option(\n \"server_replay_ignore_params\", Sequence[str], [],\n \"\"\"\n Request's parameters to be ignored while searching for a saved flow\n to replay. Can be passed multiple times.\n \"\"\"\n )\n self.add_option(\n \"server_replay_ignore_payload_params\", Sequence[str], [],\n \"\"\"\n Request's payload parameters (application/x-www-form-urlencoded or\n multipart/form-data) to be ignored while searching for a saved flow\n to replay.\n \"\"\"\n )\n self.add_option(\n \"server_replay_ignore_host\", bool, False,\n \"\"\"\n Ignore request's destination host while searching for a saved flow\n to replay.\n \"\"\"\n )\n\n # Proxy options\n self.add_option(\n \"proxyauth\", Optional[str], None,\n \"\"\"\n Require authentication before proxying requests. If the value is\n \"any\", we prompt for authentication, but permit any values. If it\n starts with an \"@\", it is treated as a path to an Apache htpasswd\n file. 
If its is of the form \"username:password\", it is treated as a\n single-user credential.\n \"\"\"\n )\n self.add_option(\n \"add_upstream_certs_to_client_chain\", bool, False,\n \"\"\"\n Add all certificates of the upstream server to the certificate chain\n that will be served to the proxy client, as extras.\n \"\"\"\n )\n self.add_option(\n \"body_size_limit\", Optional[str], None,\n \"\"\"\n Byte size limit of HTTP request and response bodies. Understands\n k/m/g suffixes, i.e. 3m for 3 megabytes.\n \"\"\"\n )\n self.add_option(\n \"cadir\", str, CA_DIR,\n \"Location of the default mitmproxy CA files.\"\n )\n self.add_option(\n \"certs\", Sequence[str], [],\n \"\"\"\n SSL certificates. SPEC is of the form \"[domain=]path\". The\n domain may include a wildcard, and is equal to \"*\" if not specified.\n The file at path is a certificate in PEM format. If a private key is\n included in the PEM, it is used, else the default key in the conf\n dir is used. The PEM file should contain the full certificate chain,\n with the leaf certificate as the first entry. Can be passed multiple\n times.\n \"\"\"\n )\n self.add_option(\n \"ciphers_client\", str, DEFAULT_CLIENT_CIPHERS,\n \"Set supported ciphers for client connections using OpenSSL syntax.\"\n )\n self.add_option(\n \"ciphers_server\", Optional[str], None,\n \"Set supported ciphers for server connections using OpenSSL syntax.\"\n )\n self.add_option(\n \"client_certs\", Optional[str], None,\n \"Client certificate file or directory.\"\n )\n self.add_option(\n \"ignore_hosts\", Sequence[str], [],\n \"\"\"\n Ignore host and forward all traffic without processing it. In\n transparent mode, it is recommended to use an IP address (range),\n not the hostname. In regular mode, only SSL traffic is ignored and\n the hostname should be used. The supplied value is interpreted as a\n regular expression and matched on the ip or the hostname.\n \"\"\"\n )\n self.add_option(\n \"listen_host\", str, \"\",\n \"Address to bind proxy to.\"\n )\n self.add_option(\n \"listen_port\", int, LISTEN_PORT,\n \"Proxy service port.\"\n )\n self.add_option(\n \"upstream_bind_address\", str, \"\",\n \"Address to bind upstream requests to.\"\n )\n self.add_option(\n \"mode\", str, \"regular\",\n \"\"\"\n Mode can be \"regular\", \"transparent\", \"socks5\", \"reverse:SPEC\",\n or \"upstream:SPEC\". For reverse and upstream proxy modes, SPEC\n is proxy specification in the form of \"http[s]://host[:port]\".\n \"\"\"\n )\n self.add_option(\n \"upstream_cert\", bool, True,\n \"Connect to upstream server to look up certificate details.\"\n )\n self.add_option(\n \"keep_host_header\", bool, False,\n \"\"\"\n Reverse Proxy: Keep the original host header instead of rewriting it\n to the reverse proxy target.\n \"\"\"\n )\n\n self.add_option(\n \"http2\", bool, True,\n \"Enable/disable HTTP/2 support. \"\n \"HTTP/2 support is enabled by default.\",\n )\n self.add_option(\n \"http2_priority\", bool, False,\n \"\"\"\n PRIORITY forwarding for HTTP/2 connections. PRIORITY forwarding is\n disabled by default, because some webservers fail to implement the\n RFC properly.\n \"\"\"\n )\n self.add_option(\n \"websocket\", bool, True,\n \"Enable/disable WebSocket support. \"\n \"WebSocket support is enabled by default.\",\n )\n self.add_option(\n \"rawtcp\", bool, False,\n \"Enable/disable experimental raw TCP support. \"\n \"Disabled by default. \"\n )\n\n self.add_option(\n \"spoof_source_address\", bool, False,\n \"\"\"\n Use the client's IP for server-side connections. 
Combine with\n --upstream-bind-address to spoof a fixed source address.\n \"\"\"\n )\n self.add_option(\n \"upstream_auth\", Optional[str], None,\n \"\"\"\n Add HTTP Basic authentcation to upstream proxy and reverse proxy\n requests. Format: username:password.\n \"\"\"\n )\n self.add_option(\n \"ssl_version_client\", str, \"secure\",\n \"\"\"\n Set supported SSL/TLS versions for client connections. SSLv2, SSLv3\n and 'all' are INSECURE. Defaults to secure, which is TLS1.0+.\n \"\"\",\n choices=tcp.sslversion_choices.keys(),\n )\n self.add_option(\n \"ssl_version_server\", str, \"secure\",\n \"\"\"\n Set supported SSL/TLS versions for server connections. SSLv2, SSLv3\n and 'all' are INSECURE. Defaults to secure, which is TLS1.0+.\n \"\"\",\n choices=tcp.sslversion_choices.keys(),\n )\n self.add_option(\n \"ssl_insecure\", bool, False,\n \"Do not verify upstream server SSL/TLS certificates.\"\n )\n self.add_option(\n \"ssl_verify_upstream_trusted_cadir\", Optional[str], None,\n \"\"\"\n Path to a directory of trusted CA certificates for upstream server\n verification prepared using the c_rehash tool.\n \"\"\"\n )\n self.add_option(\n \"ssl_verify_upstream_trusted_ca\", Optional[str], None,\n \"Path to a PEM formatted trusted CA certificate.\"\n )\n self.add_option(\n \"tcp_hosts\", Sequence[str], [],\n \"\"\"\n Generic TCP SSL proxy mode for all hosts that match the pattern.\n Similar to --ignore, but SSL connections are intercepted. The\n communication contents are printed to the log in verbose mode.\n \"\"\"\n )\n\n self.add_option(\n \"intercept\", Optional[str], None,\n \"Intercept filter expression.\"\n )\n\n # Console options\n self.add_option(\n \"console_eventlog\", bool, False,\n \"Show event log.\"\n )\n self.add_option(\n \"console_focus_follow\", bool, False,\n \"Focus follows new flows.\"\n )\n self.add_option(\n \"console_palette\", str, \"dark\",\n \"Color palette.\",\n choices=sorted(console_palettes),\n )\n self.add_option(\n \"console_palette_transparent\", bool, False,\n \"Set transparent background for palette.\"\n )\n self.add_option(\n \"console_mouse\", bool, True,\n \"Console mouse interaction.\"\n )\n self.add_option(\n \"console_order\", Optional[str], None,\n \"Flow sort order.\",\n choices=view_orders,\n )\n self.add_option(\n \"console_order_reversed\", bool, False,\n \"Reverse the sorting order.\"\n )\n\n self.add_option(\n \"filter\", Optional[str], None,\n \"Filter view expression.\"\n )\n\n # Web options\n self.add_option(\n \"web_open_browser\", bool, True,\n \"Start a browser.\"\n )\n self.add_option(\n \"web_debug\", bool, False,\n \"Mitmweb debugging.\"\n )\n self.add_option(\n \"web_port\", int, 8081,\n \"Mitmweb port.\"\n )\n self.add_option(\n \"web_iface\", str, \"127.0.0.1\",\n \"Mitmweb interface.\"\n )\n\n # Dump options\n self.add_option(\n \"filtstr\", Optional[str], None,\n \"The filter string for mitmdump.\"\n )\n self.add_option(\n \"flow_detail\", int, 1,\n \"Flow detail display level.\"\n )\n\n self.update(**kwargs)\n", "path": "mitmproxy/options.py" } ]
[ { "content": "from typing import Optional, Sequence\n\nfrom mitmproxy import optmanager\nfrom mitmproxy.net import tcp\n\n# We redefine these here for now to avoid importing Urwid-related guff on\n# platforms that don't support it, and circular imports. We can do better using\n# a lazy checker down the track.\nconsole_palettes = [\n \"lowlight\",\n \"lowdark\",\n \"light\",\n \"dark\",\n \"solarized_light\",\n \"solarized_dark\"\n]\nview_orders = [\n \"time\",\n \"method\",\n \"url\",\n \"size\",\n]\n\nAPP_HOST = \"mitm.it\"\nAPP_PORT = 80\nCA_DIR = \"~/.mitmproxy\"\nLISTEN_PORT = 8080\n\n# We manually need to specify this, otherwise OpenSSL may select a non-HTTP2 cipher by default.\n# https://mozilla.github.io/server-side-tls/ssl-config-generator/?server=apache-2.2.15&openssl=1.0.2&hsts=yes&profile=old\nDEFAULT_CLIENT_CIPHERS = (\n \"ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:\"\n \"ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:\"\n \"ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:\"\n \"ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:\"\n \"DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:\"\n \"DHE-RSA-AES256-SHA:ECDHE-RSA-DES-CBC3-SHA:ECDHE-ECDSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:\"\n \"AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:DES-CBC3-SHA:\"\n \"HIGH:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:\"\n \"!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA\"\n)\n\n\nclass Options(optmanager.OptManager):\n def __init__(self, **kwargs) -> None:\n super().__init__()\n self.add_option(\n \"onboarding\", bool, True,\n \"Toggle the mitmproxy onboarding app.\"\n )\n self.add_option(\n \"onboarding_host\", str, APP_HOST,\n \"\"\"\n Domain to serve the onboarding app from. For transparent mode, use\n an IP when a DNS entry for the app domain is not present. \"\"\"\n )\n self.add_option(\n \"onboarding_port\", int, APP_PORT,\n \"Port to serve the onboarding app from.\"\n )\n self.add_option(\n \"anticache\", bool, False,\n \"\"\"\n Strip out request headers that might cause the server to return\n 304-not-modified.\n \"\"\"\n )\n self.add_option(\n \"anticomp\", bool, False,\n \"Try to convince servers to send us un-compressed data.\"\n )\n self.add_option(\n \"client_replay\", Sequence[str], [],\n \"Replay client requests from a saved file.\"\n )\n self.add_option(\n \"replay_kill_extra\", bool, False,\n \"Kill extra requests during replay.\"\n )\n self.add_option(\n \"keepserving\", bool, False,\n \"Continue serving after client playback or file read.\"\n )\n self.add_option(\n \"server\", bool, True,\n \"Start a proxy server.\"\n )\n self.add_option(\n \"server_replay_nopop\", bool, False,\n \"\"\"\n Disable response pop from response flow. 
This makes it possible to\n replay same response multiple times.\n \"\"\"\n )\n self.add_option(\n \"refresh_server_playback\", bool, True,\n \"\"\"\n Refresh server replay responses by adjusting date, expires and\n last-modified headers, as well as adjusting cookie expiration.\n \"\"\"\n )\n self.add_option(\n \"rfile\", Optional[str], None,\n \"Read flows from file.\"\n )\n self.add_option(\n \"scripts\", Sequence[str], [],\n \"\"\"\n Execute a script.\n \"\"\"\n )\n self.add_option(\n \"showhost\", bool, False,\n \"Use the Host header to construct URLs for display.\"\n )\n self.add_option(\n \"replacements\", Sequence[str], [],\n \"\"\"\n Replacement patterns of the form \"/pattern/regex/replacement\", where\n the separator can be any character.\n \"\"\"\n )\n self.add_option(\n \"replacement_files\", Sequence[str], [],\n \"\"\"\n Replacement pattern, where the replacement clause is a path to a\n file.\n \"\"\"\n )\n self.add_option(\n \"server_replay_use_headers\", Sequence[str], [],\n \"Request headers to be considered during replay.\"\n )\n self.add_option(\n \"setheaders\", Sequence[str], [],\n \"\"\"\n Header set pattern of the form \"/pattern/header/value\", where the\n separator can be any character.\n \"\"\"\n )\n self.add_option(\n \"server_replay\", Sequence[str], [],\n \"Replay server responses from a saved file.\"\n )\n self.add_option(\n \"stickycookie\", Optional[str], None,\n \"Set sticky cookie filter. Matched against requests.\"\n )\n self.add_option(\n \"stickyauth\", Optional[str], None,\n \"Set sticky auth filter. Matched against requests.\"\n )\n self.add_option(\n \"stream_large_bodies\", Optional[str], None,\n \"\"\"\n Stream data to the client if response body exceeds the given\n threshold. If streamed, the body will not be stored in any way.\n Understands k/m/g suffixes, i.e. 3m for 3 megabytes.\n \"\"\"\n )\n self.add_option(\n \"verbosity\", int, 2,\n \"Log verbosity.\"\n )\n self.add_option(\n \"default_contentview\", str, \"auto\",\n \"The default content view mode.\"\n )\n self.add_option(\n \"streamfile\", Optional[str], None,\n \"Write flows to file. Prefix path with + to append.\"\n )\n self.add_option(\n \"server_replay_ignore_content\", bool, False,\n \"Ignore request's content while searching for a saved flow to replay.\"\n )\n self.add_option(\n \"server_replay_ignore_params\", Sequence[str], [],\n \"\"\"\n Request's parameters to be ignored while searching for a saved flow\n to replay. Can be passed multiple times.\n \"\"\"\n )\n self.add_option(\n \"server_replay_ignore_payload_params\", Sequence[str], [],\n \"\"\"\n Request's payload parameters (application/x-www-form-urlencoded or\n multipart/form-data) to be ignored while searching for a saved flow\n to replay.\n \"\"\"\n )\n self.add_option(\n \"server_replay_ignore_host\", bool, False,\n \"\"\"\n Ignore request's destination host while searching for a saved flow\n to replay.\n \"\"\"\n )\n\n # Proxy options\n self.add_option(\n \"proxyauth\", Optional[str], None,\n \"\"\"\n Require authentication before proxying requests. If the value is\n \"any\", we prompt for authentication, but permit any values. If it\n starts with an \"@\", it is treated as a path to an Apache htpasswd\n file. 
If its is of the form \"username:password\", it is treated as a\n single-user credential.\n \"\"\"\n )\n self.add_option(\n \"add_upstream_certs_to_client_chain\", bool, False,\n \"\"\"\n Add all certificates of the upstream server to the certificate chain\n that will be served to the proxy client, as extras.\n \"\"\"\n )\n self.add_option(\n \"body_size_limit\", Optional[str], None,\n \"\"\"\n Byte size limit of HTTP request and response bodies. Understands\n k/m/g suffixes, i.e. 3m for 3 megabytes.\n \"\"\"\n )\n self.add_option(\n \"cadir\", str, CA_DIR,\n \"Location of the default mitmproxy CA files.\"\n )\n self.add_option(\n \"certs\", Sequence[str], [],\n \"\"\"\n SSL certificates. SPEC is of the form \"[domain=]path\". The\n domain may include a wildcard, and is equal to \"*\" if not specified.\n The file at path is a certificate in PEM format. If a private key is\n included in the PEM, it is used, else the default key in the conf\n dir is used. The PEM file should contain the full certificate chain,\n with the leaf certificate as the first entry. Can be passed multiple\n times.\n \"\"\"\n )\n self.add_option(\n \"ciphers_client\", str, DEFAULT_CLIENT_CIPHERS,\n \"Set supported ciphers for client connections using OpenSSL syntax.\"\n )\n self.add_option(\n \"ciphers_server\", Optional[str], None,\n \"Set supported ciphers for server connections using OpenSSL syntax.\"\n )\n self.add_option(\n \"client_certs\", Optional[str], None,\n \"Client certificate file or directory.\"\n )\n self.add_option(\n \"ignore_hosts\", Sequence[str], [],\n \"\"\"\n Ignore host and forward all traffic without processing it. In\n transparent mode, it is recommended to use an IP address (range),\n not the hostname. In regular mode, only SSL traffic is ignored and\n the hostname should be used. The supplied value is interpreted as a\n regular expression and matched on the ip or the hostname.\n \"\"\"\n )\n self.add_option(\n \"listen_host\", str, \"\",\n \"Address to bind proxy to.\"\n )\n self.add_option(\n \"listen_port\", int, LISTEN_PORT,\n \"Proxy service port.\"\n )\n self.add_option(\n \"upstream_bind_address\", str, \"\",\n \"Address to bind upstream requests to.\"\n )\n self.add_option(\n \"mode\", str, \"regular\",\n \"\"\"\n Mode can be \"regular\", \"transparent\", \"socks5\", \"reverse:SPEC\",\n or \"upstream:SPEC\". For reverse and upstream proxy modes, SPEC\n is proxy specification in the form of \"http[s]://host[:port]\".\n \"\"\"\n )\n self.add_option(\n \"upstream_cert\", bool, True,\n \"Connect to upstream server to look up certificate details.\"\n )\n self.add_option(\n \"keep_host_header\", bool, False,\n \"\"\"\n Reverse Proxy: Keep the original host header instead of rewriting it\n to the reverse proxy target.\n \"\"\"\n )\n\n self.add_option(\n \"http2\", bool, True,\n \"Enable/disable HTTP/2 support. \"\n \"HTTP/2 support is enabled by default.\",\n )\n self.add_option(\n \"http2_priority\", bool, False,\n \"\"\"\n PRIORITY forwarding for HTTP/2 connections. PRIORITY forwarding is\n disabled by default, because some webservers fail to implement the\n RFC properly.\n \"\"\"\n )\n self.add_option(\n \"websocket\", bool, True,\n \"Enable/disable WebSocket support. \"\n \"WebSocket support is enabled by default.\",\n )\n self.add_option(\n \"rawtcp\", bool, False,\n \"Enable/disable experimental raw TCP support. \"\n \"Disabled by default. \"\n )\n\n self.add_option(\n \"spoof_source_address\", bool, False,\n \"\"\"\n Use the client's IP for server-side connections. 
Combine with\n --upstream-bind-address to spoof a fixed source address.\n \"\"\"\n )\n self.add_option(\n \"upstream_auth\", Optional[str], None,\n \"\"\"\n Add HTTP Basic authentcation to upstream proxy and reverse proxy\n requests. Format: username:password.\n \"\"\"\n )\n self.add_option(\n \"ssl_version_client\", str, \"secure\",\n \"\"\"\n Set supported SSL/TLS versions for client connections. SSLv2, SSLv3\n and 'all' are INSECURE. Defaults to secure, which is TLS1.0+.\n \"\"\",\n choices=tcp.sslversion_choices.keys(),\n )\n self.add_option(\n \"ssl_version_server\", str, \"secure\",\n \"\"\"\n Set supported SSL/TLS versions for server connections. SSLv2, SSLv3\n and 'all' are INSECURE. Defaults to secure, which is TLS1.0+.\n \"\"\",\n choices=tcp.sslversion_choices.keys(),\n )\n self.add_option(\n \"ssl_insecure\", bool, False,\n \"Do not verify upstream server SSL/TLS certificates.\"\n )\n self.add_option(\n \"ssl_verify_upstream_trusted_cadir\", Optional[str], None,\n \"\"\"\n Path to a directory of trusted CA certificates for upstream server\n verification prepared using the c_rehash tool.\n \"\"\"\n )\n self.add_option(\n \"ssl_verify_upstream_trusted_ca\", Optional[str], None,\n \"Path to a PEM formatted trusted CA certificate.\"\n )\n self.add_option(\n \"tcp_hosts\", Sequence[str], [],\n \"\"\"\n Generic TCP SSL proxy mode for all hosts that match the pattern.\n Similar to --ignore, but SSL connections are intercepted. The\n communication contents are printed to the log in verbose mode.\n \"\"\"\n )\n\n self.add_option(\n \"intercept\", Optional[str], None,\n \"Intercept filter expression.\"\n )\n\n # Console options\n self.add_option(\n \"console_eventlog\", bool, False,\n \"Show event log.\"\n )\n self.add_option(\n \"console_focus_follow\", bool, False,\n \"Focus follows new flows.\"\n )\n self.add_option(\n \"console_palette\", str, \"dark\",\n \"Color palette.\",\n choices=sorted(console_palettes),\n )\n self.add_option(\n \"console_palette_transparent\", bool, False,\n \"Set transparent background for palette.\"\n )\n self.add_option(\n \"console_mouse\", bool, True,\n \"Console mouse interaction.\"\n )\n self.add_option(\n \"console_order\", Optional[str], None,\n \"Flow sort order.\",\n choices=view_orders,\n )\n self.add_option(\n \"console_order_reversed\", bool, False,\n \"Reverse the sorting order.\"\n )\n\n self.add_option(\n \"filter\", Optional[str], None,\n \"Filter view expression.\"\n )\n\n # Web options\n self.add_option(\n \"web_open_browser\", bool, True,\n \"Start a browser.\"\n )\n self.add_option(\n \"web_debug\", bool, False,\n \"Mitmweb debugging.\"\n )\n self.add_option(\n \"web_port\", int, 8081,\n \"Mitmweb port.\"\n )\n self.add_option(\n \"web_iface\", str, \"127.0.0.1\",\n \"Mitmweb interface.\"\n )\n\n # Dump options\n self.add_option(\n \"filtstr\", Optional[str], None,\n \"The filter string for mitmdump.\"\n )\n self.add_option(\n \"flow_detail\", int, 1,\n \"Flow detail display level.\"\n )\n\n self.update(**kwargs)\n", "path": "mitmproxy/options.py" } ]
diff --git a/mitmproxy/options.py b/mitmproxy/options.py index 6dd8616be4..798d5b9cee 100644 --- a/mitmproxy/options.py +++ b/mitmproxy/options.py @@ -78,7 +78,7 @@ def __init__(self, **kwargs) -> None: "Kill extra requests during replay." ) self.add_option( - "keepserving", bool, True, + "keepserving", bool, False, "Continue serving after client playback or file read." ) self.add_option(
weecology__retriever-1104
Incorrectly lower casing table_name for csv

It looks like we're lower-casing manually set table/directory names, at least for csv but probably for all flat file engines.

```
$ mkdir TESTER
$ retriever install csv mammal-masses --table_name TESTER/test.csv
=> Installing mammal-masses
[Errno 2] No such file or directory: 'tester/test.csv'
Done!

$ mkdir tester
$ retriever install csv mammal-masses --table_name TESTER/test.csv
=> Installing mammal-masses
Progress: 5731/5731 rows inserted into tester/test.csv totaling 5731:
Done!
```

This is causing issues for the R package (see https://github.com/ropensci/rdataretriever/issues/131), but it is also a general problem since directory names are case sensitive on two of the three major OSs.
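For context, a minimal sketch of why this happens: blanket lower-casing of the command-line arguments also rewrites user-supplied paths, which are case sensitive on most filesystems. The argument list below is illustrative, not taken from the retriever code itself.

```py
# Illustrative argument list; retriever receives the equivalent via sys.argv.
argv = ["install", "csv", "mammal-masses", "--table_name", "TESTER/test.csv"]

# What the old code effectively did to every argument:
lowered = [arg.lower() for arg in argv]
print(lowered[-1])  # tester/test.csv  -> points at the wrong directory

# Leaving the arguments untouched preserves the intended path:
print(argv[-1])     # TESTER/test.csv
```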
[ { "content": "\"\"\"Data Retriever Wizard\n\nRunning this module directly will launch the download wizard, allowing the user\nto choose from all scripts.\n\nThe main() function can be used for bootstrapping.\n\n\"\"\"\nfrom __future__ import absolute_import\nfrom __future__ import print_function\n\nimport os\nimport sys\nfrom builtins import input\nfrom imp import reload\n\nfrom retriever.engines import engine_list, choose_engine\nfrom retriever.lib.datapackage import create_json, edit_json, delete_json, get_script_filename\nfrom retriever.lib.datasets import datasets, dataset_names, license\nfrom retriever.lib.defaults import sample_script, CITATION, ENCODING, SCRIPT_SEARCH_PATHS\nfrom retriever.lib.get_opts import parser\nfrom retriever.lib.repository import check_for_updates\nfrom retriever.lib.scripts import SCRIPT_LIST, get_script\nfrom retriever.lib.engine_tools import name_matches, reset_retriever\n\nencoding = ENCODING.lower()\n# sys removes the setdefaultencoding method at startup; reload to get it back\nreload(sys)\nif hasattr(sys, 'setdefaultencoding'):\n sys.setdefaultencoding(encoding)\n\n\ndef main():\n \"\"\"This function launches the Data Retriever.\"\"\"\n sys.argv[1:] = [arg.lower() for arg in sys.argv[1:]]\n if len(sys.argv) == 1:\n # if no command line args are passed, show the help options\n parser.parse_args(['-h'])\n\n else:\n # otherwise, parse them\n\n if not os.path.isdir(SCRIPT_SEARCH_PATHS[1]) and not \\\n [f for f in os.listdir(SCRIPT_SEARCH_PATHS[-1])\n if os.path.exists(SCRIPT_SEARCH_PATHS[-1])]:\n check_for_updates()\n script_list = SCRIPT_LIST()\n\n args = parser.parse_args()\n\n if args.command == \"install\" and not args.engine:\n parser.parse_args(['install', '-h'])\n\n if args.quiet:\n sys.stdout = open(os.devnull, 'w')\n\n if args.command == 'help':\n parser.parse_args(['-h'])\n\n if hasattr(args, 'compile') and args.compile:\n script_list = SCRIPT_LIST(force_compile=True)\n\n if args.command == 'defaults':\n for engine_item in engine_list:\n print(\"Default options for engine \", engine_item.name)\n for default_opts in engine_item.required_opts:\n print(default_opts[0], \" \", default_opts[2])\n print()\n return\n\n if args.command == 'update':\n check_for_updates(False)\n script_list = SCRIPT_LIST()\n return\n\n elif args.command == 'citation':\n if args.dataset is None:\n print(\"\\nCitation for retriever:\\n\")\n print(CITATION)\n else:\n scripts = name_matches(script_list, args.dataset)\n for dataset in scripts:\n print(\"\\nDataset: {}\".format(dataset.name))\n print(\"Citation: {}\".format(dataset.citation))\n print(\"Description: {}\\n\".format(dataset.description))\n\n return\n\n elif args.command == 'license':\n dataset_license = license(args.dataset)\n if dataset_license:\n print(dataset_license)\n else:\n print(\"There is no license information for {}\".format(args.dataset))\n return\n\n elif args.command == 'new':\n f = open(args.filename, 'w')\n f.write(sample_script)\n f.close()\n\n return\n\n elif args.command == 'reset':\n reset_retriever(args.scope)\n return\n\n elif args.command == 'new_json':\n # create new JSON script\n create_json()\n return\n\n elif args.command == 'edit_json':\n # edit existing JSON script\n json_file = get_script_filename(args.dataset.lower())\n edit_json(json_file)\n return\n\n elif args.command == 'delete_json':\n # delete existing JSON script from home directory and or script directory if exists in current dir\n confirm = input(\"Really remove \" + args.dataset.lower() +\n \" and all its contents? 
(y/N): \")\n if confirm.lower().strip() in ['y', 'yes']:\n json_file = get_script_filename(args.dataset.lower())\n delete_json(json_file)\n return\n\n if args.command == 'ls':\n # If scripts have never been downloaded there is nothing to list\n if not script_list:\n print(\"No scripts are currently available. Updating scripts now...\")\n check_for_updates(False)\n print(\"\\n\\nScripts downloaded.\\n\")\n if not (args.l or args.k or (type(args.v) is list)):\n all_scripts = dataset_names()\n print(\"Available datasets : {}\\n\".format(len(all_scripts)))\n from retriever import lscolumns\n lscolumns.printls(all_scripts)\n \n elif type(args.v) is list:\n if args.v:\n try:\n all_scripts = [get_script(dataset) for dataset in args.v]\n except KeyError:\n all_scripts = []\n print(\"Dataset(s) is not found.\")\n else:\n all_scripts = datasets()\n count = 1\n for script in all_scripts:\n print(\"{}. {}\\n{}\\n{}\\n{}\\n\".format(\n count, script.title,\n script.name,\n script.keywords,\n script.description,\n str(script.licenses[0]['name']),\n script.citation\n ))\n count += 1\n \n else:\n param_licenses = args.l if args.l else None\n keywords = args.k if args.k else None\n\n # search\n searched_scripts = datasets(keywords, param_licenses)\n if not searched_scripts:\n print(\"No available datasets found\")\n else:\n print(\"Available datasets : {}\\n\".format(len(searched_scripts)))\n count = 1\n for script in searched_scripts:\n print(\"{}. {}\\n{}\\n{}\\n{}\\n\".format(\n count, script.title,\n script.name,\n script.keywords,\n str(script.licenses[0]['name'])\n ))\n count += 1\n return\n\n engine = choose_engine(args.__dict__)\n\n if hasattr(args, 'debug') and args.debug:\n debug = True\n else:\n debug = False\n sys.tracebacklimit = 0\n\n if hasattr(args, 'debug') and args.not_cached:\n engine.use_cache = False\n else:\n engine.use_cache = True\n\n if args.dataset is not None:\n scripts = name_matches(script_list, args.dataset)\n else:\n raise Exception(\"no dataset specified.\")\n if scripts:\n for dataset in scripts:\n print(\"=> Installing\", dataset.name)\n try:\n dataset.download(engine, debug=debug)\n dataset.engine.final_cleanup()\n except KeyboardInterrupt:\n pass\n except Exception as e:\n print(e)\n if debug:\n raise\n print(\"Done!\")\n else:\n print(\"Run 'retriever ls' to see a list of currently available datasets.\")\n\n\nif __name__ == \"__main__\":\n main()\n", "path": "retriever/__main__.py" } ]
[ { "content": "\"\"\"Data Retriever Wizard\n\nRunning this module directly will launch the download wizard, allowing the user\nto choose from all scripts.\n\nThe main() function can be used for bootstrapping.\n\n\"\"\"\nfrom __future__ import absolute_import\nfrom __future__ import print_function\n\nimport os\nimport sys\nfrom builtins import input\nfrom imp import reload\n\nfrom retriever.engines import engine_list, choose_engine\nfrom retriever.lib.datapackage import create_json, edit_json, delete_json, get_script_filename\nfrom retriever.lib.datasets import datasets, dataset_names, license\nfrom retriever.lib.defaults import sample_script, CITATION, ENCODING, SCRIPT_SEARCH_PATHS\nfrom retriever.lib.get_opts import parser\nfrom retriever.lib.repository import check_for_updates\nfrom retriever.lib.scripts import SCRIPT_LIST, get_script\nfrom retriever.lib.engine_tools import name_matches, reset_retriever\n\nencoding = ENCODING.lower()\n# sys removes the setdefaultencoding method at startup; reload to get it back\nreload(sys)\nif hasattr(sys, 'setdefaultencoding'):\n sys.setdefaultencoding(encoding)\n\n\ndef main():\n \"\"\"This function launches the Data Retriever.\"\"\"\n if len(sys.argv) == 1:\n # if no command line args are passed, show the help options\n parser.parse_args(['-h'])\n\n else:\n # otherwise, parse them\n\n if not os.path.isdir(SCRIPT_SEARCH_PATHS[1]) and not \\\n [f for f in os.listdir(SCRIPT_SEARCH_PATHS[-1])\n if os.path.exists(SCRIPT_SEARCH_PATHS[-1])]:\n check_for_updates()\n script_list = SCRIPT_LIST()\n\n args = parser.parse_args()\n\n if args.command == \"install\" and not args.engine:\n parser.parse_args(['install', '-h'])\n\n if args.quiet:\n sys.stdout = open(os.devnull, 'w')\n\n if args.command == 'help':\n parser.parse_args(['-h'])\n\n if hasattr(args, 'compile') and args.compile:\n script_list = SCRIPT_LIST(force_compile=True)\n\n if args.command == 'defaults':\n for engine_item in engine_list:\n print(\"Default options for engine \", engine_item.name)\n for default_opts in engine_item.required_opts:\n print(default_opts[0], \" \", default_opts[2])\n print()\n return\n\n if args.command == 'update':\n check_for_updates(False)\n script_list = SCRIPT_LIST()\n return\n\n elif args.command == 'citation':\n if args.dataset is None:\n print(\"\\nCitation for retriever:\\n\")\n print(CITATION)\n else:\n scripts = name_matches(script_list, args.dataset)\n for dataset in scripts:\n print(\"\\nDataset: {}\".format(dataset.name))\n print(\"Citation: {}\".format(dataset.citation))\n print(\"Description: {}\\n\".format(dataset.description))\n\n return\n\n elif args.command == 'license':\n dataset_license = license(args.dataset)\n if dataset_license:\n print(dataset_license)\n else:\n print(\"There is no license information for {}\".format(args.dataset))\n return\n\n elif args.command == 'new':\n f = open(args.filename, 'w')\n f.write(sample_script)\n f.close()\n\n return\n\n elif args.command == 'reset':\n reset_retriever(args.scope)\n return\n\n elif args.command == 'new_json':\n # create new JSON script\n create_json()\n return\n\n elif args.command == 'edit_json':\n # edit existing JSON script\n json_file = get_script_filename(args.dataset.lower())\n edit_json(json_file)\n return\n\n elif args.command == 'delete_json':\n # delete existing JSON script from home directory and or script directory if exists in current dir\n confirm = input(\"Really remove \" + args.dataset.lower() +\n \" and all its contents? 
(y/N): \")\n if confirm.lower().strip() in ['y', 'yes']:\n json_file = get_script_filename(args.dataset.lower())\n delete_json(json_file)\n return\n\n if args.command == 'ls':\n # If scripts have never been downloaded there is nothing to list\n if not script_list:\n print(\"No scripts are currently available. Updating scripts now...\")\n check_for_updates(False)\n print(\"\\n\\nScripts downloaded.\\n\")\n if not (args.l or args.k or (type(args.v) is list)):\n all_scripts = dataset_names()\n print(\"Available datasets : {}\\n\".format(len(all_scripts)))\n from retriever import lscolumns\n lscolumns.printls(all_scripts)\n \n elif type(args.v) is list:\n if args.v:\n try:\n all_scripts = [get_script(dataset) for dataset in args.v]\n except KeyError:\n all_scripts = []\n print(\"Dataset(s) is not found.\")\n else:\n all_scripts = datasets()\n count = 1\n for script in all_scripts:\n print(\"{}. {}\\n{}\\n{}\\n{}\\n\".format(\n count, script.title,\n script.name,\n script.keywords,\n script.description,\n str(script.licenses[0]['name']),\n script.citation\n ))\n count += 1\n \n else:\n param_licenses = args.l if args.l else None\n keywords = args.k if args.k else None\n\n # search\n searched_scripts = datasets(keywords, param_licenses)\n if not searched_scripts:\n print(\"No available datasets found\")\n else:\n print(\"Available datasets : {}\\n\".format(len(searched_scripts)))\n count = 1\n for script in searched_scripts:\n print(\"{}. {}\\n{}\\n{}\\n{}\\n\".format(\n count, script.title,\n script.name,\n script.keywords,\n str(script.licenses[0]['name'])\n ))\n count += 1\n return\n\n engine = choose_engine(args.__dict__)\n\n if hasattr(args, 'debug') and args.debug:\n debug = True\n else:\n debug = False\n sys.tracebacklimit = 0\n\n if hasattr(args, 'debug') and args.not_cached:\n engine.use_cache = False\n else:\n engine.use_cache = True\n\n if args.dataset is not None:\n scripts = name_matches(script_list, args.dataset)\n else:\n raise Exception(\"no dataset specified.\")\n if scripts:\n for dataset in scripts:\n print(\"=> Installing\", dataset.name)\n try:\n dataset.download(engine, debug=debug)\n dataset.engine.final_cleanup()\n except KeyboardInterrupt:\n pass\n except Exception as e:\n print(e)\n if debug:\n raise\n print(\"Done!\")\n else:\n print(\"Run 'retriever ls' to see a list of currently available datasets.\")\n\n\nif __name__ == \"__main__\":\n main()\n", "path": "retriever/__main__.py" } ]
diff --git a/retriever/__main__.py b/retriever/__main__.py index a3ae1fa12..9971c7e9a 100644 --- a/retriever/__main__.py +++ b/retriever/__main__.py @@ -32,7 +32,6 @@ def main(): """This function launches the Data Retriever.""" - sys.argv[1:] = [arg.lower() for arg in sys.argv[1:]] if len(sys.argv) == 1: # if no command line args are passed, show the help options parser.parse_args(['-h'])
ivy-llc__ivy-16518
uniform
[ { "content": "# global\n", "path": "ivy/functional/frontends/paddle/tensor/random.py" } ]
[ { "content": "# global\nimport ivy\nfrom ivy.func_wrapper import with_supported_dtypes\nfrom ivy.functional.frontends.paddle.func_wrapper import (\n to_ivy_arrays_and_back,\n)\n\n\n@with_supported_dtypes(\n {\"2.4.2 and below\": (\"float32\", \"float64\")},\n \"paddle\",\n)\n@to_ivy_arrays_and_back\ndef uniform(shape, dtype=None, min=-1.0, max=1.0, seed=0, name=None):\n return ivy.random_uniform(low=min, high=max, shape=shape, dtype=dtype, seed=seed)\n", "path": "ivy/functional/frontends/paddle/tensor/random.py" } ]
diff --git a/ivy/functional/frontends/paddle/tensor/random.py b/ivy/functional/frontends/paddle/tensor/random.py index 28ffa370ad668..62a91cfa2a7ce 100644 --- a/ivy/functional/frontends/paddle/tensor/random.py +++ b/ivy/functional/frontends/paddle/tensor/random.py @@ -1 +1,15 @@ # global +import ivy +from ivy.func_wrapper import with_supported_dtypes +from ivy.functional.frontends.paddle.func_wrapper import ( + to_ivy_arrays_and_back, +) + + +@with_supported_dtypes( + {"2.4.2 and below": ("float32", "float64")}, + "paddle", +) +@to_ivy_arrays_and_back +def uniform(shape, dtype=None, min=-1.0, max=1.0, seed=0, name=None): + return ivy.random_uniform(low=min, high=max, shape=shape, dtype=dtype, seed=seed) diff --git a/ivy_tests/test_ivy/test_frontends/test_paddle/test_tensor/test_paddle_random.py b/ivy_tests/test_ivy/test_frontends/test_paddle/test_tensor/test_paddle_random.py index 0a9c8754f4fed..0026c14ddbe41 100644 --- a/ivy_tests/test_ivy/test_frontends/test_paddle/test_tensor/test_paddle_random.py +++ b/ivy_tests/test_ivy/test_frontends/test_paddle/test_tensor/test_paddle_random.py @@ -1,3 +1,42 @@ # global +from hypothesis import strategies as st # local +import ivy_tests.test_ivy.helpers as helpers +from ivy_tests.test_ivy.helpers import handle_frontend_test + + +@handle_frontend_test( + fn_tree="paddle.uniform", + input_dtypes=helpers.get_dtypes("float"), + shape=st.tuples( + st.integers(min_value=2, max_value=5), st.integers(min_value=2, max_value=5) + ), + dtype=helpers.get_dtypes("valid", full=False), + min=st.floats(allow_nan=False, allow_infinity=False, width=32), + max=st.floats(allow_nan=False, allow_infinity=False, width=32), + seed=st.integers(min_value=2, max_value=5), +) +def test_paddle_uniform( + input_dtypes, + shape, + dtype, + min, + max, + seed, + frontend, + test_flags, + fn_tree, +): + helpers.test_frontend_function( + input_dtypes=input_dtypes, + frontend=frontend, + test_flags=test_flags, + fn_tree=fn_tree, + test_values=False, + shape=shape, + dtype=dtype[0], + min=min, + max=max, + seed=seed, + )
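For reference, a rough usage sketch of the frontend function added in this record; the backend choice (`numpy`) and the direct module import are assumptions made for illustration, not part of the change itself.

```py
import ivy
from ivy.functional.frontends.paddle.tensor.random import uniform

ivy.set_backend("numpy")  # any installed ivy backend should work here

# Mirrors paddle.uniform's defaults: samples from U(min, max) with the given shape.
sample = uniform(shape=(2, 3), dtype="float32", min=-1.0, max=1.0, seed=0)
print(sample)
```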
canonical__cloud-init-4422
package-update-upgrade-install does not work on Gentoo This bug was originally filed in Launchpad as [LP: #1799544](https://bugs.launchpad.net/cloud-init/+bug/1799544) <details> <summary>Launchpad details</summary> <pre> affected_projects = [] assignee = holmanb assignee_name = Brett Holman date_closed = 2022-07-21T15:16:56.010973+00:00 date_created = 2018-10-23T17:34:36.633424+00:00 date_fix_committed = 2022-07-21T15:16:56.010973+00:00 date_fix_released = 2022-07-21T15:16:56.010973+00:00 id = 1799544 importance = medium is_complete = True lp_url = https://bugs.launchpad.net/cloud-init/+bug/1799544 milestone = 22.2 owner = gilles-dartiguelongue owner_name = Gilles Dartiguelongue private = False status = fix_released submitter = gilles-dartiguelongue submitter_name = Gilles Dartiguelongue tags = ['gentoo'] duplicates = [] </pre> </details> _Launchpad user **Gilles Dartiguelongue(gilles-dartiguelongue)** wrote on 2018-10-23T17:34:36.633424+00:00_ I'm testing cloud-init in a nocloud setup. I'm trying to perform installation of packages using the appropriate module and after fixing some issues in Gentoo packaging, I hit an error in execution due to cmd = list('emerge') being interpreted as ['e', 'm', 'e', ...] while it was meant as ['emerge'].
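The root cause is a plain Python pitfall: `list()` over a string iterates its characters. A tiny sketch of the difference, matching the one-line fix shown in the diff below:

```py
# list() over a string splits it into characters -- not a one-element command list.
cmd = list("emerge")
print(cmd)   # ['e', 'm', 'e', 'r', 'g', 'e']

# A list literal keeps the executable name intact, which is what the fix uses.
cmd = ["emerge"]
cmd.append("--quiet")
print(cmd)   # ['emerge', '--quiet']
```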
[ { "content": "# Copyright (C) 2014 Rackspace, US Inc.\n# Copyright (C) 2016 Matthew Thode.\n#\n# Author: Nate House <[email protected]>\n# Author: Matthew Thode <[email protected]>\n#\n# This file is part of cloud-init. See LICENSE file for license information.\n\nfrom cloudinit import distros, helpers\nfrom cloudinit import log as logging\nfrom cloudinit import subp, util\nfrom cloudinit.distros import net_util\nfrom cloudinit.distros.parsers.hostname import HostnameConf\nfrom cloudinit.settings import PER_INSTANCE\n\nLOG = logging.getLogger(__name__)\n\n\nclass Distro(distros.Distro):\n locale_conf_fn = \"/etc/env.d/02locale\"\n locale_gen_fn = \"/etc/locale.gen\"\n network_conf_fn = \"/etc/conf.d/net\"\n hostname_conf_fn = \"/etc/conf.d/hostname\"\n init_cmd = [\"rc-service\"] # init scripts\n default_locale = \"en_US.UTF-8\"\n\n # C.UTF8 makes sense to generate, but is not selected\n # Add /etc/locale.gen entries to this list to support more locales\n locales = [\"C.UTF8 UTF-8\", \"en_US.UTF-8 UTF-8\"]\n\n def __init__(self, name, cfg, paths):\n distros.Distro.__init__(self, name, cfg, paths)\n # This will be used to restrict certain\n # calls from repeatly happening (when they\n # should only happen say once per instance...)\n self._runner = helpers.Runners(paths)\n self.osfamily = \"gentoo\"\n # Fix sshd restarts\n cfg[\"ssh_svcname\"] = \"/etc/init.d/sshd\"\n if distros.uses_systemd():\n LOG.error(\"Cloud-init does not support systemd with gentoo\")\n\n def apply_locale(self, _, out_fn=None):\n \"\"\"rc-only - not compatible with systemd\n\n Locales need to be added to /etc/locale.gen and generated prior\n to selection. Default to en_US.UTF-8 for simplicity.\n \"\"\"\n util.write_file(self.locale_gen_fn, \"\\n\".join(self.locales), mode=644)\n\n # generate locales\n subp.subp([\"locale-gen\"], capture=False)\n\n # select locale\n subp.subp(\n [\"eselect\", \"locale\", \"set\", self.default_locale], capture=False\n )\n\n def install_packages(self, pkglist):\n self.update_package_sources()\n self.package_command(\"\", pkgs=pkglist)\n\n def _write_network(self, settings):\n entries = net_util.translate_network(settings)\n LOG.debug(\n \"Translated ubuntu style network settings %s into %s\",\n settings,\n entries,\n )\n dev_names = entries.keys()\n nameservers = []\n\n for (dev, info) in entries.items():\n if \"dns-nameservers\" in info:\n nameservers.extend(info[\"dns-nameservers\"])\n if dev == \"lo\":\n continue\n net_fn = self.network_conf_fn + \".\" + dev\n dns_nameservers = info.get(\"dns-nameservers\")\n if isinstance(dns_nameservers, (list, tuple)):\n dns_nameservers = str(tuple(dns_nameservers)).replace(\",\", \"\")\n # eth0, {'auto': True, 'ipv6': {}, 'bootproto': 'dhcp'}\n # lo, {'dns-nameservers': ['10.0.1.3'], 'ipv6': {}, 'auto': True}\n results = \"\"\n if info.get(\"bootproto\") == \"dhcp\":\n results += 'config_{name}=\"dhcp\"'.format(name=dev)\n else:\n results += (\n 'config_{name}=\"{ip_address} netmask {netmask}\"\\n'\n 'mac_{name}=\"{hwaddr}\"\\n'\n ).format(\n name=dev,\n ip_address=info.get(\"address\"),\n netmask=info.get(\"netmask\"),\n hwaddr=info.get(\"hwaddress\"),\n )\n results += 'routes_{name}=\"default via {gateway}\"\\n'.format(\n name=dev, gateway=info.get(\"gateway\")\n )\n if info.get(\"dns-nameservers\"):\n results += 'dns_servers_{name}=\"{dnsservers}\"\\n'.format(\n name=dev, dnsservers=dns_nameservers\n )\n util.write_file(net_fn, results)\n self._create_network_symlink(dev)\n if info.get(\"auto\"):\n cmd = [\n \"rc-update\",\n \"add\",\n 
\"net.{name}\".format(name=dev),\n \"default\",\n ]\n try:\n (_out, err) = subp.subp(cmd)\n if len(err):\n LOG.warning(\n \"Running %s resulted in stderr output: %s\",\n cmd,\n err,\n )\n except subp.ProcessExecutionError:\n util.logexc(\n LOG, \"Running interface command %s failed\", cmd\n )\n\n if nameservers:\n util.write_file(\n self.resolve_conf_fn, convert_resolv_conf(nameservers)\n )\n\n return dev_names\n\n @staticmethod\n def _create_network_symlink(interface_name):\n file_path = \"/etc/init.d/net.{name}\".format(name=interface_name)\n if not util.is_link(file_path):\n util.sym_link(\"/etc/init.d/net.lo\", file_path)\n\n def _bring_up_interface(self, device_name):\n cmd = [\"/etc/init.d/net.%s\" % device_name, \"restart\"]\n LOG.debug(\n \"Attempting to run bring up interface %s using command %s\",\n device_name,\n cmd,\n )\n try:\n (_out, err) = subp.subp(cmd)\n if len(err):\n LOG.warning(\n \"Running %s resulted in stderr output: %s\", cmd, err\n )\n return True\n except subp.ProcessExecutionError:\n util.logexc(LOG, \"Running interface command %s failed\", cmd)\n return False\n\n def _bring_up_interfaces(self, device_names):\n use_all = False\n for d in device_names:\n if d == \"all\":\n use_all = True\n if use_all:\n # Grab device names from init scripts\n cmd = [\"ls\", \"/etc/init.d/net.*\"]\n try:\n (_out, err) = subp.subp(cmd)\n if len(err):\n LOG.warning(\n \"Running %s resulted in stderr output: %s\", cmd, err\n )\n except subp.ProcessExecutionError:\n util.logexc(LOG, \"Running interface command %s failed\", cmd)\n return False\n devices = [x.split(\".\")[2] for x in _out.split(\" \")]\n return distros.Distro._bring_up_interfaces(self, devices)\n else:\n return distros.Distro._bring_up_interfaces(self, device_names)\n\n def _write_hostname(self, hostname, filename):\n conf = None\n try:\n # Try to update the previous one\n # so lets see if we can read it first.\n conf = self._read_hostname_conf(filename)\n except IOError:\n pass\n if not conf:\n conf = HostnameConf(\"\")\n\n # Many distro's format is the hostname by itself, and that is the\n # way HostnameConf works but gentoo expects it to be in\n # hostname=\"the-actual-hostname\"\n conf.set_hostname('hostname=\"%s\"' % hostname)\n util.write_file(filename, str(conf), 0o644)\n\n def _read_system_hostname(self):\n sys_hostname = self._read_hostname(self.hostname_conf_fn)\n return self.hostname_conf_fn, sys_hostname\n\n @staticmethod\n def _read_hostname_conf(filename):\n conf = HostnameConf(util.load_file(filename))\n conf.parse()\n return conf\n\n def _read_hostname(self, filename, default=None):\n hostname = None\n try:\n conf = self._read_hostname_conf(filename)\n hostname = conf.hostname\n except IOError:\n pass\n if not hostname:\n return default\n return hostname\n\n def set_timezone(self, tz):\n distros.set_etc_timezone(tz=tz, tz_file=self._find_tz_file(tz))\n\n def package_command(self, command, args=None, pkgs=None):\n cmd = list(\"emerge\")\n # Redirect output\n cmd.append(\"--quiet\")\n\n if command == \"upgrade\":\n cmd.extend([\"--update\", \"world\"])\n else:\n if pkgs is None:\n pkgs = []\n\n if args and isinstance(args, str):\n cmd.append(args)\n elif args and isinstance(args, list):\n cmd.extend(args)\n\n if command:\n cmd.append(command)\n\n pkglist = util.expand_package_list(\"%s-%s\", pkgs)\n cmd.extend(pkglist)\n\n # Allow the output of this to flow outwards (ie not be captured)\n subp.subp(cmd, capture=False)\n\n def update_package_sources(self):\n self._runner.run(\n \"update-sources\",\n 
self.package_command,\n [\"--sync\"],\n freq=PER_INSTANCE,\n )\n\n\ndef convert_resolv_conf(settings):\n \"\"\"Returns a settings string formatted for resolv.conf.\"\"\"\n result = \"\"\n if isinstance(settings, list):\n for ns in settings:\n result += \"nameserver %s\\n\" % ns\n return result\n\n\n# vi: ts=4 expandtab\n", "path": "cloudinit/distros/gentoo.py" } ]
[ { "content": "# Copyright (C) 2014 Rackspace, US Inc.\n# Copyright (C) 2016 Matthew Thode.\n#\n# Author: Nate House <[email protected]>\n# Author: Matthew Thode <[email protected]>\n#\n# This file is part of cloud-init. See LICENSE file for license information.\n\nfrom cloudinit import distros, helpers\nfrom cloudinit import log as logging\nfrom cloudinit import subp, util\nfrom cloudinit.distros import net_util\nfrom cloudinit.distros.parsers.hostname import HostnameConf\nfrom cloudinit.settings import PER_INSTANCE\n\nLOG = logging.getLogger(__name__)\n\n\nclass Distro(distros.Distro):\n locale_conf_fn = \"/etc/env.d/02locale\"\n locale_gen_fn = \"/etc/locale.gen\"\n network_conf_fn = \"/etc/conf.d/net\"\n hostname_conf_fn = \"/etc/conf.d/hostname\"\n init_cmd = [\"rc-service\"] # init scripts\n default_locale = \"en_US.UTF-8\"\n\n # C.UTF8 makes sense to generate, but is not selected\n # Add /etc/locale.gen entries to this list to support more locales\n locales = [\"C.UTF8 UTF-8\", \"en_US.UTF-8 UTF-8\"]\n\n def __init__(self, name, cfg, paths):\n distros.Distro.__init__(self, name, cfg, paths)\n # This will be used to restrict certain\n # calls from repeatly happening (when they\n # should only happen say once per instance...)\n self._runner = helpers.Runners(paths)\n self.osfamily = \"gentoo\"\n # Fix sshd restarts\n cfg[\"ssh_svcname\"] = \"/etc/init.d/sshd\"\n if distros.uses_systemd():\n LOG.error(\"Cloud-init does not support systemd with gentoo\")\n\n def apply_locale(self, _, out_fn=None):\n \"\"\"rc-only - not compatible with systemd\n\n Locales need to be added to /etc/locale.gen and generated prior\n to selection. Default to en_US.UTF-8 for simplicity.\n \"\"\"\n util.write_file(self.locale_gen_fn, \"\\n\".join(self.locales), mode=644)\n\n # generate locales\n subp.subp([\"locale-gen\"], capture=False)\n\n # select locale\n subp.subp(\n [\"eselect\", \"locale\", \"set\", self.default_locale], capture=False\n )\n\n def install_packages(self, pkglist):\n self.update_package_sources()\n self.package_command(\"\", pkgs=pkglist)\n\n def _write_network(self, settings):\n entries = net_util.translate_network(settings)\n LOG.debug(\n \"Translated ubuntu style network settings %s into %s\",\n settings,\n entries,\n )\n dev_names = entries.keys()\n nameservers = []\n\n for (dev, info) in entries.items():\n if \"dns-nameservers\" in info:\n nameservers.extend(info[\"dns-nameservers\"])\n if dev == \"lo\":\n continue\n net_fn = self.network_conf_fn + \".\" + dev\n dns_nameservers = info.get(\"dns-nameservers\")\n if isinstance(dns_nameservers, (list, tuple)):\n dns_nameservers = str(tuple(dns_nameservers)).replace(\",\", \"\")\n # eth0, {'auto': True, 'ipv6': {}, 'bootproto': 'dhcp'}\n # lo, {'dns-nameservers': ['10.0.1.3'], 'ipv6': {}, 'auto': True}\n results = \"\"\n if info.get(\"bootproto\") == \"dhcp\":\n results += 'config_{name}=\"dhcp\"'.format(name=dev)\n else:\n results += (\n 'config_{name}=\"{ip_address} netmask {netmask}\"\\n'\n 'mac_{name}=\"{hwaddr}\"\\n'\n ).format(\n name=dev,\n ip_address=info.get(\"address\"),\n netmask=info.get(\"netmask\"),\n hwaddr=info.get(\"hwaddress\"),\n )\n results += 'routes_{name}=\"default via {gateway}\"\\n'.format(\n name=dev, gateway=info.get(\"gateway\")\n )\n if info.get(\"dns-nameservers\"):\n results += 'dns_servers_{name}=\"{dnsservers}\"\\n'.format(\n name=dev, dnsservers=dns_nameservers\n )\n util.write_file(net_fn, results)\n self._create_network_symlink(dev)\n if info.get(\"auto\"):\n cmd = [\n \"rc-update\",\n \"add\",\n 
\"net.{name}\".format(name=dev),\n \"default\",\n ]\n try:\n (_out, err) = subp.subp(cmd)\n if len(err):\n LOG.warning(\n \"Running %s resulted in stderr output: %s\",\n cmd,\n err,\n )\n except subp.ProcessExecutionError:\n util.logexc(\n LOG, \"Running interface command %s failed\", cmd\n )\n\n if nameservers:\n util.write_file(\n self.resolve_conf_fn, convert_resolv_conf(nameservers)\n )\n\n return dev_names\n\n @staticmethod\n def _create_network_symlink(interface_name):\n file_path = \"/etc/init.d/net.{name}\".format(name=interface_name)\n if not util.is_link(file_path):\n util.sym_link(\"/etc/init.d/net.lo\", file_path)\n\n def _bring_up_interface(self, device_name):\n cmd = [\"/etc/init.d/net.%s\" % device_name, \"restart\"]\n LOG.debug(\n \"Attempting to run bring up interface %s using command %s\",\n device_name,\n cmd,\n )\n try:\n (_out, err) = subp.subp(cmd)\n if len(err):\n LOG.warning(\n \"Running %s resulted in stderr output: %s\", cmd, err\n )\n return True\n except subp.ProcessExecutionError:\n util.logexc(LOG, \"Running interface command %s failed\", cmd)\n return False\n\n def _bring_up_interfaces(self, device_names):\n use_all = False\n for d in device_names:\n if d == \"all\":\n use_all = True\n if use_all:\n # Grab device names from init scripts\n cmd = [\"ls\", \"/etc/init.d/net.*\"]\n try:\n (_out, err) = subp.subp(cmd)\n if len(err):\n LOG.warning(\n \"Running %s resulted in stderr output: %s\", cmd, err\n )\n except subp.ProcessExecutionError:\n util.logexc(LOG, \"Running interface command %s failed\", cmd)\n return False\n devices = [x.split(\".\")[2] for x in _out.split(\" \")]\n return distros.Distro._bring_up_interfaces(self, devices)\n else:\n return distros.Distro._bring_up_interfaces(self, device_names)\n\n def _write_hostname(self, hostname, filename):\n conf = None\n try:\n # Try to update the previous one\n # so lets see if we can read it first.\n conf = self._read_hostname_conf(filename)\n except IOError:\n pass\n if not conf:\n conf = HostnameConf(\"\")\n\n # Many distro's format is the hostname by itself, and that is the\n # way HostnameConf works but gentoo expects it to be in\n # hostname=\"the-actual-hostname\"\n conf.set_hostname('hostname=\"%s\"' % hostname)\n util.write_file(filename, str(conf), 0o644)\n\n def _read_system_hostname(self):\n sys_hostname = self._read_hostname(self.hostname_conf_fn)\n return self.hostname_conf_fn, sys_hostname\n\n @staticmethod\n def _read_hostname_conf(filename):\n conf = HostnameConf(util.load_file(filename))\n conf.parse()\n return conf\n\n def _read_hostname(self, filename, default=None):\n hostname = None\n try:\n conf = self._read_hostname_conf(filename)\n hostname = conf.hostname\n except IOError:\n pass\n if not hostname:\n return default\n return hostname\n\n def set_timezone(self, tz):\n distros.set_etc_timezone(tz=tz, tz_file=self._find_tz_file(tz))\n\n def package_command(self, command, args=None, pkgs=None):\n cmd = [\"emerge\"]\n # Redirect output\n cmd.append(\"--quiet\")\n\n if command == \"upgrade\":\n cmd.extend([\"--update\", \"world\"])\n else:\n if pkgs is None:\n pkgs = []\n\n if args and isinstance(args, str):\n cmd.append(args)\n elif args and isinstance(args, list):\n cmd.extend(args)\n\n if command:\n cmd.append(command)\n\n pkglist = util.expand_package_list(\"%s-%s\", pkgs)\n cmd.extend(pkglist)\n\n # Allow the output of this to flow outwards (ie not be captured)\n subp.subp(cmd, capture=False)\n\n def update_package_sources(self):\n self._runner.run(\n \"update-sources\",\n 
self.package_command,\n [\"--sync\"],\n freq=PER_INSTANCE,\n )\n\n\ndef convert_resolv_conf(settings):\n \"\"\"Returns a settings string formatted for resolv.conf.\"\"\"\n result = \"\"\n if isinstance(settings, list):\n for ns in settings:\n result += \"nameserver %s\\n\" % ns\n return result\n\n\n# vi: ts=4 expandtab\n", "path": "cloudinit/distros/gentoo.py" } ]
diff --git a/cloudinit/distros/gentoo.py b/cloudinit/distros/gentoo.py index 37217fe4332..d364eae4b0e 100644 --- a/cloudinit/distros/gentoo.py +++ b/cloudinit/distros/gentoo.py @@ -218,7 +218,7 @@ def set_timezone(self, tz): distros.set_etc_timezone(tz=tz, tz_file=self._find_tz_file(tz)) def package_command(self, command, args=None, pkgs=None): - cmd = list("emerge") + cmd = ["emerge"] # Redirect output cmd.append("--quiet")
django-hijack__django-hijack-383
TypeError when releasing when LOGOUT_REDIRECT_URL is None

According to the Django docs, LOGOUT_REDIRECT_URL can be set to None:
https://docs.djangoproject.com/en/dev/ref/settings/#logout-redirect-url

When that is the case, a TypeError can be raised [here](https://github.com/django-hijack/django-hijack/blob/master/hijack/views.py#L48) in the release view, because `self.success_url` == `LOGOUT_REDIRECT_URL` == `None`, and passing None to `resolve_url` raises a TypeError.
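A minimal sketch of the guard that avoids the crash, mirroring the change in the diff below; the rest of the mixin (`get_redirect_url`) and a configured Django project are assumed:

```py
from django.shortcuts import resolve_url

class SuccessUrlMixin:
    redirect_field_name = "next"
    success_url = "/"  # overridden with LOGOUT_REDIRECT_URL, which may be None

    def get_success_url(self):
        url = self.get_redirect_url()
        # resolve_url(None) raises TypeError, so fall back to "/" first.
        return url or resolve_url(self.success_url or "/")
```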
[ { "content": "from contextlib import contextmanager\n\nimport django\nfrom django.contrib.auth import BACKEND_SESSION_KEY, get_user_model, load_backend, login\nfrom django.contrib.auth.mixins import LoginRequiredMixin, UserPassesTestMixin\nfrom django.db import transaction\nfrom django.http import HttpResponseBadRequest, HttpResponseRedirect\nfrom django.shortcuts import get_object_or_404, resolve_url\nfrom django.utils.decorators import method_decorator\nfrom django.utils.module_loading import import_string\nfrom django.views import View\nfrom django.views.decorators.csrf import csrf_protect\nfrom django.views.generic.detail import SingleObjectMixin\n\nif django.VERSION >= (3, 0):\n from django.utils.http import url_has_allowed_host_and_scheme\nelse:\n from django.utils.http import is_safe_url as url_has_allowed_host_and_scheme\n\nfrom hijack import signals\nfrom hijack.conf import settings\n\n\ndef get_used_backend(request):\n backend_str = request.session[BACKEND_SESSION_KEY]\n backend = load_backend(backend_str)\n return backend\n\n\n@contextmanager\ndef keep_session_age(session):\n try:\n session_expiry = session[\"_session_expiry\"]\n except KeyError:\n yield\n else:\n yield\n session[\"_session_expiry\"] = session_expiry\n\n\nclass SuccessUrlMixin:\n redirect_field_name = \"next\"\n\n success_url = \"/\"\n\n def get_success_url(self):\n url = self.get_redirect_url()\n return url or resolve_url(self.success_url)\n\n def get_redirect_url(self):\n \"\"\"Return the user-originating redirect URL if it's safe.\"\"\"\n redirect_to = self.request.POST.get(\n self.redirect_field_name, self.request.GET.get(self.redirect_field_name, \"\")\n )\n url_is_safe = url_has_allowed_host_and_scheme(\n url=redirect_to,\n allowed_hosts=self.request.get_host(),\n require_https=self.request.is_secure(),\n )\n return redirect_to if url_is_safe else \"\"\n\n\nclass LockUserTableMixin:\n @transaction.atomic()\n def dispatch(self, request, *args, **kwargs):\n # Lock entire user table to avoid race conditions\n next(get_user_model().objects.select_for_update().iterator())\n return super().dispatch(request, *args, **kwargs)\n\n\nclass AcquireUserView(\n LockUserTableMixin,\n LoginRequiredMixin,\n UserPassesTestMixin,\n SuccessUrlMixin,\n SingleObjectMixin,\n View,\n):\n model = get_user_model()\n success_url = settings.LOGIN_REDIRECT_URL\n\n def test_func(self):\n func = import_string(settings.HIJACK_PERMISSION_CHECK)\n return func(hijacker=self.request.user, hijacked=self.get_object())\n\n def get_object(self, queryset=None):\n return get_object_or_404(self.model, pk=self.request.POST[\"user_pk\"])\n\n def dispatch(self, request, *args, **kwargs):\n if \"user_pk\" not in self.request.POST:\n return HttpResponseBadRequest()\n return super().dispatch(request, *args, **kwargs)\n\n @method_decorator(csrf_protect)\n def post(self, request, *args, **kwargs):\n hijacker = request.user\n hijacked = self.get_object()\n\n hijack_history = request.session.get(\"hijack_history\", [])\n hijack_history.append(request.user._meta.pk.value_to_string(hijacker))\n\n backend = get_used_backend(request)\n backend = f\"{backend.__module__}.{backend.__class__.__name__}\"\n\n with signals.no_update_last_login(), keep_session_age(request.session):\n login(request, hijacked, backend=backend)\n\n request.session[\"hijack_history\"] = hijack_history\n\n signals.hijack_started.send(\n sender=None,\n request=request,\n hijacker=hijacker,\n hijacked=hijacked,\n )\n return HttpResponseRedirect(self.get_success_url())\n\n\nclass 
ReleaseUserView(\n LockUserTableMixin, LoginRequiredMixin, UserPassesTestMixin, SuccessUrlMixin, View\n):\n raise_exception = True\n\n success_url = settings.LOGOUT_REDIRECT_URL\n\n def test_func(self):\n return bool(self.request.session.get(\"hijack_history\", []))\n\n @method_decorator(csrf_protect)\n def post(self, request, *args, **kwargs):\n hijack_history = request.session.get(\"hijack_history\", [])\n hijacked = request.user\n user_pk = hijack_history.pop()\n hijacker = get_object_or_404(get_user_model(), pk=user_pk)\n backend = get_used_backend(request)\n backend = f\"{backend.__module__}.{backend.__class__.__name__}\"\n with signals.no_update_last_login(), keep_session_age(request.session):\n login(request, hijacker, backend=backend)\n\n request.session[\"hijack_history\"] = hijack_history\n\n signals.hijack_ended.send(\n sender=None,\n request=request,\n hijacker=hijacker,\n hijacked=hijacked,\n )\n return HttpResponseRedirect(self.get_success_url())\n", "path": "hijack/views.py" } ]
[ { "content": "from contextlib import contextmanager\n\nimport django\nfrom django.contrib.auth import BACKEND_SESSION_KEY, get_user_model, load_backend, login\nfrom django.contrib.auth.mixins import LoginRequiredMixin, UserPassesTestMixin\nfrom django.db import transaction\nfrom django.http import HttpResponseBadRequest, HttpResponseRedirect\nfrom django.shortcuts import get_object_or_404, resolve_url\nfrom django.utils.decorators import method_decorator\nfrom django.utils.module_loading import import_string\nfrom django.views import View\nfrom django.views.decorators.csrf import csrf_protect\nfrom django.views.generic.detail import SingleObjectMixin\n\nif django.VERSION >= (3, 0):\n from django.utils.http import url_has_allowed_host_and_scheme\nelse:\n from django.utils.http import is_safe_url as url_has_allowed_host_and_scheme\n\nfrom hijack import signals\nfrom hijack.conf import settings\n\n\ndef get_used_backend(request):\n backend_str = request.session[BACKEND_SESSION_KEY]\n backend = load_backend(backend_str)\n return backend\n\n\n@contextmanager\ndef keep_session_age(session):\n try:\n session_expiry = session[\"_session_expiry\"]\n except KeyError:\n yield\n else:\n yield\n session[\"_session_expiry\"] = session_expiry\n\n\nclass SuccessUrlMixin:\n redirect_field_name = \"next\"\n\n success_url = \"/\"\n\n def get_success_url(self):\n url = self.get_redirect_url()\n return url or resolve_url(self.success_url or \"/\")\n\n def get_redirect_url(self):\n \"\"\"Return the user-originating redirect URL if it's safe.\"\"\"\n redirect_to = self.request.POST.get(\n self.redirect_field_name, self.request.GET.get(self.redirect_field_name, \"\")\n )\n url_is_safe = url_has_allowed_host_and_scheme(\n url=redirect_to,\n allowed_hosts=self.request.get_host(),\n require_https=self.request.is_secure(),\n )\n return redirect_to if url_is_safe else \"\"\n\n\nclass LockUserTableMixin:\n @transaction.atomic()\n def dispatch(self, request, *args, **kwargs):\n # Lock entire user table to avoid race conditions\n next(get_user_model().objects.select_for_update().iterator())\n return super().dispatch(request, *args, **kwargs)\n\n\nclass AcquireUserView(\n LockUserTableMixin,\n LoginRequiredMixin,\n UserPassesTestMixin,\n SuccessUrlMixin,\n SingleObjectMixin,\n View,\n):\n model = get_user_model()\n success_url = settings.LOGIN_REDIRECT_URL\n\n def test_func(self):\n func = import_string(settings.HIJACK_PERMISSION_CHECK)\n return func(hijacker=self.request.user, hijacked=self.get_object())\n\n def get_object(self, queryset=None):\n return get_object_or_404(self.model, pk=self.request.POST[\"user_pk\"])\n\n def dispatch(self, request, *args, **kwargs):\n if \"user_pk\" not in self.request.POST:\n return HttpResponseBadRequest()\n return super().dispatch(request, *args, **kwargs)\n\n @method_decorator(csrf_protect)\n def post(self, request, *args, **kwargs):\n hijacker = request.user\n hijacked = self.get_object()\n\n hijack_history = request.session.get(\"hijack_history\", [])\n hijack_history.append(request.user._meta.pk.value_to_string(hijacker))\n\n backend = get_used_backend(request)\n backend = f\"{backend.__module__}.{backend.__class__.__name__}\"\n\n with signals.no_update_last_login(), keep_session_age(request.session):\n login(request, hijacked, backend=backend)\n\n request.session[\"hijack_history\"] = hijack_history\n\n signals.hijack_started.send(\n sender=None,\n request=request,\n hijacker=hijacker,\n hijacked=hijacked,\n )\n return HttpResponseRedirect(self.get_success_url())\n\n\nclass 
ReleaseUserView(\n LockUserTableMixin, LoginRequiredMixin, UserPassesTestMixin, SuccessUrlMixin, View\n):\n raise_exception = True\n\n success_url = settings.LOGOUT_REDIRECT_URL\n\n def test_func(self):\n return bool(self.request.session.get(\"hijack_history\", []))\n\n @method_decorator(csrf_protect)\n def post(self, request, *args, **kwargs):\n hijack_history = request.session.get(\"hijack_history\", [])\n hijacked = request.user\n user_pk = hijack_history.pop()\n hijacker = get_object_or_404(get_user_model(), pk=user_pk)\n backend = get_used_backend(request)\n backend = f\"{backend.__module__}.{backend.__class__.__name__}\"\n with signals.no_update_last_login(), keep_session_age(request.session):\n login(request, hijacker, backend=backend)\n\n request.session[\"hijack_history\"] = hijack_history\n\n signals.hijack_ended.send(\n sender=None,\n request=request,\n hijacker=hijacker,\n hijacked=hijacked,\n )\n return HttpResponseRedirect(self.get_success_url())\n", "path": "hijack/views.py" } ]
diff --git a/hijack/views.py b/hijack/views.py index 79cee93e..b3ca060e 100755 --- a/hijack/views.py +++ b/hijack/views.py @@ -45,7 +45,7 @@ class SuccessUrlMixin: def get_success_url(self): url = self.get_redirect_url() - return url or resolve_url(self.success_url) + return url or resolve_url(self.success_url or "/") def get_redirect_url(self): """Return the user-originating redirect URL if it's safe."""
ibis-project__ibis-2426
fix bigquery version

https://dev.azure.com/ibis-project/ibis/_build/results?buildId=3396&view=logs&j=8f09edc2-e3b7-52de-126a-0225c4f3efa1&t=78a72aec-b398-558e-7c0d-2d33604b9e53

I think we need to limit the upper bound of the bigquery library here.
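For illustration, a cap in the `bigquery` extra could look like the sketch below. The exact bound is an assumption, not necessarily the final pin; the `dev` suffix is a common convention in Google's client libraries for keeping 2.0.0 pre-releases out of range as well.

```python
# Hypothetical pin for the bigquery extra in setup.py: stay on the 1.x series.
bigquery_requires = [
    'google-cloud-bigquery>=1.12.0,<2.0.0dev',
    'pydata-google-auth',
]
```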
[ { "content": "#!/usr/bin/env python\n\"\"\"Ibis setup module.\"\"\"\nimport pathlib\nimport sys\n\nfrom setuptools import find_packages, setup\n\nimport versioneer\n\nLONG_DESCRIPTION = \"\"\"\nIbis is a productivity-centric Python big data framework.\n\nSee http://ibis-project.org\n\"\"\"\n\nVERSION = sys.version_info.major, sys.version_info.minor\n\nimpala_requires = ['hdfs>=2.0.16', 'sqlalchemy>=1.1,<1.3.7', 'requests']\nimpala_requires.append('impyla[kerberos]>=0.15.0')\n\nsqlite_requires = ['sqlalchemy>=1.1,<1.3.7']\npostgres_requires = sqlite_requires + ['psycopg2']\nmysql_requires = sqlite_requires + ['pymysql']\n\nomniscidb_requires = ['pymapd>=0.12.0']\nkerberos_requires = ['requests-kerberos']\nvisualization_requires = ['graphviz']\nclickhouse_requires = [\n 'clickhouse-driver>=0.1.3',\n 'clickhouse-cityhash',\n]\nbigquery_requires = ['google-cloud-bigquery>=1.12.0', 'pydata-google-auth']\nhdf5_requires = ['tables>=3.0.0']\n\nparquet_requires = ['pyarrow>=0.12.0']\nspark_requires = ['pyspark>=2.4.3']\n\ngeospatial_requires = ['geoalchemy2', 'geopandas', 'shapely']\n\nall_requires = (\n impala_requires\n + postgres_requires\n + omniscidb_requires\n + mysql_requires\n + kerberos_requires\n + visualization_requires\n + clickhouse_requires\n + bigquery_requires\n + hdf5_requires\n + parquet_requires\n + spark_requires\n + geospatial_requires\n)\n\ndevelop_requires = all_requires + [\n 'black',\n 'click',\n 'pydocstyle==4.0.1',\n 'flake8',\n 'isort',\n 'mypy',\n 'pre-commit',\n 'pygit2',\n 'pytest>=4.5',\n]\n\ninstall_requires = [\n line.strip()\n for line in pathlib.Path(__file__)\n .parent.joinpath('requirements.txt')\n .read_text()\n .splitlines()\n]\n\nsetup(\n name='ibis-framework',\n url='https://github.com/ibis-project/ibis',\n packages=find_packages(),\n version=versioneer.get_version(),\n cmdclass=versioneer.get_cmdclass(),\n install_requires=install_requires,\n python_requires='>=3.7',\n extras_require={\n 'all': all_requires,\n 'develop': develop_requires,\n 'impala': impala_requires,\n 'kerberos': kerberos_requires,\n 'postgres': postgres_requires,\n 'omniscidb': omniscidb_requires,\n 'mysql': mysql_requires,\n 'sqlite': sqlite_requires,\n 'visualization': visualization_requires,\n 'clickhouse': clickhouse_requires,\n 'bigquery': bigquery_requires,\n 'hdf5': hdf5_requires,\n 'parquet': parquet_requires,\n 'spark': spark_requires,\n 'geospatial': geospatial_requires,\n },\n description=\"Productivity-centric Python Big Data Framework\",\n long_description=LONG_DESCRIPTION,\n classifiers=[\n 'Development Status :: 4 - Beta',\n 'Operating System :: OS Independent',\n 'Intended Audience :: Science/Research',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 3',\n 'Topic :: Scientific/Engineering',\n ],\n license='Apache License, Version 2.0',\n maintainer=\"Phillip Cloud\",\n maintainer_email=\"[email protected]\",\n)\n", "path": "setup.py" } ]
[ { "content": "#!/usr/bin/env python\n\"\"\"Ibis setup module.\"\"\"\nimport pathlib\nimport sys\n\nfrom setuptools import find_packages, setup\n\nimport versioneer\n\nLONG_DESCRIPTION = \"\"\"\nIbis is a productivity-centric Python big data framework.\n\nSee http://ibis-project.org\n\"\"\"\n\nVERSION = sys.version_info.major, sys.version_info.minor\n\nimpala_requires = ['hdfs>=2.0.16', 'sqlalchemy>=1.1,<1.3.7', 'requests']\nimpala_requires.append('impyla[kerberos]>=0.15.0')\n\nsqlite_requires = ['sqlalchemy>=1.1,<1.3.7']\npostgres_requires = sqlite_requires + ['psycopg2']\nmysql_requires = sqlite_requires + ['pymysql']\n\nomniscidb_requires = ['pymapd>=0.12.0']\nkerberos_requires = ['requests-kerberos']\nvisualization_requires = ['graphviz']\nclickhouse_requires = [\n 'clickhouse-driver>=0.1.3',\n 'clickhouse-cityhash',\n]\nbigquery_requires = [\n 'google-cloud-bigquery[bqstorage,pandas]>=1.12.0,<2.0.0dev',\n 'pydata-google-auth',\n]\nhdf5_requires = ['tables>=3.0.0']\n\nparquet_requires = ['pyarrow>=0.12.0']\nspark_requires = ['pyspark>=2.4.3']\n\ngeospatial_requires = ['geoalchemy2', 'geopandas', 'shapely']\n\nall_requires = (\n impala_requires\n + postgres_requires\n + omniscidb_requires\n + mysql_requires\n + kerberos_requires\n + visualization_requires\n + clickhouse_requires\n + bigquery_requires\n + hdf5_requires\n + parquet_requires\n + spark_requires\n + geospatial_requires\n)\n\ndevelop_requires = all_requires + [\n 'black',\n 'click',\n 'pydocstyle==4.0.1',\n 'flake8',\n 'isort',\n 'mypy',\n 'pre-commit',\n 'pygit2',\n 'pytest>=4.5',\n]\n\ninstall_requires = [\n line.strip()\n for line in pathlib.Path(__file__)\n .parent.joinpath('requirements.txt')\n .read_text()\n .splitlines()\n]\n\nsetup(\n name='ibis-framework',\n url='https://github.com/ibis-project/ibis',\n packages=find_packages(),\n version=versioneer.get_version(),\n cmdclass=versioneer.get_cmdclass(),\n install_requires=install_requires,\n python_requires='>=3.7',\n extras_require={\n 'all': all_requires,\n 'develop': develop_requires,\n 'impala': impala_requires,\n 'kerberos': kerberos_requires,\n 'postgres': postgres_requires,\n 'omniscidb': omniscidb_requires,\n 'mysql': mysql_requires,\n 'sqlite': sqlite_requires,\n 'visualization': visualization_requires,\n 'clickhouse': clickhouse_requires,\n 'bigquery': bigquery_requires,\n 'hdf5': hdf5_requires,\n 'parquet': parquet_requires,\n 'spark': spark_requires,\n 'geospatial': geospatial_requires,\n },\n description=\"Productivity-centric Python Big Data Framework\",\n long_description=LONG_DESCRIPTION,\n classifiers=[\n 'Development Status :: 4 - Beta',\n 'Operating System :: OS Independent',\n 'Intended Audience :: Science/Research',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 3',\n 'Topic :: Scientific/Engineering',\n ],\n license='Apache License, Version 2.0',\n maintainer=\"Phillip Cloud\",\n maintainer_email=\"[email protected]\",\n)\n", "path": "setup.py" } ]
diff --git a/ci/deps/bigquery.yml b/ci/deps/bigquery.yml index e3824aeb63d4..05a1632ad96f 100644 --- a/ci/deps/bigquery.yml +++ b/ci/deps/bigquery.yml @@ -1,2 +1,2 @@ -google-cloud-bigquery>=1.12.0 +google-cloud-bigquery-core >=1.12.0,<1.24.0dev pydata-google-auth diff --git a/setup.py b/setup.py index 148bdf05dc33..812119a4be55 100644 --- a/setup.py +++ b/setup.py @@ -29,7 +29,10 @@ 'clickhouse-driver>=0.1.3', 'clickhouse-cityhash', ] -bigquery_requires = ['google-cloud-bigquery>=1.12.0', 'pydata-google-auth'] +bigquery_requires = [ + 'google-cloud-bigquery[bqstorage,pandas]>=1.12.0,<2.0.0dev', + 'pydata-google-auth', +] hdf5_requires = ['tables>=3.0.0'] parquet_requires = ['pyarrow>=0.12.0']
svthalia__concrexit-2208
Date format for events

<!-- Please add the appropriate label for what change should be made:
docs: changes to the documentation
refactor: refactoring production code, e.g. renaming a variable or rewriting a function
test: adding missing tests, refactoring tests; no production code change
chore: updating poetry etc; no production code change -->

### Describe the change
Change the date format for events from { MMM. DD, YYYY, H A } (see additional context) to { EEE, DD - MMMM - YYYY, HH:MM }.

### Motivation
The current date format follows the US style. According to the spelling in the style guide we use British English (Colour), and we are EU-based, so the date format should also follow the British English date format: a.m./p.m. is mostly used in the US, Australia and Canada, while throughout Europe the 24-hour notation is used.

### Additional context
Current:
![image](https://user-images.githubusercontent.com/33718602/146271152-5853aac7-cf7f-4458-afcc-3c9315e9bb75.png)
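In Django this kind of change typically goes through the `DATETIME_FORMAT` setting, which is only honoured when localized formatting is switched off. A minimal sketch using Django's date-format characters to express the requested style (the exact format string below is an illustration, not necessarily the one that should be adopted):

```python
# settings.py sketch: render datetimes like "Thu, 16 - December - 2021, 19:30".
# D = abbreviated weekday, d = day, F = full month name, Y = year, H:i = 24-hour time.
USE_L10N = False  # otherwise the locale-dictated format takes precedence over DATETIME_FORMAT
DATETIME_FORMAT = "D, d - F - Y, H:i"
```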
[ { "content": "\"\"\"Django settings for concrexit.\n\nFor more information on this file, see\nhttps://docs.djangoproject.com/en/dev/topics/settings/\n\nFor the full list of settings and their values, see\nhttps://docs.djangoproject.com/en/dev/ref/settings/\n\"\"\"\n\nimport logging\n\nimport base64\nimport json\nimport os\n\nfrom django.core.management.commands import makemessages\nfrom django.utils import timezone\nfrom django.utils.translation import gettext_lazy as _\n\nlogger = logging.getLogger(__name__)\n\n# Sentinel objects that are distinct from None\n_NOT_SET = object()\n\n\nclass Misconfiguration(Exception):\n \"\"\"Exception that is raised when something is misconfigured in this file.\"\"\"\n\n\n# Build paths inside the project like this: os.path.join(BASE_DIR, ...)\nBASE_DIR = os.path.abspath(\n os.path.join(os.path.dirname(os.path.abspath(__file__)), \"\", \"..\")\n)\n\nSOURCE_COMMIT = os.environ.get(\"SOURCE_COMMIT\", \"unknown\")\n\n# Many of the settings are dependent on the environment we're running in.\n# The default environment is development, so the programmer doesn't have to set anything\nDJANGO_ENV = os.environ.get(\"DJANGO_ENV\", \"development\")\n_environments = [\"production\", \"staging\", \"testing\", \"development\"]\nif DJANGO_ENV not in _environments:\n raise Misconfiguration(f\"Set DJANGO_ENV to one of: {', '.join(_environments)}\")\n\n\ndef _set_django_env(env):\n \"\"\"Set the DJANGO_ENV variable.\n\n This is a helper function for the doctests below because doctests cannot set global variables.\n \"\"\"\n # pylint: disable=global-statement\n global DJANGO_ENV\n DJANGO_ENV = env\n\n\ndef setting(*, development, production, staging=_NOT_SET, testing=_NOT_SET):\n \"\"\"Generate a setting depending on the DJANGO_ENV and the arguments.\n\n This function is meant for static settings that depend on the DJANGO_ENV. If the\n staging or testing arguments are left to their defaults, they will fall back to\n the production and development settings respectively.\n\n Example:\n >>> _set_django_env(\"production\")\n >>> SEND_MESSAGES_WITH = setting(development=\"console\", production=\"mail\", staging=\"DM\")\n >>> SEND_MESSAGES_WITH\n 'mail'\n >>> _set_django_env(\"testing\")\n >>> setting(development=\"console\", production=\"mail\", staging=\"DM\")\n 'console'\n \"\"\"\n if DJANGO_ENV == \"development\" or (DJANGO_ENV == \"testing\" and testing is _NOT_SET):\n return development\n if DJANGO_ENV == \"testing\":\n return testing\n if DJANGO_ENV == \"production\" or (DJANGO_ENV == \"staging\" and staging is _NOT_SET):\n return production\n if DJANGO_ENV == \"staging\":\n return staging\n raise Misconfiguration(f\"Set DJANGO_ENV to one of: {', '.join(_environments)}\")\n\n\ndef from_env(\n name, *, production=_NOT_SET, staging=_NOT_SET, testing=_NOT_SET, development=None\n):\n \"\"\"Generate a setting that's overridable by the process environment.\n\n This will raise an exception if a default is not set for production. Because we use\n the sentinel value _NOT_SET, you can still set a default of None for production if wanted.\n\n As with :func:`setting` the staging and testing values will fall back to production\n and development. 
So if an environment variable is required in production, and no default\n is set for staging, staging will also raise the exception.\n\n Example:\n >>> _set_django_env(\"production\")\n >>> # A secret key should always be set in production via the environment\n >>> from_env(\"MEDIA_ROOT\", development=\"/media/root\")\n Traceback (most recent call last):\n ...\n thaliawebsite.settings.Misconfiguration: Environment variable `MEDIA_ROOT` must be supplied in production\n >>> _set_django_env(\"development\")\n >>> from_env(\"MEDIA_ROOT\", development=\"/media/root\")\n '/media/root'\n \"\"\"\n try:\n return os.environ[name]\n except KeyError:\n if DJANGO_ENV == \"production\" or (\n DJANGO_ENV == \"staging\" and staging is _NOT_SET\n ):\n if production is _NOT_SET and os.environ.get(\"MANAGE_PY\", \"0\") == \"0\":\n # pylint: disable=raise-missing-from\n raise Misconfiguration(\n f\"Environment variable `{name}` must be supplied in production\"\n )\n if production is _NOT_SET and os.environ.get(\"MANAGE_PY\", \"0\") == \"1\":\n logger.warning(\n \"Ignoring unset %s because we're running a management command\", name\n )\n return development\n return production\n if DJANGO_ENV == \"staging\":\n return staging\n if DJANGO_ENV == \"development\" or (\n DJANGO_ENV == \"testing\" and testing is _NOT_SET\n ):\n return development\n if DJANGO_ENV == \"testing\":\n return testing\n # pylint: disable=raise-missing-from\n raise Misconfiguration(f\"DJANGO_ENV set to unsupported value: {DJANGO_ENV}\")\n\n\n###############################################################################\n# Site settings\n\n# We use this setting to generate the email addresses\nSITE_DOMAIN = from_env(\n \"SITE_DOMAIN\", development=\"thalia.localhost\", production=\"thalia.nu\"\n)\n# We use this domain to generate some absolute urls when we don't have access to a request\nBASE_URL = os.environ.get(\"BASE_URL\", f\"https://{SITE_DOMAIN}\")\n\n# Default FROM email\nDEFAULT_FROM_EMAIL = f\"{os.environ.get('ADDRESS_NOREPLY', 'noreply')}@{SITE_DOMAIN}\"\n# https://docs.djangoproject.com/en/dev/ref/settings/#server-email\nSERVER_EMAIL = DEFAULT_FROM_EMAIL\nNEWSLETTER_FROM_ADDRESS = (\n f\"{os.environ.get('ADDRESS_NEWSLETTER', 'newsletter')}@{SITE_DOMAIN}\"\n)\nBOARD_NOTIFICATION_ADDRESS = (\n f\"{os.environ.get('ADDRESS_CONTACT', 'info')}@{SITE_DOMAIN}\"\n)\nPARTNER_NOTIFICATION_ADDRESS = (\n f\"{os.environ.get('ADDRESS_COLLABORATION', 'samenwerking')}@{SITE_DOMAIN}\"\n)\nEDUCATION_NOTIFICATION_ADDRESS = (\n f\"{os.environ.get('ADDRESS_EDUCATION', 'educacie')}@{SITE_DOMAIN}\"\n)\nPROMO_REQUEST_NOTIFICATION_ADDRESS = (\n f\"{os.environ.get('ADDRESS_PROMOREQUESTS', 'paparazcie')}@{SITE_DOMAIN}\"\n)\n\n# The scheme the app uses for oauth redirection\nAPP_OAUTH_SCHEME = os.environ.get(\"APP_OAUTH_SCHEME\", \"nu.thalia\")\n\n# Membership prices\nMEMBERSHIP_PRICES = {\n \"year\": int(os.environ.get(\"MEMBERSHIP_PRICE_YEAR_CENTS\", \"750\")) / 100,\n \"study\": int(os.environ.get(\"MEMBERSHIP_PRICE_STUDY_CENTS\", \"3000\")) / 100,\n}\n\n# Window during which a payment can be deleted again\nPAYMENT_CHANGE_WINDOW = int(os.environ.get(\"PAYMENTS_CHANGE_WINDOW\", 10 * 60))\n\n# Payments creditor identifier\nSEPA_CREDITOR_ID = os.environ.get(\"SEPA_CREDITOR_ID\", \"<unknown>\")\n\n# Payment batch withdrawal date default offset after creation date\nPAYMENT_BATCH_DEFAULT_WITHDRAWAL_DATE_OFFSET = timezone.timedelta(days=14)\n\nTHALIA_PAY_ENABLED_PAYMENT_METHOD = (\n from_env(\"THALIA_PAY_ENABLED\", development=\"1\", staging=\"1\", 
production=\"0\") == \"1\"\n)\nTHALIA_PAY_FOR_NEW_MEMBERS = os.environ.get(\"THALIA_PAY_FOR_NEW_MEMBERS\", \"1\") == \"1\"\n\n###############################################################################\n# Django settings\n\n# https://docs.djangoproject.com/en/dev/ref/settings/#secret-key\nSECRET_KEY = from_env(\n \"SECRET_KEY\", development=\"#o-0d1q5&^&06tn@8pr1f(n3$crafd++^%sacao7hj*ea@c)^t\"\n)\n\n# https://docs.djangoproject.com/en/dev/ref/settings/#allowed-hosts\nALLOWED_HOSTS = [\n SITE_DOMAIN,\n *from_env(\"ALLOWED_HOSTS\", development=\"*\", production=\"\").split(\",\"),\n]\n# https://docs.djangoproject.com/en/dev/ref/settings/#internal-ips\nINTERNAL_IPS = setting(development=[\"127.0.0.1\", \"172.17.0.1\"], production=[])\n\n# https://django-compressor.readthedocs.io/en/stable/settings/#django.conf.settings.COMPRESS_OFFLINE\nCOMPRESS_OFFLINE = setting(development=False, production=True)\n\n# https://docs.djangoproject.com/en/dev/ref/settings/#static-url\nSTATIC_URL = \"/static/\"\n# https://docs.djangoproject.com/en/dev/ref/settings/#static-root\nSTATIC_ROOT = from_env(\"STATIC_ROOT\", development=os.path.join(BASE_DIR, \"static\"))\n\nSENDFILE_BACKEND = setting(\n development=\"django_sendfile.backends.development\",\n production=\"django_sendfile.backends.nginx\",\n)\n# https://github.com/johnsensible/django-sendfile#nginx-backend\nSENDFILE_URL = \"/media/sendfile/\"\nSENDFILE_ROOT = from_env(\n \"SENDFILE_ROOT\",\n production=\"/concrexit/media/\",\n development=os.path.join(BASE_DIR, \"media\"),\n)\n\n# https://docs.djangoproject.com/en/dev/ref/settings/#media-url\nMEDIA_URL = \"/media/\"\n# https://docs.djangoproject.com/en/dev/ref/settings/#media-root\nMEDIA_ROOT = from_env(\"MEDIA_ROOT\", development=os.path.join(BASE_DIR, \"media\"))\n\n# https://docs.djangoproject.com/en/dev/ref/settings/#conn-max-age\nCONN_MAX_AGE = int(from_env(\"CONN_MAX_AGE\", development=\"0\", production=\"60\"))\n\n# Useful for managing members\n# https://docs.djangoproject.com/en/dev/ref/settings/#data-upload-max-number-fields\nDATA_UPLOAD_MAX_NUMBER_FIELDS = os.environ.get(\"DATA_UPLOAD_MAX_NUMBER_FIELDS\", 10000)\n\n# https://docs.djangoproject.com/en/dev/ref/settings/#debug\nDEBUG = setting(development=True, production=False, testing=False)\n\n# https://docs.djangoproject.com/en/dev/ref/settings/#session-cookie-secure\nSESSION_COOKIE_SECURE = setting(development=False, production=True)\n# https://docs.djangoproject.com/en/dev/ref/settings/#csrf-cookie-secure\nCSRF_COOKIE_SECURE = setting(development=False, production=True)\n\n# https://docs.djangoproject.com/en/dev/ref/settings/#default-auto-field\nDEFAULT_AUTO_FIELD = \"django.db.models.AutoField\"\n\n###############################################################################\n# Email settings\n# https://docs.djangoproject.com/en/dev/ref/settings/#email-backend\n_EMAIL_BACKEND = from_env(\"EMAIL_BACKEND\", development=\"console\", production=\"smtp\")\nif _EMAIL_BACKEND == \"console\":\n EMAIL_BACKEND = \"django.core.mail.backends.console.EmailBackend\"\n\nif _EMAIL_BACKEND == \"smtp\":\n EMAIL_BACKEND = \"django.core.mail.backends.smtp.EmailBackend\"\n EMAIL_HOST = os.environ.get(\"DJANGO_EMAIL_HOST\")\n EMAIL_PORT = os.environ.get(\"DJANGO_EMAIL_PORT\", 25)\n EMAIL_HOST_USER = os.environ.get(\"DJANGO_EMAIL_HOST_USER\", \"\")\n EMAIL_HOST_PASSWORD = os.environ.get(\"DJANGO_EMAIL_HOST_PASSWORD\", \"\")\n EMAIL_USE_TLS = os.environ.get(\"DJANGO_EMAIL_USE_TLS\", \"1\") == \"1\"\n EMAIL_TIMEOUT = int(os.environ.get(\"EMAIL_TIMEOUT\", 
\"10\"))\n if EMAIL_HOST is None:\n logger.warning(\n \"The email host is set to the default of localhost, are you sure you don't want to set EMAIL_HOST?\"\n )\n EMAIL_HOST = \"localhost\"\n\n###############################################################################\n# Database settings\n# https://docs.djangoproject.com/en/dev/ref/settings/#databases\nDATABASE_ENGINE = from_env(\n \"DATABASE_ENGINE\", development=\"sqlite\", production=\"postgresql\", testing=None\n)\nif DATABASE_ENGINE == \"sqlite\":\n DATABASES = {\n \"default\": {\n \"ENGINE\": \"django.db.backends.sqlite3\",\n \"NAME\": os.path.join(BASE_DIR, \"db.sqlite3\"),\n }\n }\n\nif DATABASE_ENGINE == \"postgresql\":\n DATABASES = {\n \"default\": {\n \"ENGINE\": \"django.db.backends.postgresql\",\n \"USER\": os.environ.get(\"POSTGRES_USER\", \"concrexit\"),\n \"PASSWORD\": os.environ.get(\"POSTGRES_PASSWORD\", None),\n \"NAME\": os.environ.get(\"POSTGRES_DB\", \"\"),\n \"HOST\": os.environ.get(\"POSTGRES_HOST\", \"\"),\n \"PORT\": os.environ.get(\"POSTGRES_PORT\", \"5432\"),\n }\n }\n\nif DJANGO_ENV == \"testing\":\n DATABASES = {\n \"default\": {\n \"ENGINE\": \"django.db.backends.postgresql\",\n \"NAME\": \"thalia\",\n \"USER\": \"postgres\",\n \"PASSWORD\": \"postgres\",\n \"HOST\": \"postgres\",\n \"PORT\": 5432,\n },\n }\n\n###############################################################################\n# Firebase config\nFIREBASE_CREDENTIALS = os.environ.get(\"FIREBASE_CREDENTIALS\", \"{}\")\nif FIREBASE_CREDENTIALS != \"{}\":\n FIREBASE_CREDENTIALS = base64.urlsafe_b64decode(FIREBASE_CREDENTIALS)\nFIREBASE_CREDENTIALS = json.loads(FIREBASE_CREDENTIALS)\n\nif FIREBASE_CREDENTIALS != {}:\n from firebase_admin import initialize_app, credentials\n\n try:\n initialize_app(credential=credentials.Certificate(FIREBASE_CREDENTIALS))\n except ValueError as e:\n logger.error(\"Firebase application failed to initialise\")\n\n###############################################################################\n# GSuite config\nGSUITE_ADMIN_SCOPES = [\n \"https://www.googleapis.com/auth/admin.directory.group\",\n \"https://www.googleapis.com/auth/admin.directory.user\",\n \"https://www.googleapis.com/auth/apps.groups.settings\",\n]\n\nGSUITE_ADMIN_CREDENTIALS = os.environ.get(\"GSUITE_ADMIN_CREDENTIALS\", \"{}\")\nif GSUITE_ADMIN_CREDENTIALS != \"{}\":\n GSUITE_ADMIN_CREDENTIALS = base64.urlsafe_b64decode(GSUITE_ADMIN_CREDENTIALS)\nGSUITE_ADMIN_CREDENTIALS = json.loads(GSUITE_ADMIN_CREDENTIALS)\nGSUITE_ADMIN_USER = os.environ.get(\"GSUITE_ADMIN_USER\", \"[email protected]\")\nGSUITE_DOMAIN = from_env(\n \"GSUITE_DOMAIN\", development=\"thalia.localhost\", production=\"thalia.nu\"\n)\nGSUITE_MEMBERS_DOMAIN = from_env(\n \"GSUITE_MEMBERS_DOMAIN\",\n development=\"members.thalia.localhost\",\n production=\"members.thalia.nu\",\n)\nGSUITE_MEMBERS_AUTOSYNC = os.environ.get(\"GSUITE_MEMBERS_AUTOSYNC\", \"0\") == \"1\"\n\nif GSUITE_ADMIN_CREDENTIALS != {}:\n from google.oauth2 import service_account\n\n GSUITE_ADMIN_CREDENTIALS = service_account.Credentials.from_service_account_info(\n GSUITE_ADMIN_CREDENTIALS, scopes=GSUITE_ADMIN_SCOPES\n ).with_subject(GSUITE_ADMIN_USER)\n\nEMAIL_DOMAIN_BLACKLIST = [GSUITE_MEMBERS_DOMAIN]\n\n###############################################################################\n# Google maps API key and secrets\nGOOGLE_MAPS_API_KEY = os.environ.get(\"GOOGLE_MAPS_API_KEY\", \"\")\nGOOGLE_MAPS_API_SECRET = os.environ.get(\"GOOGLE_MAPS_API_SECRET\", \"\")\nGOOGLE_PLACES_API_KEY = 
os.environ.get(\"GOOGLE_PLACES_API_KEY\", \"\")\n\n###############################################################################\n# Conscribo settings\nCONSCRIBO_ACCOUNT = os.environ.get(\"CONSCRIBO_ACCOUNT\", \"\")\nCONSCRIBO_USER = os.environ.get(\"CONSCRIBO_USER\", \"\")\nCONSCRIBO_PASSWORD = os.environ.get(\"CONSCRIBO_PASSWORD\", \"\")\n\n###############################################################################\n# Sentry setup\nif \"SENTRY_DSN\" in os.environ:\n import sentry_sdk\n from sentry_sdk.integrations.django import DjangoIntegration\n\n # Pylint sees the faked init class that sentry uses for typing purposes\n # pylint: disable=abstract-class-instantiated\n sentry_sdk.init(\n dsn=os.environ.get(\"SENTRY_DSN\"),\n integrations=[DjangoIntegration()],\n release=SOURCE_COMMIT,\n send_default_pii=True,\n environment=DJANGO_ENV,\n traces_sample_rate=0.2,\n )\n\n\n###############################################################################\n# (Mostly) static settings\nINSTALLED_APPS = [\n \"django.contrib.auth\",\n \"django.contrib.contenttypes\",\n \"django.contrib.sessions\",\n \"django.contrib.messages\",\n \"django.contrib.staticfiles\",\n \"django.contrib.sitemaps\",\n # Dependencies\n \"oauth2_provider\",\n \"corsheaders\",\n \"django_bootstrap5\",\n \"tinymce\",\n \"rest_framework\",\n \"rest_framework.authtoken\",\n \"compressor\",\n \"debug_toolbar\",\n \"admin_auto_filters\",\n # Our apps\n # Directly link to the app config when applicable as recommended\n # by the docs: https://docs.djangoproject.com/en/2.0/ref/applications/\n \"thaliawebsite.apps.ThaliaWebsiteConfig\", # include for admin settings\n # Load django.contrib.admin after thaliawebsite so the admin page gets modified\n \"django.contrib.admin\",\n \"pushnotifications.apps.PushNotificationsConfig\",\n \"promotion.apps.PromotionConfig\",\n \"members.apps.MembersConfig\",\n \"documents.apps.DocumentsConfig\",\n \"activemembers.apps.ActiveMembersConfig\",\n \"photos.apps.PhotosConfig\",\n \"utils\",\n \"mailinglists.apps.MailinglistsConfig\",\n \"merchandise.apps.MerchandiseConfig\",\n \"thabloid.apps.ThabloidConfig\",\n \"partners.apps.PartnersConfig\",\n \"events.apps.EventsConfig\",\n \"pizzas.apps.PizzasConfig\",\n \"newsletters.apps.NewslettersConfig\",\n \"education.apps.EducationConfig\",\n \"announcements.apps.AnnouncementsConfig\",\n \"registrations.apps.RegistrationsConfig\",\n \"payments.apps.PaymentsConfig\",\n \"singlepages.apps.SinglepagesConfig\",\n \"shortlinks.apps.ShortLinkConfig\",\n \"sales.apps.SalesConfig\",\n]\n\nMIDDLEWARE = [\n \"debug_toolbar.middleware.DebugToolbarMiddleware\",\n \"django.middleware.security.SecurityMiddleware\",\n \"django.contrib.sessions.middleware.SessionMiddleware\",\n \"django.middleware.http.ConditionalGetMiddleware\",\n \"corsheaders.middleware.CorsMiddleware\",\n \"django.middleware.common.CommonMiddleware\",\n \"django.middleware.csrf.CsrfViewMiddleware\",\n \"django.contrib.auth.middleware.AuthenticationMiddleware\",\n \"django.contrib.messages.middleware.MessageMiddleware\",\n \"django.middleware.locale.LocaleMiddleware\",\n # Our middleware\n \"members.middleware.MemberMiddleware\",\n]\n\nif DJANGO_ENV in (\"development\", \"testing\"):\n INSTALLED_APPS += [\"django_template_check\"]\n\nif DJANGO_ENV == \"testing\":\n for x in (\n \"debug_toolbar.middleware.DebugToolbarMiddleware\",\n \"django.middleware.http.ConditionalGetMiddleware\",\n \"django.middleware.csrf.CsrfViewMiddleware\",\n ):\n MIDDLEWARE.remove(x)\n for x in (\"debug_toolbar\",):\n 
INSTALLED_APPS.remove(x)\n\nROOT_URLCONF = \"thaliawebsite.urls\"\n\nTEMPLATES = [\n {\n \"BACKEND\": \"django.template.backends.django.DjangoTemplates\",\n \"DIRS\": [os.path.join(BASE_DIR, \"templates\")],\n \"APP_DIRS\": setting(development=True, production=False),\n \"OPTIONS\": {\n \"context_processors\": [\n \"thaliawebsite.context_processors.source_commit\",\n \"django.template.context_processors.debug\",\n \"django.template.context_processors.request\",\n \"django.template.context_processors.media\",\n \"django.contrib.auth.context_processors.auth\",\n \"django.contrib.messages.context_processors.messages\",\n \"announcements.context_processors.announcements\",\n \"thaliawebsite.context_processors.thumbnail_sizes\",\n ],\n },\n },\n]\n\nif DJANGO_ENV in [\"production\", \"staging\"]:\n # Use caching template loader\n TEMPLATES[0][\"OPTIONS\"][\"loaders\"] = [\n (\n \"django.template.loaders.cached.Loader\",\n [\n \"django.template.loaders.filesystem.Loader\",\n \"django.template.loaders.app_directories.Loader\",\n ],\n )\n ]\n\n # Default logging: https://github.com/django/django/blob/master/django/utils/log.py\n # We disable mailing the admin.\n # Server errors will be sent to Sentry via the config below this.\n LOGGING = {\n \"version\": 1,\n \"formatters\": {\n \"verbose\": {\"format\": \"%(asctime)s %(name)s[%(levelname)s]: %(message)s\"},\n },\n \"handlers\": {\n \"console\": {\n \"level\": \"INFO\",\n \"class\": \"logging.StreamHandler\",\n \"formatter\": \"verbose\",\n }\n },\n \"loggers\": {\n \"django\": {\"handlers\": [], \"level\": \"INFO\"},\n \"\": {\"handlers\": [\"console\"], \"level\": \"INFO\"},\n },\n }\n\nWSGI_APPLICATION = \"thaliawebsite.wsgi.application\"\n\n# Login pages\nLOGIN_URL = \"/user/login/\"\n\nLOGIN_REDIRECT_URL = \"/\"\n\n# Cors configuration\nCORS_ORIGIN_ALLOW_ALL = True\nCORS_URLS_REGEX = r\"^/(?:api/v1|api/v2|user/oauth)/.*\"\n\n# OAuth configuration\nOIDC_RSA_PRIVATE_KEY = from_env(\"OIDC_RSA_PRIVATE_KEY\", testing=None)\nif OIDC_RSA_PRIVATE_KEY is not None:\n OIDC_RSA_PRIVATE_KEY = base64.urlsafe_b64decode(OIDC_RSA_PRIVATE_KEY)\n\nOAUTH2_PROVIDER = {\n \"OIDC_ENABLED\": True,\n \"OIDC_RSA_PRIVATE_KEY\": OIDC_RSA_PRIVATE_KEY,\n \"ALLOWED_REDIRECT_URI_SCHEMES\": setting(\n production=[\"https\", APP_OAUTH_SCHEME],\n staging=[\"http\", \"https\", APP_OAUTH_SCHEME],\n development=[\"http\", \"https\", APP_OAUTH_SCHEME],\n ),\n \"SCOPES\": {\n \"openid\": \"OpenID Connect\",\n \"read\": \"Authenticated read access to the website\",\n \"write\": \"Authenticated write access to the website\",\n \"activemembers:read\": \"Read access to committee, society and board groups\",\n \"announcements:read\": \"Read access to announcements\",\n \"events:read\": \"Read access to events and your event registrations\",\n \"events:register\": \"Write access to the state of your event registrations\",\n \"events:admin\": \"Admin access to the events\",\n \"food:read\": \"Read access to food events\",\n \"food:order\": \"Order access to food events\",\n \"food:admin\": \"Admin access to food events\",\n \"members:read\": \"Read access to the members directory\",\n \"photos:read\": \"Read access to photos\",\n \"profile:read\": \"Read access to your member profile\",\n \"profile:write\": \"Write access to your member profile\",\n \"pushnotifications:read\": \"Read access to push notifications\",\n \"pushnotifications:write\": \"Write access to push notifications\",\n \"partners:read\": \"Read access to partners\",\n \"payments:read\": \"Read access to payments\",\n 
\"payments:write\": \"Write access to payments\",\n \"payments:admin\": \"Admin access to payments\",\n \"sales:read\": \"Read access to your Point of Sale orders\",\n \"sales:order\": \"Place Point of Sale orders on your behalf\",\n \"sales:admin\": \"Admin access to Point of Sale orders\",\n },\n}\n\n# Password validation\n# https://docs.djangoproject.com/en/dev/ref/settings/#auth-password-validators\nAUTH_PASSWORD_VALIDATORS = [\n {\n \"NAME\": (\n \"django.contrib.auth.\"\n \"password_validation.UserAttributeSimilarityValidator\"\n ),\n },\n {\n \"NAME\": (\"django.contrib.auth.password_validation.MinimumLengthValidator\"),\n },\n {\n \"NAME\": (\"django.contrib.auth.password_validation.CommonPasswordValidator\"),\n },\n {\n \"NAME\": (\"django.contrib.auth.password_validation.NumericPasswordValidator\"),\n },\n]\n\nPASSWORD_HASHERS = setting(\n development=(\n \"django.contrib.auth.hashers.PBKDF2PasswordHasher\",\n \"django.contrib.auth.hashers.MD5PasswordHasher\",\n ),\n production=(\n \"django.contrib.auth.hashers.Argon2PasswordHasher\",\n \"django.contrib.auth.hashers.PBKDF2PasswordHasher\",\n \"django.contrib.auth.hashers.PBKDF2SHA1PasswordHasher\",\n \"django.contrib.auth.hashers.BCryptSHA256PasswordHasher\",\n \"django.contrib.auth.hashers.BCryptPasswordHasher\",\n ),\n testing=(\"django.contrib.auth.hashers.MD5PasswordHasher\",),\n)\n\nAUTHENTICATION_BACKENDS = [\n \"django.contrib.auth.backends.ModelBackend\",\n \"activemembers.backends.MemberGroupBackend\",\n]\n\nREST_FRAMEWORK = {\n \"DEFAULT_AUTHENTICATION_CLASSES\": (\n \"rest_framework.authentication.SessionAuthentication\",\n \"thaliawebsite.api.authentication.APIv1TokenAuthentication\",\n \"oauth2_provider.contrib.rest_framework.OAuth2Authentication\",\n ),\n \"DEFAULT_PAGINATION_CLASS\": \"thaliawebsite.api.pagination.APIv2LimitOffsetPagination\",\n \"PAGE_SIZE\": 50, # Only for API v2\n \"ALLOWED_VERSIONS\": [\"v1\", \"v2\", \"calendarjs\"],\n \"DEFAULT_VERSIONING_CLASS\": \"rest_framework.versioning.NamespaceVersioning\",\n \"DEFAULT_SCHEMA_CLASS\": \"thaliawebsite.api.openapi.OAuthAutoSchema\",\n \"DEFAULT_THROTTLE_CLASSES\": [\n \"thaliawebsite.api.throttling.AnonRateThrottle\",\n \"thaliawebsite.api.throttling.UserRateThrottle\",\n ],\n \"DEFAULT_THROTTLE_RATES\": setting(\n production={\"anon\": \"100/day\", \"user\": \"20/min\"},\n staging={\"anon\": \"100/day\", \"user\": \"20/min\"},\n development={\"anon\": None, \"user\": None},\n ),\n}\n\n# Internationalization\n# https://docs.djangoproject.com/en/dev/topics/i18n/\n\nLANGUAGE_CODE = \"en\"\n\nTIME_ZONE = \"Europe/Amsterdam\"\n\nUSE_I18N = True\n\nUSE_L10N = True\n\nUSE_TZ = True\n\nLANGUAGES = [(\"en\", _(\"English\"))]\n\nLOCALE_PATHS = (\"locale\",)\n\n# Static files\nSTATICFILES_FINDERS = (\n \"django.contrib.staticfiles.finders.FileSystemFinder\",\n \"django.contrib.staticfiles.finders.AppDirectoriesFinder\",\n # other finders\n \"compressor.finders.CompressorFinder\",\n)\n\n# Compressor settings\nCOMPRESS_ENABLED = True\n\nCOMPRESS_PRECOMPILERS = ((\"text/x-scss\", \"django_libsass.SassCompiler\"),)\n\nCOMPRESS_FILTERS = {\n \"css\": [\n \"compressor.filters.css_default.CssAbsoluteFilter\",\n \"compressor.filters.cssmin.rCSSMinFilter\",\n ],\n \"js\": [\"compressor.filters.jsmin.JSMinFilter\"],\n}\n\n# Precompiler settings\nSTATIC_PRECOMPILER_LIST_FILES = True\n\n# See utils/model/signals.py for explanation\nSUSPEND_SIGNALS = False\n\nTHUMBNAIL_SIZES = {\n \"small\": \"300x300\",\n \"medium\": \"600x600\",\n \"large\": \"1200x900\",\n \"avatar_large\": 
\"900x900\",\n \"slide_small\": \"500x108\",\n \"slide_medium\": \"1000x215\",\n \"slide\": \"2000x430\",\n}\n\n# Photos settings\nPHOTO_UPLOAD_SIZE = 2560, 1440\n\n# TinyMCE config\nTINYMCE_DEFAULT_CONFIG = {\n \"max_height\": 500,\n \"menubar\": False,\n \"plugins\": \"autolink autoresize link image code media paste\",\n \"toolbar\": \"h2 h3 | bold italic underline strikethrough | image media | link unlink | \"\n \"bullist numlist | undo redo | code\",\n \"contextmenu\": \"bold italic underline strikethrough | link\",\n \"paste_as_text\": True,\n \"relative_urls\": False,\n \"remove_script_host\": False,\n \"autoresize_bottom_margin\": 50,\n}\n\nBOOTSTRAP5 = {\"required_css_class\": \"required-field\"}\n\n# https://docs.djangoproject.com/en/dev/ref/settings/#default-exception-reporter-filter\nDEFAULT_EXCEPTION_REPORTER_FILTER = (\n \"utils.exception_filter.ThaliaSafeExceptionReporterFilter\"\n)\n\n# Make sure the locations in django.po files don't include line nrs.\nmakemessages.Command.xgettext_options.append(\"--add-location=file\")\n", "path": "website/thaliawebsite/settings.py" } ]
[ { "content": "\"\"\"Django settings for concrexit.\n\nFor more information on this file, see\nhttps://docs.djangoproject.com/en/dev/topics/settings/\n\nFor the full list of settings and their values, see\nhttps://docs.djangoproject.com/en/dev/ref/settings/\n\"\"\"\n\nimport logging\n\nimport base64\nimport json\nimport os\n\nfrom django.core.management.commands import makemessages\nfrom django.utils import timezone\nfrom django.utils.translation import gettext_lazy as _\n\nlogger = logging.getLogger(__name__)\n\n# Sentinel objects that are distinct from None\n_NOT_SET = object()\n\n\nclass Misconfiguration(Exception):\n \"\"\"Exception that is raised when something is misconfigured in this file.\"\"\"\n\n\n# Build paths inside the project like this: os.path.join(BASE_DIR, ...)\nBASE_DIR = os.path.abspath(\n os.path.join(os.path.dirname(os.path.abspath(__file__)), \"\", \"..\")\n)\n\nSOURCE_COMMIT = os.environ.get(\"SOURCE_COMMIT\", \"unknown\")\n\n# Many of the settings are dependent on the environment we're running in.\n# The default environment is development, so the programmer doesn't have to set anything\nDJANGO_ENV = os.environ.get(\"DJANGO_ENV\", \"development\")\n_environments = [\"production\", \"staging\", \"testing\", \"development\"]\nif DJANGO_ENV not in _environments:\n raise Misconfiguration(f\"Set DJANGO_ENV to one of: {', '.join(_environments)}\")\n\n\ndef _set_django_env(env):\n \"\"\"Set the DJANGO_ENV variable.\n\n This is a helper function for the doctests below because doctests cannot set global variables.\n \"\"\"\n # pylint: disable=global-statement\n global DJANGO_ENV\n DJANGO_ENV = env\n\n\ndef setting(*, development, production, staging=_NOT_SET, testing=_NOT_SET):\n \"\"\"Generate a setting depending on the DJANGO_ENV and the arguments.\n\n This function is meant for static settings that depend on the DJANGO_ENV. If the\n staging or testing arguments are left to their defaults, they will fall back to\n the production and development settings respectively.\n\n Example:\n >>> _set_django_env(\"production\")\n >>> SEND_MESSAGES_WITH = setting(development=\"console\", production=\"mail\", staging=\"DM\")\n >>> SEND_MESSAGES_WITH\n 'mail'\n >>> _set_django_env(\"testing\")\n >>> setting(development=\"console\", production=\"mail\", staging=\"DM\")\n 'console'\n \"\"\"\n if DJANGO_ENV == \"development\" or (DJANGO_ENV == \"testing\" and testing is _NOT_SET):\n return development\n if DJANGO_ENV == \"testing\":\n return testing\n if DJANGO_ENV == \"production\" or (DJANGO_ENV == \"staging\" and staging is _NOT_SET):\n return production\n if DJANGO_ENV == \"staging\":\n return staging\n raise Misconfiguration(f\"Set DJANGO_ENV to one of: {', '.join(_environments)}\")\n\n\ndef from_env(\n name, *, production=_NOT_SET, staging=_NOT_SET, testing=_NOT_SET, development=None\n):\n \"\"\"Generate a setting that's overridable by the process environment.\n\n This will raise an exception if a default is not set for production. Because we use\n the sentinel value _NOT_SET, you can still set a default of None for production if wanted.\n\n As with :func:`setting` the staging and testing values will fall back to production\n and development. 
So if an environment variable is required in production, and no default\n is set for staging, staging will also raise the exception.\n\n Example:\n >>> _set_django_env(\"production\")\n >>> # A secret key should always be set in production via the environment\n >>> from_env(\"MEDIA_ROOT\", development=\"/media/root\")\n Traceback (most recent call last):\n ...\n thaliawebsite.settings.Misconfiguration: Environment variable `MEDIA_ROOT` must be supplied in production\n >>> _set_django_env(\"development\")\n >>> from_env(\"MEDIA_ROOT\", development=\"/media/root\")\n '/media/root'\n \"\"\"\n try:\n return os.environ[name]\n except KeyError:\n if DJANGO_ENV == \"production\" or (\n DJANGO_ENV == \"staging\" and staging is _NOT_SET\n ):\n if production is _NOT_SET and os.environ.get(\"MANAGE_PY\", \"0\") == \"0\":\n # pylint: disable=raise-missing-from\n raise Misconfiguration(\n f\"Environment variable `{name}` must be supplied in production\"\n )\n if production is _NOT_SET and os.environ.get(\"MANAGE_PY\", \"0\") == \"1\":\n logger.warning(\n \"Ignoring unset %s because we're running a management command\", name\n )\n return development\n return production\n if DJANGO_ENV == \"staging\":\n return staging\n if DJANGO_ENV == \"development\" or (\n DJANGO_ENV == \"testing\" and testing is _NOT_SET\n ):\n return development\n if DJANGO_ENV == \"testing\":\n return testing\n # pylint: disable=raise-missing-from\n raise Misconfiguration(f\"DJANGO_ENV set to unsupported value: {DJANGO_ENV}\")\n\n\n###############################################################################\n# Site settings\n\n# We use this setting to generate the email addresses\nSITE_DOMAIN = from_env(\n \"SITE_DOMAIN\", development=\"thalia.localhost\", production=\"thalia.nu\"\n)\n# We use this domain to generate some absolute urls when we don't have access to a request\nBASE_URL = os.environ.get(\"BASE_URL\", f\"https://{SITE_DOMAIN}\")\n\n# Default FROM email\nDEFAULT_FROM_EMAIL = f\"{os.environ.get('ADDRESS_NOREPLY', 'noreply')}@{SITE_DOMAIN}\"\n# https://docs.djangoproject.com/en/dev/ref/settings/#server-email\nSERVER_EMAIL = DEFAULT_FROM_EMAIL\nNEWSLETTER_FROM_ADDRESS = (\n f\"{os.environ.get('ADDRESS_NEWSLETTER', 'newsletter')}@{SITE_DOMAIN}\"\n)\nBOARD_NOTIFICATION_ADDRESS = (\n f\"{os.environ.get('ADDRESS_CONTACT', 'info')}@{SITE_DOMAIN}\"\n)\nPARTNER_NOTIFICATION_ADDRESS = (\n f\"{os.environ.get('ADDRESS_COLLABORATION', 'samenwerking')}@{SITE_DOMAIN}\"\n)\nEDUCATION_NOTIFICATION_ADDRESS = (\n f\"{os.environ.get('ADDRESS_EDUCATION', 'educacie')}@{SITE_DOMAIN}\"\n)\nPROMO_REQUEST_NOTIFICATION_ADDRESS = (\n f\"{os.environ.get('ADDRESS_PROMOREQUESTS', 'paparazcie')}@{SITE_DOMAIN}\"\n)\n\n# The scheme the app uses for oauth redirection\nAPP_OAUTH_SCHEME = os.environ.get(\"APP_OAUTH_SCHEME\", \"nu.thalia\")\n\n# Membership prices\nMEMBERSHIP_PRICES = {\n \"year\": int(os.environ.get(\"MEMBERSHIP_PRICE_YEAR_CENTS\", \"750\")) / 100,\n \"study\": int(os.environ.get(\"MEMBERSHIP_PRICE_STUDY_CENTS\", \"3000\")) / 100,\n}\n\n# Window during which a payment can be deleted again\nPAYMENT_CHANGE_WINDOW = int(os.environ.get(\"PAYMENTS_CHANGE_WINDOW\", 10 * 60))\n\n# Payments creditor identifier\nSEPA_CREDITOR_ID = os.environ.get(\"SEPA_CREDITOR_ID\", \"<unknown>\")\n\n# Payment batch withdrawal date default offset after creation date\nPAYMENT_BATCH_DEFAULT_WITHDRAWAL_DATE_OFFSET = timezone.timedelta(days=14)\n\nTHALIA_PAY_ENABLED_PAYMENT_METHOD = (\n from_env(\"THALIA_PAY_ENABLED\", development=\"1\", staging=\"1\", 
production=\"0\") == \"1\"\n)\nTHALIA_PAY_FOR_NEW_MEMBERS = os.environ.get(\"THALIA_PAY_FOR_NEW_MEMBERS\", \"1\") == \"1\"\n\n###############################################################################\n# Django settings\n\n# https://docs.djangoproject.com/en/dev/ref/settings/#secret-key\nSECRET_KEY = from_env(\n \"SECRET_KEY\", development=\"#o-0d1q5&^&06tn@8pr1f(n3$crafd++^%sacao7hj*ea@c)^t\"\n)\n\n# https://docs.djangoproject.com/en/dev/ref/settings/#allowed-hosts\nALLOWED_HOSTS = [\n SITE_DOMAIN,\n *from_env(\"ALLOWED_HOSTS\", development=\"*\", production=\"\").split(\",\"),\n]\n# https://docs.djangoproject.com/en/dev/ref/settings/#internal-ips\nINTERNAL_IPS = setting(development=[\"127.0.0.1\", \"172.17.0.1\"], production=[])\n\n# https://django-compressor.readthedocs.io/en/stable/settings/#django.conf.settings.COMPRESS_OFFLINE\nCOMPRESS_OFFLINE = setting(development=False, production=True)\n\n# https://docs.djangoproject.com/en/dev/ref/settings/#static-url\nSTATIC_URL = \"/static/\"\n# https://docs.djangoproject.com/en/dev/ref/settings/#static-root\nSTATIC_ROOT = from_env(\"STATIC_ROOT\", development=os.path.join(BASE_DIR, \"static\"))\n\nSENDFILE_BACKEND = setting(\n development=\"django_sendfile.backends.development\",\n production=\"django_sendfile.backends.nginx\",\n)\n# https://github.com/johnsensible/django-sendfile#nginx-backend\nSENDFILE_URL = \"/media/sendfile/\"\nSENDFILE_ROOT = from_env(\n \"SENDFILE_ROOT\",\n production=\"/concrexit/media/\",\n development=os.path.join(BASE_DIR, \"media\"),\n)\n\n# https://docs.djangoproject.com/en/dev/ref/settings/#media-url\nMEDIA_URL = \"/media/\"\n# https://docs.djangoproject.com/en/dev/ref/settings/#media-root\nMEDIA_ROOT = from_env(\"MEDIA_ROOT\", development=os.path.join(BASE_DIR, \"media\"))\n\n# https://docs.djangoproject.com/en/dev/ref/settings/#conn-max-age\nCONN_MAX_AGE = int(from_env(\"CONN_MAX_AGE\", development=\"0\", production=\"60\"))\n\n# Useful for managing members\n# https://docs.djangoproject.com/en/dev/ref/settings/#data-upload-max-number-fields\nDATA_UPLOAD_MAX_NUMBER_FIELDS = os.environ.get(\"DATA_UPLOAD_MAX_NUMBER_FIELDS\", 10000)\n\n# https://docs.djangoproject.com/en/dev/ref/settings/#debug\nDEBUG = setting(development=True, production=False, testing=False)\n\n# https://docs.djangoproject.com/en/dev/ref/settings/#session-cookie-secure\nSESSION_COOKIE_SECURE = setting(development=False, production=True)\n# https://docs.djangoproject.com/en/dev/ref/settings/#csrf-cookie-secure\nCSRF_COOKIE_SECURE = setting(development=False, production=True)\n\n# https://docs.djangoproject.com/en/dev/ref/settings/#default-auto-field\nDEFAULT_AUTO_FIELD = \"django.db.models.AutoField\"\n\n###############################################################################\n# Email settings\n# https://docs.djangoproject.com/en/dev/ref/settings/#email-backend\n_EMAIL_BACKEND = from_env(\"EMAIL_BACKEND\", development=\"console\", production=\"smtp\")\nif _EMAIL_BACKEND == \"console\":\n EMAIL_BACKEND = \"django.core.mail.backends.console.EmailBackend\"\n\nif _EMAIL_BACKEND == \"smtp\":\n EMAIL_BACKEND = \"django.core.mail.backends.smtp.EmailBackend\"\n EMAIL_HOST = os.environ.get(\"DJANGO_EMAIL_HOST\")\n EMAIL_PORT = os.environ.get(\"DJANGO_EMAIL_PORT\", 25)\n EMAIL_HOST_USER = os.environ.get(\"DJANGO_EMAIL_HOST_USER\", \"\")\n EMAIL_HOST_PASSWORD = os.environ.get(\"DJANGO_EMAIL_HOST_PASSWORD\", \"\")\n EMAIL_USE_TLS = os.environ.get(\"DJANGO_EMAIL_USE_TLS\", \"1\") == \"1\"\n EMAIL_TIMEOUT = int(os.environ.get(\"EMAIL_TIMEOUT\", 
\"10\"))\n if EMAIL_HOST is None:\n logger.warning(\n \"The email host is set to the default of localhost, are you sure you don't want to set EMAIL_HOST?\"\n )\n EMAIL_HOST = \"localhost\"\n\n###############################################################################\n# Database settings\n# https://docs.djangoproject.com/en/dev/ref/settings/#databases\nDATABASE_ENGINE = from_env(\n \"DATABASE_ENGINE\", development=\"sqlite\", production=\"postgresql\", testing=None\n)\nif DATABASE_ENGINE == \"sqlite\":\n DATABASES = {\n \"default\": {\n \"ENGINE\": \"django.db.backends.sqlite3\",\n \"NAME\": os.path.join(BASE_DIR, \"db.sqlite3\"),\n }\n }\n\nif DATABASE_ENGINE == \"postgresql\":\n DATABASES = {\n \"default\": {\n \"ENGINE\": \"django.db.backends.postgresql\",\n \"USER\": os.environ.get(\"POSTGRES_USER\", \"concrexit\"),\n \"PASSWORD\": os.environ.get(\"POSTGRES_PASSWORD\", None),\n \"NAME\": os.environ.get(\"POSTGRES_DB\", \"\"),\n \"HOST\": os.environ.get(\"POSTGRES_HOST\", \"\"),\n \"PORT\": os.environ.get(\"POSTGRES_PORT\", \"5432\"),\n }\n }\n\nif DJANGO_ENV == \"testing\":\n DATABASES = {\n \"default\": {\n \"ENGINE\": \"django.db.backends.postgresql\",\n \"NAME\": \"thalia\",\n \"USER\": \"postgres\",\n \"PASSWORD\": \"postgres\",\n \"HOST\": \"postgres\",\n \"PORT\": 5432,\n },\n }\n\n###############################################################################\n# Firebase config\nFIREBASE_CREDENTIALS = os.environ.get(\"FIREBASE_CREDENTIALS\", \"{}\")\nif FIREBASE_CREDENTIALS != \"{}\":\n FIREBASE_CREDENTIALS = base64.urlsafe_b64decode(FIREBASE_CREDENTIALS)\nFIREBASE_CREDENTIALS = json.loads(FIREBASE_CREDENTIALS)\n\nif FIREBASE_CREDENTIALS != {}:\n from firebase_admin import initialize_app, credentials\n\n try:\n initialize_app(credential=credentials.Certificate(FIREBASE_CREDENTIALS))\n except ValueError as e:\n logger.error(\"Firebase application failed to initialise\")\n\n###############################################################################\n# GSuite config\nGSUITE_ADMIN_SCOPES = [\n \"https://www.googleapis.com/auth/admin.directory.group\",\n \"https://www.googleapis.com/auth/admin.directory.user\",\n \"https://www.googleapis.com/auth/apps.groups.settings\",\n]\n\nGSUITE_ADMIN_CREDENTIALS = os.environ.get(\"GSUITE_ADMIN_CREDENTIALS\", \"{}\")\nif GSUITE_ADMIN_CREDENTIALS != \"{}\":\n GSUITE_ADMIN_CREDENTIALS = base64.urlsafe_b64decode(GSUITE_ADMIN_CREDENTIALS)\nGSUITE_ADMIN_CREDENTIALS = json.loads(GSUITE_ADMIN_CREDENTIALS)\nGSUITE_ADMIN_USER = os.environ.get(\"GSUITE_ADMIN_USER\", \"[email protected]\")\nGSUITE_DOMAIN = from_env(\n \"GSUITE_DOMAIN\", development=\"thalia.localhost\", production=\"thalia.nu\"\n)\nGSUITE_MEMBERS_DOMAIN = from_env(\n \"GSUITE_MEMBERS_DOMAIN\",\n development=\"members.thalia.localhost\",\n production=\"members.thalia.nu\",\n)\nGSUITE_MEMBERS_AUTOSYNC = os.environ.get(\"GSUITE_MEMBERS_AUTOSYNC\", \"0\") == \"1\"\n\nif GSUITE_ADMIN_CREDENTIALS != {}:\n from google.oauth2 import service_account\n\n GSUITE_ADMIN_CREDENTIALS = service_account.Credentials.from_service_account_info(\n GSUITE_ADMIN_CREDENTIALS, scopes=GSUITE_ADMIN_SCOPES\n ).with_subject(GSUITE_ADMIN_USER)\n\nEMAIL_DOMAIN_BLACKLIST = [GSUITE_MEMBERS_DOMAIN]\n\n###############################################################################\n# Google maps API key and secrets\nGOOGLE_MAPS_API_KEY = os.environ.get(\"GOOGLE_MAPS_API_KEY\", \"\")\nGOOGLE_MAPS_API_SECRET = os.environ.get(\"GOOGLE_MAPS_API_SECRET\", \"\")\nGOOGLE_PLACES_API_KEY = 
os.environ.get(\"GOOGLE_PLACES_API_KEY\", \"\")\n\n###############################################################################\n# Conscribo settings\nCONSCRIBO_ACCOUNT = os.environ.get(\"CONSCRIBO_ACCOUNT\", \"\")\nCONSCRIBO_USER = os.environ.get(\"CONSCRIBO_USER\", \"\")\nCONSCRIBO_PASSWORD = os.environ.get(\"CONSCRIBO_PASSWORD\", \"\")\n\n###############################################################################\n# Sentry setup\nif \"SENTRY_DSN\" in os.environ:\n import sentry_sdk\n from sentry_sdk.integrations.django import DjangoIntegration\n\n # Pylint sees the faked init class that sentry uses for typing purposes\n # pylint: disable=abstract-class-instantiated\n sentry_sdk.init(\n dsn=os.environ.get(\"SENTRY_DSN\"),\n integrations=[DjangoIntegration()],\n release=SOURCE_COMMIT,\n send_default_pii=True,\n environment=DJANGO_ENV,\n traces_sample_rate=0.2,\n )\n\n\n###############################################################################\n# (Mostly) static settings\nINSTALLED_APPS = [\n \"django.contrib.auth\",\n \"django.contrib.contenttypes\",\n \"django.contrib.sessions\",\n \"django.contrib.messages\",\n \"django.contrib.staticfiles\",\n \"django.contrib.sitemaps\",\n # Dependencies\n \"oauth2_provider\",\n \"corsheaders\",\n \"django_bootstrap5\",\n \"tinymce\",\n \"rest_framework\",\n \"rest_framework.authtoken\",\n \"compressor\",\n \"debug_toolbar\",\n \"admin_auto_filters\",\n # Our apps\n # Directly link to the app config when applicable as recommended\n # by the docs: https://docs.djangoproject.com/en/2.0/ref/applications/\n \"thaliawebsite.apps.ThaliaWebsiteConfig\", # include for admin settings\n # Load django.contrib.admin after thaliawebsite so the admin page gets modified\n \"django.contrib.admin\",\n \"pushnotifications.apps.PushNotificationsConfig\",\n \"promotion.apps.PromotionConfig\",\n \"members.apps.MembersConfig\",\n \"documents.apps.DocumentsConfig\",\n \"activemembers.apps.ActiveMembersConfig\",\n \"photos.apps.PhotosConfig\",\n \"utils\",\n \"mailinglists.apps.MailinglistsConfig\",\n \"merchandise.apps.MerchandiseConfig\",\n \"thabloid.apps.ThabloidConfig\",\n \"partners.apps.PartnersConfig\",\n \"events.apps.EventsConfig\",\n \"pizzas.apps.PizzasConfig\",\n \"newsletters.apps.NewslettersConfig\",\n \"education.apps.EducationConfig\",\n \"announcements.apps.AnnouncementsConfig\",\n \"registrations.apps.RegistrationsConfig\",\n \"payments.apps.PaymentsConfig\",\n \"singlepages.apps.SinglepagesConfig\",\n \"shortlinks.apps.ShortLinkConfig\",\n \"sales.apps.SalesConfig\",\n]\n\nMIDDLEWARE = [\n \"debug_toolbar.middleware.DebugToolbarMiddleware\",\n \"django.middleware.security.SecurityMiddleware\",\n \"django.contrib.sessions.middleware.SessionMiddleware\",\n \"django.middleware.http.ConditionalGetMiddleware\",\n \"corsheaders.middleware.CorsMiddleware\",\n \"django.middleware.common.CommonMiddleware\",\n \"django.middleware.csrf.CsrfViewMiddleware\",\n \"django.contrib.auth.middleware.AuthenticationMiddleware\",\n \"django.contrib.messages.middleware.MessageMiddleware\",\n \"django.middleware.locale.LocaleMiddleware\",\n # Our middleware\n \"members.middleware.MemberMiddleware\",\n]\n\nif DJANGO_ENV in (\"development\", \"testing\"):\n INSTALLED_APPS += [\"django_template_check\"]\n\nif DJANGO_ENV == \"testing\":\n for x in (\n \"debug_toolbar.middleware.DebugToolbarMiddleware\",\n \"django.middleware.http.ConditionalGetMiddleware\",\n \"django.middleware.csrf.CsrfViewMiddleware\",\n ):\n MIDDLEWARE.remove(x)\n for x in (\"debug_toolbar\",):\n 
INSTALLED_APPS.remove(x)\n\nROOT_URLCONF = \"thaliawebsite.urls\"\n\nTEMPLATES = [\n {\n \"BACKEND\": \"django.template.backends.django.DjangoTemplates\",\n \"DIRS\": [os.path.join(BASE_DIR, \"templates\")],\n \"APP_DIRS\": setting(development=True, production=False),\n \"OPTIONS\": {\n \"context_processors\": [\n \"thaliawebsite.context_processors.source_commit\",\n \"django.template.context_processors.debug\",\n \"django.template.context_processors.request\",\n \"django.template.context_processors.media\",\n \"django.contrib.auth.context_processors.auth\",\n \"django.contrib.messages.context_processors.messages\",\n \"announcements.context_processors.announcements\",\n \"thaliawebsite.context_processors.thumbnail_sizes\",\n ],\n },\n },\n]\n\nif DJANGO_ENV in [\"production\", \"staging\"]:\n # Use caching template loader\n TEMPLATES[0][\"OPTIONS\"][\"loaders\"] = [\n (\n \"django.template.loaders.cached.Loader\",\n [\n \"django.template.loaders.filesystem.Loader\",\n \"django.template.loaders.app_directories.Loader\",\n ],\n )\n ]\n\n # Default logging: https://github.com/django/django/blob/master/django/utils/log.py\n # We disable mailing the admin.\n # Server errors will be sent to Sentry via the config below this.\n LOGGING = {\n \"version\": 1,\n \"formatters\": {\n \"verbose\": {\"format\": \"%(asctime)s %(name)s[%(levelname)s]: %(message)s\"},\n },\n \"handlers\": {\n \"console\": {\n \"level\": \"INFO\",\n \"class\": \"logging.StreamHandler\",\n \"formatter\": \"verbose\",\n }\n },\n \"loggers\": {\n \"django\": {\"handlers\": [], \"level\": \"INFO\"},\n \"\": {\"handlers\": [\"console\"], \"level\": \"INFO\"},\n },\n }\n\nWSGI_APPLICATION = \"thaliawebsite.wsgi.application\"\n\n# Login pages\nLOGIN_URL = \"/user/login/\"\n\nLOGIN_REDIRECT_URL = \"/\"\n\n# Cors configuration\nCORS_ORIGIN_ALLOW_ALL = True\nCORS_URLS_REGEX = r\"^/(?:api/v1|api/v2|user/oauth)/.*\"\n\n# OAuth configuration\nOIDC_RSA_PRIVATE_KEY = from_env(\"OIDC_RSA_PRIVATE_KEY\", testing=None)\nif OIDC_RSA_PRIVATE_KEY is not None:\n OIDC_RSA_PRIVATE_KEY = base64.urlsafe_b64decode(OIDC_RSA_PRIVATE_KEY)\n\nOAUTH2_PROVIDER = {\n \"OIDC_ENABLED\": True,\n \"OIDC_RSA_PRIVATE_KEY\": OIDC_RSA_PRIVATE_KEY,\n \"ALLOWED_REDIRECT_URI_SCHEMES\": setting(\n production=[\"https\", APP_OAUTH_SCHEME],\n staging=[\"http\", \"https\", APP_OAUTH_SCHEME],\n development=[\"http\", \"https\", APP_OAUTH_SCHEME],\n ),\n \"SCOPES\": {\n \"openid\": \"OpenID Connect\",\n \"read\": \"Authenticated read access to the website\",\n \"write\": \"Authenticated write access to the website\",\n \"activemembers:read\": \"Read access to committee, society and board groups\",\n \"announcements:read\": \"Read access to announcements\",\n \"events:read\": \"Read access to events and your event registrations\",\n \"events:register\": \"Write access to the state of your event registrations\",\n \"events:admin\": \"Admin access to the events\",\n \"food:read\": \"Read access to food events\",\n \"food:order\": \"Order access to food events\",\n \"food:admin\": \"Admin access to food events\",\n \"members:read\": \"Read access to the members directory\",\n \"photos:read\": \"Read access to photos\",\n \"profile:read\": \"Read access to your member profile\",\n \"profile:write\": \"Write access to your member profile\",\n \"pushnotifications:read\": \"Read access to push notifications\",\n \"pushnotifications:write\": \"Write access to push notifications\",\n \"partners:read\": \"Read access to partners\",\n \"payments:read\": \"Read access to payments\",\n 
\"payments:write\": \"Write access to payments\",\n \"payments:admin\": \"Admin access to payments\",\n \"sales:read\": \"Read access to your Point of Sale orders\",\n \"sales:order\": \"Place Point of Sale orders on your behalf\",\n \"sales:admin\": \"Admin access to Point of Sale orders\",\n },\n}\n\n# Password validation\n# https://docs.djangoproject.com/en/dev/ref/settings/#auth-password-validators\nAUTH_PASSWORD_VALIDATORS = [\n {\n \"NAME\": (\n \"django.contrib.auth.\"\n \"password_validation.UserAttributeSimilarityValidator\"\n ),\n },\n {\n \"NAME\": (\"django.contrib.auth.password_validation.MinimumLengthValidator\"),\n },\n {\n \"NAME\": (\"django.contrib.auth.password_validation.CommonPasswordValidator\"),\n },\n {\n \"NAME\": (\"django.contrib.auth.password_validation.NumericPasswordValidator\"),\n },\n]\n\nPASSWORD_HASHERS = setting(\n development=(\n \"django.contrib.auth.hashers.PBKDF2PasswordHasher\",\n \"django.contrib.auth.hashers.MD5PasswordHasher\",\n ),\n production=(\n \"django.contrib.auth.hashers.Argon2PasswordHasher\",\n \"django.contrib.auth.hashers.PBKDF2PasswordHasher\",\n \"django.contrib.auth.hashers.PBKDF2SHA1PasswordHasher\",\n \"django.contrib.auth.hashers.BCryptSHA256PasswordHasher\",\n \"django.contrib.auth.hashers.BCryptPasswordHasher\",\n ),\n testing=(\"django.contrib.auth.hashers.MD5PasswordHasher\",),\n)\n\nAUTHENTICATION_BACKENDS = [\n \"django.contrib.auth.backends.ModelBackend\",\n \"activemembers.backends.MemberGroupBackend\",\n]\n\nREST_FRAMEWORK = {\n \"DEFAULT_AUTHENTICATION_CLASSES\": (\n \"rest_framework.authentication.SessionAuthentication\",\n \"thaliawebsite.api.authentication.APIv1TokenAuthentication\",\n \"oauth2_provider.contrib.rest_framework.OAuth2Authentication\",\n ),\n \"DEFAULT_PAGINATION_CLASS\": \"thaliawebsite.api.pagination.APIv2LimitOffsetPagination\",\n \"PAGE_SIZE\": 50, # Only for API v2\n \"ALLOWED_VERSIONS\": [\"v1\", \"v2\", \"calendarjs\"],\n \"DEFAULT_VERSIONING_CLASS\": \"rest_framework.versioning.NamespaceVersioning\",\n \"DEFAULT_SCHEMA_CLASS\": \"thaliawebsite.api.openapi.OAuthAutoSchema\",\n \"DEFAULT_THROTTLE_CLASSES\": [\n \"thaliawebsite.api.throttling.AnonRateThrottle\",\n \"thaliawebsite.api.throttling.UserRateThrottle\",\n ],\n \"DEFAULT_THROTTLE_RATES\": setting(\n production={\"anon\": \"100/day\", \"user\": \"20/min\"},\n staging={\"anon\": \"100/day\", \"user\": \"20/min\"},\n development={\"anon\": None, \"user\": None},\n ),\n}\n\n# Internationalization\n# https://docs.djangoproject.com/en/dev/topics/i18n/\n\nDATETIME_FORMAT = \"j M, Y, H:i\"\n\nLANGUAGE_CODE = \"en\"\n\nTIME_ZONE = \"Europe/Amsterdam\"\n\nUSE_I18N = True\n\nUSE_L10N = False\n\nUSE_TZ = True\n\nLANGUAGES = [(\"en\", _(\"English\"))]\n\nLOCALE_PATHS = (\"locale\",)\n\n# Static files\nSTATICFILES_FINDERS = (\n \"django.contrib.staticfiles.finders.FileSystemFinder\",\n \"django.contrib.staticfiles.finders.AppDirectoriesFinder\",\n # other finders\n \"compressor.finders.CompressorFinder\",\n)\n\n# Compressor settings\nCOMPRESS_ENABLED = True\n\nCOMPRESS_PRECOMPILERS = ((\"text/x-scss\", \"django_libsass.SassCompiler\"),)\n\nCOMPRESS_FILTERS = {\n \"css\": [\n \"compressor.filters.css_default.CssAbsoluteFilter\",\n \"compressor.filters.cssmin.rCSSMinFilter\",\n ],\n \"js\": [\"compressor.filters.jsmin.JSMinFilter\"],\n}\n\n# Precompiler settings\nSTATIC_PRECOMPILER_LIST_FILES = True\n\n# See utils/model/signals.py for explanation\nSUSPEND_SIGNALS = False\n\nTHUMBNAIL_SIZES = {\n \"small\": \"300x300\",\n \"medium\": \"600x600\",\n 
\"large\": \"1200x900\",\n \"avatar_large\": \"900x900\",\n \"slide_small\": \"500x108\",\n \"slide_medium\": \"1000x215\",\n \"slide\": \"2000x430\",\n}\n\n# Photos settings\nPHOTO_UPLOAD_SIZE = 2560, 1440\n\n# TinyMCE config\nTINYMCE_DEFAULT_CONFIG = {\n \"max_height\": 500,\n \"menubar\": False,\n \"plugins\": \"autolink autoresize link image code media paste\",\n \"toolbar\": \"h2 h3 | bold italic underline strikethrough | image media | link unlink | \"\n \"bullist numlist | undo redo | code\",\n \"contextmenu\": \"bold italic underline strikethrough | link\",\n \"paste_as_text\": True,\n \"relative_urls\": False,\n \"remove_script_host\": False,\n \"autoresize_bottom_margin\": 50,\n}\n\nBOOTSTRAP5 = {\"required_css_class\": \"required-field\"}\n\n# https://docs.djangoproject.com/en/dev/ref/settings/#default-exception-reporter-filter\nDEFAULT_EXCEPTION_REPORTER_FILTER = (\n \"utils.exception_filter.ThaliaSafeExceptionReporterFilter\"\n)\n\n# Make sure the locations in django.po files don't include line nrs.\nmakemessages.Command.xgettext_options.append(\"--add-location=file\")\n", "path": "website/thaliawebsite/settings.py" } ]
diff --git a/website/thaliawebsite/settings.py b/website/thaliawebsite/settings.py index da955e0e5..912b48815 100644 --- a/website/thaliawebsite/settings.py +++ b/website/thaliawebsite/settings.py @@ -627,13 +627,15 @@ def from_env( # Internationalization # https://docs.djangoproject.com/en/dev/topics/i18n/ +DATETIME_FORMAT = "j M, Y, H:i" + LANGUAGE_CODE = "en" TIME_ZONE = "Europe/Amsterdam" USE_I18N = True -USE_L10N = True +USE_L10N = False USE_TZ = True
litestar-org__litestar-1231
Bug: LoggingMiddleware is sending obfuscated session id to client **Describe the bug** When using the logging middleware and session middleware together, the logging middleware's cookie obfuscation is overwriting the session name with "*****" and that name is being pushed down to the client. The initial set-cookie has the correct session id but subsequent requests do not. **To Reproduce** I created a test function in tests/middleware/test_logging_middleware.py which I believe confirms the bug: ```python def test_logging_with_session_middleware() -> None: @post("/") async def set_session(request: Request) -> None: request.set_session({"hello": "world"}) @get("/") async def get_session() -> None: pass logging_middleware_config = LoggingMiddlewareConfig() session_config = MemoryBackendConfig() with create_test_client( [set_session, get_session], logging_config=LoggingConfig(), middleware=[logging_middleware_config.middleware, session_config.middleware], ) as client: response = client.post("/") assert response.status_code == HTTP_201_CREATED assert len(client.cookies.get("session", "")) == 64 response = client.get("/") assert response.status_code == HTTP_200_OK assert len(client.cookies.get("session", "")) == 64 ``` The test results in the following exception: ``` > assert len(client.cookies.get("session", "")) == 64 E AssertionError: assert 5 == 64 E + where 5 = len('*****') E + where '*****' = <bound method Cookies.get of <Cookies[<Cookie session=***** for testserver.local />]>>('session', '') E + where <bound method Cookies.get of <Cookies[<Cookie session=***** for testserver.local />]>> = <Cookies[<Cookie session=***** for testserver.local />]>.get E + where <Cookies[<Cookie session=***** for testserver.local />]> = <starlite.testing.client.sync_client.TestClient object at 0x7f4cbf7bea40>.cookies ``` **Additional Context** Starlite version: 1.51.4
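The symptom described above is consistent with the obfuscation helper editing the cookie mapping it is handed in place: once the logging middleware masks the value for its log payload, any later consumer of that same mapping — such as the session middleware re-emitting the session cookie — only sees `*****`. A minimal, self-contained sketch of that failure mode (illustrative names, not Starlite's actual call chain):

```python
def obfuscate_in_place(values: dict, fields_to_obfuscate: set) -> dict:
    # pre-fix behaviour: rewrites the caller's dict while building the log payload
    for key in values:
        if key.lower() in fields_to_obfuscate:
            values[key] = "*****"
    return values


request_cookies = {"session": "abc123"}  # mapping shared with other middleware (illustrative)
log_payload = obfuscate_in_place(request_cookies, {"session"})

print(log_payload)      # {'session': '*****'} -- fine for the log
print(request_cookies)  # {'session': '*****'} -- but the shared mapping is now masked too
```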
[ { "content": "from __future__ import annotations\n\nfrom typing import TYPE_CHECKING, Any, Callable, Coroutine, Literal, cast\n\nfrom typing_extensions import TypedDict\n\nfrom starlite.connection.request import Request\nfrom starlite.datastructures.upload_file import UploadFile\nfrom starlite.enums import HttpMethod, RequestEncodingType\nfrom starlite.parsers import parse_cookie_string\n\nif TYPE_CHECKING:\n from starlite.connection import ASGIConnection\n from starlite.types import Method\n from starlite.types.asgi_types import HTTPResponseBodyEvent, HTTPResponseStartEvent\n\n\ndef obfuscate(values: dict[str, Any], fields_to_obfuscate: set[str]) -> dict[str, Any]:\n \"\"\"Obfuscate values in a dictionary, replacing values with `******`\n\n Args:\n values: A dictionary of strings\n fields_to_obfuscate: keys to obfuscate\n\n Returns:\n A dictionary with obfuscated strings\n \"\"\"\n for key in values:\n if key.lower() in fields_to_obfuscate:\n values[key] = \"*****\"\n return values\n\n\nRequestExtractorField = Literal[\n \"path\", \"method\", \"content_type\", \"headers\", \"cookies\", \"query\", \"path_params\", \"body\", \"scheme\", \"client\"\n]\n\nResponseExtractorField = Literal[\"status_code\", \"headers\", \"body\", \"cookies\"]\n\n\nclass ExtractedRequestData(TypedDict, total=False):\n \"\"\"Dictionary representing extracted request data.\"\"\"\n\n body: Coroutine\n client: tuple[str, int]\n content_type: tuple[str, dict[str, str]]\n cookies: dict[str, str]\n headers: dict[str, str]\n method: Method\n path: str\n path_params: dict[str, Any]\n query: bytes | dict[str, Any]\n scheme: str\n\n\nclass ConnectionDataExtractor:\n \"\"\"Utility class to extract data from an :class:`ASGIConnection <starlite.connection.ASGIConnection>`,\n :class:`Request <starlite.connection.Request>` or :class:`WebSocket <starlite.connection.WebSocket>` instance.\n \"\"\"\n\n __slots__ = (\n \"connection_extractors\",\n \"request_extractors\",\n \"parse_body\",\n \"parse_query\",\n \"obfuscate_headers\",\n \"obfuscate_cookies\",\n )\n\n def __init__(\n self,\n extract_body: bool = True,\n extract_client: bool = True,\n extract_content_type: bool = True,\n extract_cookies: bool = True,\n extract_headers: bool = True,\n extract_method: bool = True,\n extract_path: bool = True,\n extract_path_params: bool = True,\n extract_query: bool = True,\n extract_scheme: bool = True,\n obfuscate_cookies: set[str] | None = None,\n obfuscate_headers: set[str] | None = None,\n parse_body: bool = False,\n parse_query: bool = False,\n ):\n \"\"\"Initialize ``ConnectionDataExtractor``\n\n Args:\n extract_body: Whether to extract body, (for requests only).\n extract_client: Whether to extract the client (host, port) mapping.\n extract_content_type: Whether to extract the content type and any options.\n extract_cookies: Whether to extract cookies.\n extract_headers: Whether to extract headers.\n extract_method: Whether to extract the HTTP method, (for requests only).\n extract_path: Whether to extract the path.\n extract_path_params: Whether to extract path parameters.\n extract_query: Whether to extract query parameters.\n extract_scheme: Whether to extract the http scheme.\n obfuscate_headers: headers keys to obfuscate. Obfuscated values are replaced with '*****'.\n obfuscate_cookies: cookie keys to obfuscate. 
Obfuscated values are replaced with '*****'.\n parse_body: Whether to parse the body value or return the raw byte string, (for requests only).\n parse_query: Whether to parse query parameters or return the raw byte string.\n \"\"\"\n self.parse_body = parse_body\n self.parse_query = parse_query\n self.obfuscate_headers = {h.lower() for h in (obfuscate_headers or set())}\n self.obfuscate_cookies = {c.lower() for c in (obfuscate_cookies or set())}\n self.connection_extractors: dict[str, Callable[[ASGIConnection[Any, Any, Any, Any]], Any]] = {}\n self.request_extractors: dict[RequestExtractorField, Callable[[Request[Any, Any, Any]], Any]] = {}\n if extract_scheme:\n self.connection_extractors[\"scheme\"] = self.extract_scheme\n if extract_client:\n self.connection_extractors[\"client\"] = self.extract_client\n if extract_path:\n self.connection_extractors[\"path\"] = self.extract_path\n if extract_headers:\n self.connection_extractors[\"headers\"] = self.extract_headers\n if extract_cookies:\n self.connection_extractors[\"cookies\"] = self.extract_cookies\n if extract_query:\n self.connection_extractors[\"query\"] = self.extract_query\n if extract_path_params:\n self.connection_extractors[\"path_params\"] = self.extract_path_params\n if extract_method:\n self.request_extractors[\"method\"] = self.extract_method\n if extract_content_type:\n self.request_extractors[\"content_type\"] = self.extract_content_type\n if extract_body:\n self.request_extractors[\"body\"] = self.extract_body\n\n def __call__(self, connection: ASGIConnection[Any, Any, Any, Any]) -> ExtractedRequestData:\n \"\"\"Extract data from the connection, returning a dictionary of values.\n\n Notes:\n - The value for ``body`` - if present - is an unresolved Coroutine and as such should be awaited by the receiver.\n\n Args:\n connection: An ASGI connection or its subclasses.\n\n Returns:\n A string keyed dictionary of extracted values.\n \"\"\"\n extractors = (\n {**self.connection_extractors, **self.request_extractors} # type: ignore\n if isinstance(connection, Request)\n else self.connection_extractors\n )\n return cast(\"ExtractedRequestData\", {key: extractor(connection) for key, extractor in extractors.items()})\n\n @staticmethod\n def extract_scheme(connection: ASGIConnection[Any, Any, Any, Any]) -> str:\n \"\"\"Extract the scheme from an ``ASGIConnection``\n\n Args:\n connection: An :class:`ASGIConnection <starlite.connection.ASGIConnection>` instance.\n\n Returns:\n The connection's scope[\"scheme\"] value\n \"\"\"\n return connection.scope[\"scheme\"]\n\n @staticmethod\n def extract_client(connection: ASGIConnection[Any, Any, Any, Any]) -> tuple[str, int]:\n \"\"\"Extract the client from an ``ASGIConnection``\n\n Args:\n connection: An :class:`ASGIConnection <starlite.connection.ASGIConnection>` instance.\n\n Returns:\n The connection's scope[\"client\"] value or a default value.\n \"\"\"\n return connection.scope.get(\"client\") or (\"\", 0)\n\n @staticmethod\n def extract_path(connection: ASGIConnection[Any, Any, Any, Any]) -> str:\n \"\"\"Extract the path from an ``ASGIConnection``\n\n Args:\n connection: An :class:`ASGIConnection <starlite.connection.ASGIConnection>` instance.\n\n Returns:\n The connection's scope[\"path\"] value\n \"\"\"\n return connection.scope[\"path\"]\n\n def extract_headers(self, connection: ASGIConnection[Any, Any, Any, Any]) -> dict[str, str]:\n \"\"\"Extract headers from an ``ASGIConnection``\n\n Args:\n connection: An :class:`ASGIConnection <starlite.connection.ASGIConnection>` instance.\n\n 
Returns:\n A dictionary with the connection's headers.\n \"\"\"\n headers = {k.decode(\"latin-1\"): v.decode(\"latin-1\") for k, v in connection.scope[\"headers\"]}\n return obfuscate(headers, self.obfuscate_headers) if self.obfuscate_headers else headers\n\n def extract_cookies(self, connection: ASGIConnection[Any, Any, Any, Any]) -> dict[str, str]:\n \"\"\"Extract cookies from an ``ASGIConnection``\n\n Args:\n connection: An :class:`ASGIConnection <starlite.connection.ASGIConnection>` instance.\n\n Returns:\n A dictionary with the connection's cookies.\n \"\"\"\n return obfuscate(connection.cookies, self.obfuscate_cookies) if self.obfuscate_cookies else connection.cookies\n\n def extract_query(self, connection: ASGIConnection[Any, Any, Any, Any]) -> Any:\n \"\"\"Extract query from an ``ASGIConnection``\n\n Args:\n connection: An :class:`ASGIConnection <starlite.connection.ASGIConnection>` instance.\n\n Returns:\n Either a dictionary with the connection's parsed query string or the raw query byte-string.\n \"\"\"\n return connection.query_params.dict() if self.parse_query else connection.scope.get(\"query_string\", b\"\")\n\n @staticmethod\n def extract_path_params(connection: ASGIConnection[Any, Any, Any, Any]) -> dict[str, Any]:\n \"\"\"Extract the path parameters from an ``ASGIConnection``\n\n Args:\n connection: An :class:`ASGIConnection <starlite.connection.ASGIConnection>` instance.\n\n Returns:\n A dictionary with the connection's path parameters.\n \"\"\"\n return connection.path_params\n\n @staticmethod\n def extract_method(request: Request[Any, Any, Any]) -> Method:\n \"\"\"Extract the method from an ``ASGIConnection``\n\n Args:\n request: A :class:`Request <starlite.connection.Request>` instance.\n\n Returns:\n The request's scope[\"method\"] value.\n \"\"\"\n return request.scope[\"method\"]\n\n @staticmethod\n def extract_content_type(request: Request[Any, Any, Any]) -> tuple[str, dict[str, str]]:\n \"\"\"Extract the content-type from an ``ASGIConnection``\n\n Args:\n request: A :class:`Request <starlite.connection.Request>` instance.\n\n Returns:\n A tuple containing the request's parsed 'Content-Type' header.\n \"\"\"\n return request.content_type\n\n async def extract_body(self, request: \"Request[Any, Any, Any]\") -> Any:\n \"\"\"Extract the body from an ``ASGIConnection``\n\n Args:\n request: A :class:`Request <starlite.connection.Request>` instance.\n\n Returns:\n Either the parsed request body or the raw byte-string.\n \"\"\"\n if request.method != HttpMethod.GET:\n if not self.parse_body:\n return await request.body()\n request_encoding_type = request.content_type[0]\n if request_encoding_type == RequestEncodingType.JSON:\n return await request.json()\n form_data = await request.form()\n if request_encoding_type == RequestEncodingType.URL_ENCODED:\n return dict(form_data)\n return {\n key: repr(value) if isinstance(value, UploadFile) else value for key, value in form_data.multi_items()\n }\n return None\n\n\nclass ExtractedResponseData(TypedDict, total=False):\n \"\"\"Dictionary representing extracted response data.\"\"\"\n\n body: bytes\n status_code: int\n headers: dict[str, str]\n cookies: dict[str, str]\n\n\nclass ResponseDataExtractor:\n \"\"\"Utility class to extract data from a ``Message``\"\"\"\n\n __slots__ = (\"extractors\", \"parse_headers\", \"obfuscate_headers\", \"obfuscate_cookies\")\n\n def __init__(\n self,\n extract_body: bool = True,\n extract_cookies: bool = True,\n extract_headers: bool = True,\n extract_status_code: bool = True,\n 
obfuscate_cookies: set[str] | None = None,\n obfuscate_headers: set[str] | None = None,\n ):\n \"\"\"Initialize ``ResponseDataExtractor`` with options.\n\n Args:\n extract_body: Whether to extract the body.\n extract_cookies: Whether to extract the cookies.\n extract_headers: Whether to extract the headers.\n extract_status_code: Whether to extract the status code.\n obfuscate_cookies: cookie keys to obfuscate. Obfuscated values are replaced with '*****'.\n obfuscate_headers: headers keys to obfuscate. Obfuscated values are replaced with '*****'.\n \"\"\"\n self.obfuscate_headers = {h.lower() for h in (obfuscate_headers or set())}\n self.obfuscate_cookies = {c.lower() for c in (obfuscate_cookies or set())}\n self.extractors: dict[\n ResponseExtractorField, Callable[[tuple[HTTPResponseStartEvent, HTTPResponseBodyEvent]], Any]\n ] = {}\n if extract_body:\n self.extractors[\"body\"] = self.extract_response_body\n if extract_status_code:\n self.extractors[\"status_code\"] = self.extract_status_code\n if extract_headers:\n self.extractors[\"headers\"] = self.extract_headers\n if extract_cookies:\n self.extractors[\"cookies\"] = self.extract_cookies\n\n def __call__(self, messages: tuple[HTTPResponseStartEvent, HTTPResponseBodyEvent]) -> ExtractedResponseData:\n \"\"\"Extract data from the response, returning a dictionary of values.\n\n Args:\n messages: A tuple containing\n :class:`HTTPResponseStartEvent <starlite.types.asgi_types.HTTPResponseStartEvent>`\n and :class:`HTTPResponseBodyEvent <starlite.types.asgi_types.HTTPResponseBodyEvent>`.\n\n Returns:\n A string keyed dictionary of extracted values.\n \"\"\"\n return cast(\"ExtractedResponseData\", {key: extractor(messages) for key, extractor in self.extractors.items()})\n\n @staticmethod\n def extract_response_body(messages: tuple[HTTPResponseStartEvent, HTTPResponseBodyEvent]) -> bytes:\n \"\"\"Extract the response body from a ``Message``\n\n Args:\n messages: A tuple containing\n :class:`HTTPResponseStartEvent <starlite.types.asgi_types.HTTPResponseStartEvent>`\n and :class:`HTTPResponseBodyEvent <starlite.types.asgi_types.HTTPResponseBodyEvent>`.\n\n Returns:\n The Response's body as a byte-string.\n \"\"\"\n return messages[1][\"body\"]\n\n @staticmethod\n def extract_status_code(messages: tuple[HTTPResponseStartEvent, HTTPResponseBodyEvent]) -> int:\n \"\"\"Extract a status code from a ``Message``\n\n Args:\n messages: A tuple containing\n :class:`HTTPResponseStartEvent <starlite.types.asgi_types.HTTPResponseStartEvent>`\n and :class:`HTTPResponseBodyEvent <starlite.types.asgi_types.HTTPResponseBodyEvent>`.\n\n Returns:\n The Response's status-code.\n \"\"\"\n return messages[0][\"status\"]\n\n def extract_headers(self, messages: tuple[HTTPResponseStartEvent, HTTPResponseBodyEvent]) -> dict[str, str]:\n \"\"\"Extract headers from a ``Message``\n\n Args:\n messages: A tuple containing\n :class:`HTTPResponseStartEvent <starlite.types.asgi_types.HTTPResponseStartEvent>`\n and :class:`HTTPResponseBodyEvent <starlite.types.asgi_types.HTTPResponseBodyEvent>`.\n\n Returns:\n The Response's headers dict.\n \"\"\"\n headers = {\n key.decode(\"latin-1\"): value.decode(\"latin-1\")\n for key, value in filter(lambda x: x[0].lower() != b\"set-cookie\", messages[0][\"headers\"])\n }\n return (\n obfuscate(\n headers,\n self.obfuscate_headers,\n )\n if self.obfuscate_headers\n else headers\n )\n\n def extract_cookies(self, messages: tuple[HTTPResponseStartEvent, HTTPResponseBodyEvent]) -> dict[str, str]:\n \"\"\"Extract cookies from a ``Message``\n\n 
Args:\n messages: A tuple containing\n :class:`HTTPResponseStartEvent <starlite.types.asgi_types.HTTPResponseStartEvent>`\n and :class:`HTTPResponseBodyEvent <starlite.types.asgi_types.HTTPResponseBodyEvent>`.\n\n Returns:\n The Response's cookies dict.\n \"\"\"\n cookie_string = \";\".join(\n list( # noqa: C417\n map(\n lambda x: x[1].decode(\"latin-1\"),\n filter(lambda x: x[0].lower() == b\"set-cookie\", messages[0][\"headers\"]),\n )\n )\n )\n if cookie_string:\n parsed_cookies = parse_cookie_string(cookie_string)\n return obfuscate(parsed_cookies, self.obfuscate_cookies) if self.obfuscate_cookies else parsed_cookies\n return {}\n", "path": "starlite/utils/extractors.py" } ]
[ { "content": "from __future__ import annotations\n\nfrom typing import TYPE_CHECKING, Any, Callable, Coroutine, Literal, cast\n\nfrom typing_extensions import TypedDict\n\nfrom starlite.connection.request import Request\nfrom starlite.datastructures.upload_file import UploadFile\nfrom starlite.enums import HttpMethod, RequestEncodingType\nfrom starlite.parsers import parse_cookie_string\n\nif TYPE_CHECKING:\n from starlite.connection import ASGIConnection\n from starlite.types import Method\n from starlite.types.asgi_types import HTTPResponseBodyEvent, HTTPResponseStartEvent\n\n\ndef obfuscate(values: dict[str, Any], fields_to_obfuscate: set[str]) -> dict[str, Any]:\n \"\"\"Obfuscate values in a dictionary, replacing values with `******`\n\n Args:\n values: A dictionary of strings\n fields_to_obfuscate: keys to obfuscate\n\n Returns:\n A dictionary with obfuscated strings\n \"\"\"\n return {key: \"*****\" if key.lower() in fields_to_obfuscate else value for key, value in values.items()}\n\n\nRequestExtractorField = Literal[\n \"path\", \"method\", \"content_type\", \"headers\", \"cookies\", \"query\", \"path_params\", \"body\", \"scheme\", \"client\"\n]\n\nResponseExtractorField = Literal[\"status_code\", \"headers\", \"body\", \"cookies\"]\n\n\nclass ExtractedRequestData(TypedDict, total=False):\n \"\"\"Dictionary representing extracted request data.\"\"\"\n\n body: Coroutine\n client: tuple[str, int]\n content_type: tuple[str, dict[str, str]]\n cookies: dict[str, str]\n headers: dict[str, str]\n method: Method\n path: str\n path_params: dict[str, Any]\n query: bytes | dict[str, Any]\n scheme: str\n\n\nclass ConnectionDataExtractor:\n \"\"\"Utility class to extract data from an :class:`ASGIConnection <starlite.connection.ASGIConnection>`,\n :class:`Request <starlite.connection.Request>` or :class:`WebSocket <starlite.connection.WebSocket>` instance.\n \"\"\"\n\n __slots__ = (\n \"connection_extractors\",\n \"request_extractors\",\n \"parse_body\",\n \"parse_query\",\n \"obfuscate_headers\",\n \"obfuscate_cookies\",\n )\n\n def __init__(\n self,\n extract_body: bool = True,\n extract_client: bool = True,\n extract_content_type: bool = True,\n extract_cookies: bool = True,\n extract_headers: bool = True,\n extract_method: bool = True,\n extract_path: bool = True,\n extract_path_params: bool = True,\n extract_query: bool = True,\n extract_scheme: bool = True,\n obfuscate_cookies: set[str] | None = None,\n obfuscate_headers: set[str] | None = None,\n parse_body: bool = False,\n parse_query: bool = False,\n ):\n \"\"\"Initialize ``ConnectionDataExtractor``\n\n Args:\n extract_body: Whether to extract body, (for requests only).\n extract_client: Whether to extract the client (host, port) mapping.\n extract_content_type: Whether to extract the content type and any options.\n extract_cookies: Whether to extract cookies.\n extract_headers: Whether to extract headers.\n extract_method: Whether to extract the HTTP method, (for requests only).\n extract_path: Whether to extract the path.\n extract_path_params: Whether to extract path parameters.\n extract_query: Whether to extract query parameters.\n extract_scheme: Whether to extract the http scheme.\n obfuscate_headers: headers keys to obfuscate. Obfuscated values are replaced with '*****'.\n obfuscate_cookies: cookie keys to obfuscate. 
Obfuscated values are replaced with '*****'.\n parse_body: Whether to parse the body value or return the raw byte string, (for requests only).\n parse_query: Whether to parse query parameters or return the raw byte string.\n \"\"\"\n self.parse_body = parse_body\n self.parse_query = parse_query\n self.obfuscate_headers = {h.lower() for h in (obfuscate_headers or set())}\n self.obfuscate_cookies = {c.lower() for c in (obfuscate_cookies or set())}\n self.connection_extractors: dict[str, Callable[[ASGIConnection[Any, Any, Any, Any]], Any]] = {}\n self.request_extractors: dict[RequestExtractorField, Callable[[Request[Any, Any, Any]], Any]] = {}\n if extract_scheme:\n self.connection_extractors[\"scheme\"] = self.extract_scheme\n if extract_client:\n self.connection_extractors[\"client\"] = self.extract_client\n if extract_path:\n self.connection_extractors[\"path\"] = self.extract_path\n if extract_headers:\n self.connection_extractors[\"headers\"] = self.extract_headers\n if extract_cookies:\n self.connection_extractors[\"cookies\"] = self.extract_cookies\n if extract_query:\n self.connection_extractors[\"query\"] = self.extract_query\n if extract_path_params:\n self.connection_extractors[\"path_params\"] = self.extract_path_params\n if extract_method:\n self.request_extractors[\"method\"] = self.extract_method\n if extract_content_type:\n self.request_extractors[\"content_type\"] = self.extract_content_type\n if extract_body:\n self.request_extractors[\"body\"] = self.extract_body\n\n def __call__(self, connection: ASGIConnection[Any, Any, Any, Any]) -> ExtractedRequestData:\n \"\"\"Extract data from the connection, returning a dictionary of values.\n\n Notes:\n - The value for ``body`` - if present - is an unresolved Coroutine and as such should be awaited by the receiver.\n\n Args:\n connection: An ASGI connection or its subclasses.\n\n Returns:\n A string keyed dictionary of extracted values.\n \"\"\"\n extractors = (\n {**self.connection_extractors, **self.request_extractors} # type: ignore\n if isinstance(connection, Request)\n else self.connection_extractors\n )\n return cast(\"ExtractedRequestData\", {key: extractor(connection) for key, extractor in extractors.items()})\n\n @staticmethod\n def extract_scheme(connection: ASGIConnection[Any, Any, Any, Any]) -> str:\n \"\"\"Extract the scheme from an ``ASGIConnection``\n\n Args:\n connection: An :class:`ASGIConnection <starlite.connection.ASGIConnection>` instance.\n\n Returns:\n The connection's scope[\"scheme\"] value\n \"\"\"\n return connection.scope[\"scheme\"]\n\n @staticmethod\n def extract_client(connection: ASGIConnection[Any, Any, Any, Any]) -> tuple[str, int]:\n \"\"\"Extract the client from an ``ASGIConnection``\n\n Args:\n connection: An :class:`ASGIConnection <starlite.connection.ASGIConnection>` instance.\n\n Returns:\n The connection's scope[\"client\"] value or a default value.\n \"\"\"\n return connection.scope.get(\"client\") or (\"\", 0)\n\n @staticmethod\n def extract_path(connection: ASGIConnection[Any, Any, Any, Any]) -> str:\n \"\"\"Extract the path from an ``ASGIConnection``\n\n Args:\n connection: An :class:`ASGIConnection <starlite.connection.ASGIConnection>` instance.\n\n Returns:\n The connection's scope[\"path\"] value\n \"\"\"\n return connection.scope[\"path\"]\n\n def extract_headers(self, connection: ASGIConnection[Any, Any, Any, Any]) -> dict[str, str]:\n \"\"\"Extract headers from an ``ASGIConnection``\n\n Args:\n connection: An :class:`ASGIConnection <starlite.connection.ASGIConnection>` instance.\n\n 
Returns:\n A dictionary with the connection's headers.\n \"\"\"\n headers = {k.decode(\"latin-1\"): v.decode(\"latin-1\") for k, v in connection.scope[\"headers\"]}\n return obfuscate(headers, self.obfuscate_headers) if self.obfuscate_headers else headers\n\n def extract_cookies(self, connection: ASGIConnection[Any, Any, Any, Any]) -> dict[str, str]:\n \"\"\"Extract cookies from an ``ASGIConnection``\n\n Args:\n connection: An :class:`ASGIConnection <starlite.connection.ASGIConnection>` instance.\n\n Returns:\n A dictionary with the connection's cookies.\n \"\"\"\n return obfuscate(connection.cookies, self.obfuscate_cookies) if self.obfuscate_cookies else connection.cookies\n\n def extract_query(self, connection: ASGIConnection[Any, Any, Any, Any]) -> Any:\n \"\"\"Extract query from an ``ASGIConnection``\n\n Args:\n connection: An :class:`ASGIConnection <starlite.connection.ASGIConnection>` instance.\n\n Returns:\n Either a dictionary with the connection's parsed query string or the raw query byte-string.\n \"\"\"\n return connection.query_params.dict() if self.parse_query else connection.scope.get(\"query_string\", b\"\")\n\n @staticmethod\n def extract_path_params(connection: ASGIConnection[Any, Any, Any, Any]) -> dict[str, Any]:\n \"\"\"Extract the path parameters from an ``ASGIConnection``\n\n Args:\n connection: An :class:`ASGIConnection <starlite.connection.ASGIConnection>` instance.\n\n Returns:\n A dictionary with the connection's path parameters.\n \"\"\"\n return connection.path_params\n\n @staticmethod\n def extract_method(request: Request[Any, Any, Any]) -> Method:\n \"\"\"Extract the method from an ``ASGIConnection``\n\n Args:\n request: A :class:`Request <starlite.connection.Request>` instance.\n\n Returns:\n The request's scope[\"method\"] value.\n \"\"\"\n return request.scope[\"method\"]\n\n @staticmethod\n def extract_content_type(request: Request[Any, Any, Any]) -> tuple[str, dict[str, str]]:\n \"\"\"Extract the content-type from an ``ASGIConnection``\n\n Args:\n request: A :class:`Request <starlite.connection.Request>` instance.\n\n Returns:\n A tuple containing the request's parsed 'Content-Type' header.\n \"\"\"\n return request.content_type\n\n async def extract_body(self, request: \"Request[Any, Any, Any]\") -> Any:\n \"\"\"Extract the body from an ``ASGIConnection``\n\n Args:\n request: A :class:`Request <starlite.connection.Request>` instance.\n\n Returns:\n Either the parsed request body or the raw byte-string.\n \"\"\"\n if request.method != HttpMethod.GET:\n if not self.parse_body:\n return await request.body()\n request_encoding_type = request.content_type[0]\n if request_encoding_type == RequestEncodingType.JSON:\n return await request.json()\n form_data = await request.form()\n if request_encoding_type == RequestEncodingType.URL_ENCODED:\n return dict(form_data)\n return {\n key: repr(value) if isinstance(value, UploadFile) else value for key, value in form_data.multi_items()\n }\n return None\n\n\nclass ExtractedResponseData(TypedDict, total=False):\n \"\"\"Dictionary representing extracted response data.\"\"\"\n\n body: bytes\n status_code: int\n headers: dict[str, str]\n cookies: dict[str, str]\n\n\nclass ResponseDataExtractor:\n \"\"\"Utility class to extract data from a ``Message``\"\"\"\n\n __slots__ = (\"extractors\", \"parse_headers\", \"obfuscate_headers\", \"obfuscate_cookies\")\n\n def __init__(\n self,\n extract_body: bool = True,\n extract_cookies: bool = True,\n extract_headers: bool = True,\n extract_status_code: bool = True,\n 
obfuscate_cookies: set[str] | None = None,\n obfuscate_headers: set[str] | None = None,\n ):\n \"\"\"Initialize ``ResponseDataExtractor`` with options.\n\n Args:\n extract_body: Whether to extract the body.\n extract_cookies: Whether to extract the cookies.\n extract_headers: Whether to extract the headers.\n extract_status_code: Whether to extract the status code.\n obfuscate_cookies: cookie keys to obfuscate. Obfuscated values are replaced with '*****'.\n obfuscate_headers: headers keys to obfuscate. Obfuscated values are replaced with '*****'.\n \"\"\"\n self.obfuscate_headers = {h.lower() for h in (obfuscate_headers or set())}\n self.obfuscate_cookies = {c.lower() for c in (obfuscate_cookies or set())}\n self.extractors: dict[\n ResponseExtractorField, Callable[[tuple[HTTPResponseStartEvent, HTTPResponseBodyEvent]], Any]\n ] = {}\n if extract_body:\n self.extractors[\"body\"] = self.extract_response_body\n if extract_status_code:\n self.extractors[\"status_code\"] = self.extract_status_code\n if extract_headers:\n self.extractors[\"headers\"] = self.extract_headers\n if extract_cookies:\n self.extractors[\"cookies\"] = self.extract_cookies\n\n def __call__(self, messages: tuple[HTTPResponseStartEvent, HTTPResponseBodyEvent]) -> ExtractedResponseData:\n \"\"\"Extract data from the response, returning a dictionary of values.\n\n Args:\n messages: A tuple containing\n :class:`HTTPResponseStartEvent <starlite.types.asgi_types.HTTPResponseStartEvent>`\n and :class:`HTTPResponseBodyEvent <starlite.types.asgi_types.HTTPResponseBodyEvent>`.\n\n Returns:\n A string keyed dictionary of extracted values.\n \"\"\"\n return cast(\"ExtractedResponseData\", {key: extractor(messages) for key, extractor in self.extractors.items()})\n\n @staticmethod\n def extract_response_body(messages: tuple[HTTPResponseStartEvent, HTTPResponseBodyEvent]) -> bytes:\n \"\"\"Extract the response body from a ``Message``\n\n Args:\n messages: A tuple containing\n :class:`HTTPResponseStartEvent <starlite.types.asgi_types.HTTPResponseStartEvent>`\n and :class:`HTTPResponseBodyEvent <starlite.types.asgi_types.HTTPResponseBodyEvent>`.\n\n Returns:\n The Response's body as a byte-string.\n \"\"\"\n return messages[1][\"body\"]\n\n @staticmethod\n def extract_status_code(messages: tuple[HTTPResponseStartEvent, HTTPResponseBodyEvent]) -> int:\n \"\"\"Extract a status code from a ``Message``\n\n Args:\n messages: A tuple containing\n :class:`HTTPResponseStartEvent <starlite.types.asgi_types.HTTPResponseStartEvent>`\n and :class:`HTTPResponseBodyEvent <starlite.types.asgi_types.HTTPResponseBodyEvent>`.\n\n Returns:\n The Response's status-code.\n \"\"\"\n return messages[0][\"status\"]\n\n def extract_headers(self, messages: tuple[HTTPResponseStartEvent, HTTPResponseBodyEvent]) -> dict[str, str]:\n \"\"\"Extract headers from a ``Message``\n\n Args:\n messages: A tuple containing\n :class:`HTTPResponseStartEvent <starlite.types.asgi_types.HTTPResponseStartEvent>`\n and :class:`HTTPResponseBodyEvent <starlite.types.asgi_types.HTTPResponseBodyEvent>`.\n\n Returns:\n The Response's headers dict.\n \"\"\"\n headers = {\n key.decode(\"latin-1\"): value.decode(\"latin-1\")\n for key, value in filter(lambda x: x[0].lower() != b\"set-cookie\", messages[0][\"headers\"])\n }\n return (\n obfuscate(\n headers,\n self.obfuscate_headers,\n )\n if self.obfuscate_headers\n else headers\n )\n\n def extract_cookies(self, messages: tuple[HTTPResponseStartEvent, HTTPResponseBodyEvent]) -> dict[str, str]:\n \"\"\"Extract cookies from a ``Message``\n\n 
Args:\n messages: A tuple containing\n :class:`HTTPResponseStartEvent <starlite.types.asgi_types.HTTPResponseStartEvent>`\n and :class:`HTTPResponseBodyEvent <starlite.types.asgi_types.HTTPResponseBodyEvent>`.\n\n Returns:\n The Response's cookies dict.\n \"\"\"\n cookie_string = \";\".join(\n list( # noqa: C417\n map(\n lambda x: x[1].decode(\"latin-1\"),\n filter(lambda x: x[0].lower() == b\"set-cookie\", messages[0][\"headers\"]),\n )\n )\n )\n if cookie_string:\n parsed_cookies = parse_cookie_string(cookie_string)\n return obfuscate(parsed_cookies, self.obfuscate_cookies) if self.obfuscate_cookies else parsed_cookies\n return {}\n", "path": "starlite/utils/extractors.py" } ]
diff --git a/starlite/utils/extractors.py b/starlite/utils/extractors.py index ec4c7d514c..fcb7b77881 100644 --- a/starlite/utils/extractors.py +++ b/starlite/utils/extractors.py @@ -25,10 +25,7 @@ def obfuscate(values: dict[str, Any], fields_to_obfuscate: set[str]) -> dict[str Returns: A dictionary with obfuscated strings """ - for key in values: - if key.lower() in fields_to_obfuscate: - values[key] = "*****" - return values + return {key: "*****" if key.lower() in fields_to_obfuscate else value for key, value in values.items()} RequestExtractorField = Literal[ diff --git a/tests/middleware/test_logging_middleware.py b/tests/middleware/test_logging_middleware.py index a23cdeecb1..21c2d67fb9 100644 --- a/tests/middleware/test_logging_middleware.py +++ b/tests/middleware/test_logging_middleware.py @@ -7,14 +7,16 @@ from starlite import Response, get, post from starlite.config.compression import CompressionConfig from starlite.config.logging import LoggingConfig, StructLoggingConfig +from starlite.connection import Request from starlite.datastructures import Cookie from starlite.middleware.logging import LoggingMiddlewareConfig -from starlite.status_codes import HTTP_200_OK +from starlite.status_codes import HTTP_200_OK, HTTP_201_CREATED from starlite.testing import create_test_client if TYPE_CHECKING: from _pytest.logging import LogCaptureFixture + from starlite.middleware.session.server_side import ServerSideSessionConfig from starlite.types.callable_types import GetLogger @@ -210,3 +212,33 @@ def test_logging_middleware_log_fields(get_logger: "GetLogger", caplog: "LogCapt assert caplog.messages[0] == "HTTP Request: path=/" assert caplog.messages[1] == "HTTP Response: status_code=200" + + +def test_logging_middleware_with_session_middleware(session_backend_config_memory: "ServerSideSessionConfig") -> None: + # https://github.com/starlite-api/starlite/issues/1228 + + @post("/") + async def set_session(request: Request) -> None: + request.set_session({"hello": "world"}) + + @get("/") + async def get_session() -> None: + pass + + logging_middleware_config = LoggingMiddlewareConfig() + + with create_test_client( + [set_session, get_session], + logging_config=LoggingConfig(), + middleware=[logging_middleware_config.middleware, session_backend_config_memory.middleware], + ) as client: + response = client.post("/") + assert response.status_code == HTTP_201_CREATED + assert "session" in client.cookies + assert client.cookies["session"] != "*****" + session_id = client.cookies["session"] + + response = client.get("/") + assert response.status_code == HTTP_200_OK + assert "session" in client.cookies + assert client.cookies["session"] == session_id
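The patched `obfuscate` in the diff above builds a fresh dictionary instead of editing its argument, so only the logged copy is masked. A quick check of that property with hypothetical values:

```python
def obfuscate(values: dict, fields_to_obfuscate: set) -> dict:
    # post-fix behaviour, as in the diff: return a new dict and leave the input alone
    return {key: "*****" if key.lower() in fields_to_obfuscate else value for key, value in values.items()}


cookies = {"session": "abc123", "theme": "dark"}
masked = obfuscate(cookies, {"session"})

assert masked == {"session": "*****", "theme": "dark"}
assert cookies["session"] == "abc123"  # the caller's mapping is untouched
```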
kovidgoyal__kitty-2824
path expanding on hints
**Is your feature request related to a problem? Please describe.**
For hints I would like the matched path to be expanded. This is my current hint config:
`map kitty_mod+p>n kitten hints --type=linenum --linenum-action=tab nvim {path}`
and my terminal text is:
```
/home/becker on  master [⇡»!?] via C base
✦ ❯ vi ~/.config/kitty/kitty.conf:5

/home/becker on  master [⇡»!?] via C base took 16h49m54s
✦ ❯ /home/becker/.config/kitty/kitty.conf:5
```
The full path opens correctly. The `~` path opens something, but I'm not sure what; when I try to write it, vim says "E212: Can't open file for writing: no such file or directory". If I run just `vi ~/.config/kitty/kitty.conf`, the activity shows nvim loading the full path.

**Describe the solution you'd like**
Maybe an `{expanded_path}` placeholder?

**Describe alternatives you've considered**
Maybe the `{path}` passed to nvim is simply receiving the wrong thing?

**Additional context**
The vim commands as run manually vs. from the hint:
![image](https://user-images.githubusercontent.com/12170/86269569-cc889d00-bb7e-11ea-9aa5-6bf5c7a8b1e2.png)
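The hint kitten hands the matched text to the configured command verbatim, and no shell is involved, so a leading `~` never gets expanded and nvim receives a literal `~/.config/...` path. One way to address this — a minimal sketch only, not necessarily how kitty ultimately solved it — is to run the captured path through `os.path.expanduser()` before formatting the command:

```python
import os


def expand_match(path: str) -> str:
    # turn "~/.config/kitty/kitty.conf" into an absolute path nvim can open
    return os.path.expanduser(path)


# hypothetical values, mirroring the hint config above
cmd_template = ["nvim", "+{line}", "{path}"]
path, line = "~/.config/kitty/kitty.conf", 5
cmd = [part.format(path=expand_match(path), line=line) for part in cmd_template]
print(cmd)  # e.g. ['nvim', '+5', '/home/becker/.config/kitty/kitty.conf'] for this user
```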
[ { "content": "#!/usr/bin/env python3\n# vim:fileencoding=utf-8\n# License: GPL v3 Copyright: 2018, Kovid Goyal <kovid at kovidgoyal.net>\n\nimport os\nimport re\nimport string\nimport sys\nfrom functools import lru_cache\nfrom gettext import gettext as _\nfrom itertools import repeat\nfrom typing import (\n Any, Callable, Dict, Generator, Iterable, List, Optional, Pattern,\n Sequence, Set, Tuple, Type, cast\n)\n\nfrom kitty.cli import parse_args\nfrom kitty.cli_stub import HintsCLIOptions\nfrom kitty.fast_data_types import set_clipboard_string\nfrom kitty.key_encoding import (\n KeyEvent, backspace_key, enter_key, key_defs as K\n)\nfrom kitty.typing import BossType, KittyCommonOpts\nfrom kitty.utils import ScreenSize, screen_size_function, set_primary_selection\n\nfrom ..tui.handler import Handler, result_handler\nfrom ..tui.loop import Loop\nfrom ..tui.operations import faint, styled\n\n\n@lru_cache()\ndef kitty_common_opts() -> KittyCommonOpts:\n import json\n v = os.environ.get('KITTY_COMMON_OPTS')\n if v:\n return cast(KittyCommonOpts, json.loads(v))\n from kitty.config import common_opts_as_dict\n return common_opts_as_dict()\n\n\nDEFAULT_HINT_ALPHABET = string.digits + string.ascii_lowercase\nDEFAULT_REGEX = r'(?m)^\\s*(.+)\\s*$'\nESCAPE = K['ESCAPE']\n\n\nclass Mark:\n\n __slots__ = ('index', 'start', 'end', 'text', 'groupdict')\n\n def __init__(self, index: int, start: int, end: int, text: str, groupdict: Any):\n self.index, self.start, self.end = index, start, end\n self.text = text\n self.groupdict = groupdict\n\n\n@lru_cache(maxsize=2048)\ndef encode_hint(num: int, alphabet: str) -> str:\n res = ''\n d = len(alphabet)\n while not res or num > 0:\n num, i = divmod(num, d)\n res = alphabet[i] + res\n return res\n\n\ndef decode_hint(x: str, alphabet: str = DEFAULT_HINT_ALPHABET) -> int:\n base = len(alphabet)\n index_map = {c: i for i, c in enumerate(alphabet)}\n i = 0\n for char in x:\n i = i * base + index_map[char]\n return i\n\n\ndef highlight_mark(m: Mark, text: str, current_input: str, alphabet: str) -> str:\n hint = encode_hint(m.index, alphabet)\n if current_input and not hint.startswith(current_input):\n return faint(text)\n hint = hint[len(current_input):] or ' '\n text = text[len(hint):]\n return styled(\n hint,\n fg='black',\n bg='green',\n bold=True\n ) + styled(\n text, fg='gray', fg_intense=True, bold=True\n )\n\n\ndef render(text: str, current_input: str, all_marks: Sequence[Mark], ignore_mark_indices: Set[int], alphabet: str) -> str:\n for mark in reversed(all_marks):\n if mark.index in ignore_mark_indices:\n continue\n mtext = highlight_mark(mark, text[mark.start:mark.end], current_input, alphabet)\n text = text[:mark.start] + mtext + text[mark.end:]\n\n text = text.replace('\\0', '')\n\n return text.replace('\\n', '\\r\\n').rstrip()\n\n\nclass Hints(Handler):\n\n def __init__(self, text: str, all_marks: Sequence[Mark], index_map: Dict[int, Mark], args: HintsCLIOptions):\n self.text, self.index_map = text, index_map\n self.alphabet = args.alphabet or DEFAULT_HINT_ALPHABET\n self.all_marks = all_marks\n self.ignore_mark_indices: Set[int] = set()\n self.args = args\n self.window_title = _('Choose URL') if args.type == 'url' else _('Choose text')\n self.multiple = args.multiple\n self.match_suffix = self.get_match_suffix(args)\n self.chosen: List[Mark] = []\n self.reset()\n\n @property\n def text_matches(self) -> List[str]:\n return [m.text + self.match_suffix for m in self.chosen]\n\n @property\n def groupdicts(self) -> List[Any]:\n return [m.groupdict for m in 
self.chosen]\n\n def get_match_suffix(self, args: HintsCLIOptions) -> str:\n if args.add_trailing_space == 'always':\n return ' '\n if args.add_trailing_space == 'never':\n return ''\n return ' ' if args.multiple else ''\n\n def reset(self) -> None:\n self.current_input = ''\n self.current_text: Optional[str] = None\n\n def init_terminal_state(self) -> None:\n self.cmd.set_cursor_visible(False)\n self.cmd.set_window_title(self.window_title)\n self.cmd.set_line_wrapping(False)\n\n def initialize(self) -> None:\n self.init_terminal_state()\n self.draw_screen()\n\n def on_text(self, text: str, in_bracketed_paste: bool = False) -> None:\n changed = False\n for c in text:\n if c in self.alphabet:\n self.current_input += c\n changed = True\n if changed:\n matches = [\n m for idx, m in self.index_map.items()\n if encode_hint(idx, self.alphabet).startswith(self.current_input)\n ]\n if len(matches) == 1:\n self.chosen.append(matches[0])\n if self.multiple:\n self.ignore_mark_indices.add(matches[0].index)\n self.reset()\n else:\n self.quit_loop(0)\n return\n self.current_text = None\n self.draw_screen()\n\n def on_key(self, key_event: KeyEvent) -> None:\n if key_event is backspace_key:\n self.current_input = self.current_input[:-1]\n self.current_text = None\n self.draw_screen()\n elif key_event is enter_key and self.current_input:\n try:\n idx = decode_hint(self.current_input, self.alphabet)\n self.chosen.append(self.index_map[idx])\n self.ignore_mark_indices.add(idx)\n except Exception:\n self.current_input = ''\n self.current_text = None\n self.draw_screen()\n else:\n if self.multiple:\n self.reset()\n self.draw_screen()\n else:\n self.quit_loop(0)\n elif key_event.key is ESCAPE:\n self.quit_loop(0 if self.multiple else 1)\n\n def on_interrupt(self) -> None:\n self.quit_loop(1)\n\n def on_eot(self) -> None:\n self.quit_loop(1)\n\n def on_resize(self, new_size: ScreenSize) -> None:\n self.draw_screen()\n\n def draw_screen(self) -> None:\n if self.current_text is None:\n self.current_text = render(self.text, self.current_input, self.all_marks, self.ignore_mark_indices, self.alphabet)\n self.cmd.clear_screen()\n self.write(self.current_text)\n\n\ndef regex_finditer(pat: Pattern, minimum_match_length: int, text: str) -> Generator[Tuple[int, int, Dict], None, None]:\n has_named_groups = bool(pat.groupindex)\n for m in pat.finditer(text):\n s, e = m.span(0 if has_named_groups else pat.groups)\n while e > s + 1 and text[e-1] == '\\0':\n e -= 1\n if e - s >= minimum_match_length:\n yield s, e, m.groupdict()\n\n\nclosing_bracket_map = {'(': ')', '[': ']', '{': '}', '<': '>', '*': '*', '\"': '\"', \"'\": \"'\"}\nopening_brackets = ''.join(closing_bracket_map)\nPostprocessorFunc = Callable[[str, int, int], Tuple[int, int]]\npostprocessor_map: Dict[str, PostprocessorFunc] = {}\n\n\ndef postprocessor(func: PostprocessorFunc) -> PostprocessorFunc:\n postprocessor_map[func.__name__] = func\n return func\n\n\n@postprocessor\ndef url(text: str, s: int, e: int) -> Tuple[int, int]:\n if s > 4 and text[s - 5:s] == 'link:': # asciidoc URLs\n url = text[s:e]\n idx = url.rfind('[')\n if idx > -1:\n e -= len(url) - idx\n while text[e - 1] in '.,?!' 
and e > 1: # remove trailing punctuation\n e -= 1\n # truncate url at closing bracket/quote\n if s > 0 and e <= len(text) and text[s-1] in opening_brackets:\n q = closing_bracket_map[text[s-1]]\n idx = text.find(q, s)\n if idx > s:\n e = idx\n # Restructured Text URLs\n if e > 3 and text[e-2:e] == '`_':\n e -= 2\n\n return s, e\n\n\n@postprocessor\ndef brackets(text: str, s: int, e: int) -> Tuple[int, int]:\n # Remove matching brackets\n if s < e <= len(text):\n before = text[s]\n if before in '({[<' and text[e-1] == closing_bracket_map[before]:\n s += 1\n e -= 1\n return s, e\n\n\n@postprocessor\ndef quotes(text: str, s: int, e: int) -> Tuple[int, int]:\n # Remove matching quotes\n if s < e <= len(text):\n before = text[s]\n if before in '\\'\"' and text[e-1] == before:\n s += 1\n e -= 1\n return s, e\n\n\ndef mark(pattern: str, post_processors: Iterable[PostprocessorFunc], text: str, args: HintsCLIOptions) -> Generator[Mark, None, None]:\n pat = re.compile(pattern)\n for idx, (s, e, groupdict) in enumerate(regex_finditer(pat, args.minimum_match_length, text)):\n for func in post_processors:\n s, e = func(text, s, e)\n mark_text = text[s:e].replace('\\n', '').replace('\\0', '')\n yield Mark(idx, s, e, mark_text, groupdict)\n\n\ndef run_loop(args: HintsCLIOptions, text: str, all_marks: Sequence[Mark], index_map: Dict[int, Mark], extra_cli_args: Sequence[str] = ()) -> Dict[str, Any]:\n loop = Loop()\n handler = Hints(text, all_marks, index_map, args)\n loop.loop(handler)\n if handler.chosen and loop.return_code == 0:\n return {\n 'match': handler.text_matches, 'programs': args.program,\n 'multiple_joiner': args.multiple_joiner, 'customize_processing': args.customize_processing,\n 'type': args.type, 'groupdicts': handler.groupdicts, 'extra_cli_args': extra_cli_args, 'linenum_action': args.linenum_action\n }\n raise SystemExit(loop.return_code)\n\n\ndef escape(chars: str) -> str:\n return chars.replace('\\\\', '\\\\\\\\').replace('-', r'\\-').replace(']', r'\\]')\n\n\ndef functions_for(args: HintsCLIOptions) -> Tuple[str, List[PostprocessorFunc]]:\n post_processors = []\n if args.type == 'url':\n if args.url_prefixes == 'default':\n url_prefixes = kitty_common_opts()['url_prefixes']\n else:\n url_prefixes = tuple(args.url_prefixes.split(','))\n from .url_regex import url_delimiters\n pattern = '(?:{})://[^{}]{{3,}}'.format(\n '|'.join(url_prefixes), url_delimiters\n )\n post_processors.append(url)\n elif args.type == 'path':\n pattern = r'(?:\\S*/\\S+)|(?:\\S+[.][a-zA-Z0-9]{2,7})'\n post_processors.extend((brackets, quotes))\n elif args.type == 'line':\n pattern = '(?m)^\\\\s*(.+)[\\\\s\\0]*$'\n elif args.type == 'hash':\n pattern = '[0-9a-f]{7,128}'\n elif args.type == 'word':\n chars = args.word_characters\n if chars is None:\n chars = kitty_common_opts()['select_by_word_characters']\n pattern = r'(?u)[{}\\w]{{{},}}'.format(escape(chars), args.minimum_match_length)\n post_processors.extend((brackets, quotes))\n else:\n pattern = args.regex\n return pattern, post_processors\n\n\ndef convert_text(text: str, cols: int) -> str:\n lines: List[str] = []\n empty_line = '\\0' * cols\n for full_line in text.split('\\n'):\n if full_line:\n if not full_line.rstrip('\\r'): # empty lines\n lines.extend(repeat(empty_line, len(full_line)))\n continue\n for line in full_line.split('\\r'):\n if line:\n lines.append(line.ljust(cols, '\\0'))\n return '\\n'.join(lines)\n\n\ndef parse_input(text: str) -> str:\n try:\n cols = int(os.environ['OVERLAID_WINDOW_COLS'])\n except KeyError:\n cols = 
screen_size_function()().cols\n return convert_text(text, cols)\n\n\ndef linenum_marks(text: str, args: HintsCLIOptions, Mark: Type[Mark], extra_cli_args: Sequence[str], *a: Any) -> Generator[Mark, None, None]:\n regex = args.regex\n if regex == DEFAULT_REGEX:\n regex = r'(?P<path>(?:\\S*/\\S+?)|(?:\\S+[.][a-zA-Z0-9]{2,7})):(?P<line>\\d+)'\n yield from mark(regex, [brackets, quotes], text, args)\n\n\ndef load_custom_processor(customize_processing: str) -> Any:\n if customize_processing.startswith('::import::'):\n import importlib\n m = importlib.import_module(customize_processing[len('::import::'):])\n return {k: getattr(m, k) for k in dir(m)}\n if customize_processing == '::linenum::':\n return {'mark': linenum_marks, 'handle_result': linenum_handle_result}\n from kitty.constants import resolve_custom_file\n custom_path = resolve_custom_file(customize_processing)\n import runpy\n return runpy.run_path(custom_path, run_name='__main__')\n\n\ndef run(args: HintsCLIOptions, text: str, extra_cli_args: Sequence[str] = ()) -> Optional[Dict[str, Any]]:\n try:\n text = parse_input(text)\n pattern, post_processors = functions_for(args)\n if args.type == 'linenum':\n args.customize_processing = '::linenum::'\n if args.customize_processing:\n m = load_custom_processor(args.customize_processing)\n if 'mark' in m:\n all_marks = tuple(m['mark'](text, args, Mark, extra_cli_args))\n else:\n all_marks = tuple(mark(pattern, post_processors, text, args))\n else:\n all_marks = tuple(mark(pattern, post_processors, text, args))\n if not all_marks:\n input(_('No {} found, press Enter to quit.').format(\n 'URLs' if args.type == 'url' else 'matches'\n ))\n return None\n\n largest_index = all_marks[-1].index\n offset = max(0, args.hints_offset)\n for m in all_marks:\n if args.ascending:\n m.index += offset\n else:\n m.index = largest_index - m.index + offset\n index_map = {m.index: m for m in all_marks}\n except Exception:\n import traceback\n traceback.print_exc()\n input('Press Enter to quit.')\n raise SystemExit(1)\n\n return run_loop(args, text, all_marks, index_map, extra_cli_args)\n\n\n# CLI {{{\nOPTIONS = r'''\n--program\ntype=list\nWhat program to use to open matched text. Defaults to the default open program\nfor the operating system. Use a value of :file:`-` to paste the match into the\nterminal window instead. A value of :file:`@` will copy the match to the\nclipboard. A value of :file:`*` will copy the match to the primary selection\n(on systems that support primary selections). A value of :file:`default` will\nrun the default open program. Can be specified multiple times to run multiple\nprograms.\n\n\n--type\ndefault=url\nchoices=url,regex,path,line,hash,word,linenum\nThe type of text to search for. A value of :code:`linenum` is special, it looks\nfor error messages using the pattern specified with :option:`--regex`, which\nmust have the named groups, :code:`path` and :code:`line`. If not specified,\nwill look for :code:`path:line`. The :option:`--linenum-action` option\ncontrols what to do with the selected error message, other options are ignored.\n\n\n--regex\ndefault={default_regex}\nThe regular expression to use when :option:`kitty +kitten hints --type`=regex.\nThe regular expression is in python syntax. If you specify a numbered group in\nthe regular expression only the group will be matched. This allow you to match\ntext ignoring a prefix/suffix, as needed. 
The default expression matches lines.\nTo match text over multiple lines you should prefix the regular expression with\n:code:`(?ms)`, which turns on MULTILINE and DOTALL modes for the regex engine.\nIf you specify named groups and a :option:`kitty +kitten hints --program` then\nthe program will be passed arguments corresponding to each named group of\nthe form key=value.\n\n\n--linenum-action\ndefault=self\ntype=choice\nchoices=self,window,tab,os_window,background\nThe action to perform on the matched errors. The actual action is whatever\narguments are provided to the kitten, for example:\n:code:`kitty + kitten hints --type=linenum vim +{line} {path}`\nwill open the matched path at the matched line number in vim. This option\ncontrols where the action is executed: :code:`self` means the current window,\n:code:`window` a new kitty window, :code:`tab` a new tab, :code:`os_window`\na new OS window and :code:`background` run in the background.\n\n\n--url-prefixes\ndefault=default\nComma separated list of recognized URL prefixes. Defaults, to\nthe list of prefixes defined in kitty.conf.\n\n\n--word-characters\nCharacters to consider as part of a word. In addition, all characters marked as\nalphanumeric in the unicode database will be considered as word characters.\nDefaults to the select_by_word_characters setting from kitty.conf.\n\n\n--minimum-match-length\ndefault=3\ntype=int\nThe minimum number of characters to consider a match.\n\n\n--multiple\ntype=bool-set\nSelect multiple matches and perform the action on all of them together at the end.\nIn this mode, press :kbd:`Esc` to finish selecting.\n\n\n--multiple-joiner\ndefault=auto\nString to use to join multiple selections when copying to the clipboard or\ninserting into the terminal. The special strings: \"space\", \"newline\", \"empty\",\n\"json\" and \"auto\" are interpreted as a space character, a newline an empty\njoiner, a JSON serialized list and an automatic choice, based on the type of\ntext being selected. In addition, integers are interpreted as zero-based\nindices into the list of selections. You can use 0 for the first selection and\n-1 for the last.\n\n\n--add-trailing-space\ndefault=auto\nchoices=auto,always,never\nAdd trailing space after matched text. Defaults to auto, which adds the space\nwhen used together with --multiple.\n\n\n--hints-offset\ndefault=1\ntype=int\nThe offset (from zero) at which to start hint numbering. Note that only numbers\ngreater than or equal to zero are respected.\n\n\n--alphabet\nThe list of characters to use for hints. The default is to use numbers and lowercase\nEnglish alphabets. Specify your preference as a string of characters. Note that\nunless you specify the hints offset as zero the first match will be highlighted with\nthe second character you specify.\n\n\n--ascending\ntype=bool-set\nHave the hints increase from top to bottom instead of decreasing from top to bottom.\n\n\n--customize-processing\nName of a python file in the kitty config directory which will be imported to provide\ncustom implementations for pattern finding and performing actions\non selected matches. See https://sw.kovidgoyal.net/kitty/kittens/hints.html\nfor details. You can also specify absolute paths to load the script from elsewhere.\n\n\n'''.format(\n default_regex=DEFAULT_REGEX,\n line='{{line}}', path='{{path}}'\n).format\nhelp_text = 'Select text from the screen using the keyboard. 
Defaults to searching for URLs.'\nusage = ''\n\n\ndef parse_hints_args(args: List[str]) -> Tuple[HintsCLIOptions, List[str]]:\n return parse_args(args, OPTIONS, usage, help_text, 'kitty +kitten hints', result_class=HintsCLIOptions)\n\n\ndef main(args: List[str]) -> Optional[Dict[str, Any]]:\n text = ''\n if sys.stdin.isatty():\n if '--help' not in args and '-h' not in args:\n print('You must pass the text to be hinted on STDIN', file=sys.stderr)\n input(_('Press Enter to quit'))\n return None\n else:\n text = sys.stdin.buffer.read().decode('utf-8')\n sys.stdin = open(os.ctermid())\n try:\n opts, items = parse_hints_args(args[1:])\n except SystemExit as e:\n if e.code != 0:\n print(e.args[0], file=sys.stderr)\n input(_('Press Enter to quit'))\n return None\n if items and not (opts.customize_processing or opts.type == 'linenum'):\n print('Extra command line arguments present: {}'.format(' '.join(items)), file=sys.stderr)\n input(_('Press Enter to quit'))\n return run(opts, text, items)\n\n\ndef linenum_handle_result(args: List[str], data: Dict[str, Any], target_window_id: int, boss: BossType, extra_cli_args: Sequence[str], *a: Any) -> None:\n for m, g in zip(data['match'], data['groupdicts']):\n if m:\n path, line = g['path'], g['line']\n path = path.split(':')[-1]\n line = int(line)\n break\n else:\n return\n\n cmd = [x.format(path=path, line=line) for x in extra_cli_args or ('vim', '+{line}', '{path}')]\n w = boss.window_id_map.get(target_window_id)\n action = data['linenum_action']\n\n if action == 'self':\n if w is not None:\n import shlex\n text = ' '.join(shlex.quote(arg) for arg in cmd)\n w.paste_bytes(text + '\\r')\n elif action == 'background':\n import subprocess\n subprocess.Popen(cmd)\n else:\n getattr(boss, {\n 'window': 'new_window_with_cwd', 'tab': 'new_tab_with_cwd', 'os_window': 'new_os_window_with_cwd'\n }[action])(*cmd)\n\n\n@result_handler(type_of_input='screen')\ndef handle_result(args: List[str], data: Dict[str, Any], target_window_id: int, boss: BossType) -> None:\n if data['customize_processing']:\n m = load_custom_processor(data['customize_processing'])\n if 'handle_result' in m:\n m['handle_result'](args, data, target_window_id, boss, data['extra_cli_args'])\n return None\n\n programs = data['programs'] or ('default',)\n matches: List[str] = []\n groupdicts = []\n for m, g in zip(data['match'], data['groupdicts']):\n if m:\n matches.append(m)\n groupdicts.append(g)\n joiner = data['multiple_joiner']\n try:\n is_int: Optional[int] = int(joiner)\n except Exception:\n is_int = None\n text_type = data['type']\n\n @lru_cache()\n def joined_text() -> str:\n if is_int is not None:\n try:\n return matches[is_int]\n except IndexError:\n return matches[-1]\n if joiner == 'json':\n import json\n return json.dumps(matches, ensure_ascii=False, indent='\\t')\n if joiner == 'auto':\n q = '\\n\\r' if text_type in ('line', 'url') else ' '\n else:\n q = {'newline': '\\n\\r', 'space': ' '}.get(joiner, '')\n return q.join(matches)\n\n for program in programs:\n if program == '-':\n w = boss.window_id_map.get(target_window_id)\n if w is not None:\n w.paste(joined_text())\n elif program == '@':\n set_clipboard_string(joined_text())\n elif program == '*':\n set_primary_selection(joined_text())\n else:\n cwd = None\n w = boss.window_id_map.get(target_window_id)\n if w is not None:\n cwd = w.cwd_of_child\n program = None if program == 'default' else program\n for m, groupdict in zip(matches, groupdicts):\n if groupdict:\n m = []\n for k, v in groupdict.items():\n m.append('{}={}'.format(k, 
v or ''))\n boss.open_url(m, program, cwd=cwd)\n\n\nif __name__ == '__main__':\n # Run with kitty +kitten hints\n ans = main(sys.argv)\n if ans:\n print(ans)\nelif __name__ == '__doc__':\n cd = sys.cli_docs # type: ignore\n cd['usage'] = usage\n cd['options'] = OPTIONS\n cd['help_text'] = help_text\n# }}}\n", "path": "kittens/hints/main.py" } ]
[ { "content": "#!/usr/bin/env python3\n# vim:fileencoding=utf-8\n# License: GPL v3 Copyright: 2018, Kovid Goyal <kovid at kovidgoyal.net>\n\nimport os\nimport re\nimport string\nimport sys\nfrom functools import lru_cache\nfrom gettext import gettext as _\nfrom itertools import repeat\nfrom typing import (\n Any, Callable, Dict, Generator, Iterable, List, Optional, Pattern,\n Sequence, Set, Tuple, Type, cast\n)\n\nfrom kitty.cli import parse_args\nfrom kitty.cli_stub import HintsCLIOptions\nfrom kitty.fast_data_types import set_clipboard_string\nfrom kitty.key_encoding import (\n KeyEvent, backspace_key, enter_key, key_defs as K\n)\nfrom kitty.typing import BossType, KittyCommonOpts\nfrom kitty.utils import ScreenSize, screen_size_function, set_primary_selection\n\nfrom ..tui.handler import Handler, result_handler\nfrom ..tui.loop import Loop\nfrom ..tui.operations import faint, styled\n\n\n@lru_cache()\ndef kitty_common_opts() -> KittyCommonOpts:\n import json\n v = os.environ.get('KITTY_COMMON_OPTS')\n if v:\n return cast(KittyCommonOpts, json.loads(v))\n from kitty.config import common_opts_as_dict\n return common_opts_as_dict()\n\n\nDEFAULT_HINT_ALPHABET = string.digits + string.ascii_lowercase\nDEFAULT_REGEX = r'(?m)^\\s*(.+)\\s*$'\nESCAPE = K['ESCAPE']\n\n\nclass Mark:\n\n __slots__ = ('index', 'start', 'end', 'text', 'groupdict')\n\n def __init__(self, index: int, start: int, end: int, text: str, groupdict: Any):\n self.index, self.start, self.end = index, start, end\n self.text = text\n self.groupdict = groupdict\n\n\n@lru_cache(maxsize=2048)\ndef encode_hint(num: int, alphabet: str) -> str:\n res = ''\n d = len(alphabet)\n while not res or num > 0:\n num, i = divmod(num, d)\n res = alphabet[i] + res\n return res\n\n\ndef decode_hint(x: str, alphabet: str = DEFAULT_HINT_ALPHABET) -> int:\n base = len(alphabet)\n index_map = {c: i for i, c in enumerate(alphabet)}\n i = 0\n for char in x:\n i = i * base + index_map[char]\n return i\n\n\ndef highlight_mark(m: Mark, text: str, current_input: str, alphabet: str) -> str:\n hint = encode_hint(m.index, alphabet)\n if current_input and not hint.startswith(current_input):\n return faint(text)\n hint = hint[len(current_input):] or ' '\n text = text[len(hint):]\n return styled(\n hint,\n fg='black',\n bg='green',\n bold=True\n ) + styled(\n text, fg='gray', fg_intense=True, bold=True\n )\n\n\ndef render(text: str, current_input: str, all_marks: Sequence[Mark], ignore_mark_indices: Set[int], alphabet: str) -> str:\n for mark in reversed(all_marks):\n if mark.index in ignore_mark_indices:\n continue\n mtext = highlight_mark(mark, text[mark.start:mark.end], current_input, alphabet)\n text = text[:mark.start] + mtext + text[mark.end:]\n\n text = text.replace('\\0', '')\n\n return text.replace('\\n', '\\r\\n').rstrip()\n\n\nclass Hints(Handler):\n\n def __init__(self, text: str, all_marks: Sequence[Mark], index_map: Dict[int, Mark], args: HintsCLIOptions):\n self.text, self.index_map = text, index_map\n self.alphabet = args.alphabet or DEFAULT_HINT_ALPHABET\n self.all_marks = all_marks\n self.ignore_mark_indices: Set[int] = set()\n self.args = args\n self.window_title = _('Choose URL') if args.type == 'url' else _('Choose text')\n self.multiple = args.multiple\n self.match_suffix = self.get_match_suffix(args)\n self.chosen: List[Mark] = []\n self.reset()\n\n @property\n def text_matches(self) -> List[str]:\n return [m.text + self.match_suffix for m in self.chosen]\n\n @property\n def groupdicts(self) -> List[Any]:\n return [m.groupdict for m in 
self.chosen]\n\n def get_match_suffix(self, args: HintsCLIOptions) -> str:\n if args.add_trailing_space == 'always':\n return ' '\n if args.add_trailing_space == 'never':\n return ''\n return ' ' if args.multiple else ''\n\n def reset(self) -> None:\n self.current_input = ''\n self.current_text: Optional[str] = None\n\n def init_terminal_state(self) -> None:\n self.cmd.set_cursor_visible(False)\n self.cmd.set_window_title(self.window_title)\n self.cmd.set_line_wrapping(False)\n\n def initialize(self) -> None:\n self.init_terminal_state()\n self.draw_screen()\n\n def on_text(self, text: str, in_bracketed_paste: bool = False) -> None:\n changed = False\n for c in text:\n if c in self.alphabet:\n self.current_input += c\n changed = True\n if changed:\n matches = [\n m for idx, m in self.index_map.items()\n if encode_hint(idx, self.alphabet).startswith(self.current_input)\n ]\n if len(matches) == 1:\n self.chosen.append(matches[0])\n if self.multiple:\n self.ignore_mark_indices.add(matches[0].index)\n self.reset()\n else:\n self.quit_loop(0)\n return\n self.current_text = None\n self.draw_screen()\n\n def on_key(self, key_event: KeyEvent) -> None:\n if key_event is backspace_key:\n self.current_input = self.current_input[:-1]\n self.current_text = None\n self.draw_screen()\n elif key_event is enter_key and self.current_input:\n try:\n idx = decode_hint(self.current_input, self.alphabet)\n self.chosen.append(self.index_map[idx])\n self.ignore_mark_indices.add(idx)\n except Exception:\n self.current_input = ''\n self.current_text = None\n self.draw_screen()\n else:\n if self.multiple:\n self.reset()\n self.draw_screen()\n else:\n self.quit_loop(0)\n elif key_event.key is ESCAPE:\n self.quit_loop(0 if self.multiple else 1)\n\n def on_interrupt(self) -> None:\n self.quit_loop(1)\n\n def on_eot(self) -> None:\n self.quit_loop(1)\n\n def on_resize(self, new_size: ScreenSize) -> None:\n self.draw_screen()\n\n def draw_screen(self) -> None:\n if self.current_text is None:\n self.current_text = render(self.text, self.current_input, self.all_marks, self.ignore_mark_indices, self.alphabet)\n self.cmd.clear_screen()\n self.write(self.current_text)\n\n\ndef regex_finditer(pat: Pattern, minimum_match_length: int, text: str) -> Generator[Tuple[int, int, Dict], None, None]:\n has_named_groups = bool(pat.groupindex)\n for m in pat.finditer(text):\n s, e = m.span(0 if has_named_groups else pat.groups)\n while e > s + 1 and text[e-1] == '\\0':\n e -= 1\n if e - s >= minimum_match_length:\n yield s, e, m.groupdict()\n\n\nclosing_bracket_map = {'(': ')', '[': ']', '{': '}', '<': '>', '*': '*', '\"': '\"', \"'\": \"'\"}\nopening_brackets = ''.join(closing_bracket_map)\nPostprocessorFunc = Callable[[str, int, int], Tuple[int, int]]\npostprocessor_map: Dict[str, PostprocessorFunc] = {}\n\n\ndef postprocessor(func: PostprocessorFunc) -> PostprocessorFunc:\n postprocessor_map[func.__name__] = func\n return func\n\n\n@postprocessor\ndef url(text: str, s: int, e: int) -> Tuple[int, int]:\n if s > 4 and text[s - 5:s] == 'link:': # asciidoc URLs\n url = text[s:e]\n idx = url.rfind('[')\n if idx > -1:\n e -= len(url) - idx\n while text[e - 1] in '.,?!' 
and e > 1: # remove trailing punctuation\n e -= 1\n # truncate url at closing bracket/quote\n if s > 0 and e <= len(text) and text[s-1] in opening_brackets:\n q = closing_bracket_map[text[s-1]]\n idx = text.find(q, s)\n if idx > s:\n e = idx\n # Restructured Text URLs\n if e > 3 and text[e-2:e] == '`_':\n e -= 2\n\n return s, e\n\n\n@postprocessor\ndef brackets(text: str, s: int, e: int) -> Tuple[int, int]:\n # Remove matching brackets\n if s < e <= len(text):\n before = text[s]\n if before in '({[<' and text[e-1] == closing_bracket_map[before]:\n s += 1\n e -= 1\n return s, e\n\n\n@postprocessor\ndef quotes(text: str, s: int, e: int) -> Tuple[int, int]:\n # Remove matching quotes\n if s < e <= len(text):\n before = text[s]\n if before in '\\'\"' and text[e-1] == before:\n s += 1\n e -= 1\n return s, e\n\n\ndef mark(pattern: str, post_processors: Iterable[PostprocessorFunc], text: str, args: HintsCLIOptions) -> Generator[Mark, None, None]:\n pat = re.compile(pattern)\n for idx, (s, e, groupdict) in enumerate(regex_finditer(pat, args.minimum_match_length, text)):\n for func in post_processors:\n s, e = func(text, s, e)\n mark_text = text[s:e].replace('\\n', '').replace('\\0', '')\n yield Mark(idx, s, e, mark_text, groupdict)\n\n\ndef run_loop(args: HintsCLIOptions, text: str, all_marks: Sequence[Mark], index_map: Dict[int, Mark], extra_cli_args: Sequence[str] = ()) -> Dict[str, Any]:\n loop = Loop()\n handler = Hints(text, all_marks, index_map, args)\n loop.loop(handler)\n if handler.chosen and loop.return_code == 0:\n return {\n 'match': handler.text_matches, 'programs': args.program,\n 'multiple_joiner': args.multiple_joiner, 'customize_processing': args.customize_processing,\n 'type': args.type, 'groupdicts': handler.groupdicts, 'extra_cli_args': extra_cli_args, 'linenum_action': args.linenum_action\n }\n raise SystemExit(loop.return_code)\n\n\ndef escape(chars: str) -> str:\n return chars.replace('\\\\', '\\\\\\\\').replace('-', r'\\-').replace(']', r'\\]')\n\n\ndef functions_for(args: HintsCLIOptions) -> Tuple[str, List[PostprocessorFunc]]:\n post_processors = []\n if args.type == 'url':\n if args.url_prefixes == 'default':\n url_prefixes = kitty_common_opts()['url_prefixes']\n else:\n url_prefixes = tuple(args.url_prefixes.split(','))\n from .url_regex import url_delimiters\n pattern = '(?:{})://[^{}]{{3,}}'.format(\n '|'.join(url_prefixes), url_delimiters\n )\n post_processors.append(url)\n elif args.type == 'path':\n pattern = r'(?:\\S*/\\S+)|(?:\\S+[.][a-zA-Z0-9]{2,7})'\n post_processors.extend((brackets, quotes))\n elif args.type == 'line':\n pattern = '(?m)^\\\\s*(.+)[\\\\s\\0]*$'\n elif args.type == 'hash':\n pattern = '[0-9a-f]{7,128}'\n elif args.type == 'word':\n chars = args.word_characters\n if chars is None:\n chars = kitty_common_opts()['select_by_word_characters']\n pattern = r'(?u)[{}\\w]{{{},}}'.format(escape(chars), args.minimum_match_length)\n post_processors.extend((brackets, quotes))\n else:\n pattern = args.regex\n return pattern, post_processors\n\n\ndef convert_text(text: str, cols: int) -> str:\n lines: List[str] = []\n empty_line = '\\0' * cols\n for full_line in text.split('\\n'):\n if full_line:\n if not full_line.rstrip('\\r'): # empty lines\n lines.extend(repeat(empty_line, len(full_line)))\n continue\n for line in full_line.split('\\r'):\n if line:\n lines.append(line.ljust(cols, '\\0'))\n return '\\n'.join(lines)\n\n\ndef parse_input(text: str) -> str:\n try:\n cols = int(os.environ['OVERLAID_WINDOW_COLS'])\n except KeyError:\n cols = 
screen_size_function()().cols\n return convert_text(text, cols)\n\n\ndef linenum_marks(text: str, args: HintsCLIOptions, Mark: Type[Mark], extra_cli_args: Sequence[str], *a: Any) -> Generator[Mark, None, None]:\n regex = args.regex\n if regex == DEFAULT_REGEX:\n regex = r'(?P<path>(?:\\S*/\\S+?)|(?:\\S+[.][a-zA-Z0-9]{2,7})):(?P<line>\\d+)'\n yield from mark(regex, [brackets, quotes], text, args)\n\n\ndef load_custom_processor(customize_processing: str) -> Any:\n if customize_processing.startswith('::import::'):\n import importlib\n m = importlib.import_module(customize_processing[len('::import::'):])\n return {k: getattr(m, k) for k in dir(m)}\n if customize_processing == '::linenum::':\n return {'mark': linenum_marks, 'handle_result': linenum_handle_result}\n from kitty.constants import resolve_custom_file\n custom_path = resolve_custom_file(customize_processing)\n import runpy\n return runpy.run_path(custom_path, run_name='__main__')\n\n\ndef run(args: HintsCLIOptions, text: str, extra_cli_args: Sequence[str] = ()) -> Optional[Dict[str, Any]]:\n try:\n text = parse_input(text)\n pattern, post_processors = functions_for(args)\n if args.type == 'linenum':\n args.customize_processing = '::linenum::'\n if args.customize_processing:\n m = load_custom_processor(args.customize_processing)\n if 'mark' in m:\n all_marks = tuple(m['mark'](text, args, Mark, extra_cli_args))\n else:\n all_marks = tuple(mark(pattern, post_processors, text, args))\n else:\n all_marks = tuple(mark(pattern, post_processors, text, args))\n if not all_marks:\n input(_('No {} found, press Enter to quit.').format(\n 'URLs' if args.type == 'url' else 'matches'\n ))\n return None\n\n largest_index = all_marks[-1].index\n offset = max(0, args.hints_offset)\n for m in all_marks:\n if args.ascending:\n m.index += offset\n else:\n m.index = largest_index - m.index + offset\n index_map = {m.index: m for m in all_marks}\n except Exception:\n import traceback\n traceback.print_exc()\n input('Press Enter to quit.')\n raise SystemExit(1)\n\n return run_loop(args, text, all_marks, index_map, extra_cli_args)\n\n\n# CLI {{{\nOPTIONS = r'''\n--program\ntype=list\nWhat program to use to open matched text. Defaults to the default open program\nfor the operating system. Use a value of :file:`-` to paste the match into the\nterminal window instead. A value of :file:`@` will copy the match to the\nclipboard. A value of :file:`*` will copy the match to the primary selection\n(on systems that support primary selections). A value of :file:`default` will\nrun the default open program. Can be specified multiple times to run multiple\nprograms.\n\n\n--type\ndefault=url\nchoices=url,regex,path,line,hash,word,linenum\nThe type of text to search for. A value of :code:`linenum` is special, it looks\nfor error messages using the pattern specified with :option:`--regex`, which\nmust have the named groups, :code:`path` and :code:`line`. If not specified,\nwill look for :code:`path:line`. The :option:`--linenum-action` option\ncontrols what to do with the selected error message, other options are ignored.\n\n\n--regex\ndefault={default_regex}\nThe regular expression to use when :option:`kitty +kitten hints --type`=regex.\nThe regular expression is in python syntax. If you specify a numbered group in\nthe regular expression only the group will be matched. This allow you to match\ntext ignoring a prefix/suffix, as needed. 
The default expression matches lines.\nTo match text over multiple lines you should prefix the regular expression with\n:code:`(?ms)`, which turns on MULTILINE and DOTALL modes for the regex engine.\nIf you specify named groups and a :option:`kitty +kitten hints --program` then\nthe program will be passed arguments corresponding to each named group of\nthe form key=value.\n\n\n--linenum-action\ndefault=self\ntype=choice\nchoices=self,window,tab,os_window,background\nThe action to perform on the matched errors. The actual action is whatever\narguments are provided to the kitten, for example:\n:code:`kitty + kitten hints --type=linenum vim +{line} {path}`\nwill open the matched path at the matched line number in vim. This option\ncontrols where the action is executed: :code:`self` means the current window,\n:code:`window` a new kitty window, :code:`tab` a new tab, :code:`os_window`\na new OS window and :code:`background` run in the background.\n\n\n--url-prefixes\ndefault=default\nComma separated list of recognized URL prefixes. Defaults, to\nthe list of prefixes defined in kitty.conf.\n\n\n--word-characters\nCharacters to consider as part of a word. In addition, all characters marked as\nalphanumeric in the unicode database will be considered as word characters.\nDefaults to the select_by_word_characters setting from kitty.conf.\n\n\n--minimum-match-length\ndefault=3\ntype=int\nThe minimum number of characters to consider a match.\n\n\n--multiple\ntype=bool-set\nSelect multiple matches and perform the action on all of them together at the end.\nIn this mode, press :kbd:`Esc` to finish selecting.\n\n\n--multiple-joiner\ndefault=auto\nString to use to join multiple selections when copying to the clipboard or\ninserting into the terminal. The special strings: \"space\", \"newline\", \"empty\",\n\"json\" and \"auto\" are interpreted as a space character, a newline an empty\njoiner, a JSON serialized list and an automatic choice, based on the type of\ntext being selected. In addition, integers are interpreted as zero-based\nindices into the list of selections. You can use 0 for the first selection and\n-1 for the last.\n\n\n--add-trailing-space\ndefault=auto\nchoices=auto,always,never\nAdd trailing space after matched text. Defaults to auto, which adds the space\nwhen used together with --multiple.\n\n\n--hints-offset\ndefault=1\ntype=int\nThe offset (from zero) at which to start hint numbering. Note that only numbers\ngreater than or equal to zero are respected.\n\n\n--alphabet\nThe list of characters to use for hints. The default is to use numbers and lowercase\nEnglish alphabets. Specify your preference as a string of characters. Note that\nunless you specify the hints offset as zero the first match will be highlighted with\nthe second character you specify.\n\n\n--ascending\ntype=bool-set\nHave the hints increase from top to bottom instead of decreasing from top to bottom.\n\n\n--customize-processing\nName of a python file in the kitty config directory which will be imported to provide\ncustom implementations for pattern finding and performing actions\non selected matches. See https://sw.kovidgoyal.net/kitty/kittens/hints.html\nfor details. You can also specify absolute paths to load the script from elsewhere.\n\n\n'''.format(\n default_regex=DEFAULT_REGEX,\n line='{{line}}', path='{{path}}'\n).format\nhelp_text = 'Select text from the screen using the keyboard. 
Defaults to searching for URLs.'\nusage = ''\n\n\ndef parse_hints_args(args: List[str]) -> Tuple[HintsCLIOptions, List[str]]:\n return parse_args(args, OPTIONS, usage, help_text, 'kitty +kitten hints', result_class=HintsCLIOptions)\n\n\ndef main(args: List[str]) -> Optional[Dict[str, Any]]:\n text = ''\n if sys.stdin.isatty():\n if '--help' not in args and '-h' not in args:\n print('You must pass the text to be hinted on STDIN', file=sys.stderr)\n input(_('Press Enter to quit'))\n return None\n else:\n text = sys.stdin.buffer.read().decode('utf-8')\n sys.stdin = open(os.ctermid())\n try:\n opts, items = parse_hints_args(args[1:])\n except SystemExit as e:\n if e.code != 0:\n print(e.args[0], file=sys.stderr)\n input(_('Press Enter to quit'))\n return None\n if items and not (opts.customize_processing or opts.type == 'linenum'):\n print('Extra command line arguments present: {}'.format(' '.join(items)), file=sys.stderr)\n input(_('Press Enter to quit'))\n return run(opts, text, items)\n\n\ndef linenum_handle_result(args: List[str], data: Dict[str, Any], target_window_id: int, boss: BossType, extra_cli_args: Sequence[str], *a: Any) -> None:\n for m, g in zip(data['match'], data['groupdicts']):\n if m:\n path, line = g['path'], g['line']\n path = os.path.expanduser(path.split(':')[-1])\n line = int(line)\n break\n else:\n return\n\n cmd = [x.format(path=path, line=line) for x in extra_cli_args or ('vim', '+{line}', '{path}')]\n w = boss.window_id_map.get(target_window_id)\n action = data['linenum_action']\n\n if action == 'self':\n if w is not None:\n import shlex\n text = ' '.join(shlex.quote(arg) for arg in cmd)\n w.paste_bytes(text + '\\r')\n elif action == 'background':\n import subprocess\n subprocess.Popen(cmd)\n else:\n getattr(boss, {\n 'window': 'new_window_with_cwd', 'tab': 'new_tab_with_cwd', 'os_window': 'new_os_window_with_cwd'\n }[action])(*cmd)\n\n\n@result_handler(type_of_input='screen')\ndef handle_result(args: List[str], data: Dict[str, Any], target_window_id: int, boss: BossType) -> None:\n if data['customize_processing']:\n m = load_custom_processor(data['customize_processing'])\n if 'handle_result' in m:\n m['handle_result'](args, data, target_window_id, boss, data['extra_cli_args'])\n return None\n\n programs = data['programs'] or ('default',)\n matches: List[str] = []\n groupdicts = []\n for m, g in zip(data['match'], data['groupdicts']):\n if m:\n matches.append(m)\n groupdicts.append(g)\n joiner = data['multiple_joiner']\n try:\n is_int: Optional[int] = int(joiner)\n except Exception:\n is_int = None\n text_type = data['type']\n\n @lru_cache()\n def joined_text() -> str:\n if is_int is not None:\n try:\n return matches[is_int]\n except IndexError:\n return matches[-1]\n if joiner == 'json':\n import json\n return json.dumps(matches, ensure_ascii=False, indent='\\t')\n if joiner == 'auto':\n q = '\\n\\r' if text_type in ('line', 'url') else ' '\n else:\n q = {'newline': '\\n\\r', 'space': ' '}.get(joiner, '')\n return q.join(matches)\n\n for program in programs:\n if program == '-':\n w = boss.window_id_map.get(target_window_id)\n if w is not None:\n w.paste(joined_text())\n elif program == '@':\n set_clipboard_string(joined_text())\n elif program == '*':\n set_primary_selection(joined_text())\n else:\n cwd = None\n w = boss.window_id_map.get(target_window_id)\n if w is not None:\n cwd = w.cwd_of_child\n program = None if program == 'default' else program\n for m, groupdict in zip(matches, groupdicts):\n if groupdict:\n m = []\n for k, v in groupdict.items():\n 
m.append('{}={}'.format(k, v or ''))\n boss.open_url(m, program, cwd=cwd)\n\n\nif __name__ == '__main__':\n # Run with kitty +kitten hints\n ans = main(sys.argv)\n if ans:\n print(ans)\nelif __name__ == '__doc__':\n cd = sys.cli_docs # type: ignore\n cd['usage'] = usage\n cd['options'] = OPTIONS\n cd['help_text'] = help_text\n# }}}\n", "path": "kittens/hints/main.py" } ]
diff --git a/kittens/hints/main.py b/kittens/hints/main.py index 27d8ecb36b5..3c391498359 100644 --- a/kittens/hints/main.py +++ b/kittens/hints/main.py @@ -559,7 +559,7 @@ def linenum_handle_result(args: List[str], data: Dict[str, Any], target_window_i for m, g in zip(data['match'], data['groupdicts']): if m: path, line = g['path'], g['line'] - path = path.split(':')[-1] + path = os.path.expanduser(path.split(':')[-1]) line = int(line) break else:
django-oscar__django-oscar-2066
UnicodeCSVWriter raises AttributeError: 'NoneType' object has no attribute 'writerows' when it is used in the second variant, i.e. via the `with` statement with a filename passed to the constructor. I've tried something like: <pre> from oscar.core.compat import UnicodeCSVWriter data = [[1, 2, 3], [4, 5, 6]] with UnicodeCSVWriter('test.csv') as writer: writer.writerows(data) </pre> and got an AttributeError, while the `test.csv` file was created but remained empty. However, it works perfectly in the first variant: `writer = UnicodeCSVWriter(open_file=fhandler)`. It seems like `return self` should be at the end of the `__enter__` method (here: https://github.com/django-oscar/django-oscar/blob/master/src/oscar/core/compat.py#L154 )
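The behaviour described above follows from how the `with` statement works: `with expr as name` binds whatever `__enter__` returns, not the context manager object itself, so an `__enter__` that falls off the end hands `None` to the `as` target. Below is a minimal, self-contained sketch of that mechanism — not oscar's actual class; the `SketchCSVWriter` name, the `fixed` flag and the filenames are purely illustrative:

```python
import csv

class SketchCSVWriter:
    """Toy writer reproducing the report: __enter__ may forget `return self`."""

    def __init__(self, filename, fixed=False):
        self.filename = filename
        self.fixed = fixed

    def __enter__(self):
        self.f = open(self.filename, 'w', newline='')
        self.writer = csv.writer(self.f)
        if self.fixed:
            return self      # `with ... as w` binds this return value
        # falling off the end returns None, so `w` is bound to None

    def __exit__(self, exc_type, exc_value, traceback):
        self.f.close()       # runs either way: the file exists but may stay empty

    def writerows(self, rows):
        self.writer.writerows(rows)


data = [[1, 2, 3], [4, 5, 6]]

with SketchCSVWriter('ok.csv', fixed=True) as w:
    w.writerows(data)        # works: `w` is the SketchCSVWriter instance

try:
    with SketchCSVWriter('still_empty.csv') as w:
        w.writerows(data)    # AttributeError: 'NoneType' object has no attribute 'writerows'
except AttributeError as exc:
    print(exc)
```

The patch in the diff further down is exactly the `return self` branch of this sketch, added at the end of `UnicodeCSVWriter.__enter__`.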
[ { "content": "import csv\nimport sys\n\nfrom django.conf import settings\nfrom django.contrib.auth.models import User\nfrom django.core.exceptions import ImproperlyConfigured\nfrom django.utils import six\n\nfrom oscar.core.loading import get_model\n\n# A setting that can be used in foreign key declarations\nAUTH_USER_MODEL = getattr(settings, 'AUTH_USER_MODEL', 'auth.User')\ntry:\n AUTH_USER_APP_LABEL, AUTH_USER_MODEL_NAME = AUTH_USER_MODEL.rsplit('.', 1)\nexcept ValueError:\n raise ImproperlyConfigured(\"AUTH_USER_MODEL must be of the form\"\n \" 'app_label.model_name'\")\n\n\ndef get_user_model():\n \"\"\"\n Return the User model. Doesn't require the app cache to be fully\n initialised.\n\n This used to live in compat to support both Django 1.4's fixed User model\n and custom user models introduced thereafter.\n Support for Django 1.4 has since been dropped in Oscar, but our\n get_user_model remains because code relies on us annotating the _meta class\n with the additional fields, and other code might rely on it as well.\n \"\"\"\n\n try:\n model = get_model(AUTH_USER_APP_LABEL, AUTH_USER_MODEL_NAME)\n except LookupError:\n # Convert exception to an ImproperlyConfigured exception for\n # backwards compatibility with previous Oscar versions and the\n # original get_user_model method in Django.\n raise ImproperlyConfigured(\n \"AUTH_USER_MODEL refers to model '%s' that has not been installed\"\n % settings.AUTH_USER_MODEL)\n\n # Test if user model has any custom fields and add attributes to the _meta\n # class\n core_fields = set([f.name for f in User._meta.fields])\n model_fields = set([f.name for f in model._meta.fields])\n new_fields = model_fields.difference(core_fields)\n model._meta.has_additional_fields = len(new_fields) > 0\n model._meta.additional_fields = new_fields\n\n return model\n\n\ndef existing_user_fields(fields):\n \"\"\"\n Starting with Django 1.6, the User model can be overridden and it is no\n longer safe to assume the User model has certain fields. This helper\n function assists in writing portable forms Meta.fields definitions\n when those contain fields on the User model\n\n Usage:\n class UserForm(forms.Form):\n ...\n class Meta:\n # won't break if first_name is not defined on User model\n fields = existing_user_fields(['first_name', 'last_name'])\n \"\"\"\n user_fields = get_user_model()._meta.fields\n user_field_names = [field.name for field in user_fields]\n return [field for field in fields if field in user_field_names]\n\n\n# Python3 compatibility layer\n\n\"\"\"\nUnicode compatible wrapper for CSV reader and writer that abstracts away\ndifferences between Python 2 and 3. 
A package like unicodecsv would be\npreferable, but it's not Python 3 compatible yet.\n\nCode from http://python3porting.com/problems.html\nChanges:\n- Classes renamed to include CSV.\n- Unused 'codecs' import is dropped.\n- Added possibility to specify an open file to the writer to send as response\n of a view\n\"\"\"\n\n\nPY3 = sys.version > '3'\n\n\nclass UnicodeCSVReader:\n def __init__(self, filename, dialect=csv.excel,\n encoding=\"utf-8\", **kw):\n self.filename = filename\n self.dialect = dialect\n self.encoding = encoding\n self.kw = kw\n\n def __enter__(self):\n if PY3:\n self.f = open(self.filename, 'rt',\n encoding=self.encoding, newline='')\n else:\n self.f = open(self.filename, 'rbU')\n self.reader = csv.reader(self.f, dialect=self.dialect,\n **self.kw)\n return self\n\n def __exit__(self, type, value, traceback):\n self.f.close()\n\n def next(self):\n row = next(self.reader)\n if PY3:\n return row\n return [s.decode(\"utf-8\") for s in row]\n\n __next__ = next\n\n def __iter__(self):\n return self\n\n\nclass UnicodeCSVWriter:\n \"\"\"\n Python 2 and 3 compatible CSV writer. Supports two modes:\n * Writing to an open file or file-like object:\n writer = UnicodeCSVWriter(open_file=your_file)\n ...\n your_file.close()\n * Writing to a new file:\n with UnicodeCSVWriter(filename=filename) as writer:\n ...\n \"\"\"\n def __init__(self, filename=None, open_file=None, dialect=csv.excel,\n encoding=\"utf-8\", **kw):\n if filename is open_file is None:\n raise ImproperlyConfigured(\n \"You need to specify either a filename or an open file\")\n self.filename = filename\n self.f = open_file\n self.dialect = dialect\n self.encoding = encoding\n self.kw = kw\n self.writer = None\n\n def __enter__(self):\n assert self.filename is not None\n if PY3:\n self.f = open(self.filename, 'wt',\n encoding=self.encoding, newline='')\n else:\n self.f = open(self.filename, 'wb')\n\n def __exit__(self, type, value, traceback):\n assert self.filename is not None\n if self.filename is not None:\n self.f.close()\n\n def writerow(self, row):\n if self.writer is None:\n self.writer = csv.writer(self.f, dialect=self.dialect, **self.kw)\n if not PY3:\n row = [six.text_type(s).encode(self.encoding) for s in row]\n self.writer.writerow(list(row))\n\n def writerows(self, rows):\n for row in rows:\n self.writerow(row)\n", "path": "src/oscar/core/compat.py" } ]
[ { "content": "import csv\nimport sys\n\nfrom django.conf import settings\nfrom django.contrib.auth.models import User\nfrom django.core.exceptions import ImproperlyConfigured\nfrom django.utils import six\n\nfrom oscar.core.loading import get_model\n\n# A setting that can be used in foreign key declarations\nAUTH_USER_MODEL = getattr(settings, 'AUTH_USER_MODEL', 'auth.User')\ntry:\n AUTH_USER_APP_LABEL, AUTH_USER_MODEL_NAME = AUTH_USER_MODEL.rsplit('.', 1)\nexcept ValueError:\n raise ImproperlyConfigured(\"AUTH_USER_MODEL must be of the form\"\n \" 'app_label.model_name'\")\n\n\ndef get_user_model():\n \"\"\"\n Return the User model. Doesn't require the app cache to be fully\n initialised.\n\n This used to live in compat to support both Django 1.4's fixed User model\n and custom user models introduced thereafter.\n Support for Django 1.4 has since been dropped in Oscar, but our\n get_user_model remains because code relies on us annotating the _meta class\n with the additional fields, and other code might rely on it as well.\n \"\"\"\n\n try:\n model = get_model(AUTH_USER_APP_LABEL, AUTH_USER_MODEL_NAME)\n except LookupError:\n # Convert exception to an ImproperlyConfigured exception for\n # backwards compatibility with previous Oscar versions and the\n # original get_user_model method in Django.\n raise ImproperlyConfigured(\n \"AUTH_USER_MODEL refers to model '%s' that has not been installed\"\n % settings.AUTH_USER_MODEL)\n\n # Test if user model has any custom fields and add attributes to the _meta\n # class\n core_fields = set([f.name for f in User._meta.fields])\n model_fields = set([f.name for f in model._meta.fields])\n new_fields = model_fields.difference(core_fields)\n model._meta.has_additional_fields = len(new_fields) > 0\n model._meta.additional_fields = new_fields\n\n return model\n\n\ndef existing_user_fields(fields):\n \"\"\"\n Starting with Django 1.6, the User model can be overridden and it is no\n longer safe to assume the User model has certain fields. This helper\n function assists in writing portable forms Meta.fields definitions\n when those contain fields on the User model\n\n Usage:\n class UserForm(forms.Form):\n ...\n class Meta:\n # won't break if first_name is not defined on User model\n fields = existing_user_fields(['first_name', 'last_name'])\n \"\"\"\n user_fields = get_user_model()._meta.fields\n user_field_names = [field.name for field in user_fields]\n return [field for field in fields if field in user_field_names]\n\n\n# Python3 compatibility layer\n\n\"\"\"\nUnicode compatible wrapper for CSV reader and writer that abstracts away\ndifferences between Python 2 and 3. 
A package like unicodecsv would be\npreferable, but it's not Python 3 compatible yet.\n\nCode from http://python3porting.com/problems.html\nChanges:\n- Classes renamed to include CSV.\n- Unused 'codecs' import is dropped.\n- Added possibility to specify an open file to the writer to send as response\n of a view\n\"\"\"\n\n\nPY3 = sys.version > '3'\n\n\nclass UnicodeCSVReader:\n def __init__(self, filename, dialect=csv.excel,\n encoding=\"utf-8\", **kw):\n self.filename = filename\n self.dialect = dialect\n self.encoding = encoding\n self.kw = kw\n\n def __enter__(self):\n if PY3:\n self.f = open(self.filename, 'rt',\n encoding=self.encoding, newline='')\n else:\n self.f = open(self.filename, 'rbU')\n self.reader = csv.reader(self.f, dialect=self.dialect,\n **self.kw)\n return self\n\n def __exit__(self, type, value, traceback):\n self.f.close()\n\n def next(self):\n row = next(self.reader)\n if PY3:\n return row\n return [s.decode(\"utf-8\") for s in row]\n\n __next__ = next\n\n def __iter__(self):\n return self\n\n\nclass UnicodeCSVWriter:\n \"\"\"\n Python 2 and 3 compatible CSV writer. Supports two modes:\n * Writing to an open file or file-like object:\n writer = UnicodeCSVWriter(open_file=your_file)\n ...\n your_file.close()\n * Writing to a new file:\n with UnicodeCSVWriter(filename=filename) as writer:\n ...\n \"\"\"\n def __init__(self, filename=None, open_file=None, dialect=csv.excel,\n encoding=\"utf-8\", **kw):\n if filename is open_file is None:\n raise ImproperlyConfigured(\n \"You need to specify either a filename or an open file\")\n self.filename = filename\n self.f = open_file\n self.dialect = dialect\n self.encoding = encoding\n self.kw = kw\n self.writer = None\n\n def __enter__(self):\n assert self.filename is not None\n if PY3:\n self.f = open(self.filename, 'wt',\n encoding=self.encoding, newline='')\n else:\n self.f = open(self.filename, 'wb')\n return self\n\n def __exit__(self, type, value, traceback):\n assert self.filename is not None\n if self.filename is not None:\n self.f.close()\n\n def writerow(self, row):\n if self.writer is None:\n self.writer = csv.writer(self.f, dialect=self.dialect, **self.kw)\n if not PY3:\n row = [six.text_type(s).encode(self.encoding) for s in row]\n self.writer.writerow(list(row))\n\n def writerows(self, rows):\n for row in rows:\n self.writerow(row)\n", "path": "src/oscar/core/compat.py" } ]
diff --git a/src/oscar/core/compat.py b/src/oscar/core/compat.py index c1adc57a332..d6ec2c8408f 100644 --- a/src/oscar/core/compat.py +++ b/src/oscar/core/compat.py @@ -151,6 +151,7 @@ def __enter__(self): encoding=self.encoding, newline='') else: self.f = open(self.filename, 'wb') + return self def __exit__(self, type, value, traceback): assert self.filename is not None diff --git a/tests/unit/core/compat_tests.py b/tests/unit/core/compat_tests.py index ad5e7862ec3..4086030efd6 100644 --- a/tests/unit/core/compat_tests.py +++ b/tests/unit/core/compat_tests.py @@ -1,5 +1,6 @@ # -*- coding: utf-8 -*- import datetime +from tempfile import NamedTemporaryFile from django.utils import six from django.utils.six.moves import cStringIO @@ -7,6 +8,19 @@ from oscar.core.compat import UnicodeCSVWriter, existing_user_fields + +class unicodeobj(object): + + def __init__(self, s): + self.s = s + + def __str__(self): + return self.s + + def __unicode__(self): + return self.s + + class TestExistingUserFields(TestCase): def test_order(self): @@ -19,15 +33,17 @@ class TestUnicodeCSVWriter(TestCase): def test_can_write_different_values(self): writer = UnicodeCSVWriter(open_file=cStringIO()) s = u'ünįcodē' - class unicodeobj(object): - def __str__(self): - return s - def __unicode__(self): - return s - rows = [[s, unicodeobj(), 123, datetime.date.today()], ] + rows = [[s, unicodeobj(s), 123, datetime.date.today()], ] writer.writerows(rows) self.assertRaises(TypeError, writer.writerows, [object()]) + def test_context_manager(self): + tmp_file = NamedTemporaryFile() + with UnicodeCSVWriter(filename=tmp_file.name) as writer: + s = u'ünįcodē' + rows = [[s, unicodeobj(s), 123, datetime.date.today()], ] + writer.writerows(rows) + class TestPython3Compatibility(TestCase):
encode__django-rest-framework-2456
Checking for request.version raises AttributeError with BrowsableAPIRenderer I've encountered the following exception when using the Browsable API in conjunction with the new namespace versioning and HyperlinkedModelSerializers: ``` python AttributeError: 'WSGIRequest' object has no attribute 'version' ``` I've implemented `get_serializer_class()` as specified in the documentation: ``` python def get_serializer_class(self): if self.request.version == 'v1': return AccountSerializerVersion1 return AccountSerializer ``` I'm also using drf-nested-routers, and this is occurring on endpoints like /api/v1/car/1/tires/ where the Tire model has a ForeignKey to Car. This only happens on the Browsable API; I can perform a GET request to the same endpoint using the JSONRenderer without exceptions.
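Judging from the request wrapper source below, the likely mechanism is: the browsable API clones the request (via `clone_request`, used by the `override_method` context manager) to render forms for other HTTP methods, and `Request.__getattr__` proxies any attribute the wrapper itself lacks to the underlying `WSGIRequest`; if the clone does not carry `version` over, the lookup falls through to the plain Django request and fails there. Here is a minimal, self-contained sketch of that proxy/clone behaviour — `FakeWSGIRequest`, `FakeDRFRequest` and the `copy_version` flag are toy stand-ins, not the real DRF classes:

```python
class FakeWSGIRequest:
    """Toy stand-in for Django's WSGIRequest: it has no `version` attribute."""
    method = 'GET'


class FakeDRFRequest:
    """Toy stand-in for rest_framework.request.Request."""

    def __init__(self, request):
        self._request = request

    def __getattr__(self, attr):
        # Mirrors the real wrapper: attributes the wrapper itself lacks are
        # proxied to the underlying HttpRequest, so a missing `version`
        # surfaces as an AttributeError on the WSGIRequest.
        return getattr(self._request, attr)


def clone_request(request, copy_version):
    """Toy clone: only carries `version` over when asked to."""
    ret = FakeDRFRequest(request._request)
    if copy_version and hasattr(request, 'version'):
        ret.version = request.version
    return ret


original = FakeDRFRequest(FakeWSGIRequest())
original.version = 'v1'          # normally set by the versioning scheme during dispatch

fixed = clone_request(original, copy_version=True)
print(fixed.version)             # -> 'v1'

broken = clone_request(original, copy_version=False)
try:
    broken.version
except AttributeError as exc:
    print(exc)                   # -> 'FakeWSGIRequest' object has no attribute 'version'
```

The corresponding change in the patched `clone_request` below is to copy `version` onto the cloned request whenever it is present on the original.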
[ { "content": "\"\"\"\nThe Request class is used as a wrapper around the standard request object.\n\nThe wrapped request then offers a richer API, in particular :\n\n - content automatically parsed according to `Content-Type` header,\n and available as `request.data`\n - full support of PUT method, including support for file uploads\n - form overloading of HTTP method, content type and content\n\"\"\"\nfrom __future__ import unicode_literals\nfrom django.conf import settings\nfrom django.http import QueryDict\nfrom django.http.multipartparser import parse_header\nfrom django.utils.datastructures import MultiValueDict\nfrom django.utils.datastructures import MergeDict as DjangoMergeDict\nfrom django.utils.six import BytesIO\nfrom rest_framework import HTTP_HEADER_ENCODING\nfrom rest_framework import exceptions\nfrom rest_framework.settings import api_settings\nimport warnings\n\n\ndef is_form_media_type(media_type):\n \"\"\"\n Return True if the media type is a valid form media type.\n \"\"\"\n base_media_type, params = parse_header(media_type.encode(HTTP_HEADER_ENCODING))\n return (base_media_type == 'application/x-www-form-urlencoded' or\n base_media_type == 'multipart/form-data')\n\n\nclass override_method(object):\n \"\"\"\n A context manager that temporarily overrides the method on a request,\n additionally setting the `view.request` attribute.\n\n Usage:\n\n with override_method(view, request, 'POST') as request:\n ... # Do stuff with `view` and `request`\n \"\"\"\n def __init__(self, view, request, method):\n self.view = view\n self.request = request\n self.method = method\n self.action = getattr(view, 'action', None)\n\n def __enter__(self):\n self.view.request = clone_request(self.request, self.method)\n if self.action is not None:\n # For viewsets we also set the `.action` attribute.\n action_map = getattr(self.view, 'action_map', {})\n self.view.action = action_map.get(self.method.lower())\n return self.view.request\n\n def __exit__(self, *args, **kwarg):\n self.view.request = self.request\n if self.action is not None:\n self.view.action = self.action\n\n\nclass MergeDict(DjangoMergeDict, dict):\n \"\"\"\n Using this as a workaround until the parsers API is properly\n addressed in 3.1.\n \"\"\"\n def __init__(self, *dicts):\n self.dicts = dicts\n\n\nclass Empty(object):\n \"\"\"\n Placeholder for unset attributes.\n Cannot use `None`, as that may be a valid value.\n \"\"\"\n pass\n\n\ndef _hasattr(obj, name):\n return not getattr(obj, name) is Empty\n\n\ndef clone_request(request, method):\n \"\"\"\n Internal helper method to clone a request, replacing with a different\n HTTP method. 
Used for checking permissions against other methods.\n \"\"\"\n ret = Request(request=request._request,\n parsers=request.parsers,\n authenticators=request.authenticators,\n negotiator=request.negotiator,\n parser_context=request.parser_context)\n ret._data = request._data\n ret._files = request._files\n ret._full_data = request._full_data\n ret._content_type = request._content_type\n ret._stream = request._stream\n ret._method = method\n if hasattr(request, '_user'):\n ret._user = request._user\n if hasattr(request, '_auth'):\n ret._auth = request._auth\n if hasattr(request, '_authenticator'):\n ret._authenticator = request._authenticator\n if hasattr(request, 'accepted_renderer'):\n ret.accepted_renderer = request.accepted_renderer\n if hasattr(request, 'accepted_media_type'):\n ret.accepted_media_type = request.accepted_media_type\n return ret\n\n\nclass ForcedAuthentication(object):\n \"\"\"\n This authentication class is used if the test client or request factory\n forcibly authenticated the request.\n \"\"\"\n\n def __init__(self, force_user, force_token):\n self.force_user = force_user\n self.force_token = force_token\n\n def authenticate(self, request):\n return (self.force_user, self.force_token)\n\n\nclass Request(object):\n \"\"\"\n Wrapper allowing to enhance a standard `HttpRequest` instance.\n\n Kwargs:\n - request(HttpRequest). The original request instance.\n - parsers_classes(list/tuple). The parsers to use for parsing the\n request content.\n - authentication_classes(list/tuple). The authentications used to try\n authenticating the request's user.\n \"\"\"\n\n _METHOD_PARAM = api_settings.FORM_METHOD_OVERRIDE\n _CONTENT_PARAM = api_settings.FORM_CONTENT_OVERRIDE\n _CONTENTTYPE_PARAM = api_settings.FORM_CONTENTTYPE_OVERRIDE\n\n def __init__(self, request, parsers=None, authenticators=None,\n negotiator=None, parser_context=None):\n self._request = request\n self.parsers = parsers or ()\n self.authenticators = authenticators or ()\n self.negotiator = negotiator or self._default_negotiator()\n self.parser_context = parser_context\n self._data = Empty\n self._files = Empty\n self._full_data = Empty\n self._method = Empty\n self._content_type = Empty\n self._stream = Empty\n\n if self.parser_context is None:\n self.parser_context = {}\n self.parser_context['request'] = self\n self.parser_context['encoding'] = request.encoding or settings.DEFAULT_CHARSET\n\n force_user = getattr(request, '_force_auth_user', None)\n force_token = getattr(request, '_force_auth_token', None)\n if (force_user is not None or force_token is not None):\n forced_auth = ForcedAuthentication(force_user, force_token)\n self.authenticators = (forced_auth,)\n\n def _default_negotiator(self):\n return api_settings.DEFAULT_CONTENT_NEGOTIATION_CLASS()\n\n @property\n def method(self):\n \"\"\"\n Returns the HTTP method.\n\n This allows the `method` to be overridden by using a hidden `form`\n field on a form POST request.\n \"\"\"\n if not _hasattr(self, '_method'):\n self._load_method_and_content_type()\n return self._method\n\n @property\n def content_type(self):\n \"\"\"\n Returns the content type header.\n\n This should be used instead of `request.META.get('HTTP_CONTENT_TYPE')`,\n as it allows the content type to be overridden by using a hidden form\n field on a form POST request.\n \"\"\"\n if not _hasattr(self, '_content_type'):\n self._load_method_and_content_type()\n return self._content_type\n\n @property\n def stream(self):\n \"\"\"\n Returns an object that may be used to stream the request content.\n 
\"\"\"\n if not _hasattr(self, '_stream'):\n self._load_stream()\n return self._stream\n\n @property\n def query_params(self):\n \"\"\"\n More semantically correct name for request.GET.\n \"\"\"\n return self._request.GET\n\n @property\n def QUERY_PARAMS(self):\n \"\"\"\n Synonym for `.query_params`, for backwards compatibility.\n \"\"\"\n warnings.warn(\n \"`request.QUERY_PARAMS` is pending deprecation. Use `request.query_params` instead.\",\n PendingDeprecationWarning,\n stacklevel=1\n )\n return self._request.GET\n\n @property\n def data(self):\n if not _hasattr(self, '_full_data'):\n self._load_data_and_files()\n return self._full_data\n\n @property\n def DATA(self):\n \"\"\"\n Parses the request body and returns the data.\n\n Similar to usual behaviour of `request.POST`, except that it handles\n arbitrary parsers, and also works on methods other than POST (eg PUT).\n \"\"\"\n warnings.warn(\n \"`request.DATA` is pending deprecation. Use `request.data` instead.\",\n PendingDeprecationWarning,\n stacklevel=1\n )\n if not _hasattr(self, '_data'):\n self._load_data_and_files()\n return self._data\n\n @property\n def FILES(self):\n \"\"\"\n Parses the request body and returns any files uploaded in the request.\n\n Similar to usual behaviour of `request.FILES`, except that it handles\n arbitrary parsers, and also works on methods other than POST (eg PUT).\n \"\"\"\n warnings.warn(\n \"`request.FILES` is pending deprecation. Use `request.data` instead.\",\n PendingDeprecationWarning,\n stacklevel=1\n )\n if not _hasattr(self, '_files'):\n self._load_data_and_files()\n return self._files\n\n @property\n def user(self):\n \"\"\"\n Returns the user associated with the current request, as authenticated\n by the authentication classes provided to the request.\n \"\"\"\n if not hasattr(self, '_user'):\n self._authenticate()\n return self._user\n\n @user.setter\n def user(self, value):\n \"\"\"\n Sets the user on the current request. 
This is necessary to maintain\n compatibility with django.contrib.auth where the user property is\n set in the login and logout functions.\n\n Note that we also set the user on Django's underlying `HttpRequest`\n instance, ensuring that it is available to any middleware in the stack.\n \"\"\"\n self._user = value\n self._request.user = value\n\n @property\n def auth(self):\n \"\"\"\n Returns any non-user authentication information associated with the\n request, such as an authentication token.\n \"\"\"\n if not hasattr(self, '_auth'):\n self._authenticate()\n return self._auth\n\n @auth.setter\n def auth(self, value):\n \"\"\"\n Sets any non-user authentication information associated with the\n request, such as an authentication token.\n \"\"\"\n self._auth = value\n self._request.auth = value\n\n @property\n def successful_authenticator(self):\n \"\"\"\n Return the instance of the authentication instance class that was used\n to authenticate the request, or `None`.\n \"\"\"\n if not hasattr(self, '_authenticator'):\n self._authenticate()\n return self._authenticator\n\n def _load_data_and_files(self):\n \"\"\"\n Parses the request content into `self.data`.\n \"\"\"\n if not _hasattr(self, '_content_type'):\n self._load_method_and_content_type()\n\n if not _hasattr(self, '_data'):\n self._data, self._files = self._parse()\n if self._files:\n self._full_data = MergeDict(self._data, self._files)\n else:\n self._full_data = self._data\n\n def _load_method_and_content_type(self):\n \"\"\"\n Sets the method and content_type, and then check if they've\n been overridden.\n \"\"\"\n self._content_type = self.META.get('HTTP_CONTENT_TYPE',\n self.META.get('CONTENT_TYPE', ''))\n\n self._perform_form_overloading()\n\n if not _hasattr(self, '_method'):\n self._method = self._request.method\n\n # Allow X-HTTP-METHOD-OVERRIDE header\n if 'HTTP_X_HTTP_METHOD_OVERRIDE' in self.META:\n self._method = self.META['HTTP_X_HTTP_METHOD_OVERRIDE'].upper()\n\n def _load_stream(self):\n \"\"\"\n Return the content body of the request, as a stream.\n \"\"\"\n try:\n content_length = int(\n self.META.get(\n 'CONTENT_LENGTH', self.META.get('HTTP_CONTENT_LENGTH')\n )\n )\n except (ValueError, TypeError):\n content_length = 0\n\n if content_length == 0:\n self._stream = None\n elif hasattr(self._request, 'read'):\n self._stream = self._request\n else:\n self._stream = BytesIO(self.raw_post_data)\n\n def _perform_form_overloading(self):\n \"\"\"\n If this is a form POST request, then we need to check if the method and\n content/content_type have been overridden by setting them in hidden\n form fields or not.\n \"\"\"\n\n USE_FORM_OVERLOADING = (\n self._METHOD_PARAM or\n (self._CONTENT_PARAM and self._CONTENTTYPE_PARAM)\n )\n\n # We only need to use form overloading on form POST requests.\n if (\n not USE_FORM_OVERLOADING\n or self._request.method != 'POST'\n or not is_form_media_type(self._content_type)\n ):\n return\n\n # At this point we're committed to parsing the request as form data.\n self._data = self._request.POST\n self._files = self._request.FILES\n self._full_data = MergeDict(self._data, self._files)\n\n # Method overloading - change the method and remove the param from the content.\n if (\n self._METHOD_PARAM and\n self._METHOD_PARAM in self._data\n ):\n self._method = self._data[self._METHOD_PARAM].upper()\n\n # Content overloading - modify the content type, and force re-parse.\n if (\n self._CONTENT_PARAM and\n self._CONTENTTYPE_PARAM and\n self._CONTENT_PARAM in self._data and\n self._CONTENTTYPE_PARAM in 
self._data\n ):\n self._content_type = self._data[self._CONTENTTYPE_PARAM]\n self._stream = BytesIO(self._data[self._CONTENT_PARAM].encode(self.parser_context['encoding']))\n self._data, self._files, self._full_data = (Empty, Empty, Empty)\n\n def _parse(self):\n \"\"\"\n Parse the request content, returning a two-tuple of (data, files)\n\n May raise an `UnsupportedMediaType`, or `ParseError` exception.\n \"\"\"\n stream = self.stream\n media_type = self.content_type\n\n if stream is None or media_type is None:\n empty_data = QueryDict('', encoding=self._request._encoding)\n empty_files = MultiValueDict()\n return (empty_data, empty_files)\n\n parser = self.negotiator.select_parser(self, self.parsers)\n\n if not parser:\n raise exceptions.UnsupportedMediaType(media_type)\n\n try:\n parsed = parser.parse(stream, media_type, self.parser_context)\n except:\n # If we get an exception during parsing, fill in empty data and\n # re-raise. Ensures we don't simply repeat the error when\n # attempting to render the browsable renderer response, or when\n # logging the request or similar.\n self._data = QueryDict('', encoding=self._request._encoding)\n self._files = MultiValueDict()\n self._full_data = self._data\n raise\n\n # Parser classes may return the raw data, or a\n # DataAndFiles object. Unpack the result as required.\n try:\n return (parsed.data, parsed.files)\n except AttributeError:\n empty_files = MultiValueDict()\n return (parsed, empty_files)\n\n def _authenticate(self):\n \"\"\"\n Attempt to authenticate the request using each authentication instance\n in turn.\n Returns a three-tuple of (authenticator, user, authtoken).\n \"\"\"\n for authenticator in self.authenticators:\n try:\n user_auth_tuple = authenticator.authenticate(self)\n except exceptions.APIException:\n self._not_authenticated()\n raise\n\n if user_auth_tuple is not None:\n self._authenticator = authenticator\n self.user, self.auth = user_auth_tuple\n return\n\n self._not_authenticated()\n\n def _not_authenticated(self):\n \"\"\"\n Return a three-tuple of (authenticator, user, authtoken), representing\n an unauthenticated request.\n\n By default this will be (None, AnonymousUser, None).\n \"\"\"\n self._authenticator = None\n\n if api_settings.UNAUTHENTICATED_USER:\n self.user = api_settings.UNAUTHENTICATED_USER()\n else:\n self.user = None\n\n if api_settings.UNAUTHENTICATED_TOKEN:\n self.auth = api_settings.UNAUTHENTICATED_TOKEN()\n else:\n self.auth = None\n\n def __getattr__(self, attr):\n \"\"\"\n Proxy other attributes to the underlying HttpRequest object.\n \"\"\"\n return getattr(self._request, attr)\n", "path": "rest_framework/request.py" } ]
[ { "content": "\"\"\"\nThe Request class is used as a wrapper around the standard request object.\n\nThe wrapped request then offers a richer API, in particular :\n\n - content automatically parsed according to `Content-Type` header,\n and available as `request.data`\n - full support of PUT method, including support for file uploads\n - form overloading of HTTP method, content type and content\n\"\"\"\nfrom __future__ import unicode_literals\nfrom django.conf import settings\nfrom django.http import QueryDict\nfrom django.http.multipartparser import parse_header\nfrom django.utils.datastructures import MultiValueDict\nfrom django.utils.datastructures import MergeDict as DjangoMergeDict\nfrom django.utils.six import BytesIO\nfrom rest_framework import HTTP_HEADER_ENCODING\nfrom rest_framework import exceptions\nfrom rest_framework.settings import api_settings\nimport warnings\n\n\ndef is_form_media_type(media_type):\n \"\"\"\n Return True if the media type is a valid form media type.\n \"\"\"\n base_media_type, params = parse_header(media_type.encode(HTTP_HEADER_ENCODING))\n return (base_media_type == 'application/x-www-form-urlencoded' or\n base_media_type == 'multipart/form-data')\n\n\nclass override_method(object):\n \"\"\"\n A context manager that temporarily overrides the method on a request,\n additionally setting the `view.request` attribute.\n\n Usage:\n\n with override_method(view, request, 'POST') as request:\n ... # Do stuff with `view` and `request`\n \"\"\"\n def __init__(self, view, request, method):\n self.view = view\n self.request = request\n self.method = method\n self.action = getattr(view, 'action', None)\n\n def __enter__(self):\n self.view.request = clone_request(self.request, self.method)\n if self.action is not None:\n # For viewsets we also set the `.action` attribute.\n action_map = getattr(self.view, 'action_map', {})\n self.view.action = action_map.get(self.method.lower())\n return self.view.request\n\n def __exit__(self, *args, **kwarg):\n self.view.request = self.request\n if self.action is not None:\n self.view.action = self.action\n\n\nclass MergeDict(DjangoMergeDict, dict):\n \"\"\"\n Using this as a workaround until the parsers API is properly\n addressed in 3.1.\n \"\"\"\n def __init__(self, *dicts):\n self.dicts = dicts\n\n\nclass Empty(object):\n \"\"\"\n Placeholder for unset attributes.\n Cannot use `None`, as that may be a valid value.\n \"\"\"\n pass\n\n\ndef _hasattr(obj, name):\n return not getattr(obj, name) is Empty\n\n\ndef clone_request(request, method):\n \"\"\"\n Internal helper method to clone a request, replacing with a different\n HTTP method. 
Used for checking permissions against other methods.\n \"\"\"\n ret = Request(request=request._request,\n parsers=request.parsers,\n authenticators=request.authenticators,\n negotiator=request.negotiator,\n parser_context=request.parser_context)\n ret._data = request._data\n ret._files = request._files\n ret._full_data = request._full_data\n ret._content_type = request._content_type\n ret._stream = request._stream\n ret._method = method\n if hasattr(request, '_user'):\n ret._user = request._user\n if hasattr(request, '_auth'):\n ret._auth = request._auth\n if hasattr(request, '_authenticator'):\n ret._authenticator = request._authenticator\n if hasattr(request, 'accepted_renderer'):\n ret.accepted_renderer = request.accepted_renderer\n if hasattr(request, 'accepted_media_type'):\n ret.accepted_media_type = request.accepted_media_type\n if hasattr(request, 'version'):\n ret.version = request.version\n return ret\n\n\nclass ForcedAuthentication(object):\n \"\"\"\n This authentication class is used if the test client or request factory\n forcibly authenticated the request.\n \"\"\"\n\n def __init__(self, force_user, force_token):\n self.force_user = force_user\n self.force_token = force_token\n\n def authenticate(self, request):\n return (self.force_user, self.force_token)\n\n\nclass Request(object):\n \"\"\"\n Wrapper allowing to enhance a standard `HttpRequest` instance.\n\n Kwargs:\n - request(HttpRequest). The original request instance.\n - parsers_classes(list/tuple). The parsers to use for parsing the\n request content.\n - authentication_classes(list/tuple). The authentications used to try\n authenticating the request's user.\n \"\"\"\n\n _METHOD_PARAM = api_settings.FORM_METHOD_OVERRIDE\n _CONTENT_PARAM = api_settings.FORM_CONTENT_OVERRIDE\n _CONTENTTYPE_PARAM = api_settings.FORM_CONTENTTYPE_OVERRIDE\n\n def __init__(self, request, parsers=None, authenticators=None,\n negotiator=None, parser_context=None):\n self._request = request\n self.parsers = parsers or ()\n self.authenticators = authenticators or ()\n self.negotiator = negotiator or self._default_negotiator()\n self.parser_context = parser_context\n self._data = Empty\n self._files = Empty\n self._full_data = Empty\n self._method = Empty\n self._content_type = Empty\n self._stream = Empty\n\n if self.parser_context is None:\n self.parser_context = {}\n self.parser_context['request'] = self\n self.parser_context['encoding'] = request.encoding or settings.DEFAULT_CHARSET\n\n force_user = getattr(request, '_force_auth_user', None)\n force_token = getattr(request, '_force_auth_token', None)\n if (force_user is not None or force_token is not None):\n forced_auth = ForcedAuthentication(force_user, force_token)\n self.authenticators = (forced_auth,)\n\n def _default_negotiator(self):\n return api_settings.DEFAULT_CONTENT_NEGOTIATION_CLASS()\n\n @property\n def method(self):\n \"\"\"\n Returns the HTTP method.\n\n This allows the `method` to be overridden by using a hidden `form`\n field on a form POST request.\n \"\"\"\n if not _hasattr(self, '_method'):\n self._load_method_and_content_type()\n return self._method\n\n @property\n def content_type(self):\n \"\"\"\n Returns the content type header.\n\n This should be used instead of `request.META.get('HTTP_CONTENT_TYPE')`,\n as it allows the content type to be overridden by using a hidden form\n field on a form POST request.\n \"\"\"\n if not _hasattr(self, '_content_type'):\n self._load_method_and_content_type()\n return self._content_type\n\n @property\n def stream(self):\n \"\"\"\n 
Returns an object that may be used to stream the request content.\n \"\"\"\n if not _hasattr(self, '_stream'):\n self._load_stream()\n return self._stream\n\n @property\n def query_params(self):\n \"\"\"\n More semantically correct name for request.GET.\n \"\"\"\n return self._request.GET\n\n @property\n def QUERY_PARAMS(self):\n \"\"\"\n Synonym for `.query_params`, for backwards compatibility.\n \"\"\"\n warnings.warn(\n \"`request.QUERY_PARAMS` is pending deprecation. Use `request.query_params` instead.\",\n PendingDeprecationWarning,\n stacklevel=1\n )\n return self._request.GET\n\n @property\n def data(self):\n if not _hasattr(self, '_full_data'):\n self._load_data_and_files()\n return self._full_data\n\n @property\n def DATA(self):\n \"\"\"\n Parses the request body and returns the data.\n\n Similar to usual behaviour of `request.POST`, except that it handles\n arbitrary parsers, and also works on methods other than POST (eg PUT).\n \"\"\"\n warnings.warn(\n \"`request.DATA` is pending deprecation. Use `request.data` instead.\",\n PendingDeprecationWarning,\n stacklevel=1\n )\n if not _hasattr(self, '_data'):\n self._load_data_and_files()\n return self._data\n\n @property\n def FILES(self):\n \"\"\"\n Parses the request body and returns any files uploaded in the request.\n\n Similar to usual behaviour of `request.FILES`, except that it handles\n arbitrary parsers, and also works on methods other than POST (eg PUT).\n \"\"\"\n warnings.warn(\n \"`request.FILES` is pending deprecation. Use `request.data` instead.\",\n PendingDeprecationWarning,\n stacklevel=1\n )\n if not _hasattr(self, '_files'):\n self._load_data_and_files()\n return self._files\n\n @property\n def user(self):\n \"\"\"\n Returns the user associated with the current request, as authenticated\n by the authentication classes provided to the request.\n \"\"\"\n if not hasattr(self, '_user'):\n self._authenticate()\n return self._user\n\n @user.setter\n def user(self, value):\n \"\"\"\n Sets the user on the current request. 
This is necessary to maintain\n compatibility with django.contrib.auth where the user property is\n set in the login and logout functions.\n\n Note that we also set the user on Django's underlying `HttpRequest`\n instance, ensuring that it is available to any middleware in the stack.\n \"\"\"\n self._user = value\n self._request.user = value\n\n @property\n def auth(self):\n \"\"\"\n Returns any non-user authentication information associated with the\n request, such as an authentication token.\n \"\"\"\n if not hasattr(self, '_auth'):\n self._authenticate()\n return self._auth\n\n @auth.setter\n def auth(self, value):\n \"\"\"\n Sets any non-user authentication information associated with the\n request, such as an authentication token.\n \"\"\"\n self._auth = value\n self._request.auth = value\n\n @property\n def successful_authenticator(self):\n \"\"\"\n Return the instance of the authentication instance class that was used\n to authenticate the request, or `None`.\n \"\"\"\n if not hasattr(self, '_authenticator'):\n self._authenticate()\n return self._authenticator\n\n def _load_data_and_files(self):\n \"\"\"\n Parses the request content into `self.data`.\n \"\"\"\n if not _hasattr(self, '_content_type'):\n self._load_method_and_content_type()\n\n if not _hasattr(self, '_data'):\n self._data, self._files = self._parse()\n if self._files:\n self._full_data = MergeDict(self._data, self._files)\n else:\n self._full_data = self._data\n\n def _load_method_and_content_type(self):\n \"\"\"\n Sets the method and content_type, and then check if they've\n been overridden.\n \"\"\"\n self._content_type = self.META.get('HTTP_CONTENT_TYPE',\n self.META.get('CONTENT_TYPE', ''))\n\n self._perform_form_overloading()\n\n if not _hasattr(self, '_method'):\n self._method = self._request.method\n\n # Allow X-HTTP-METHOD-OVERRIDE header\n if 'HTTP_X_HTTP_METHOD_OVERRIDE' in self.META:\n self._method = self.META['HTTP_X_HTTP_METHOD_OVERRIDE'].upper()\n\n def _load_stream(self):\n \"\"\"\n Return the content body of the request, as a stream.\n \"\"\"\n try:\n content_length = int(\n self.META.get(\n 'CONTENT_LENGTH', self.META.get('HTTP_CONTENT_LENGTH')\n )\n )\n except (ValueError, TypeError):\n content_length = 0\n\n if content_length == 0:\n self._stream = None\n elif hasattr(self._request, 'read'):\n self._stream = self._request\n else:\n self._stream = BytesIO(self.raw_post_data)\n\n def _perform_form_overloading(self):\n \"\"\"\n If this is a form POST request, then we need to check if the method and\n content/content_type have been overridden by setting them in hidden\n form fields or not.\n \"\"\"\n\n USE_FORM_OVERLOADING = (\n self._METHOD_PARAM or\n (self._CONTENT_PARAM and self._CONTENTTYPE_PARAM)\n )\n\n # We only need to use form overloading on form POST requests.\n if (\n not USE_FORM_OVERLOADING\n or self._request.method != 'POST'\n or not is_form_media_type(self._content_type)\n ):\n return\n\n # At this point we're committed to parsing the request as form data.\n self._data = self._request.POST\n self._files = self._request.FILES\n self._full_data = MergeDict(self._data, self._files)\n\n # Method overloading - change the method and remove the param from the content.\n if (\n self._METHOD_PARAM and\n self._METHOD_PARAM in self._data\n ):\n self._method = self._data[self._METHOD_PARAM].upper()\n\n # Content overloading - modify the content type, and force re-parse.\n if (\n self._CONTENT_PARAM and\n self._CONTENTTYPE_PARAM and\n self._CONTENT_PARAM in self._data and\n self._CONTENTTYPE_PARAM in 
self._data\n ):\n self._content_type = self._data[self._CONTENTTYPE_PARAM]\n self._stream = BytesIO(self._data[self._CONTENT_PARAM].encode(self.parser_context['encoding']))\n self._data, self._files, self._full_data = (Empty, Empty, Empty)\n\n def _parse(self):\n \"\"\"\n Parse the request content, returning a two-tuple of (data, files)\n\n May raise an `UnsupportedMediaType`, or `ParseError` exception.\n \"\"\"\n stream = self.stream\n media_type = self.content_type\n\n if stream is None or media_type is None:\n empty_data = QueryDict('', encoding=self._request._encoding)\n empty_files = MultiValueDict()\n return (empty_data, empty_files)\n\n parser = self.negotiator.select_parser(self, self.parsers)\n\n if not parser:\n raise exceptions.UnsupportedMediaType(media_type)\n\n try:\n parsed = parser.parse(stream, media_type, self.parser_context)\n except:\n # If we get an exception during parsing, fill in empty data and\n # re-raise. Ensures we don't simply repeat the error when\n # attempting to render the browsable renderer response, or when\n # logging the request or similar.\n self._data = QueryDict('', encoding=self._request._encoding)\n self._files = MultiValueDict()\n self._full_data = self._data\n raise\n\n # Parser classes may return the raw data, or a\n # DataAndFiles object. Unpack the result as required.\n try:\n return (parsed.data, parsed.files)\n except AttributeError:\n empty_files = MultiValueDict()\n return (parsed, empty_files)\n\n def _authenticate(self):\n \"\"\"\n Attempt to authenticate the request using each authentication instance\n in turn.\n Returns a three-tuple of (authenticator, user, authtoken).\n \"\"\"\n for authenticator in self.authenticators:\n try:\n user_auth_tuple = authenticator.authenticate(self)\n except exceptions.APIException:\n self._not_authenticated()\n raise\n\n if user_auth_tuple is not None:\n self._authenticator = authenticator\n self.user, self.auth = user_auth_tuple\n return\n\n self._not_authenticated()\n\n def _not_authenticated(self):\n \"\"\"\n Return a three-tuple of (authenticator, user, authtoken), representing\n an unauthenticated request.\n\n By default this will be (None, AnonymousUser, None).\n \"\"\"\n self._authenticator = None\n\n if api_settings.UNAUTHENTICATED_USER:\n self.user = api_settings.UNAUTHENTICATED_USER()\n else:\n self.user = None\n\n if api_settings.UNAUTHENTICATED_TOKEN:\n self.auth = api_settings.UNAUTHENTICATED_TOKEN()\n else:\n self.auth = None\n\n def __getattr__(self, attr):\n \"\"\"\n Proxy other attributes to the underlying HttpRequest object.\n \"\"\"\n return getattr(self._request, attr)\n", "path": "rest_framework/request.py" } ]
diff --git a/rest_framework/request.py b/rest_framework/request.py index cfbbdeccdc..ce2fcb4768 100644 --- a/rest_framework/request.py +++ b/rest_framework/request.py @@ -107,6 +107,8 @@ def clone_request(request, method): ret.accepted_renderer = request.accepted_renderer if hasattr(request, 'accepted_media_type'): ret.accepted_media_type = request.accepted_media_type + if hasattr(request, 'version'): + ret.version = request.version return ret diff --git a/tests/browsable_api/auth_urls.py b/tests/browsable_api/auth_urls.py index bce7dcf919..97bc103604 100644 --- a/tests/browsable_api/auth_urls.py +++ b/tests/browsable_api/auth_urls.py @@ -3,6 +3,7 @@ from .views import MockView + urlpatterns = patterns( '', (r'^$', MockView.as_view()), diff --git a/tests/test_metadata.py b/tests/test_metadata.py index 972a896a46..bdc84edf12 100644 --- a/tests/test_metadata.py +++ b/tests/test_metadata.py @@ -1,6 +1,7 @@ from __future__ import unicode_literals from rest_framework import exceptions, serializers, status, views from rest_framework.request import Request +from rest_framework.renderers import BrowsableAPIRenderer from rest_framework.test import APIRequestFactory request = Request(APIRequestFactory().options('/')) @@ -168,3 +169,17 @@ def get_object(self): response = view(request=request) assert response.status_code == status.HTTP_200_OK assert list(response.data['actions'].keys()) == ['POST'] + + def test_bug_2455_clone_request(self): + class ExampleView(views.APIView): + renderer_classes = (BrowsableAPIRenderer,) + + def post(self, request): + pass + + def get_serializer(self): + assert hasattr(self.request, 'version') + return serializers.Serializer() + + view = ExampleView.as_view() + view(request=request)
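The diff above makes `clone_request` copy `request.version` onto the clone, alongside the renderer and media-type attributes it already copied. A minimal illustrative sketch (not part of the patch) of what that guarantees for callers, assuming a request whose versioning scheme has already set `version`; the surrounding function is hypothetical:

```python
# Illustrative only: `clone_request` is the module-level helper shown in
# rest_framework/request.py above; `probe_other_method` is a made-up caller.
from rest_framework.request import clone_request

def probe_other_method(request):
    cloned = clone_request(request, 'POST')
    # Before the patch the clone silently dropped `request.version`;
    # after it, the attribute is copied whenever the original request has it.
    assert getattr(cloned, 'version', None) == getattr(request, 'version', None)
    return cloned
```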
pyodide__pyodide-325
ValueError: invalid __array_struct__ when using js arrays of arrays and numpy

When using a matrix (an array of arrays of numbers) in JavaScript and trying to convert it to a numpy array, it fails with the error `ValueError: invalid __array_struct__`.

To reproduce:

JavaScript:
```
window.A = [[1,2,3],[4,5,6]];
```
Python:
```
import numpy
from js import A
m = numpy.array(A)
```
[ { "content": "\"\"\"\nA library of helper utilities for connecting Python to the browser environment.\n\"\"\"\n\nimport ast\nimport io\nfrom textwrap import dedent\n\n__version__ = '0.8.2'\n\n\ndef open_url(url):\n \"\"\"\n Fetches a given *url* and returns a io.StringIO to access its contents.\n \"\"\"\n from js import XMLHttpRequest\n\n req = XMLHttpRequest.new()\n req.open('GET', url, False)\n req.send(None)\n return io.StringIO(req.response)\n\n\ndef eval_code(code, ns):\n \"\"\"\n Runs a string of code, the last part of which may be an expression.\n \"\"\"\n # handle mis-indented input from multi-line strings\n code = dedent(code)\n\n mod = ast.parse(code)\n if len(mod.body) == 0:\n return None\n\n if isinstance(mod.body[-1], ast.Expr):\n expr = ast.Expression(mod.body[-1].value)\n del mod.body[-1]\n else:\n expr = None\n\n if len(mod.body):\n exec(compile(mod, '<exec>', mode='exec'), ns, ns)\n if expr is not None:\n return eval(compile(expr, '<eval>', mode='eval'), ns, ns)\n else:\n return None\n\n\ndef find_imports(code):\n \"\"\"\n Finds the imports in a string of code and returns a list of their package\n names.\n \"\"\"\n # handle mis-indented input from multi-line strings\n code = dedent(code)\n\n mod = ast.parse(code)\n imports = set()\n for node in ast.walk(mod):\n if isinstance(node, ast.Import):\n for name in node.names:\n name = name.name\n imports.add(name.split('.')[0])\n elif isinstance(node, ast.ImportFrom):\n name = node.module\n imports.add(name.split('.')[0])\n return list(imports)\n\n\n__all__ = ['open_url', 'eval_code', 'find_imports']\n", "path": "src/pyodide.py" } ]
[ { "content": "\"\"\"\nA library of helper utilities for connecting Python to the browser environment.\n\"\"\"\n\nimport ast\nimport io\nfrom textwrap import dedent\n\n__version__ = '0.8.2'\n\n\ndef open_url(url):\n \"\"\"\n Fetches a given *url* and returns a io.StringIO to access its contents.\n \"\"\"\n from js import XMLHttpRequest\n\n req = XMLHttpRequest.new()\n req.open('GET', url, False)\n req.send(None)\n return io.StringIO(req.response)\n\n\ndef eval_code(code, ns):\n \"\"\"\n Runs a string of code, the last part of which may be an expression.\n \"\"\"\n # handle mis-indented input from multi-line strings\n code = dedent(code)\n\n mod = ast.parse(code)\n if len(mod.body) == 0:\n return None\n\n if isinstance(mod.body[-1], ast.Expr):\n expr = ast.Expression(mod.body[-1].value)\n del mod.body[-1]\n else:\n expr = None\n\n if len(mod.body):\n exec(compile(mod, '<exec>', mode='exec'), ns, ns)\n if expr is not None:\n return eval(compile(expr, '<eval>', mode='eval'), ns, ns)\n else:\n return None\n\n\ndef find_imports(code):\n \"\"\"\n Finds the imports in a string of code and returns a list of their package\n names.\n \"\"\"\n # handle mis-indented input from multi-line strings\n code = dedent(code)\n\n mod = ast.parse(code)\n imports = set()\n for node in ast.walk(mod):\n if isinstance(node, ast.Import):\n for name in node.names:\n name = name.name\n imports.add(name.split('.')[0])\n elif isinstance(node, ast.ImportFrom):\n name = node.module\n imports.add(name.split('.')[0])\n return list(imports)\n\n\ndef as_nested_list(obj):\n \"\"\"\n Assumes a Javascript object is made of (possibly nested) arrays and\n converts them to nested Python lists.\n \"\"\"\n try:\n it = iter(obj)\n return [as_nested_list(x) for x in it]\n except TypeError:\n return obj\n\n\n__all__ = ['open_url', 'eval_code', 'find_imports', 'as_nested_list']\n", "path": "src/pyodide.py" } ]
diff --git a/docs/api_reference.md b/docs/api_reference.md index fdbb6453b09..7e96ab5d0fe 100644 --- a/docs/api_reference.md +++ b/docs/api_reference.md @@ -42,6 +42,21 @@ some preprocessing on the Python code first. Either the resulting object or `None`. +### pyodide.as_nested_list(obj) + +Converts Javascript nested arrays to Python nested lists. This conversion can not +be performed automatically, because Javascript Arrays and Objects can be combined +in ways that are ambiguous. + +*Parameters* + +| name | type | description | +|--------|-------|-----------------------| +| *obj* | JS Object | The object to convert | + +*Returns* + +The object as nested Python lists. ## Javascript API diff --git a/src/pyodide.py b/src/pyodide.py index 5e45e8ed5c8..395a231ce6a 100644 --- a/src/pyodide.py +++ b/src/pyodide.py @@ -67,4 +67,16 @@ def find_imports(code): return list(imports) -__all__ = ['open_url', 'eval_code', 'find_imports'] +def as_nested_list(obj): + """ + Assumes a Javascript object is made of (possibly nested) arrays and + converts them to nested Python lists. + """ + try: + it = iter(obj) + return [as_nested_list(x) for x in it] + except TypeError: + return obj + + +__all__ = ['open_url', 'eval_code', 'find_imports', 'as_nested_list']
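Rather than changing the automatic JS-to-Python conversion, the diff above adds an explicit `pyodide.as_nested_list` helper (and documents it in `api_reference.md`). A usage sketch for the matrix from the bug report; it assumes a Pyodide browser session where the page has defined `window.A` exactly as shown there:

```python
# Runs inside Pyodide, not plain CPython: `js` is the proxy to the browser namespace.
import numpy
from js import A                    # JS side: window.A = [[1,2,3],[4,5,6]]
from pyodide import as_nested_list  # helper introduced by the diff above

m = numpy.array(as_nested_list(A))  # convert JS Arrays to nested Python lists first
print(m.shape)                      # expected: (2, 3)
```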
DataBiosphere__toil-1003
cwltoil's writeFile uses root logger

https://github.com/BD2KGenomics/toil/pull/867#issuecomment-227446745
[ { "content": "# Implement support for Common Workflow Language (CWL) for Toil.\n#\n# Copyright (C) 2015 Curoverse, Inc\n# Copyright (C) 2016 UCSC Computational Genomics Lab\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom toil.job import Job\nfrom toil.common import Toil\nfrom toil.version import version\nfrom toil.lib.bioio import setLoggingFromOptions\n\nfrom argparse import ArgumentParser\nimport cwltool.main\nimport cwltool.workflow\nimport cwltool.expression\nimport cwltool.builder\nfrom cwltool.process import adjustFiles, shortname, adjustFilesWithSecondary, fillInDefaults\nfrom cwltool.utils import aslist\nimport schema_salad.validate as validate\nimport schema_salad.ref_resolver\nimport os\nimport tempfile\nimport json\nimport sys\nimport logging\nimport copy\nimport shutil\nimport functools\nimport urlparse\n\ncwllogger = logging.getLogger(\"cwltool\")\n\n# The job object passed into CWLJob and CWLWorkflow\n# is a dict mapping to tuple of (key, dict)\n# the final dict is derived by evaluating each\n# tuple looking up the key in the supplied dict.\n#\n# This is necessary because Toil jobs return a single value (a dict)\n# but CWL permits steps to have multiple output parameters that may\n# feed into multiple other steps. This transformation maps the key in the\n# output object to the correct key of the input object.\n\nclass IndirectDict(dict):\n pass\n\nclass MergeInputs(object):\n def __init__(self, sources):\n self.sources = sources\n def resolve(self):\n raise NotImplementedError()\n\nclass MergeInputsNested(MergeInputs):\n def resolve(self):\n return [v[1][v[0]] for v in self.sources]\n\nclass MergeInputsFlattened(MergeInputs):\n def resolve(self):\n r = []\n for v in self.sources:\n v = v[1][v[0]]\n if isinstance(v, list):\n r.extend(v)\n else:\n r.append(v)\n return r\n\nclass StepValueFrom(object):\n def __init__(self, expr, inner, req):\n self.expr = expr\n self.inner = inner\n self.req = req\n\n def do_eval(self, inputs, ctx):\n return cwltool.expression.do_eval(self.expr, inputs, self.req,\n None, None, {}, context=ctx)\n\ndef resolve_indirect_inner(d):\n if isinstance(d, IndirectDict):\n r = {}\n for k, v in d.items():\n if isinstance(v, MergeInputs):\n r[k] = v.resolve()\n else:\n r[k] = v[1][v[0]]\n return r\n else:\n return d\n\ndef resolve_indirect(d):\n inner = IndirectDict() if isinstance(d, IndirectDict) else {}\n needEval = False\n for k, v in d.iteritems():\n if isinstance(v, StepValueFrom):\n inner[k] = v.inner\n needEval = True\n else:\n inner[k] = v\n res = resolve_indirect_inner(inner)\n if needEval:\n ev = {}\n for k, v in d.iteritems():\n if isinstance(v, StepValueFrom):\n ev[k] = v.do_eval(res, res[k])\n else:\n ev[k] = v\n return ev\n else:\n return res\n\ndef getFile(fileStore, dir, fileTuple, index=None, export=False, primary=None, rename_collision=False):\n fileStoreID, fileName = fileTuple\n\n if rename_collision is False:\n if primary:\n dir = os.path.dirname(primary)\n else:\n dir = tempfile.mkdtemp(dir=dir)\n\n dstPath = 
os.path.join(dir, fileName)\n if rename_collision:\n n = 1\n while os.path.exists(dstPath):\n n += 1\n stem, ext = os.path.splitext(dstPath)\n stem = \"%s_%s\" % (stem, n)\n dstPath = stem + ext\n\n if export:\n fileStore.exportFile(fileStoreID, \"file://\" + dstPath)\n else:\n srcPath = fileStore.readGlobalFile(fileStoreID)\n if srcPath != dstPath:\n if copy:\n shutil.copyfile(srcPath, dstPath)\n else:\n if os.path.exists(dstPath):\n if index.get(dstPath, None) != fileStoreID:\n raise Exception(\"Conflicting filesStoreID %s and %s both trying to link to %s\" % (index.get(dstPath, None), fileStoreID, dstPath))\n else:\n os.symlink(srcPath, dstPath)\n index[dstPath] = fileStoreID\n return dstPath\n\ndef writeFile(writeFunc, index, x):\n if x not in index:\n if not urlparse.urlparse(x).scheme:\n rp = os.path.realpath(x)\n else:\n rp = x\n try:\n index[x] = (writeFunc(rp), os.path.basename(x))\n except Exception as e:\n logging.error(\"Got exception '%s' while writing '%s'\", e, x)\n raise\n return index[x]\n\nclass ResolveIndirect(Job):\n def __init__(self, cwljob):\n super(ResolveIndirect, self).__init__()\n self.cwljob = cwljob\n\n def run(self, fileStore):\n return resolve_indirect(self.cwljob)\n\n\nclass CWLJob(Job):\n \"\"\"Execute a CWL tool wrapper.\"\"\"\n\n def __init__(self, tool, cwljob, **kwargs):\n builder = cwltool.builder.Builder()\n builder.job = {}\n builder.requirements = []\n builder.outdir = None\n builder.tmpdir = None\n builder.timeout = 0\n builder.resources = {}\n req = tool.evalResources(builder, {})\n super(CWLJob, self).__init__(cores=req[\"cores\"],\n memory=(req[\"ram\"]*1024*1024),\n disk=((req[\"tmpdirSize\"]*1024*1024) + (req[\"outdirSize\"]*1024*1024)))\n #super(CWLJob, self).__init__()\n self.cwltool = tool\n self.cwljob = cwljob\n self.executor_options = kwargs\n\n def run(self, fileStore):\n cwljob = resolve_indirect(self.cwljob)\n fillInDefaults(self.cwltool.tool[\"inputs\"], cwljob)\n\n inpdir = os.path.join(fileStore.getLocalTempDir(), \"inp\")\n outdir = os.path.join(fileStore.getLocalTempDir(), \"out\")\n tmpdir = os.path.join(fileStore.getLocalTempDir(), \"tmp\")\n os.mkdir(inpdir)\n os.mkdir(outdir)\n os.mkdir(tmpdir)\n\n # Copy input files out of the global file store.\n index={}\n adjustFilesWithSecondary(cwljob, functools.partial(getFile, fileStore, inpdir, index=index))\n\n # Run the tool\n output = cwltool.main.single_job_executor(self.cwltool, cwljob,\n os.getcwd(), None,\n outdir=outdir,\n tmpdir=tmpdir,\n **self.executor_options)\n\n # Copy output files into the global file store.\n adjustFiles(output, functools.partial(writeFile, fileStore.writeGlobalFile, {}))\n\n return output\n\n\ndef makeJob(tool, jobobj, **kwargs):\n if tool.tool[\"class\"] == \"Workflow\":\n wfjob = CWLWorkflow(tool, jobobj, **kwargs)\n followOn = ResolveIndirect(wfjob.rv())\n wfjob.addFollowOn(followOn)\n return (wfjob, followOn)\n else:\n job = CWLJob(tool, jobobj, **kwargs)\n return (job, job)\n\n\nclass CWLScatter(Job):\n def __init__(self, step, cwljob, **kwargs):\n super(CWLScatter, self).__init__()\n self.step = step\n self.cwljob = cwljob\n self.valueFrom = {shortname(i[\"id\"]): i[\"valueFrom\"] for i in step.tool[\"inputs\"] if \"valueFrom\" in i}\n self.executor_options = kwargs\n\n def valueFromFunc(self, k, v):\n if k in self.valueFrom:\n return cwltool.expression.do_eval(self.valueFrom[k], self.vfinputs, self.step.requirements,\n None, None, {}, context=v)\n else:\n return v\n\n def flat_crossproduct_scatter(self, joborder, scatter_keys, outputs):\n 
scatter_key = shortname(scatter_keys[0])\n l = len(joborder[scatter_key])\n for n in xrange(0, l):\n jo = copy.copy(joborder)\n jo[scatter_key] = self.valueFromFunc(scatter_key, joborder[scatter_key][n])\n if len(scatter_keys) == 1:\n (subjob, followOn) = makeJob(self.step.embedded_tool, jo, **self.executor_options)\n self.addChild(subjob)\n outputs.append(followOn.rv())\n else:\n self.flat_crossproduct_scatter(jo, scatter_keys[1:], outputs)\n\n def nested_crossproduct_scatter(self, joborder, scatter_keys):\n scatter_key = shortname(scatter_keys[0])\n l = len(joborder[scatter_key])\n outputs = []\n for n in xrange(0, l):\n jo = copy.copy(joborder)\n jo[scatter_key] = self.valueFromFunc(scatter_key, joborder[scatter_key][n])\n if len(scatter_keys) == 1:\n (subjob, followOn) = makeJob(self.step.embedded_tool, jo, **self.executor_options)\n self.addChild(subjob)\n outputs.append(followOn.rv())\n else:\n outputs.append(self.nested_crossproduct_scatter(jo, scatter_keys[1:]))\n return outputs\n\n def run(self, fileStore):\n cwljob = resolve_indirect(self.cwljob)\n\n if isinstance(self.step.tool[\"scatter\"], basestring):\n scatter = [self.step.tool[\"scatter\"]]\n else:\n scatter = self.step.tool[\"scatter\"]\n\n scatterMethod = self.step.tool.get(\"scatterMethod\", None)\n if len(scatter) == 1:\n scatterMethod = \"dotproduct\"\n outputs = []\n\n self.vfinputs = cwljob\n\n shortscatter = [shortname(s) for s in scatter]\n cwljob = {k: self.valueFromFunc(k, v) if k not in shortscatter else v\n for k,v in cwljob.items()}\n\n if scatterMethod == \"dotproduct\":\n for i in xrange(0, len(cwljob[shortname(scatter[0])])):\n copyjob = copy.copy(cwljob)\n for sc in scatter:\n scatter_key = shortname(sc)\n copyjob[scatter_key] = self.valueFromFunc(scatter_key, cwljob[scatter_key][i])\n (subjob, followOn) = makeJob(self.step.embedded_tool, copyjob, **self.executor_options)\n self.addChild(subjob)\n outputs.append(followOn.rv())\n elif scatterMethod == \"nested_crossproduct\":\n outputs = self.nested_crossproduct_scatter(cwljob, scatter)\n elif scatterMethod == \"flat_crossproduct\":\n self.flat_crossproduct_scatter(cwljob, scatter, outputs)\n else:\n if scatterMethod:\n raise validate.ValidationException(\n \"Unsupported complex scatter type '%s'\" % scatterMethod)\n else:\n raise validate.ValidationException(\n \"Must provide scatterMethod to scatter over multiple inputs\")\n\n return outputs\n\n\nclass CWLGather(Job):\n def __init__(self, step, outputs):\n super(CWLGather, self).__init__()\n self.step = step\n self.outputs = outputs\n\n def allkeys(self, obj, keys):\n if isinstance(obj, dict):\n for k in obj.keys():\n keys.add(k)\n elif isinstance(obj, list):\n for l in obj:\n self.allkeys(l, keys)\n\n def extract(self, obj, k):\n if isinstance(obj, dict):\n return obj.get(k)\n elif isinstance(obj, list):\n cp = []\n for l in obj:\n cp.append(self.extract(l, k))\n return cp\n\n def run(self, fileStore):\n outobj = {}\n keys = set()\n self.allkeys(self.outputs, keys)\n\n for k in keys:\n outobj[k] = self.extract(self.outputs, k)\n\n return outobj\n\n\nclass SelfJob(object):\n \"\"\"Fake job object to facilitate implementation of CWLWorkflow.run()\"\"\"\n\n def __init__(self, j, v):\n self.j = j\n self.v = v\n\n def rv(self):\n return self.v\n\n def addChild(self, c):\n return self.j.addChild(c)\n\n def hasChild(self, c):\n return self.j.hasChild(c)\n\n\nclass CWLWorkflow(Job):\n \"\"\"Traverse a CWL workflow graph and schedule a Toil job graph.\"\"\"\n\n def __init__(self, cwlwf, cwljob, **kwargs):\n 
super(CWLWorkflow, self).__init__()\n self.cwlwf = cwlwf\n self.cwljob = cwljob\n self.executor_options = kwargs\n\n def run(self, fileStore):\n cwljob = resolve_indirect(self.cwljob)\n\n # `promises` dict\n # from: each parameter (workflow input or step output)\n # that may be used as a \"source\" for a step input workflow output\n # parameter\n # to: the job that will produce that value.\n promises = {}\n\n # `jobs` dict from step id to job that implements that step.\n jobs = {}\n\n for inp in self.cwlwf.tool[\"inputs\"]:\n promises[inp[\"id\"]] = SelfJob(self, cwljob)\n\n alloutputs_fufilled = False\n while not alloutputs_fufilled:\n # Iteratively go over the workflow steps, scheduling jobs as their\n # dependencies can be fufilled by upstream workflow inputs or\n # step outputs. Loop exits when the workflow outputs\n # are satisfied.\n\n alloutputs_fufilled = True\n\n for step in self.cwlwf.steps:\n if step.tool[\"id\"] not in jobs:\n stepinputs_fufilled = True\n for inp in step.tool[\"inputs\"]:\n if \"source\" in inp:\n for s in aslist(inp[\"source\"]):\n if s not in promises:\n stepinputs_fufilled = False\n if stepinputs_fufilled:\n jobobj = {}\n\n for inp in step.tool[\"inputs\"]:\n key = shortname(inp[\"id\"])\n if \"source\" in inp:\n if inp.get(\"linkMerge\") or len(aslist(inp[\"source\"])) > 1:\n linkMerge = inp.get(\"linkMerge\", \"merge_nested\")\n if linkMerge == \"merge_nested\":\n jobobj[key] = (\n MergeInputsNested([(shortname(s), promises[s].rv())\n for s in aslist(inp[\"source\"])]))\n elif linkMerge == \"merge_flattened\":\n jobobj[key] = (\n MergeInputsFlattened([(shortname(s), promises[s].rv())\n for s in aslist(inp[\"source\"])]))\n else:\n raise validate.ValidationException(\n \"Unsupported linkMerge '%s'\", linkMerge)\n else:\n jobobj[key] = (\n shortname(inp[\"source\"]), promises[inp[\"source\"]].rv())\n elif \"default\" in inp:\n d = copy.copy(inp[\"default\"])\n jobobj[key] = (\"default\", {\"default\": d})\n\n if \"valueFrom\" in inp and \"scatter\" not in step.tool:\n if key in jobobj:\n jobobj[key] = StepValueFrom(inp[\"valueFrom\"],\n jobobj[key],\n self.cwlwf.requirements)\n else:\n jobobj[key] = StepValueFrom(inp[\"valueFrom\"],\n (\"None\", {\"None\": None}),\n self.cwlwf.requirements)\n\n if \"scatter\" in step.tool:\n wfjob = CWLScatter(step, IndirectDict(jobobj), **self.executor_options)\n followOn = CWLGather(step, wfjob.rv())\n wfjob.addFollowOn(followOn)\n else:\n (wfjob, followOn) = makeJob(step.embedded_tool, IndirectDict(jobobj),\n **self.executor_options)\n\n jobs[step.tool[\"id\"]] = followOn\n\n connected = False\n for inp in step.tool[\"inputs\"]:\n for s in aslist(inp.get(\"source\", [])):\n if not promises[s].hasChild(wfjob):\n promises[s].addChild(wfjob)\n connected = True\n if not connected:\n # workflow step has default inputs only, isn't connected to other jobs,\n # so add it as child of workflow.\n self.addChild(wfjob)\n\n for out in step.tool[\"outputs\"]:\n promises[out[\"id\"]] = followOn\n\n for inp in step.tool[\"inputs\"]:\n for s in aslist(inp.get(\"source\", [])):\n if s not in promises:\n alloutputs_fufilled = False\n\n # may need a test\n for out in self.cwlwf.tool[\"outputs\"]:\n if \"source\" in out:\n if out[\"source\"] not in promises:\n alloutputs_fufilled = False\n\n outobj = {}\n for out in self.cwlwf.tool[\"outputs\"]:\n outobj[shortname(out[\"id\"])] = (shortname(out[\"source\"]), promises[out[\"source\"]].rv())\n\n return IndirectDict(outobj)\n\n\ncwltool.process.supportedProcessRequirements = 
(\"DockerRequirement\",\n \"ExpressionEngineRequirement\",\n \"InlineJavascriptRequirement\",\n \"SchemaDefRequirement\",\n \"EnvVarRequirement\",\n \"CreateFileRequirement\",\n \"SubworkflowFeatureRequirement\",\n \"ScatterFeatureRequirement\",\n \"ShellCommandRequirement\",\n \"MultipleInputFeatureRequirement\",\n \"StepInputExpressionRequirement\",\n \"ResourceRequirement\")\n\ndef main(args=None, stdout=sys.stdout):\n parser = ArgumentParser()\n Job.Runner.addToilOptions(parser)\n parser.add_argument(\"cwltool\", type=str)\n parser.add_argument(\"cwljob\", type=str)\n\n # Will override the \"jobStore\" positional argument, enables\n # user to select jobStore or get a default from logic one below.\n parser.add_argument(\"--jobStore\", type=str)\n parser.add_argument(\"--conformance-test\", action=\"store_true\")\n parser.add_argument(\"--no-container\", action=\"store_true\")\n parser.add_argument(\"--quiet\", dest=\"logLevel\", action=\"store_const\", const=\"ERROR\")\n parser.add_argument(\"--basedir\", type=str)\n parser.add_argument(\"--outdir\", type=str, default=os.getcwd())\n parser.add_argument(\"--version\", action='version', version=version)\n parser.add_argument(\"--preserve-environment\", type=str, nargs='+',\n help=\"Preserve specified environment variables when running CommandLineTools\",\n metavar=(\"VAR1,VAR2\"),\n default=(\"PATH\",),\n dest=\"preserve_environment\")\n\n # mkdtemp actually creates the directory, but\n # toil requires that the directory not exist,\n # so make it and delete it and allow\n # toil to create it again (!)\n workdir = tempfile.mkdtemp()\n os.rmdir(workdir)\n\n if args is None:\n args = sys.argv[1:]\n\n options = parser.parse_args([workdir] + args)\n\n use_container = not options.no_container\n\n setLoggingFromOptions(options)\n if options.logLevel:\n cwllogger.setLevel(options.logLevel)\n\n uri = options.cwljob if urlparse.urlparse(options.cwljob).scheme else \"file://\" + os.path.abspath(options.cwljob)\n\n try:\n t = cwltool.main.load_tool(options.cwltool, False, True,\n cwltool.workflow.defaultMakeTool,\n True)\n except cwltool.process.UnsupportedRequirement as e:\n logging.error(e)\n return 33\n\n if options.conformance_test:\n loader = schema_salad.ref_resolver.Loader({})\n else:\n jobloaderctx = {\"path\": {\"@type\": \"@id\"}, \"format\": {\"@type\": \"@id\"}}\n jobloaderctx.update(t.metadata.get(\"$namespaces\", {}))\n loader = schema_salad.ref_resolver.Loader(jobloaderctx)\n\n job, _ = loader.resolve_ref(uri)\n\n if type(t) == int:\n return t\n\n fillInDefaults(t.tool[\"inputs\"], job)\n\n if options.conformance_test:\n adjustFiles(job, lambda x: x.replace(\"file://\", \"\"))\n stdout.write(json.dumps(\n cwltool.main.single_job_executor(t, job, options.basedir, options,\n conformance_test=True, use_container=use_container,\n preserve_environment=options.preserve_environment), indent=4))\n return 0\n\n if not options.basedir:\n options.basedir = os.path.dirname(os.path.abspath(options.cwljob))\n\n outdir = options.outdir\n\n with Toil(options) as toil:\n def importDefault(tool):\n adjustFiles(tool, lambda x: \"file://%s\" % x if not urlparse.urlparse(x).scheme else x)\n adjustFiles(tool, functools.partial(writeFile, toil.importFile, {}))\n return tool\n t.visit(importDefault)\n\n builder = t._init_job(job, os.path.dirname(os.path.abspath(options.cwljob)))\n (wf1, wf2) = makeJob(t, {}, use_container=use_container, preserve_environment=options.preserve_environment)\n adjustFiles(builder.job, lambda x: \"file://%s\" % x if not 
urlparse.urlparse(x).scheme else x)\n adjustFiles(builder.job, functools.partial(writeFile, toil.importFile, {}))\n wf1.cwljob = builder.job\n\n outobj = toil.start(wf1)\n outobj = resolve_indirect(outobj)\n\n adjustFilesWithSecondary(outobj, functools.partial(getFile, toil, outdir, index={}, export=True, rename_collision=True))\n\n stdout.write(json.dumps(outobj, indent=4))\n\n return 0\n", "path": "src/toil/cwl/cwltoil.py" } ]
[ { "content": "# Implement support for Common Workflow Language (CWL) for Toil.\n#\n# Copyright (C) 2015 Curoverse, Inc\n# Copyright (C) 2016 UCSC Computational Genomics Lab\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom toil.job import Job\nfrom toil.common import Toil\nfrom toil.version import version\nfrom toil.lib.bioio import setLoggingFromOptions\n\nfrom argparse import ArgumentParser\nimport cwltool.main\nimport cwltool.workflow\nimport cwltool.expression\nimport cwltool.builder\nfrom cwltool.process import adjustFiles, shortname, adjustFilesWithSecondary, fillInDefaults\nfrom cwltool.utils import aslist\nimport schema_salad.validate as validate\nimport schema_salad.ref_resolver\nimport os\nimport tempfile\nimport json\nimport sys\nimport logging\nimport copy\nimport shutil\nimport functools\nimport urlparse\n\ncwllogger = logging.getLogger(\"cwltool\")\n\n# The job object passed into CWLJob and CWLWorkflow\n# is a dict mapping to tuple of (key, dict)\n# the final dict is derived by evaluating each\n# tuple looking up the key in the supplied dict.\n#\n# This is necessary because Toil jobs return a single value (a dict)\n# but CWL permits steps to have multiple output parameters that may\n# feed into multiple other steps. This transformation maps the key in the\n# output object to the correct key of the input object.\n\nclass IndirectDict(dict):\n pass\n\nclass MergeInputs(object):\n def __init__(self, sources):\n self.sources = sources\n def resolve(self):\n raise NotImplementedError()\n\nclass MergeInputsNested(MergeInputs):\n def resolve(self):\n return [v[1][v[0]] for v in self.sources]\n\nclass MergeInputsFlattened(MergeInputs):\n def resolve(self):\n r = []\n for v in self.sources:\n v = v[1][v[0]]\n if isinstance(v, list):\n r.extend(v)\n else:\n r.append(v)\n return r\n\nclass StepValueFrom(object):\n def __init__(self, expr, inner, req):\n self.expr = expr\n self.inner = inner\n self.req = req\n\n def do_eval(self, inputs, ctx):\n return cwltool.expression.do_eval(self.expr, inputs, self.req,\n None, None, {}, context=ctx)\n\ndef resolve_indirect_inner(d):\n if isinstance(d, IndirectDict):\n r = {}\n for k, v in d.items():\n if isinstance(v, MergeInputs):\n r[k] = v.resolve()\n else:\n r[k] = v[1][v[0]]\n return r\n else:\n return d\n\ndef resolve_indirect(d):\n inner = IndirectDict() if isinstance(d, IndirectDict) else {}\n needEval = False\n for k, v in d.iteritems():\n if isinstance(v, StepValueFrom):\n inner[k] = v.inner\n needEval = True\n else:\n inner[k] = v\n res = resolve_indirect_inner(inner)\n if needEval:\n ev = {}\n for k, v in d.iteritems():\n if isinstance(v, StepValueFrom):\n ev[k] = v.do_eval(res, res[k])\n else:\n ev[k] = v\n return ev\n else:\n return res\n\ndef getFile(fileStore, dir, fileTuple, index=None, export=False, primary=None, rename_collision=False):\n fileStoreID, fileName = fileTuple\n\n if rename_collision is False:\n if primary:\n dir = os.path.dirname(primary)\n else:\n dir = tempfile.mkdtemp(dir=dir)\n\n dstPath = 
os.path.join(dir, fileName)\n if rename_collision:\n n = 1\n while os.path.exists(dstPath):\n n += 1\n stem, ext = os.path.splitext(dstPath)\n stem = \"%s_%s\" % (stem, n)\n dstPath = stem + ext\n\n if export:\n fileStore.exportFile(fileStoreID, \"file://\" + dstPath)\n else:\n srcPath = fileStore.readGlobalFile(fileStoreID)\n if srcPath != dstPath:\n if copy:\n shutil.copyfile(srcPath, dstPath)\n else:\n if os.path.exists(dstPath):\n if index.get(dstPath, None) != fileStoreID:\n raise Exception(\"Conflicting filesStoreID %s and %s both trying to link to %s\" % (index.get(dstPath, None), fileStoreID, dstPath))\n else:\n os.symlink(srcPath, dstPath)\n index[dstPath] = fileStoreID\n return dstPath\n\ndef writeFile(writeFunc, index, x):\n if x not in index:\n if not urlparse.urlparse(x).scheme:\n rp = os.path.realpath(x)\n else:\n rp = x\n try:\n index[x] = (writeFunc(rp), os.path.basename(x))\n except Exception as e:\n cwllogger.error(\"Got exception '%s' while copying '%s'\", e, x)\n raise\n return index[x]\n\nclass ResolveIndirect(Job):\n def __init__(self, cwljob):\n super(ResolveIndirect, self).__init__()\n self.cwljob = cwljob\n\n def run(self, fileStore):\n return resolve_indirect(self.cwljob)\n\n\nclass CWLJob(Job):\n \"\"\"Execute a CWL tool wrapper.\"\"\"\n\n def __init__(self, tool, cwljob, **kwargs):\n builder = cwltool.builder.Builder()\n builder.job = {}\n builder.requirements = []\n builder.outdir = None\n builder.tmpdir = None\n builder.timeout = 0\n builder.resources = {}\n req = tool.evalResources(builder, {})\n super(CWLJob, self).__init__(cores=req[\"cores\"],\n memory=(req[\"ram\"]*1024*1024),\n disk=((req[\"tmpdirSize\"]*1024*1024) + (req[\"outdirSize\"]*1024*1024)))\n #super(CWLJob, self).__init__()\n self.cwltool = tool\n self.cwljob = cwljob\n self.executor_options = kwargs\n\n def run(self, fileStore):\n cwljob = resolve_indirect(self.cwljob)\n fillInDefaults(self.cwltool.tool[\"inputs\"], cwljob)\n\n inpdir = os.path.join(fileStore.getLocalTempDir(), \"inp\")\n outdir = os.path.join(fileStore.getLocalTempDir(), \"out\")\n tmpdir = os.path.join(fileStore.getLocalTempDir(), \"tmp\")\n os.mkdir(inpdir)\n os.mkdir(outdir)\n os.mkdir(tmpdir)\n\n # Copy input files out of the global file store.\n index={}\n adjustFilesWithSecondary(cwljob, functools.partial(getFile, fileStore, inpdir, index=index))\n\n # Run the tool\n output = cwltool.main.single_job_executor(self.cwltool, cwljob,\n os.getcwd(), None,\n outdir=outdir,\n tmpdir=tmpdir,\n **self.executor_options)\n\n # Copy output files into the global file store.\n adjustFiles(output, functools.partial(writeFile, fileStore.writeGlobalFile, {}))\n\n return output\n\n\ndef makeJob(tool, jobobj, **kwargs):\n if tool.tool[\"class\"] == \"Workflow\":\n wfjob = CWLWorkflow(tool, jobobj, **kwargs)\n followOn = ResolveIndirect(wfjob.rv())\n wfjob.addFollowOn(followOn)\n return (wfjob, followOn)\n else:\n job = CWLJob(tool, jobobj, **kwargs)\n return (job, job)\n\n\nclass CWLScatter(Job):\n def __init__(self, step, cwljob, **kwargs):\n super(CWLScatter, self).__init__()\n self.step = step\n self.cwljob = cwljob\n self.valueFrom = {shortname(i[\"id\"]): i[\"valueFrom\"] for i in step.tool[\"inputs\"] if \"valueFrom\" in i}\n self.executor_options = kwargs\n\n def valueFromFunc(self, k, v):\n if k in self.valueFrom:\n return cwltool.expression.do_eval(self.valueFrom[k], self.vfinputs, self.step.requirements,\n None, None, {}, context=v)\n else:\n return v\n\n def flat_crossproduct_scatter(self, joborder, scatter_keys, outputs):\n 
scatter_key = shortname(scatter_keys[0])\n l = len(joborder[scatter_key])\n for n in xrange(0, l):\n jo = copy.copy(joborder)\n jo[scatter_key] = self.valueFromFunc(scatter_key, joborder[scatter_key][n])\n if len(scatter_keys) == 1:\n (subjob, followOn) = makeJob(self.step.embedded_tool, jo, **self.executor_options)\n self.addChild(subjob)\n outputs.append(followOn.rv())\n else:\n self.flat_crossproduct_scatter(jo, scatter_keys[1:], outputs)\n\n def nested_crossproduct_scatter(self, joborder, scatter_keys):\n scatter_key = shortname(scatter_keys[0])\n l = len(joborder[scatter_key])\n outputs = []\n for n in xrange(0, l):\n jo = copy.copy(joborder)\n jo[scatter_key] = self.valueFromFunc(scatter_key, joborder[scatter_key][n])\n if len(scatter_keys) == 1:\n (subjob, followOn) = makeJob(self.step.embedded_tool, jo, **self.executor_options)\n self.addChild(subjob)\n outputs.append(followOn.rv())\n else:\n outputs.append(self.nested_crossproduct_scatter(jo, scatter_keys[1:]))\n return outputs\n\n def run(self, fileStore):\n cwljob = resolve_indirect(self.cwljob)\n\n if isinstance(self.step.tool[\"scatter\"], basestring):\n scatter = [self.step.tool[\"scatter\"]]\n else:\n scatter = self.step.tool[\"scatter\"]\n\n scatterMethod = self.step.tool.get(\"scatterMethod\", None)\n if len(scatter) == 1:\n scatterMethod = \"dotproduct\"\n outputs = []\n\n self.vfinputs = cwljob\n\n shortscatter = [shortname(s) for s in scatter]\n cwljob = {k: self.valueFromFunc(k, v) if k not in shortscatter else v\n for k,v in cwljob.items()}\n\n if scatterMethod == \"dotproduct\":\n for i in xrange(0, len(cwljob[shortname(scatter[0])])):\n copyjob = copy.copy(cwljob)\n for sc in scatter:\n scatter_key = shortname(sc)\n copyjob[scatter_key] = self.valueFromFunc(scatter_key, cwljob[scatter_key][i])\n (subjob, followOn) = makeJob(self.step.embedded_tool, copyjob, **self.executor_options)\n self.addChild(subjob)\n outputs.append(followOn.rv())\n elif scatterMethod == \"nested_crossproduct\":\n outputs = self.nested_crossproduct_scatter(cwljob, scatter)\n elif scatterMethod == \"flat_crossproduct\":\n self.flat_crossproduct_scatter(cwljob, scatter, outputs)\n else:\n if scatterMethod:\n raise validate.ValidationException(\n \"Unsupported complex scatter type '%s'\" % scatterMethod)\n else:\n raise validate.ValidationException(\n \"Must provide scatterMethod to scatter over multiple inputs\")\n\n return outputs\n\n\nclass CWLGather(Job):\n def __init__(self, step, outputs):\n super(CWLGather, self).__init__()\n self.step = step\n self.outputs = outputs\n\n def allkeys(self, obj, keys):\n if isinstance(obj, dict):\n for k in obj.keys():\n keys.add(k)\n elif isinstance(obj, list):\n for l in obj:\n self.allkeys(l, keys)\n\n def extract(self, obj, k):\n if isinstance(obj, dict):\n return obj.get(k)\n elif isinstance(obj, list):\n cp = []\n for l in obj:\n cp.append(self.extract(l, k))\n return cp\n\n def run(self, fileStore):\n outobj = {}\n keys = set()\n self.allkeys(self.outputs, keys)\n\n for k in keys:\n outobj[k] = self.extract(self.outputs, k)\n\n return outobj\n\n\nclass SelfJob(object):\n \"\"\"Fake job object to facilitate implementation of CWLWorkflow.run()\"\"\"\n\n def __init__(self, j, v):\n self.j = j\n self.v = v\n\n def rv(self):\n return self.v\n\n def addChild(self, c):\n return self.j.addChild(c)\n\n def hasChild(self, c):\n return self.j.hasChild(c)\n\n\nclass CWLWorkflow(Job):\n \"\"\"Traverse a CWL workflow graph and schedule a Toil job graph.\"\"\"\n\n def __init__(self, cwlwf, cwljob, **kwargs):\n 
super(CWLWorkflow, self).__init__()\n self.cwlwf = cwlwf\n self.cwljob = cwljob\n self.executor_options = kwargs\n\n def run(self, fileStore):\n cwljob = resolve_indirect(self.cwljob)\n\n # `promises` dict\n # from: each parameter (workflow input or step output)\n # that may be used as a \"source\" for a step input workflow output\n # parameter\n # to: the job that will produce that value.\n promises = {}\n\n # `jobs` dict from step id to job that implements that step.\n jobs = {}\n\n for inp in self.cwlwf.tool[\"inputs\"]:\n promises[inp[\"id\"]] = SelfJob(self, cwljob)\n\n alloutputs_fufilled = False\n while not alloutputs_fufilled:\n # Iteratively go over the workflow steps, scheduling jobs as their\n # dependencies can be fufilled by upstream workflow inputs or\n # step outputs. Loop exits when the workflow outputs\n # are satisfied.\n\n alloutputs_fufilled = True\n\n for step in self.cwlwf.steps:\n if step.tool[\"id\"] not in jobs:\n stepinputs_fufilled = True\n for inp in step.tool[\"inputs\"]:\n if \"source\" in inp:\n for s in aslist(inp[\"source\"]):\n if s not in promises:\n stepinputs_fufilled = False\n if stepinputs_fufilled:\n jobobj = {}\n\n for inp in step.tool[\"inputs\"]:\n key = shortname(inp[\"id\"])\n if \"source\" in inp:\n if inp.get(\"linkMerge\") or len(aslist(inp[\"source\"])) > 1:\n linkMerge = inp.get(\"linkMerge\", \"merge_nested\")\n if linkMerge == \"merge_nested\":\n jobobj[key] = (\n MergeInputsNested([(shortname(s), promises[s].rv())\n for s in aslist(inp[\"source\"])]))\n elif linkMerge == \"merge_flattened\":\n jobobj[key] = (\n MergeInputsFlattened([(shortname(s), promises[s].rv())\n for s in aslist(inp[\"source\"])]))\n else:\n raise validate.ValidationException(\n \"Unsupported linkMerge '%s'\", linkMerge)\n else:\n jobobj[key] = (\n shortname(inp[\"source\"]), promises[inp[\"source\"]].rv())\n elif \"default\" in inp:\n d = copy.copy(inp[\"default\"])\n jobobj[key] = (\"default\", {\"default\": d})\n\n if \"valueFrom\" in inp and \"scatter\" not in step.tool:\n if key in jobobj:\n jobobj[key] = StepValueFrom(inp[\"valueFrom\"],\n jobobj[key],\n self.cwlwf.requirements)\n else:\n jobobj[key] = StepValueFrom(inp[\"valueFrom\"],\n (\"None\", {\"None\": None}),\n self.cwlwf.requirements)\n\n if \"scatter\" in step.tool:\n wfjob = CWLScatter(step, IndirectDict(jobobj), **self.executor_options)\n followOn = CWLGather(step, wfjob.rv())\n wfjob.addFollowOn(followOn)\n else:\n (wfjob, followOn) = makeJob(step.embedded_tool, IndirectDict(jobobj),\n **self.executor_options)\n\n jobs[step.tool[\"id\"]] = followOn\n\n connected = False\n for inp in step.tool[\"inputs\"]:\n for s in aslist(inp.get(\"source\", [])):\n if not promises[s].hasChild(wfjob):\n promises[s].addChild(wfjob)\n connected = True\n if not connected:\n # workflow step has default inputs only, isn't connected to other jobs,\n # so add it as child of workflow.\n self.addChild(wfjob)\n\n for out in step.tool[\"outputs\"]:\n promises[out[\"id\"]] = followOn\n\n for inp in step.tool[\"inputs\"]:\n for s in aslist(inp.get(\"source\", [])):\n if s not in promises:\n alloutputs_fufilled = False\n\n # may need a test\n for out in self.cwlwf.tool[\"outputs\"]:\n if \"source\" in out:\n if out[\"source\"] not in promises:\n alloutputs_fufilled = False\n\n outobj = {}\n for out in self.cwlwf.tool[\"outputs\"]:\n outobj[shortname(out[\"id\"])] = (shortname(out[\"source\"]), promises[out[\"source\"]].rv())\n\n return IndirectDict(outobj)\n\n\ncwltool.process.supportedProcessRequirements = 
(\"DockerRequirement\",\n \"ExpressionEngineRequirement\",\n \"InlineJavascriptRequirement\",\n \"SchemaDefRequirement\",\n \"EnvVarRequirement\",\n \"CreateFileRequirement\",\n \"SubworkflowFeatureRequirement\",\n \"ScatterFeatureRequirement\",\n \"ShellCommandRequirement\",\n \"MultipleInputFeatureRequirement\",\n \"StepInputExpressionRequirement\",\n \"ResourceRequirement\")\n\ndef main(args=None, stdout=sys.stdout):\n parser = ArgumentParser()\n Job.Runner.addToilOptions(parser)\n parser.add_argument(\"cwltool\", type=str)\n parser.add_argument(\"cwljob\", type=str)\n\n # Will override the \"jobStore\" positional argument, enables\n # user to select jobStore or get a default from logic one below.\n parser.add_argument(\"--jobStore\", type=str)\n parser.add_argument(\"--conformance-test\", action=\"store_true\")\n parser.add_argument(\"--no-container\", action=\"store_true\")\n parser.add_argument(\"--quiet\", dest=\"logLevel\", action=\"store_const\", const=\"ERROR\")\n parser.add_argument(\"--basedir\", type=str)\n parser.add_argument(\"--outdir\", type=str, default=os.getcwd())\n parser.add_argument(\"--version\", action='version', version=version)\n parser.add_argument(\"--preserve-environment\", type=str, nargs='+',\n help=\"Preserve specified environment variables when running CommandLineTools\",\n metavar=(\"VAR1,VAR2\"),\n default=(\"PATH\",),\n dest=\"preserve_environment\")\n\n # mkdtemp actually creates the directory, but\n # toil requires that the directory not exist,\n # so make it and delete it and allow\n # toil to create it again (!)\n workdir = tempfile.mkdtemp()\n os.rmdir(workdir)\n\n if args is None:\n args = sys.argv[1:]\n\n options = parser.parse_args([workdir] + args)\n\n use_container = not options.no_container\n\n setLoggingFromOptions(options)\n if options.logLevel:\n cwllogger.setLevel(options.logLevel)\n\n uri = options.cwljob if urlparse.urlparse(options.cwljob).scheme else \"file://\" + os.path.abspath(options.cwljob)\n\n try:\n t = cwltool.main.load_tool(options.cwltool, False, True,\n cwltool.workflow.defaultMakeTool,\n True)\n except cwltool.process.UnsupportedRequirement as e:\n logging.error(e)\n return 33\n\n if options.conformance_test:\n loader = schema_salad.ref_resolver.Loader({})\n else:\n jobloaderctx = {\"path\": {\"@type\": \"@id\"}, \"format\": {\"@type\": \"@id\"}}\n jobloaderctx.update(t.metadata.get(\"$namespaces\", {}))\n loader = schema_salad.ref_resolver.Loader(jobloaderctx)\n\n job, _ = loader.resolve_ref(uri)\n\n if type(t) == int:\n return t\n\n fillInDefaults(t.tool[\"inputs\"], job)\n\n if options.conformance_test:\n adjustFiles(job, lambda x: x.replace(\"file://\", \"\"))\n stdout.write(json.dumps(\n cwltool.main.single_job_executor(t, job, options.basedir, options,\n conformance_test=True, use_container=use_container,\n preserve_environment=options.preserve_environment), indent=4))\n return 0\n\n if not options.basedir:\n options.basedir = os.path.dirname(os.path.abspath(options.cwljob))\n\n outdir = options.outdir\n\n with Toil(options) as toil:\n def importDefault(tool):\n adjustFiles(tool, lambda x: \"file://%s\" % x if not urlparse.urlparse(x).scheme else x)\n adjustFiles(tool, functools.partial(writeFile, toil.importFile, {}))\n return tool\n t.visit(importDefault)\n\n builder = t._init_job(job, os.path.dirname(os.path.abspath(options.cwljob)))\n (wf1, wf2) = makeJob(t, {}, use_container=use_container, preserve_environment=options.preserve_environment)\n adjustFiles(builder.job, lambda x: \"file://%s\" % x if not 
urlparse.urlparse(x).scheme else x)\n adjustFiles(builder.job, functools.partial(writeFile, toil.importFile, {}))\n wf1.cwljob = builder.job\n\n outobj = toil.start(wf1)\n outobj = resolve_indirect(outobj)\n\n adjustFilesWithSecondary(outobj, functools.partial(getFile, toil, outdir, index={}, export=True, rename_collision=True))\n\n stdout.write(json.dumps(outobj, indent=4))\n\n return 0\n", "path": "src/toil/cwl/cwltoil.py" } ]
diff --git a/src/toil/cwl/cwltoil.py b/src/toil/cwl/cwltoil.py index afc7ffbe51..9b07411bf4 100755 --- a/src/toil/cwl/cwltoil.py +++ b/src/toil/cwl/cwltoil.py @@ -161,7 +161,7 @@ def writeFile(writeFunc, index, x): try: index[x] = (writeFunc(rp), os.path.basename(x)) except Exception as e: - logging.error("Got exception '%s' while writing '%s'", e, x) + cwllogger.error("Got exception '%s' while copying '%s'", e, x) raise return index[x]
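The one-line fix above routes the error through the module's named `cwltool` logger instead of the root logger. A small illustrative snippet (not Toil code) of the practical difference, assuming the default logging setup:

```python
import logging

cwllogger = logging.getLogger("cwltool")  # the named logger used throughout cwltoil.py

# Old call in writeFile(): goes through the root logger, so it ignores any level
# or handler configuration applied to the "cwltool" logger (e.g. via --quiet).
logging.error("Got exception '%s' while writing '%s'", "boom", "input.txt")

# New call after the patch: honours cwllogger.setLevel(...) as set in main().
cwllogger.error("Got exception '%s' while copying '%s'", "boom", "input.txt")
```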
akvo__akvo-rsr-2584
Indicator update - Actual Value Comment (IATI content) (estimate: 8)

Created via Reamaze:
Link: https://rsrsupport.reamaze.com/admin/conversations/deleted-result-comment-still-showing-and-missing-iati-content
Assignee: Geert Soet

Message:
Hi product team,

This morning I had a call with SNV Kenya and we noticed some things in Akvo RSR that seemed to be off:

1) Once you’ve deleted an indicator update, it still shows the comment of that update under actual value comment (see below)
2) In the hortimpact project also documents and images are included in the indicator updates, but the links to these images/docs don’t show up in the IATI report. Can we include these fields (in the same way as the main project picture) in the IATI file?
3) Another issue I found is that indicator descriptions do not show up in the IATI report. This is about project 3992: First screen shot is of an indicator incl description, second screen shot is the same indicator in the IATI report.

Cheers,
Annabelle Poelert
Project Officer
Akvo • 's-Gravenhekje 1A • 1011 TG • Amsterdam (NL)
T +31 20 8200 175 • S Annabelle.akvo • T @annabellepoel I www.akvo.org <http://www.akvo.org/>
[ { "content": "# -*- coding: utf-8 -*-\n\n# Akvo RSR is covered by the GNU Affero General Public License.\n# See more details in the license.txt file located at the root folder of the Akvo RSR module.\n# For additional details on the GNU license please see < http://www.gnu.org/licenses/agpl.html >.\n\nfrom akvo.codelists.models import IndicatorMeasure, IndicatorVocabulary\nfrom akvo.codelists.store.codelists_v202 import INDICATOR_MEASURE, INDICATOR_VOCABULARY\nfrom akvo.rsr.fields import ValidXMLCharField, ValidXMLTextField\nfrom akvo.rsr.mixins import TimestampsMixin\nfrom akvo.utils import codelist_choices\nfrom akvo.utils import codelist_value\nfrom akvo.utils import rsr_image_path\nfrom .result import Result\n\nfrom decimal import Decimal, InvalidOperation, DivisionByZero\n\nfrom django.conf import settings\nfrom django.core.exceptions import ValidationError\nfrom django.db import models\nfrom django.utils.translation import ugettext_lazy as _\nfrom django.db.models.signals import post_save\nfrom django.dispatch import receiver\n\nfrom sorl.thumbnail.fields import ImageField\n\n\nclass Indicator(models.Model):\n result = models.ForeignKey('Result', verbose_name=_(u'result'), related_name='indicators')\n parent_indicator = models.ForeignKey(\n 'self', blank=True, null=True, default=None,\n verbose_name=_(u'parent indicator'), related_name='child_indicators'\n )\n title = ValidXMLCharField(\n _(u'indicator title'), blank=True, max_length=500,\n help_text=_(u'Within each result indicators can be defined. Indicators should be items '\n u'that can be counted and evaluated as the project continues and is completed.')\n )\n measure = ValidXMLCharField(\n _(u'indicator measure'), blank=True, max_length=1,\n choices=codelist_choices(INDICATOR_MEASURE),\n help_text=_(u'Choose how the indicator will be measured (in percentage or units).')\n )\n ascending = models.NullBooleanField(\n _(u'ascending'), blank=True,\n help_text=_(u'Choose ascending if the target value of the indicator is higher than the '\n u'baseline value (eg. people with access to sanitation). Choose descending if '\n u'the target value of the indicator is lower than the baseline value '\n u'(eg. 
people with diarrhea).'))\n description = ValidXMLCharField(\n _(u'indicator description'), blank=True, max_length=2000,\n help_text=_(u'You can provide further information of the indicator here.')\n )\n baseline_year = models.PositiveIntegerField(\n _(u'baseline year'), blank=True, null=True, max_length=4,\n help_text=_(u'The year the baseline value was taken.')\n )\n baseline_value = ValidXMLCharField(\n _(u'baseline value'), blank=True, max_length=50,\n help_text=_(u'The value of the baseline at the start of the project.')\n )\n baseline_comment = ValidXMLCharField(\n _(u'baseline comment'), blank=True, max_length=2000,\n help_text=_(u'Here you can provide extra information on the baseline value, if needed.')\n )\n order = models.PositiveSmallIntegerField(_(u'indicator order'), null=True, blank=True)\n default_periods = models.NullBooleanField(\n _(u'default indicator periods'), default=False, blank=True,\n help_text=_(u'Determines whether periods of indicator are used by default.')\n )\n\n def __unicode__(self):\n indicator_unicode = self.title if self.title else u'%s' % _(u'No indicator title')\n\n if self.periods.all():\n indicator_unicode += u' - %s %s' % (unicode(self.periods.count()),\n _(u'period(s)'))\n\n return indicator_unicode\n\n def save(self, *args, **kwargs):\n \"\"\"Update the values of child indicators, if a parent indicator is updated.\"\"\"\n # Update the values for an existing indicator\n if self.pk:\n for child_indicator in self.child_indicators.all():\n # Always copy title, measure and ascending. They should be the same as the parent.\n child_indicator.title = self.title\n child_indicator.measure = self.measure\n child_indicator.ascending = self.ascending\n\n # Only copy the description and baseline if the child has none (e.g. 
new)\n fields = ['description', 'baseline_year', 'baseline_value', 'baseline_comment']\n for field in fields:\n parent_field_value = getattr(self, field)\n if not getattr(child_indicator, field) and parent_field_value:\n setattr(child_indicator, field, parent_field_value)\n\n child_indicator.save()\n\n # Create a new indicator when it's added\n else:\n for child_result in self.result.child_results.all():\n child_result.project.add_indicator(child_result, self)\n\n if Indicator.objects.filter(result_id=self.result.id).exists():\n prev_indicator = Indicator.objects.filter(result_id=self.result.id).reverse()[0]\n if prev_indicator.order:\n self.order = prev_indicator.order + 1\n\n super(Indicator, self).save(*args, **kwargs)\n\n def clean(self):\n validation_errors = {}\n\n if self.pk and self.is_child_indicator():\n orig_indicator = Indicator.objects.get(pk=self.pk)\n\n # Don't allow some values to be changed when it is a child indicator\n if self.result != orig_indicator.result:\n validation_errors['result'] = u'%s' % \\\n _(u'It is not possible to update the result of this indicator, '\n u'because it is linked to a parent result.')\n if self.title != orig_indicator.title:\n validation_errors['title'] = u'%s' % \\\n _(u'It is not possible to update the title of this indicator, '\n u'because it is linked to a parent result.')\n if self.measure != orig_indicator.measure:\n validation_errors['measure'] = u'%s' % \\\n _(u'It is not possible to update the measure of this indicator, '\n u'because it is linked to a parent result.')\n if self.ascending != orig_indicator.ascending:\n validation_errors['ascending'] = u'%s' % \\\n _(u'It is not possible to update the ascending value of this indicator, '\n u'because it is linked to a parent result.')\n\n if validation_errors:\n raise ValidationError(validation_errors)\n\n def delete(self, *args, **kwargs):\n \"\"\"\n Check if indicator is ordered manually, and cascade following indicators if needed\n \"\"\"\n if self.order:\n sibling_indicators = Indicator.objects.filter(result_id=self.result.id)\n\n if not self == sibling_indicators.reverse()[0]:\n for ind in range(self.order + 1, len(sibling_indicators)):\n sibling_indicators[ind].order -= 1\n sibling_indicators[ind].save()\n\n super(Indicator, self).delete(*args, **kwargs)\n\n def iati_measure(self):\n return codelist_value(IndicatorMeasure, self, 'measure')\n\n def iati_measure_unicode(self):\n return str(self.iati_measure())\n\n def is_calculated(self):\n return self.result.project.is_impact_project\n\n def is_child_indicator(self):\n \"\"\"\n Indicates whether this indicator is linked to a parent indicator.\n \"\"\"\n return bool(self.parent_indicator)\n\n def is_parent_indicator(self):\n \"\"\"\n Indicates whether this indicator has children.\n \"\"\"\n return self.child_indicators.count() > 0\n\n @property\n def last_updated(self):\n from akvo.rsr.models import ProjectUpdate\n period_updates = ProjectUpdate.objects.filter(indicator_period__indicator=self)\n return period_updates.order_by('-created_at')[0].time_gmt if period_updates else None\n\n @property\n def baseline(self):\n \"\"\"\n Returns the baseline value of the indicator, if it can be converted to a number. 
Otherwise\n it'll return None.\n \"\"\"\n try:\n return Decimal(self.baseline_value)\n except (InvalidOperation, TypeError):\n return None\n\n @property\n def children_aggregate_percentage(self):\n \"\"\"\n Returns True if this indicator has percentage as a measure and has children that aggregate\n to this indicator.\n \"\"\"\n if self.measure == '2' and self.is_parent_indicator() and \\\n self.result.project.aggregate_children and \\\n any([ind.result.project.aggregate_to_parent for ind in self.child_indicators.all()]):\n return True\n return False\n\n class Meta:\n app_label = 'rsr'\n ordering = ['order', 'id']\n verbose_name = _(u'indicator')\n verbose_name_plural = _(u'indicators')\n\n\n# Add default indicator periods if necessary\n@receiver(post_save, sender=Indicator, dispatch_uid='add_default_periods')\ndef add_default_periods(sender, instance, created, **kwargs):\n if created:\n project = instance.result.project\n results = Result.objects.filter(project_id=project)\n default_indicator = Indicator.objects.filter(result_id__in=results,\n default_periods=True).first()\n\n if default_indicator:\n default_periods = IndicatorPeriod.objects.filter(indicator_id=default_indicator)\n\n for period in default_periods:\n period.pk = None\n\n # Blank all values except id and locked status\n period.target_value = ''\n period.target_comment = ''\n period.actual_value = ''\n period.actual_comment = ''\n\n period.indicator_id = instance.id\n period.save()\n\n\nclass IndicatorReference(models.Model):\n indicator = models.ForeignKey(Indicator, verbose_name=_(u'indicator'),\n related_name='references')\n reference = ValidXMLCharField(\n _(u'reference code'), blank=True, max_length=25,\n help_text=_(u'A code for an indicator defined in the specified vocabulary specified. '\n u'For more information on the indicator reference, see the '\n u'<a href=\"http://iatistandard.org/202/activity-standard/iati-activities/'\n u'iati-activity/result/indicator/reference/\" target=\"_blank\">IATI '\n u'codelist</a>.'))\n vocabulary = ValidXMLCharField(\n _(u'reference vocabulary'), blank=True, max_length=2,\n choices=codelist_choices(INDICATOR_VOCABULARY),\n help_text=_(u'This is the code for the vocabulary used to describe the sector. Sectors '\n u'should be mapped to DAC sectors to enable international comparison. 
'\n u'For more information on the indicator reference, see the '\n u'<a href=\"http://iatistandard.org/202/codelists/IndicatorVocabulary/\" '\n u'target=\"_blank\">IATI codelist</a>.'))\n vocabulary_uri = ValidXMLCharField(\n _(u'reference indicator URI'), blank=True, max_length=1000,\n help_text=_(u'If the vocabulary is 99 (reporting organisation), the URI where this '\n u'internal vocabulary is defined.'))\n\n class Meta:\n app_label = 'rsr'\n verbose_name = _(u'indicator reference')\n verbose_name_plural = _(u'indicator references')\n\n def __unicode__(self):\n return self.reference\n\n def iati_vocabulary(self):\n return codelist_value(IndicatorVocabulary, self, 'vocabulary')\n\n def iati_vocabulary_unicode(self):\n return str(self.iati_vocabulary())\n\n\nclass IndicatorPeriod(models.Model):\n indicator = models.ForeignKey(Indicator, verbose_name=_(u'indicator'), related_name='periods')\n parent_period = models.ForeignKey('self', blank=True, null=True, default=None,\n verbose_name=_(u'parent indicator period'),\n related_name='child_periods')\n locked = models.BooleanField(_(u'locked'), default=True, db_index=True)\n period_start = models.DateField(\n _(u'period start'), null=True, blank=True,\n help_text=_(u'The start date of the reporting period for this indicator.')\n )\n period_end = models.DateField(\n _(u'period end'), null=True, blank=True,\n help_text=_(u'The end date of the reporting period for this indicator.')\n )\n target_value = ValidXMLCharField(\n _(u'target value'), blank=True, max_length=50,\n help_text=_(u'The target value for the above period.')\n )\n target_comment = ValidXMLCharField(\n _(u'target value comment'), blank=True, max_length=2000,\n help_text=_(u'Here you can provide extra information on the target value, if needed.')\n )\n actual_value = ValidXMLCharField(\n _(u'actual value'), blank=True, max_length=50,\n help_text=_(u'A record of the achieved result for this period.')\n )\n actual_comment = ValidXMLCharField(\n _(u'actual value comment'), blank=True, max_length=2000,\n help_text=_(u'Here you can provide extra information on the actual value, if needed '\n u'(for instance, why the actual value differs from the target value).')\n )\n\n def __unicode__(self):\n if self.period_start:\n period_unicode = unicode(self.period_start)\n else:\n period_unicode = u'%s' % _(u'No start date')\n\n if self.period_end:\n period_unicode += u' - %s' % unicode(self.period_end)\n else:\n period_unicode += u' - %s' % _(u'No end date')\n\n if self.actual_value or self.target_value:\n period_unicode += u' ('\n\n if self.actual_value and self.target_value:\n period_unicode += u'actual: %s / target: %s)' % (unicode(self.actual_value),\n unicode(self.target_value))\n elif self.actual_value:\n period_unicode += u'actual: %s)' % unicode(self.actual_value)\n else:\n period_unicode += u'target: %s)' % unicode(self.target_value)\n\n return period_unicode\n\n def save(self, *args, **kwargs):\n actual_value_changed = False\n\n # When the general information of a parent period is updated, this information should also\n # be reflected in the child periods.\n if self.pk:\n for child_period in self.child_periods.all():\n # Always copy period start and end. They should be the same as the parent.\n child_period.period_start = self.period_start\n child_period.period_end = self.period_end\n\n # Only copy the target value and comments if the child has no values (in case the\n # child period is new). 
Afterwards, it is possible to adjust these values (update\n # the target for the child, for instance) and then these values should not be\n # overwritten.\n if not child_period.target_value and self.target_value:\n child_period.target_value = self.target_value\n if not child_period.target_comment and self.target_comment:\n child_period.target_comment = self.target_comment\n\n child_period.save()\n\n # Check if the actual value has changed\n orig_period = IndicatorPeriod.objects.get(pk=self.pk)\n if orig_period.actual_value != self.actual_value:\n actual_value_changed = True\n\n # In case the period is new and the period's indicator does have child indicators, the (new)\n # period should also be copied to the child indicator.\n else:\n for child_indicator in self.indicator.child_indicators.all():\n child_indicator.result.project.add_period(child_indicator, self)\n\n super(IndicatorPeriod, self).save(*args, **kwargs)\n\n # If the actual value has changed, the period has a parent period and aggregations are on,\n # then the the parent should be updated as well\n if actual_value_changed and self.is_child_period() and \\\n self.parent_period.indicator.result.project.aggregate_children and \\\n self.indicator.result.project.aggregate_to_parent:\n self.parent_period.recalculate_period()\n\n def clean(self):\n validation_errors = {}\n\n if self.pk:\n orig_period = IndicatorPeriod.objects.get(pk=self.pk)\n\n # Don't allow an actual value to be changed when the indicator period is calculated\n if self.is_calculated() and self.actual_value != orig_period.actual_value:\n validation_errors['actual_value'] = u'%s' % \\\n _(u'It is not possible to update the actual value of this indicator period, '\n u'because it is a calculated value. Please update the actual value through '\n u'a new update.')\n\n # Don't allow some values to be changed when it is a child period\n if self.is_child_period():\n if self.indicator != orig_period.indicator:\n validation_errors['indicator'] = u'%s' % \\\n _(u'It is not possible to update the indicator of this indicator period, '\n u'because it is linked to a parent result.')\n if self.period_start != orig_period.period_start:\n validation_errors['period_start'] = u'%s' % \\\n _(u'It is not possible to update the start period of this indicator, '\n u'because it is linked to a parent result.')\n if self.period_end != orig_period.period_end:\n validation_errors['period_end'] = u'%s' % \\\n _(u'It is not possible to update the end period of this indicator, '\n u'because it is linked to a parent result.')\n\n # Don't allow a start date before an end date\n if self.period_start and self.period_end and (self.period_start > self.period_end):\n validation_errors['period_start'] = u'%s' % _(u'Period start cannot be at a later time '\n u'than period end.')\n validation_errors['period_end'] = u'%s' % _(u'Period start cannot be at a later time '\n u'than period end.')\n\n # TODO: add validation that prevents creating a period for a child indicator\n if validation_errors:\n raise ValidationError(validation_errors)\n\n def recalculate_period(self, save=True, only_self=False):\n \"\"\"\n Re-calculate the values of all updates from the start. 
This will prevent strange values,\n for example when an update is deleted or edited after it has been approved.\n\n :param save; Boolean, saves actual value to period if True\n :param only_self; Boolean, to take into account if this is a parent or just re-calculate\n this period only\n :return Actual value of period\n \"\"\"\n\n # If this period is a parent period, the sum or average of the children should be\n # re-calculated\n if not only_self and self.is_parent_period() and \\\n self.indicator.result.project.aggregate_children:\n return self.recalculate_children(save)\n\n prev_val = '0'\n\n # For every approved update, add up the new value (if possible)\n for update in self.data.filter(status='A').order_by('created_at'):\n update.period_actual_value = prev_val\n update.save(recalculate=False)\n\n if update.relative_data:\n try:\n # Try to add up the update to the previous actual value\n prev_val = str(Decimal(prev_val) + Decimal(update.data))\n except InvalidOperation:\n # If not possible, the update data or previous value is a normal string\n prev_val = update.data\n else:\n prev_val = update.data\n\n # For every non-approved update, set the data to the current data\n for update in self.data.exclude(status='A'):\n update.period_actual_value = prev_val\n update.save(recalculate=False)\n\n # Special case: only_self and no data should give an empty string instead of '0'\n if only_self and not self.data.exists():\n prev_val = ''\n\n # Finally, update the actual value of the period itself\n if save:\n self.actual_value = prev_val\n self.save()\n\n # Return the actual value of the period itself\n return prev_val\n\n def recalculate_children(self, save=True):\n \"\"\"\n Re-calculate the actual value of this period based on the actual values of the child\n periods.\n\n In case the measurement is 'Percentage', it should be an average of all child periods.\n Otherwise, the child period values can just be added up.\n\n :param save; Boolean, saves to period if True\n :return Actual value of period\n \"\"\"\n if self.indicator.measure == '2':\n new_value = self.child_periods_average()\n else:\n new_value = self.child_periods_sum(include_self=True)\n\n if save:\n self.actual_value = new_value\n self.save()\n\n return new_value\n\n def update_actual_comment(self, save=True):\n \"\"\"\n Set the actual comment to the text of the latest approved update.\n\n :param save; Boolean, save period if True\n :return Actual comment of period\n \"\"\"\n\n approved_updates = self.data.filter(status=IndicatorPeriodData.STATUS_APPROVED_CODE)\n update_texts = [\n u'{}: {}'.format(update.last_modified_at.strftime('%d-%m-%Y'), update.text)\n for update in approved_updates.order_by('-created_at')\n ]\n actual_comment = u' | '.join(update_texts)\n if len(actual_comment) >= 2000: # max_size\n actual_comment = u'{} ...'.format(actual_comment[:1995])\n\n self.actual_comment = actual_comment\n if save:\n self.save()\n\n return self.actual_comment\n\n def is_calculated(self):\n \"\"\"\n When a period has got indicator updates, we consider the actual value to be a\n 'calculated' value, meaning that it's not possible to update the actual value directly.\n Only through indicator updates.\n \"\"\"\n return self.data.exists()\n\n def actual_value_is_decimal(self):\n\n try:\n Decimal(self.actual_value)\n return True\n except (InvalidOperation, TypeError):\n return not self.actual_value\n\n def is_child_period(self):\n \"\"\"\n Indicates whether this period is linked to a parent period\n \"\"\"\n return bool(self.parent_period)\n\n 
def is_parent_period(self):\n \"\"\"\n Indicates whether this result has child periods linked to it.\n \"\"\"\n return self.child_periods.count() > 0\n\n def child_periods_with_data(self):\n \"\"\"\n Returns the child indicator periods with numeric data\n \"\"\"\n children_with_data = []\n for child in self.child_periods.all():\n try:\n Decimal(child.actual_value)\n children_with_data += [child.pk]\n except (InvalidOperation, TypeError):\n pass\n return self.child_periods.filter(pk__in=children_with_data)\n\n # TODO: refactor child_periods_sum() and child_periods_average() and child_periods_with_data(),\n # they use each other in very inefficient ways I think\n def child_periods_sum(self, include_self=False):\n \"\"\"\n Returns the sum of child indicator periods.\n\n :param include_self; Boolean to include the updates on the period itself, as well as its'\n children\n :return String of the sum\n \"\"\"\n period_sum = 0\n\n # Loop through the child periods and sum up all the values\n for period in self.child_periods.all():\n if period.indicator.result.project.aggregate_to_parent and period.actual_value:\n try:\n period_sum += Decimal(period.actual_value)\n except (InvalidOperation, TypeError):\n pass\n\n if include_self:\n try:\n period_sum += Decimal(self.recalculate_period(save=False, only_self=True))\n except (InvalidOperation, TypeError):\n pass\n\n return str(period_sum)\n\n def child_periods_average(self):\n \"\"\"\n Returns the average of child indicator periods.\n\n :return String of the average\n \"\"\"\n if self.indicator.result.project.aggregate_children:\n child_periods = self.child_periods_with_data()\n for child in child_periods:\n if not (child.indicator.result.project.aggregate_to_parent and child.actual_value):\n child_periods = child_periods.exclude(pk=child.pk)\n\n number_of_child_periods = child_periods.count()\n if number_of_child_periods > 0:\n return str(Decimal(self.child_periods_sum()) / number_of_child_periods)\n return '0'\n\n def adjacent_period(self, next_period=True):\n \"\"\"\n Returns the next or previous indicator period, if we can find one with a start date,\n and we have a start date ourselves.\n\n :param next_period; Boolean indicating either the next (True) or previous (False) period.\n \"\"\"\n if not self.period_start:\n return None\n elif next_period:\n return self.indicator.periods.exclude(period_start=None).filter(\n period_start__gt=self.period_start).order_by('period_start').first()\n else:\n return self.indicator.periods.exclude(period_start=None).filter(\n period_start__lt=self.period_start).order_by('-period_start').first()\n\n @property\n def percent_accomplishment(self):\n \"\"\"\n Return the percentage completed for this indicator period. If not possible to convert the\n values to numbers, return None.\n \"\"\"\n try:\n return round(Decimal(self.actual_value) / Decimal(self.target_value) * 100, 1)\n except (InvalidOperation, TypeError, DivisionByZero):\n return None\n\n @property\n def percent_accomplishment_100(self):\n \"\"\"\n Similar to the percent_accomplishment property. 
However, it won't return any number bigger\n than 100.\n \"\"\"\n return max(self.percent_accomplishment, 100) if self.percent_accomplishment else None\n\n @property\n def actual(self):\n \"\"\"\n Returns the actual value of the indicator period, if it can be converted to a number.\n Otherwise it'll return the baseline value, which is a calculated value.\n \"\"\"\n try:\n return Decimal(self.actual_value)\n except (InvalidOperation, TypeError):\n return self.actual_value if self.actual_value else self.baseline\n\n @property\n def target(self):\n \"\"\"\n Returns the target value of the indicator period, if it can be converted to a number.\n Otherwise it'll return just the target value.\n \"\"\"\n try:\n return Decimal(self.target_value)\n except (InvalidOperation, TypeError):\n return self.target_value\n\n @property\n def baseline(self):\n \"\"\"\n Returns the baseline value of the indicator. The baseline is a calculated value:\n\n - If the period has no previous periods, then it's the baseline value of the indicator\n - If the period has a previous period, then it's the actual value of that period\n\n When this baseline value is empty, it returns 0. Otherwise (e.g. 'Available') it just\n returns the baseline value.\n \"\"\"\n previous_period = self.adjacent_period(False)\n baseline = self.indicator.baseline_value if not previous_period else previous_period.actual\n\n if not baseline:\n return Decimal(0)\n else:\n try:\n return Decimal(baseline)\n except (InvalidOperation, TypeError):\n return baseline\n\n class Meta:\n app_label = 'rsr'\n verbose_name = _(u'indicator period')\n verbose_name_plural = _(u'indicator periods')\n ordering = ['period_start']\n\n\ndef image_path(instance, file_name):\n \"\"\"\n Create a path like 'db/indicator_period/<period.id>/data_photo/<data.id>/image_name.ext'.\n\n :param instance; an IndicatorPeriodData instance\n :param file_name; the name of the file that is to be stored\n \"\"\"\n path = 'db/indicator_period/%d/data_photo/%%(instance_pk)s/%%(file_name)s' % instance.period.pk\n return rsr_image_path(instance, file_name, path)\n\n\ndef file_path(instance, file_name):\n \"\"\"\n Create a path like 'db/indicator_period/<period.id>/data_file/<data.id>/image_name.ext'.\n\n :param instance; an IndicatorPeriodData instance\n :param file_name; the name of the file that is to be stored\n \"\"\"\n path = 'db/indicator_period/%d/data_file/%%(instance_pk)s/%%(file_name)s' % instance.period.pk\n return rsr_image_path(instance, file_name, path)\n\n\nclass IndicatorPeriodData(TimestampsMixin, models.Model):\n \"\"\"\n Model for adding data to an indicator period.\n \"\"\"\n STATUS_NEW = unicode(_(u'new'))\n STATUS_DRAFT = unicode(_(u'draft'))\n STATUS_PENDING = unicode(_(u'pending approval'))\n STATUS_REVISION = unicode(_(u'return for revision'))\n STATUS_APPROVED = unicode(_(u'approved'))\n\n STATUS_NEW_CODE = u'N'\n STATUS_DRAFT_CODE = u'D'\n STATUS_PENDING_CODE = u'P'\n STATUS_REVISION_CODE = u'R'\n STATUS_APPROVED_CODE = u'A'\n\n STATUS_CODES_LIST = [STATUS_NEW_CODE, STATUS_DRAFT_CODE, STATUS_PENDING_CODE,\n STATUS_REVISION_CODE, STATUS_APPROVED_CODE]\n STATUSES_LABELS_LIST = [STATUS_NEW, STATUS_DRAFT, STATUS_PENDING, STATUS_REVISION,\n STATUS_APPROVED]\n STATUSES = zip(STATUS_CODES_LIST, STATUSES_LABELS_LIST)\n\n UPDATE_METHODS = (\n ('W', _(u'web')),\n ('M', _(u'mobile')),\n )\n\n period = models.ForeignKey(IndicatorPeriod, verbose_name=_(u'indicator period'),\n related_name='data')\n user = models.ForeignKey(settings.AUTH_USER_MODEL, verbose_name=_(u'user'), 
db_index=True)\n relative_data = models.BooleanField(_(u'relative data'), default=True)\n # TODO: rename to update of period_update; we're using the term Indicator update in the UI\n data = ValidXMLCharField(_(u'data'), max_length=300)\n period_actual_value = ValidXMLCharField(_(u'period actual value'), max_length=50, default='')\n status = ValidXMLCharField(_(u'status'), max_length=1, choices=STATUSES, db_index=True,\n default=STATUS_NEW_CODE)\n text = ValidXMLTextField(_(u'text'), blank=True)\n photo = ImageField(_(u'photo'), blank=True, upload_to=image_path)\n file = models.FileField(_(u'file'), blank=True, upload_to=file_path)\n update_method = ValidXMLCharField(_(u'update method'), blank=True, max_length=1,\n choices=UPDATE_METHODS, db_index=True, default='W')\n\n class Meta:\n app_label = 'rsr'\n verbose_name = _(u'indicator period data')\n verbose_name_plural = _(u'indicator period data')\n\n def save(self, recalculate=True, *args, **kwargs):\n super(IndicatorPeriodData, self).save(*args, **kwargs)\n\n # In case the status is approved, recalculate the period\n if recalculate and self.status == self.STATUS_APPROVED_CODE:\n self.period.recalculate_period()\n self.period.update_actual_comment()\n\n def delete(self, *args, **kwargs):\n old_status = self.status\n\n super(IndicatorPeriodData, self).delete(*args, **kwargs)\n\n # In case the status was approved, recalculate the period\n if old_status == self.STATUS_APPROVED_CODE:\n self.period.recalculate_period()\n\n def clean(self):\n \"\"\"\n Perform several checks before we can actually save the update data.\n \"\"\"\n validation_errors = {}\n\n project = self.period.indicator.result.project\n\n # Don't allow a data update to an unpublished project\n if not project.is_published():\n validation_errors['period'] = unicode(_(u'Indicator period must be part of a published '\n u'project to add data to it'))\n raise ValidationError(validation_errors)\n\n # Don't allow a data update to a non-Impact project\n if not project.is_impact_project:\n validation_errors['period'] = unicode(_(u'Indicator period must be part of an RSR '\n u'Impact project to add data to it'))\n raise ValidationError(validation_errors)\n\n # Don't allow a data update to a locked period\n if self.period.locked:\n validation_errors['period'] = unicode(_(u'Indicator period must be unlocked to add '\n u'data to it'))\n raise ValidationError(validation_errors)\n\n # Don't allow a data update to an aggregated parent period with 'percentage' as measurement\n if self.period.indicator.children_aggregate_percentage:\n validation_errors['period'] = unicode(\n _(u'Indicator period has an average aggregate of the child projects. 
Disable '\n u'aggregations to add data to it'))\n raise ValidationError(validation_errors)\n\n if self.pk:\n orig = IndicatorPeriodData.objects.get(pk=self.pk)\n\n # Don't allow for the indicator period to change\n if orig.period != self.period:\n validation_errors['period'] = unicode(_(u'Not allowed to change indicator period '\n u'in a data update'))\n if validation_errors:\n raise ValidationError(validation_errors)\n\n @property\n def status_display(self):\n \"\"\"\n Returns the display of the status.\n \"\"\"\n try:\n return dict(self.STATUSES)[self.status].capitalize()\n except KeyError:\n return u''\n\n @property\n def photo_url(self):\n \"\"\"\n Returns the full URL of the photo.\n \"\"\"\n return self.photo.url if self.photo else u''\n\n @property\n def file_url(self):\n \"\"\"\n Returns the full URL of the file.\n \"\"\"\n return self.file.url if self.file else u''\n\n def update_new_value(self):\n \"\"\"\n Returns a string with the new value, taking into account a relative update.\n \"\"\"\n if self.relative_data:\n try:\n add_up = Decimal(self.data) + Decimal(self.period_actual_value)\n relative = '+' + str(self.data) if self.data >= 0 else str(self.data)\n return \"{} ({})\".format(str(add_up), relative)\n except (InvalidOperation, TypeError):\n return self.data\n else:\n try:\n substract = Decimal(self.data) - Decimal(self.period_actual_value)\n relative = '+' + str(substract) if substract >= 0 else str(substract)\n return \"{} ({})\".format(self.data, relative)\n except (InvalidOperation, TypeError):\n return self.data\n\n\nclass IndicatorPeriodDataComment(TimestampsMixin, models.Model):\n \"\"\"\n Model for adding comments to data of an indicator period.\n \"\"\"\n data = models.ForeignKey(IndicatorPeriodData, verbose_name=_(u'indicator period data'),\n related_name='comments')\n user = models.ForeignKey(settings.AUTH_USER_MODEL, verbose_name=_(u'user'), db_index=True)\n comment = ValidXMLTextField(_(u'comment'), blank=True)\n\n class Meta:\n app_label = 'rsr'\n verbose_name = _(u'indicator period data comment')\n verbose_name_plural = _(u'indicator period data comments')\n\n\nclass IndicatorPeriodTargetLocation(models.Model):\n period = models.ForeignKey(IndicatorPeriod, verbose_name=_(u'indicator period'),\n related_name='target_locations')\n location = ValidXMLCharField(\n _(u'location'), blank=True, max_length=25,\n help_text=_(u'A location of the target of this indicator period. The location must be the '\n u'reference of an existing location of the current project.'))\n\n class Meta:\n app_label = 'rsr'\n verbose_name = _(u'indicator period target location')\n verbose_name_plural = _(u'indicator period target locations')\n\n def __unicode__(self):\n return self.location\n\n\nclass IndicatorPeriodActualLocation(models.Model):\n period = models.ForeignKey(IndicatorPeriod, verbose_name=_(u'indicator period'),\n related_name='actual_locations')\n location = ValidXMLCharField(\n _(u'location'), blank=True, max_length=25,\n help_text=_(u'A location of the actual of this indicator period. 
The location must be the '\n u'reference of an existing location of the current project.'))\n\n class Meta:\n app_label = 'rsr'\n verbose_name = _(u'indicator period actual location')\n verbose_name_plural = _(u'indicator period actual locations')\n\n def __unicode__(self):\n return self.location\n\n\nclass IndicatorPeriodTargetDimension(models.Model):\n period = models.ForeignKey(IndicatorPeriod, verbose_name=_(u'indicator period'),\n related_name='target_dimensions')\n name = ValidXMLCharField(\n _(u'dimension name'), blank=True, max_length=100,\n help_text=_(u'The name of a category being disaggregated in this target value of the '\n u'indicator period (e.g. \"Age\").'))\n value = ValidXMLCharField(\n _(u'dimension value'), blank=True, max_length=100,\n help_text=_(u'The value that is being being disaggregated (e.g. \"Older than 60 years\").'))\n\n class Meta:\n app_label = 'rsr'\n verbose_name = _(u'indicator period target dimension')\n verbose_name_plural = _(u'indicator period target dimensions')\n\n def __unicode__(self):\n return self.name + ': ' + self.value if self.name and self.value else ''\n\n\nclass IndicatorPeriodActualDimension(models.Model):\n period = models.ForeignKey(IndicatorPeriod, verbose_name=_(u'indicator period'),\n related_name='actual_dimensions')\n name = ValidXMLCharField(\n _(u'dimension name'), blank=True, max_length=100,\n help_text=_(u'The name of a category being disaggregated in this actual value of the '\n u'indicator period (e.g. \"Age\").'))\n value = ValidXMLCharField(\n _(u'dimension value'), blank=True, max_length=100,\n help_text=_(u'The value that is being being disaggregated (e.g. \"Older than 60 years\").'))\n\n class Meta:\n app_label = 'rsr'\n verbose_name = _(u'indicator period actual dimension')\n verbose_name_plural = _(u'indicator period actual dimensions')\n\n def __unicode__(self):\n return self.name + ': ' + self.value if self.name and self.value else ''\n", "path": "akvo/rsr/models/indicator.py" } ]
[ { "content": "# -*- coding: utf-8 -*-\n\n# Akvo RSR is covered by the GNU Affero General Public License.\n# See more details in the license.txt file located at the root folder of the Akvo RSR module.\n# For additional details on the GNU license please see < http://www.gnu.org/licenses/agpl.html >.\n\nfrom akvo.codelists.models import IndicatorMeasure, IndicatorVocabulary\nfrom akvo.codelists.store.codelists_v202 import INDICATOR_MEASURE, INDICATOR_VOCABULARY\nfrom akvo.rsr.fields import ValidXMLCharField, ValidXMLTextField\nfrom akvo.rsr.mixins import TimestampsMixin\nfrom akvo.utils import codelist_choices\nfrom akvo.utils import codelist_value\nfrom akvo.utils import rsr_image_path\nfrom .result import Result\n\nfrom decimal import Decimal, InvalidOperation, DivisionByZero\n\nfrom django.conf import settings\nfrom django.core.exceptions import ValidationError\nfrom django.db import models\nfrom django.utils.translation import ugettext_lazy as _\nfrom django.db.models.signals import post_save\nfrom django.dispatch import receiver\n\nfrom sorl.thumbnail.fields import ImageField\n\n\nclass Indicator(models.Model):\n result = models.ForeignKey('Result', verbose_name=_(u'result'), related_name='indicators')\n parent_indicator = models.ForeignKey(\n 'self', blank=True, null=True, default=None,\n verbose_name=_(u'parent indicator'), related_name='child_indicators'\n )\n title = ValidXMLCharField(\n _(u'indicator title'), blank=True, max_length=500,\n help_text=_(u'Within each result indicators can be defined. Indicators should be items '\n u'that can be counted and evaluated as the project continues and is completed.')\n )\n measure = ValidXMLCharField(\n _(u'indicator measure'), blank=True, max_length=1,\n choices=codelist_choices(INDICATOR_MEASURE),\n help_text=_(u'Choose how the indicator will be measured (in percentage or units).')\n )\n ascending = models.NullBooleanField(\n _(u'ascending'), blank=True,\n help_text=_(u'Choose ascending if the target value of the indicator is higher than the '\n u'baseline value (eg. people with access to sanitation). Choose descending if '\n u'the target value of the indicator is lower than the baseline value '\n u'(eg. 
people with diarrhea).'))\n description = ValidXMLCharField(\n _(u'indicator description'), blank=True, max_length=2000,\n help_text=_(u'You can provide further information of the indicator here.')\n )\n baseline_year = models.PositiveIntegerField(\n _(u'baseline year'), blank=True, null=True, max_length=4,\n help_text=_(u'The year the baseline value was taken.')\n )\n baseline_value = ValidXMLCharField(\n _(u'baseline value'), blank=True, max_length=50,\n help_text=_(u'The value of the baseline at the start of the project.')\n )\n baseline_comment = ValidXMLCharField(\n _(u'baseline comment'), blank=True, max_length=2000,\n help_text=_(u'Here you can provide extra information on the baseline value, if needed.')\n )\n order = models.PositiveSmallIntegerField(_(u'indicator order'), null=True, blank=True)\n default_periods = models.NullBooleanField(\n _(u'default indicator periods'), default=False, blank=True,\n help_text=_(u'Determines whether periods of indicator are used by default.')\n )\n\n def __unicode__(self):\n indicator_unicode = self.title if self.title else u'%s' % _(u'No indicator title')\n\n if self.periods.all():\n indicator_unicode += u' - %s %s' % (unicode(self.periods.count()),\n _(u'period(s)'))\n\n return indicator_unicode\n\n def save(self, *args, **kwargs):\n \"\"\"Update the values of child indicators, if a parent indicator is updated.\"\"\"\n # Update the values for an existing indicator\n if self.pk:\n for child_indicator in self.child_indicators.all():\n # Always copy title, measure and ascending. They should be the same as the parent.\n child_indicator.title = self.title\n child_indicator.measure = self.measure\n child_indicator.ascending = self.ascending\n\n # Only copy the description and baseline if the child has none (e.g. 
new)\n fields = ['description', 'baseline_year', 'baseline_value', 'baseline_comment']\n for field in fields:\n parent_field_value = getattr(self, field)\n if not getattr(child_indicator, field) and parent_field_value:\n setattr(child_indicator, field, parent_field_value)\n\n child_indicator.save()\n\n # Create a new indicator when it's added\n else:\n for child_result in self.result.child_results.all():\n child_result.project.add_indicator(child_result, self)\n\n if Indicator.objects.filter(result_id=self.result.id).exists():\n prev_indicator = Indicator.objects.filter(result_id=self.result.id).reverse()[0]\n if prev_indicator.order:\n self.order = prev_indicator.order + 1\n\n super(Indicator, self).save(*args, **kwargs)\n\n def clean(self):\n validation_errors = {}\n\n if self.pk and self.is_child_indicator():\n orig_indicator = Indicator.objects.get(pk=self.pk)\n\n # Don't allow some values to be changed when it is a child indicator\n if self.result != orig_indicator.result:\n validation_errors['result'] = u'%s' % \\\n _(u'It is not possible to update the result of this indicator, '\n u'because it is linked to a parent result.')\n if self.title != orig_indicator.title:\n validation_errors['title'] = u'%s' % \\\n _(u'It is not possible to update the title of this indicator, '\n u'because it is linked to a parent result.')\n if self.measure != orig_indicator.measure:\n validation_errors['measure'] = u'%s' % \\\n _(u'It is not possible to update the measure of this indicator, '\n u'because it is linked to a parent result.')\n if self.ascending != orig_indicator.ascending:\n validation_errors['ascending'] = u'%s' % \\\n _(u'It is not possible to update the ascending value of this indicator, '\n u'because it is linked to a parent result.')\n\n if validation_errors:\n raise ValidationError(validation_errors)\n\n def delete(self, *args, **kwargs):\n \"\"\"\n Check if indicator is ordered manually, and cascade following indicators if needed\n \"\"\"\n if self.order:\n sibling_indicators = Indicator.objects.filter(result_id=self.result.id)\n\n if not self == sibling_indicators.reverse()[0]:\n for ind in range(self.order + 1, len(sibling_indicators)):\n sibling_indicators[ind].order -= 1\n sibling_indicators[ind].save()\n\n super(Indicator, self).delete(*args, **kwargs)\n\n def iati_measure(self):\n return codelist_value(IndicatorMeasure, self, 'measure')\n\n def iati_measure_unicode(self):\n return str(self.iati_measure())\n\n def is_calculated(self):\n return self.result.project.is_impact_project\n\n def is_child_indicator(self):\n \"\"\"\n Indicates whether this indicator is linked to a parent indicator.\n \"\"\"\n return bool(self.parent_indicator)\n\n def is_parent_indicator(self):\n \"\"\"\n Indicates whether this indicator has children.\n \"\"\"\n return self.child_indicators.count() > 0\n\n @property\n def last_updated(self):\n from akvo.rsr.models import ProjectUpdate\n period_updates = ProjectUpdate.objects.filter(indicator_period__indicator=self)\n return period_updates.order_by('-created_at')[0].time_gmt if period_updates else None\n\n @property\n def baseline(self):\n \"\"\"\n Returns the baseline value of the indicator, if it can be converted to a number. 
Otherwise\n it'll return None.\n \"\"\"\n try:\n return Decimal(self.baseline_value)\n except (InvalidOperation, TypeError):\n return None\n\n @property\n def children_aggregate_percentage(self):\n \"\"\"\n Returns True if this indicator has percentage as a measure and has children that aggregate\n to this indicator.\n \"\"\"\n if self.measure == '2' and self.is_parent_indicator() and \\\n self.result.project.aggregate_children and \\\n any([ind.result.project.aggregate_to_parent for ind in self.child_indicators.all()]):\n return True\n return False\n\n class Meta:\n app_label = 'rsr'\n ordering = ['order', 'id']\n verbose_name = _(u'indicator')\n verbose_name_plural = _(u'indicators')\n\n\n# Add default indicator periods if necessary\n@receiver(post_save, sender=Indicator, dispatch_uid='add_default_periods')\ndef add_default_periods(sender, instance, created, **kwargs):\n if created:\n project = instance.result.project\n results = Result.objects.filter(project_id=project)\n default_indicator = Indicator.objects.filter(result_id__in=results,\n default_periods=True).first()\n\n if default_indicator:\n default_periods = IndicatorPeriod.objects.filter(indicator_id=default_indicator)\n\n for period in default_periods:\n period.pk = None\n\n # Blank all values except id and locked status\n period.target_value = ''\n period.target_comment = ''\n period.actual_value = ''\n period.actual_comment = ''\n\n period.indicator_id = instance.id\n period.save()\n\n\nclass IndicatorReference(models.Model):\n indicator = models.ForeignKey(Indicator, verbose_name=_(u'indicator'),\n related_name='references')\n reference = ValidXMLCharField(\n _(u'reference code'), blank=True, max_length=25,\n help_text=_(u'A code for an indicator defined in the specified vocabulary specified. '\n u'For more information on the indicator reference, see the '\n u'<a href=\"http://iatistandard.org/202/activity-standard/iati-activities/'\n u'iati-activity/result/indicator/reference/\" target=\"_blank\">IATI '\n u'codelist</a>.'))\n vocabulary = ValidXMLCharField(\n _(u'reference vocabulary'), blank=True, max_length=2,\n choices=codelist_choices(INDICATOR_VOCABULARY),\n help_text=_(u'This is the code for the vocabulary used to describe the sector. Sectors '\n u'should be mapped to DAC sectors to enable international comparison. 
'\n u'For more information on the indicator reference, see the '\n u'<a href=\"http://iatistandard.org/202/codelists/IndicatorVocabulary/\" '\n u'target=\"_blank\">IATI codelist</a>.'))\n vocabulary_uri = ValidXMLCharField(\n _(u'reference indicator URI'), blank=True, max_length=1000,\n help_text=_(u'If the vocabulary is 99 (reporting organisation), the URI where this '\n u'internal vocabulary is defined.'))\n\n class Meta:\n app_label = 'rsr'\n verbose_name = _(u'indicator reference')\n verbose_name_plural = _(u'indicator references')\n\n def __unicode__(self):\n return self.reference\n\n def iati_vocabulary(self):\n return codelist_value(IndicatorVocabulary, self, 'vocabulary')\n\n def iati_vocabulary_unicode(self):\n return str(self.iati_vocabulary())\n\n\nclass IndicatorPeriod(models.Model):\n indicator = models.ForeignKey(Indicator, verbose_name=_(u'indicator'), related_name='periods')\n parent_period = models.ForeignKey('self', blank=True, null=True, default=None,\n verbose_name=_(u'parent indicator period'),\n related_name='child_periods')\n locked = models.BooleanField(_(u'locked'), default=True, db_index=True)\n period_start = models.DateField(\n _(u'period start'), null=True, blank=True,\n help_text=_(u'The start date of the reporting period for this indicator.')\n )\n period_end = models.DateField(\n _(u'period end'), null=True, blank=True,\n help_text=_(u'The end date of the reporting period for this indicator.')\n )\n target_value = ValidXMLCharField(\n _(u'target value'), blank=True, max_length=50,\n help_text=_(u'The target value for the above period.')\n )\n target_comment = ValidXMLCharField(\n _(u'target value comment'), blank=True, max_length=2000,\n help_text=_(u'Here you can provide extra information on the target value, if needed.')\n )\n actual_value = ValidXMLCharField(\n _(u'actual value'), blank=True, max_length=50,\n help_text=_(u'A record of the achieved result for this period.')\n )\n actual_comment = ValidXMLCharField(\n _(u'actual value comment'), blank=True, max_length=2000,\n help_text=_(u'Here you can provide extra information on the actual value, if needed '\n u'(for instance, why the actual value differs from the target value).')\n )\n\n def __unicode__(self):\n if self.period_start:\n period_unicode = unicode(self.period_start)\n else:\n period_unicode = u'%s' % _(u'No start date')\n\n if self.period_end:\n period_unicode += u' - %s' % unicode(self.period_end)\n else:\n period_unicode += u' - %s' % _(u'No end date')\n\n if self.actual_value or self.target_value:\n period_unicode += u' ('\n\n if self.actual_value and self.target_value:\n period_unicode += u'actual: %s / target: %s)' % (unicode(self.actual_value),\n unicode(self.target_value))\n elif self.actual_value:\n period_unicode += u'actual: %s)' % unicode(self.actual_value)\n else:\n period_unicode += u'target: %s)' % unicode(self.target_value)\n\n return period_unicode\n\n def save(self, *args, **kwargs):\n actual_value_changed = False\n\n # When the general information of a parent period is updated, this information should also\n # be reflected in the child periods.\n if self.pk:\n for child_period in self.child_periods.all():\n # Always copy period start and end. They should be the same as the parent.\n child_period.period_start = self.period_start\n child_period.period_end = self.period_end\n\n # Only copy the target value and comments if the child has no values (in case the\n # child period is new). 
Afterwards, it is possible to adjust these values (update\n # the target for the child, for instance) and then these values should not be\n # overwritten.\n if not child_period.target_value and self.target_value:\n child_period.target_value = self.target_value\n if not child_period.target_comment and self.target_comment:\n child_period.target_comment = self.target_comment\n\n child_period.save()\n\n # Check if the actual value has changed\n orig_period = IndicatorPeriod.objects.get(pk=self.pk)\n if orig_period.actual_value != self.actual_value:\n actual_value_changed = True\n\n # In case the period is new and the period's indicator does have child indicators, the (new)\n # period should also be copied to the child indicator.\n else:\n for child_indicator in self.indicator.child_indicators.all():\n child_indicator.result.project.add_period(child_indicator, self)\n\n super(IndicatorPeriod, self).save(*args, **kwargs)\n\n # If the actual value has changed, the period has a parent period and aggregations are on,\n # then the the parent should be updated as well\n if actual_value_changed and self.is_child_period() and \\\n self.parent_period.indicator.result.project.aggregate_children and \\\n self.indicator.result.project.aggregate_to_parent:\n self.parent_period.recalculate_period()\n\n def clean(self):\n validation_errors = {}\n\n if self.pk:\n orig_period = IndicatorPeriod.objects.get(pk=self.pk)\n\n # Don't allow an actual value to be changed when the indicator period is calculated\n if self.is_calculated() and self.actual_value != orig_period.actual_value:\n validation_errors['actual_value'] = u'%s' % \\\n _(u'It is not possible to update the actual value of this indicator period, '\n u'because it is a calculated value. Please update the actual value through '\n u'a new update.')\n\n # Don't allow some values to be changed when it is a child period\n if self.is_child_period():\n if self.indicator != orig_period.indicator:\n validation_errors['indicator'] = u'%s' % \\\n _(u'It is not possible to update the indicator of this indicator period, '\n u'because it is linked to a parent result.')\n if self.period_start != orig_period.period_start:\n validation_errors['period_start'] = u'%s' % \\\n _(u'It is not possible to update the start period of this indicator, '\n u'because it is linked to a parent result.')\n if self.period_end != orig_period.period_end:\n validation_errors['period_end'] = u'%s' % \\\n _(u'It is not possible to update the end period of this indicator, '\n u'because it is linked to a parent result.')\n\n # Don't allow a start date before an end date\n if self.period_start and self.period_end and (self.period_start > self.period_end):\n validation_errors['period_start'] = u'%s' % _(u'Period start cannot be at a later time '\n u'than period end.')\n validation_errors['period_end'] = u'%s' % _(u'Period start cannot be at a later time '\n u'than period end.')\n\n # TODO: add validation that prevents creating a period for a child indicator\n if validation_errors:\n raise ValidationError(validation_errors)\n\n def recalculate_period(self, save=True, only_self=False):\n \"\"\"\n Re-calculate the values of all updates from the start. 
This will prevent strange values,\n for example when an update is deleted or edited after it has been approved.\n\n :param save; Boolean, saves actual value to period if True\n :param only_self; Boolean, to take into account if this is a parent or just re-calculate\n this period only\n :return Actual value of period\n \"\"\"\n\n # If this period is a parent period, the sum or average of the children should be\n # re-calculated\n if not only_self and self.is_parent_period() and \\\n self.indicator.result.project.aggregate_children:\n return self.recalculate_children(save)\n\n prev_val = '0'\n\n # For every approved update, add up the new value (if possible)\n for update in self.data.filter(status='A').order_by('created_at'):\n update.period_actual_value = prev_val\n update.save(recalculate=False)\n\n if update.relative_data:\n try:\n # Try to add up the update to the previous actual value\n prev_val = str(Decimal(prev_val) + Decimal(update.data))\n except InvalidOperation:\n # If not possible, the update data or previous value is a normal string\n prev_val = update.data\n else:\n prev_val = update.data\n\n # For every non-approved update, set the data to the current data\n for update in self.data.exclude(status='A'):\n update.period_actual_value = prev_val\n update.save(recalculate=False)\n\n # Special case: only_self and no data should give an empty string instead of '0'\n if only_self and not self.data.exists():\n prev_val = ''\n\n # Finally, update the actual value of the period itself\n if save:\n self.actual_value = prev_val\n self.save()\n\n # Return the actual value of the period itself\n return prev_val\n\n def recalculate_children(self, save=True):\n \"\"\"\n Re-calculate the actual value of this period based on the actual values of the child\n periods.\n\n In case the measurement is 'Percentage', it should be an average of all child periods.\n Otherwise, the child period values can just be added up.\n\n :param save; Boolean, saves to period if True\n :return Actual value of period\n \"\"\"\n if self.indicator.measure == '2':\n new_value = self.child_periods_average()\n else:\n new_value = self.child_periods_sum(include_self=True)\n\n if save:\n self.actual_value = new_value\n self.save()\n\n return new_value\n\n def update_actual_comment(self, save=True):\n \"\"\"\n Set the actual comment to the text of the latest approved update.\n\n :param save; Boolean, save period if True\n :return Actual comment of period\n \"\"\"\n\n approved_updates = self.data.filter(status=IndicatorPeriodData.STATUS_APPROVED_CODE)\n update_texts = [\n u'{}: {}'.format(update.last_modified_at.strftime('%d-%m-%Y'), update.text)\n for update in approved_updates.order_by('-created_at')\n ]\n actual_comment = u' | '.join(update_texts)\n if len(actual_comment) >= 2000: # max_size\n actual_comment = u'{} ...'.format(actual_comment[:1995])\n\n self.actual_comment = actual_comment\n if save:\n self.save()\n\n return self.actual_comment\n\n def is_calculated(self):\n \"\"\"\n When a period has got indicator updates, we consider the actual value to be a\n 'calculated' value, meaning that it's not possible to update the actual value directly.\n Only through indicator updates.\n \"\"\"\n return self.data.exists()\n\n def actual_value_is_decimal(self):\n\n try:\n Decimal(self.actual_value)\n return True\n except (InvalidOperation, TypeError):\n return not self.actual_value\n\n def is_child_period(self):\n \"\"\"\n Indicates whether this period is linked to a parent period\n \"\"\"\n return bool(self.parent_period)\n\n 
def is_parent_period(self):\n \"\"\"\n Indicates whether this result has child periods linked to it.\n \"\"\"\n return self.child_periods.count() > 0\n\n def child_periods_with_data(self):\n \"\"\"\n Returns the child indicator periods with numeric data\n \"\"\"\n children_with_data = []\n for child in self.child_periods.all():\n try:\n Decimal(child.actual_value)\n children_with_data += [child.pk]\n except (InvalidOperation, TypeError):\n pass\n return self.child_periods.filter(pk__in=children_with_data)\n\n # TODO: refactor child_periods_sum() and child_periods_average() and child_periods_with_data(),\n # they use each other in very inefficient ways I think\n def child_periods_sum(self, include_self=False):\n \"\"\"\n Returns the sum of child indicator periods.\n\n :param include_self; Boolean to include the updates on the period itself, as well as its'\n children\n :return String of the sum\n \"\"\"\n period_sum = 0\n\n # Loop through the child periods and sum up all the values\n for period in self.child_periods.all():\n if period.indicator.result.project.aggregate_to_parent and period.actual_value:\n try:\n period_sum += Decimal(period.actual_value)\n except (InvalidOperation, TypeError):\n pass\n\n if include_self:\n try:\n period_sum += Decimal(self.recalculate_period(save=False, only_self=True))\n except (InvalidOperation, TypeError):\n pass\n\n return str(period_sum)\n\n def child_periods_average(self):\n \"\"\"\n Returns the average of child indicator periods.\n\n :return String of the average\n \"\"\"\n if self.indicator.result.project.aggregate_children:\n child_periods = self.child_periods_with_data()\n for child in child_periods:\n if not (child.indicator.result.project.aggregate_to_parent and child.actual_value):\n child_periods = child_periods.exclude(pk=child.pk)\n\n number_of_child_periods = child_periods.count()\n if number_of_child_periods > 0:\n return str(Decimal(self.child_periods_sum()) / number_of_child_periods)\n return '0'\n\n def adjacent_period(self, next_period=True):\n \"\"\"\n Returns the next or previous indicator period, if we can find one with a start date,\n and we have a start date ourselves.\n\n :param next_period; Boolean indicating either the next (True) or previous (False) period.\n \"\"\"\n if not self.period_start:\n return None\n elif next_period:\n return self.indicator.periods.exclude(period_start=None).filter(\n period_start__gt=self.period_start).order_by('period_start').first()\n else:\n return self.indicator.periods.exclude(period_start=None).filter(\n period_start__lt=self.period_start).order_by('-period_start').first()\n\n @property\n def percent_accomplishment(self):\n \"\"\"\n Return the percentage completed for this indicator period. If not possible to convert the\n values to numbers, return None.\n \"\"\"\n try:\n return round(Decimal(self.actual_value) / Decimal(self.target_value) * 100, 1)\n except (InvalidOperation, TypeError, DivisionByZero):\n return None\n\n @property\n def percent_accomplishment_100(self):\n \"\"\"\n Similar to the percent_accomplishment property. 
However, it won't return any number bigger\n than 100.\n \"\"\"\n return max(self.percent_accomplishment, 100) if self.percent_accomplishment else None\n\n @property\n def actual(self):\n \"\"\"\n Returns the actual value of the indicator period, if it can be converted to a number.\n Otherwise it'll return the baseline value, which is a calculated value.\n \"\"\"\n try:\n return Decimal(self.actual_value)\n except (InvalidOperation, TypeError):\n return self.actual_value if self.actual_value else self.baseline\n\n @property\n def target(self):\n \"\"\"\n Returns the target value of the indicator period, if it can be converted to a number.\n Otherwise it'll return just the target value.\n \"\"\"\n try:\n return Decimal(self.target_value)\n except (InvalidOperation, TypeError):\n return self.target_value\n\n @property\n def baseline(self):\n \"\"\"\n Returns the baseline value of the indicator. The baseline is a calculated value:\n\n - If the period has no previous periods, then it's the baseline value of the indicator\n - If the period has a previous period, then it's the actual value of that period\n\n When this baseline value is empty, it returns 0. Otherwise (e.g. 'Available') it just\n returns the baseline value.\n \"\"\"\n previous_period = self.adjacent_period(False)\n baseline = self.indicator.baseline_value if not previous_period else previous_period.actual\n\n if not baseline:\n return Decimal(0)\n else:\n try:\n return Decimal(baseline)\n except (InvalidOperation, TypeError):\n return baseline\n\n class Meta:\n app_label = 'rsr'\n verbose_name = _(u'indicator period')\n verbose_name_plural = _(u'indicator periods')\n ordering = ['period_start']\n\n\ndef image_path(instance, file_name):\n \"\"\"\n Create a path like 'db/indicator_period/<period.id>/data_photo/<data.id>/image_name.ext'.\n\n :param instance; an IndicatorPeriodData instance\n :param file_name; the name of the file that is to be stored\n \"\"\"\n path = 'db/indicator_period/%d/data_photo/%%(instance_pk)s/%%(file_name)s' % instance.period.pk\n return rsr_image_path(instance, file_name, path)\n\n\ndef file_path(instance, file_name):\n \"\"\"\n Create a path like 'db/indicator_period/<period.id>/data_file/<data.id>/image_name.ext'.\n\n :param instance; an IndicatorPeriodData instance\n :param file_name; the name of the file that is to be stored\n \"\"\"\n path = 'db/indicator_period/%d/data_file/%%(instance_pk)s/%%(file_name)s' % instance.period.pk\n return rsr_image_path(instance, file_name, path)\n\n\nclass IndicatorPeriodData(TimestampsMixin, models.Model):\n \"\"\"\n Model for adding data to an indicator period.\n \"\"\"\n STATUS_NEW = unicode(_(u'new'))\n STATUS_DRAFT = unicode(_(u'draft'))\n STATUS_PENDING = unicode(_(u'pending approval'))\n STATUS_REVISION = unicode(_(u'return for revision'))\n STATUS_APPROVED = unicode(_(u'approved'))\n\n STATUS_NEW_CODE = u'N'\n STATUS_DRAFT_CODE = u'D'\n STATUS_PENDING_CODE = u'P'\n STATUS_REVISION_CODE = u'R'\n STATUS_APPROVED_CODE = u'A'\n\n STATUS_CODES_LIST = [STATUS_NEW_CODE, STATUS_DRAFT_CODE, STATUS_PENDING_CODE,\n STATUS_REVISION_CODE, STATUS_APPROVED_CODE]\n STATUSES_LABELS_LIST = [STATUS_NEW, STATUS_DRAFT, STATUS_PENDING, STATUS_REVISION,\n STATUS_APPROVED]\n STATUSES = zip(STATUS_CODES_LIST, STATUSES_LABELS_LIST)\n\n UPDATE_METHODS = (\n ('W', _(u'web')),\n ('M', _(u'mobile')),\n )\n\n period = models.ForeignKey(IndicatorPeriod, verbose_name=_(u'indicator period'),\n related_name='data')\n user = models.ForeignKey(settings.AUTH_USER_MODEL, verbose_name=_(u'user'), 
db_index=True)\n relative_data = models.BooleanField(_(u'relative data'), default=True)\n # TODO: rename to update of period_update; we're using the term Indicator update in the UI\n data = ValidXMLCharField(_(u'data'), max_length=300)\n period_actual_value = ValidXMLCharField(_(u'period actual value'), max_length=50, default='')\n status = ValidXMLCharField(_(u'status'), max_length=1, choices=STATUSES, db_index=True,\n default=STATUS_NEW_CODE)\n text = ValidXMLTextField(_(u'text'), blank=True)\n photo = ImageField(_(u'photo'), blank=True, upload_to=image_path)\n file = models.FileField(_(u'file'), blank=True, upload_to=file_path)\n update_method = ValidXMLCharField(_(u'update method'), blank=True, max_length=1,\n choices=UPDATE_METHODS, db_index=True, default='W')\n\n class Meta:\n app_label = 'rsr'\n verbose_name = _(u'indicator period data')\n verbose_name_plural = _(u'indicator period data')\n\n def save(self, recalculate=True, *args, **kwargs):\n super(IndicatorPeriodData, self).save(*args, **kwargs)\n\n # In case the status is approved, recalculate the period\n if recalculate and self.status == self.STATUS_APPROVED_CODE:\n self.period.recalculate_period()\n self.period.update_actual_comment()\n\n def delete(self, *args, **kwargs):\n old_status = self.status\n\n super(IndicatorPeriodData, self).delete(*args, **kwargs)\n\n # In case the status was approved, recalculate the period\n if old_status == self.STATUS_APPROVED_CODE:\n self.period.recalculate_period()\n self.period.update_actual_comment()\n\n def clean(self):\n \"\"\"\n Perform several checks before we can actually save the update data.\n \"\"\"\n validation_errors = {}\n\n project = self.period.indicator.result.project\n\n # Don't allow a data update to an unpublished project\n if not project.is_published():\n validation_errors['period'] = unicode(_(u'Indicator period must be part of a published '\n u'project to add data to it'))\n raise ValidationError(validation_errors)\n\n # Don't allow a data update to a non-Impact project\n if not project.is_impact_project:\n validation_errors['period'] = unicode(_(u'Indicator period must be part of an RSR '\n u'Impact project to add data to it'))\n raise ValidationError(validation_errors)\n\n # Don't allow a data update to a locked period\n if self.period.locked:\n validation_errors['period'] = unicode(_(u'Indicator period must be unlocked to add '\n u'data to it'))\n raise ValidationError(validation_errors)\n\n # Don't allow a data update to an aggregated parent period with 'percentage' as measurement\n if self.period.indicator.children_aggregate_percentage:\n validation_errors['period'] = unicode(\n _(u'Indicator period has an average aggregate of the child projects. 
Disable '\n u'aggregations to add data to it'))\n raise ValidationError(validation_errors)\n\n if self.pk:\n orig = IndicatorPeriodData.objects.get(pk=self.pk)\n\n # Don't allow for the indicator period to change\n if orig.period != self.period:\n validation_errors['period'] = unicode(_(u'Not allowed to change indicator period '\n u'in a data update'))\n if validation_errors:\n raise ValidationError(validation_errors)\n\n @property\n def status_display(self):\n \"\"\"\n Returns the display of the status.\n \"\"\"\n try:\n return dict(self.STATUSES)[self.status].capitalize()\n except KeyError:\n return u''\n\n @property\n def photo_url(self):\n \"\"\"\n Returns the full URL of the photo.\n \"\"\"\n return self.photo.url if self.photo else u''\n\n @property\n def file_url(self):\n \"\"\"\n Returns the full URL of the file.\n \"\"\"\n return self.file.url if self.file else u''\n\n def update_new_value(self):\n \"\"\"\n Returns a string with the new value, taking into account a relative update.\n \"\"\"\n if self.relative_data:\n try:\n add_up = Decimal(self.data) + Decimal(self.period_actual_value)\n relative = '+' + str(self.data) if self.data >= 0 else str(self.data)\n return \"{} ({})\".format(str(add_up), relative)\n except (InvalidOperation, TypeError):\n return self.data\n else:\n try:\n substract = Decimal(self.data) - Decimal(self.period_actual_value)\n relative = '+' + str(substract) if substract >= 0 else str(substract)\n return \"{} ({})\".format(self.data, relative)\n except (InvalidOperation, TypeError):\n return self.data\n\n\nclass IndicatorPeriodDataComment(TimestampsMixin, models.Model):\n \"\"\"\n Model for adding comments to data of an indicator period.\n \"\"\"\n data = models.ForeignKey(IndicatorPeriodData, verbose_name=_(u'indicator period data'),\n related_name='comments')\n user = models.ForeignKey(settings.AUTH_USER_MODEL, verbose_name=_(u'user'), db_index=True)\n comment = ValidXMLTextField(_(u'comment'), blank=True)\n\n class Meta:\n app_label = 'rsr'\n verbose_name = _(u'indicator period data comment')\n verbose_name_plural = _(u'indicator period data comments')\n\n\nclass IndicatorPeriodTargetLocation(models.Model):\n period = models.ForeignKey(IndicatorPeriod, verbose_name=_(u'indicator period'),\n related_name='target_locations')\n location = ValidXMLCharField(\n _(u'location'), blank=True, max_length=25,\n help_text=_(u'A location of the target of this indicator period. The location must be the '\n u'reference of an existing location of the current project.'))\n\n class Meta:\n app_label = 'rsr'\n verbose_name = _(u'indicator period target location')\n verbose_name_plural = _(u'indicator period target locations')\n\n def __unicode__(self):\n return self.location\n\n\nclass IndicatorPeriodActualLocation(models.Model):\n period = models.ForeignKey(IndicatorPeriod, verbose_name=_(u'indicator period'),\n related_name='actual_locations')\n location = ValidXMLCharField(\n _(u'location'), blank=True, max_length=25,\n help_text=_(u'A location of the actual of this indicator period. 
The location must be the '\n u'reference of an existing location of the current project.'))\n\n class Meta:\n app_label = 'rsr'\n verbose_name = _(u'indicator period actual location')\n verbose_name_plural = _(u'indicator period actual locations')\n\n def __unicode__(self):\n return self.location\n\n\nclass IndicatorPeriodTargetDimension(models.Model):\n period = models.ForeignKey(IndicatorPeriod, verbose_name=_(u'indicator period'),\n related_name='target_dimensions')\n name = ValidXMLCharField(\n _(u'dimension name'), blank=True, max_length=100,\n help_text=_(u'The name of a category being disaggregated in this target value of the '\n u'indicator period (e.g. \"Age\").'))\n value = ValidXMLCharField(\n _(u'dimension value'), blank=True, max_length=100,\n help_text=_(u'The value that is being being disaggregated (e.g. \"Older than 60 years\").'))\n\n class Meta:\n app_label = 'rsr'\n verbose_name = _(u'indicator period target dimension')\n verbose_name_plural = _(u'indicator period target dimensions')\n\n def __unicode__(self):\n return self.name + ': ' + self.value if self.name and self.value else ''\n\n\nclass IndicatorPeriodActualDimension(models.Model):\n period = models.ForeignKey(IndicatorPeriod, verbose_name=_(u'indicator period'),\n related_name='actual_dimensions')\n name = ValidXMLCharField(\n _(u'dimension name'), blank=True, max_length=100,\n help_text=_(u'The name of a category being disaggregated in this actual value of the '\n u'indicator period (e.g. \"Age\").'))\n value = ValidXMLCharField(\n _(u'dimension value'), blank=True, max_length=100,\n help_text=_(u'The value that is being being disaggregated (e.g. \"Older than 60 years\").'))\n\n class Meta:\n app_label = 'rsr'\n verbose_name = _(u'indicator period actual dimension')\n verbose_name_plural = _(u'indicator period actual dimensions')\n\n def __unicode__(self):\n return self.name + ': ' + self.value if self.name and self.value else ''\n", "path": "akvo/rsr/models/indicator.py" } ]
diff --git a/akvo/rsr/models/indicator.py b/akvo/rsr/models/indicator.py index 0ebdb5eecd..a31f155800 100644 --- a/akvo/rsr/models/indicator.py +++ b/akvo/rsr/models/indicator.py @@ -760,6 +760,7 @@ def delete(self, *args, **kwargs): # In case the status was approved, recalculate the period if old_status == self.STATUS_APPROVED_CODE: self.period.recalculate_period() + self.period.update_actual_comment() def clean(self): """ diff --git a/akvo/rsr/tests/models/test_indicator.py b/akvo/rsr/tests/models/test_indicator.py index 211ad3c82d..6110ff0331 100644 --- a/akvo/rsr/tests/models/test_indicator.py +++ b/akvo/rsr/tests/models/test_indicator.py @@ -106,3 +106,20 @@ def test_multiple_period_data_updates_actual_comment(self): self.assertIn(data_1.text, period.actual_comment) # newer update's text appears before older one's self.assertLess(period.actual_comment.index(data_2.text), period.actual_comment.index(data_1.text)) + + def test_period_data_deletion_updates_actual_comment(self): + + # Given + period = self.period + user = self.user + data = IndicatorPeriodData.objects.create(text='period data comment', + period=period, + user=user, + status=IndicatorPeriodData.STATUS_APPROVED_CODE) + + # When + data.delete() + + # Then + period = IndicatorPeriod.objects.get(id=period.id) + self.assertNotIn(data.text, period.actual_comment)
OpenNMT__OpenNMT-py-342
#layers for encoder equals #layers for decoder
I just noticed that for the default RNN encoder, the `enc_layers` parameter is ignored and the number of layers of the encoder is equal to the number of layers of the decoder (see [this line](https://github.com/OpenNMT/OpenNMT-py/blob/master/onmt/ModelConstructor.py#L70)). Is there some reasoning behind this or is it an error?
[ { "content": "\"\"\"\nThis file is for models creation, which consults options\nand creates each encoder and decoder accordingly.\n\"\"\"\nimport torch.nn as nn\n\nimport onmt\nimport onmt.Models\nimport onmt.modules\nfrom onmt.IO import ONMTDataset\nfrom onmt.Models import NMTModel, MeanEncoder, RNNEncoder, \\\n StdRNNDecoder, InputFeedRNNDecoder\nfrom onmt.modules import Embeddings, ImageEncoder, CopyGenerator, \\\n TransformerEncoder, TransformerDecoder, \\\n CNNEncoder, CNNDecoder\n\n\ndef make_embeddings(opt, word_dict, feature_dicts, for_encoder=True):\n \"\"\"\n Make an Embeddings instance.\n Args:\n opt: the option in current environment.\n word_dict(Vocab): words dictionary.\n feature_dicts([Vocab], optional): a list of feature dictionary.\n for_encoder(bool): make Embeddings for encoder or decoder?\n \"\"\"\n if for_encoder:\n embedding_dim = opt.src_word_vec_size\n else:\n embedding_dim = opt.tgt_word_vec_size\n\n word_padding_idx = word_dict.stoi[onmt.IO.PAD_WORD]\n num_word_embeddings = len(word_dict)\n\n feats_padding_idx = [feat_dict.stoi[onmt.IO.PAD_WORD]\n for feat_dict in feature_dicts]\n num_feat_embeddings = [len(feat_dict) for feat_dict in\n feature_dicts]\n\n return Embeddings(embedding_dim,\n opt.position_encoding,\n opt.feat_merge,\n opt.feat_vec_exponent,\n opt.feat_vec_size,\n opt.dropout,\n word_padding_idx,\n feats_padding_idx,\n num_word_embeddings,\n num_feat_embeddings)\n\n\ndef make_encoder(opt, embeddings):\n \"\"\"\n Various encoder dispatcher function.\n Args:\n opt: the option in current environment.\n embeddings (Embeddings): vocab embeddings for this encoder.\n \"\"\"\n if opt.encoder_type == \"transformer\":\n return TransformerEncoder(opt.enc_layers, opt.rnn_size,\n opt.dropout, embeddings)\n elif opt.encoder_type == \"cnn\":\n return CNNEncoder(opt.enc_layers, opt.rnn_size,\n opt.cnn_kernel_width,\n opt.dropout, embeddings)\n elif opt.encoder_type == \"mean\":\n return MeanEncoder(opt.enc_layers, embeddings)\n else:\n # \"rnn\" or \"brnn\"\n return RNNEncoder(opt.rnn_type, opt.brnn, opt.dec_layers,\n opt.rnn_size, opt.dropout, embeddings)\n\n\ndef make_decoder(opt, embeddings):\n \"\"\"\n Various decoder dispatcher function.\n Args:\n opt: the option in current environment.\n embeddings (Embeddings): vocab embeddings for this decoder.\n \"\"\"\n if opt.decoder_type == \"transformer\":\n return TransformerDecoder(opt.dec_layers, opt.rnn_size,\n opt.global_attention, opt.copy_attn,\n opt.dropout, embeddings)\n elif opt.decoder_type == \"cnn\":\n return CNNDecoder(opt.dec_layers, opt.rnn_size,\n opt.global_attention, opt.copy_attn,\n opt.cnn_kernel_width, opt.dropout,\n embeddings)\n elif opt.input_feed:\n return InputFeedRNNDecoder(opt.rnn_type, opt.brnn,\n opt.dec_layers, opt.rnn_size,\n opt.global_attention,\n opt.coverage_attn,\n opt.context_gate,\n opt.copy_attn,\n opt.dropout,\n embeddings)\n else:\n return StdRNNDecoder(opt.rnn_type, opt.brnn,\n opt.dec_layers, opt.rnn_size,\n opt.global_attention,\n opt.coverage_attn,\n opt.context_gate,\n opt.copy_attn,\n opt.dropout,\n embeddings)\n\n\ndef make_base_model(model_opt, fields, gpu, checkpoint=None):\n \"\"\"\n Args:\n model_opt: the option loaded from checkpoint.\n fields: `Field` objects for the model.\n gpu(bool): whether to use gpu.\n checkpoint: the model gnerated by train phase, or a resumed snapshot\n model from a stopped training.\n Returns:\n the NMTModel.\n \"\"\"\n assert model_opt.model_type in [\"text\", \"img\"], \\\n (\"Unsupported model type %s\" % (model_opt.model_type))\n\n # 
Make encoder.\n if model_opt.model_type == \"text\":\n src_dict = fields[\"src\"].vocab\n feature_dicts = ONMTDataset.collect_feature_dicts(fields)\n src_embeddings = make_embeddings(model_opt, src_dict,\n feature_dicts)\n encoder = make_encoder(model_opt, src_embeddings)\n else:\n encoder = ImageEncoder(model_opt.layers,\n model_opt.brnn,\n model_opt.rnn_size,\n model_opt.dropout)\n\n # Make decoder.\n tgt_dict = fields[\"tgt\"].vocab\n # TODO: prepare for a future where tgt features are possible.\n feature_dicts = []\n tgt_embeddings = make_embeddings(model_opt, tgt_dict,\n feature_dicts, for_encoder=False)\n\n # Share the embedding matrix - preprocess with share_vocab required\n if model_opt.share_embeddings:\n tgt_embeddings.word_lut.weight = src_embeddings.word_lut.weight\n\n decoder = make_decoder(model_opt, tgt_embeddings)\n\n # Make NMTModel(= encoder + decoder).\n model = NMTModel(encoder, decoder)\n\n # Make Generator.\n if not model_opt.copy_attn:\n generator = nn.Sequential(\n nn.Linear(model_opt.rnn_size, len(fields[\"tgt\"].vocab)),\n nn.LogSoftmax())\n if model_opt.share_decoder_embeddings:\n generator[0].weight = decoder.embeddings.word_lut.weight\n else:\n generator = CopyGenerator(model_opt, fields[\"src\"].vocab,\n fields[\"tgt\"].vocab)\n\n # Load the model states from checkpoint or initialize them.\n if checkpoint is not None:\n print('Loading model parameters.')\n model.load_state_dict(checkpoint['model'])\n generator.load_state_dict(checkpoint['generator'])\n else:\n if model_opt.param_init != 0.0:\n print('Intializing model parameters.')\n for p in model.parameters():\n p.data.uniform_(-model_opt.param_init, model_opt.param_init)\n for p in generator.parameters():\n p.data.uniform_(-model_opt.param_init, model_opt.param_init)\n model.encoder.embeddings.load_pretrained_vectors(\n model_opt.pre_word_vecs_enc, model_opt.fix_word_vecs_enc)\n model.decoder.embeddings.load_pretrained_vectors(\n model_opt.pre_word_vecs_dec, model_opt.fix_word_vecs_dec)\n\n # Add generator to model (this registers it as parameter of model).\n model.generator = generator\n\n # Make the whole model leverage GPU if indicated to do so.\n if gpu:\n model.cuda()\n else:\n model.cpu()\n\n return model\n", "path": "onmt/ModelConstructor.py" } ]
[ { "content": "\"\"\"\nThis file is for models creation, which consults options\nand creates each encoder and decoder accordingly.\n\"\"\"\nimport torch.nn as nn\n\nimport onmt\nimport onmt.Models\nimport onmt.modules\nfrom onmt.IO import ONMTDataset\nfrom onmt.Models import NMTModel, MeanEncoder, RNNEncoder, \\\n StdRNNDecoder, InputFeedRNNDecoder\nfrom onmt.modules import Embeddings, ImageEncoder, CopyGenerator, \\\n TransformerEncoder, TransformerDecoder, \\\n CNNEncoder, CNNDecoder\n\n\ndef make_embeddings(opt, word_dict, feature_dicts, for_encoder=True):\n \"\"\"\n Make an Embeddings instance.\n Args:\n opt: the option in current environment.\n word_dict(Vocab): words dictionary.\n feature_dicts([Vocab], optional): a list of feature dictionary.\n for_encoder(bool): make Embeddings for encoder or decoder?\n \"\"\"\n if for_encoder:\n embedding_dim = opt.src_word_vec_size\n else:\n embedding_dim = opt.tgt_word_vec_size\n\n word_padding_idx = word_dict.stoi[onmt.IO.PAD_WORD]\n num_word_embeddings = len(word_dict)\n\n feats_padding_idx = [feat_dict.stoi[onmt.IO.PAD_WORD]\n for feat_dict in feature_dicts]\n num_feat_embeddings = [len(feat_dict) for feat_dict in\n feature_dicts]\n\n return Embeddings(embedding_dim,\n opt.position_encoding,\n opt.feat_merge,\n opt.feat_vec_exponent,\n opt.feat_vec_size,\n opt.dropout,\n word_padding_idx,\n feats_padding_idx,\n num_word_embeddings,\n num_feat_embeddings)\n\n\ndef make_encoder(opt, embeddings):\n \"\"\"\n Various encoder dispatcher function.\n Args:\n opt: the option in current environment.\n embeddings (Embeddings): vocab embeddings for this encoder.\n \"\"\"\n if opt.encoder_type == \"transformer\":\n return TransformerEncoder(opt.enc_layers, opt.rnn_size,\n opt.dropout, embeddings)\n elif opt.encoder_type == \"cnn\":\n return CNNEncoder(opt.enc_layers, opt.rnn_size,\n opt.cnn_kernel_width,\n opt.dropout, embeddings)\n elif opt.encoder_type == \"mean\":\n return MeanEncoder(opt.enc_layers, embeddings)\n else:\n # \"rnn\" or \"brnn\"\n return RNNEncoder(opt.rnn_type, opt.brnn, opt.enc_layers,\n opt.rnn_size, opt.dropout, embeddings)\n\n\ndef make_decoder(opt, embeddings):\n \"\"\"\n Various decoder dispatcher function.\n Args:\n opt: the option in current environment.\n embeddings (Embeddings): vocab embeddings for this decoder.\n \"\"\"\n if opt.decoder_type == \"transformer\":\n return TransformerDecoder(opt.dec_layers, opt.rnn_size,\n opt.global_attention, opt.copy_attn,\n opt.dropout, embeddings)\n elif opt.decoder_type == \"cnn\":\n return CNNDecoder(opt.dec_layers, opt.rnn_size,\n opt.global_attention, opt.copy_attn,\n opt.cnn_kernel_width, opt.dropout,\n embeddings)\n elif opt.input_feed:\n return InputFeedRNNDecoder(opt.rnn_type, opt.brnn,\n opt.dec_layers, opt.rnn_size,\n opt.global_attention,\n opt.coverage_attn,\n opt.context_gate,\n opt.copy_attn,\n opt.dropout,\n embeddings)\n else:\n return StdRNNDecoder(opt.rnn_type, opt.brnn,\n opt.dec_layers, opt.rnn_size,\n opt.global_attention,\n opt.coverage_attn,\n opt.context_gate,\n opt.copy_attn,\n opt.dropout,\n embeddings)\n\n\ndef make_base_model(model_opt, fields, gpu, checkpoint=None):\n \"\"\"\n Args:\n model_opt: the option loaded from checkpoint.\n fields: `Field` objects for the model.\n gpu(bool): whether to use gpu.\n checkpoint: the model gnerated by train phase, or a resumed snapshot\n model from a stopped training.\n Returns:\n the NMTModel.\n \"\"\"\n assert model_opt.model_type in [\"text\", \"img\"], \\\n (\"Unsupported model type %s\" % (model_opt.model_type))\n\n # 
Make encoder.\n if model_opt.model_type == \"text\":\n src_dict = fields[\"src\"].vocab\n feature_dicts = ONMTDataset.collect_feature_dicts(fields)\n src_embeddings = make_embeddings(model_opt, src_dict,\n feature_dicts)\n encoder = make_encoder(model_opt, src_embeddings)\n else:\n encoder = ImageEncoder(model_opt.layers,\n model_opt.brnn,\n model_opt.rnn_size,\n model_opt.dropout)\n\n # Make decoder.\n tgt_dict = fields[\"tgt\"].vocab\n # TODO: prepare for a future where tgt features are possible.\n feature_dicts = []\n tgt_embeddings = make_embeddings(model_opt, tgt_dict,\n feature_dicts, for_encoder=False)\n\n # Share the embedding matrix - preprocess with share_vocab required\n if model_opt.share_embeddings:\n tgt_embeddings.word_lut.weight = src_embeddings.word_lut.weight\n\n decoder = make_decoder(model_opt, tgt_embeddings)\n\n # Make NMTModel(= encoder + decoder).\n model = NMTModel(encoder, decoder)\n\n # Make Generator.\n if not model_opt.copy_attn:\n generator = nn.Sequential(\n nn.Linear(model_opt.rnn_size, len(fields[\"tgt\"].vocab)),\n nn.LogSoftmax())\n if model_opt.share_decoder_embeddings:\n generator[0].weight = decoder.embeddings.word_lut.weight\n else:\n generator = CopyGenerator(model_opt, fields[\"src\"].vocab,\n fields[\"tgt\"].vocab)\n\n # Load the model states from checkpoint or initialize them.\n if checkpoint is not None:\n print('Loading model parameters.')\n model.load_state_dict(checkpoint['model'])\n generator.load_state_dict(checkpoint['generator'])\n else:\n if model_opt.param_init != 0.0:\n print('Intializing model parameters.')\n for p in model.parameters():\n p.data.uniform_(-model_opt.param_init, model_opt.param_init)\n for p in generator.parameters():\n p.data.uniform_(-model_opt.param_init, model_opt.param_init)\n model.encoder.embeddings.load_pretrained_vectors(\n model_opt.pre_word_vecs_enc, model_opt.fix_word_vecs_enc)\n model.decoder.embeddings.load_pretrained_vectors(\n model_opt.pre_word_vecs_dec, model_opt.fix_word_vecs_dec)\n\n # Add generator to model (this registers it as parameter of model).\n model.generator = generator\n\n # Make the whole model leverage GPU if indicated to do so.\n if gpu:\n model.cuda()\n else:\n model.cpu()\n\n return model\n", "path": "onmt/ModelConstructor.py" } ]
diff --git a/onmt/ModelConstructor.py b/onmt/ModelConstructor.py index ff2783f5d5..135ac679fa 100644 --- a/onmt/ModelConstructor.py +++ b/onmt/ModelConstructor.py @@ -67,7 +67,7 @@ def make_encoder(opt, embeddings): return MeanEncoder(opt.enc_layers, embeddings) else: # "rnn" or "brnn" - return RNNEncoder(opt.rnn_type, opt.brnn, opt.dec_layers, + return RNNEncoder(opt.rnn_type, opt.brnn, opt.enc_layers, opt.rnn_size, opt.dropout, embeddings)
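A minimal sketch of what the one-line diff above changes for the "rnn"/"brnn" branch (illustrative only; the `opt` namespace is a hypothetical stand-in for the parsed training options, not OpenNMT-py's real argument parser): before the fix, the RNN encoder was built with the decoder's layer count, so `enc_layers` had no effect.

```python
from types import SimpleNamespace

# Hypothetical stand-in for the parsed training options.
opt = SimpleNamespace(enc_layers=4, dec_layers=2)

# Before the fix, the "rnn"/"brnn" branch effectively built the encoder with:
layers_before = opt.dec_layers  # -> 2; the requested enc_layers=4 was silently ignored
# After the fix, it uses the intended option:
layers_after = opt.enc_layers   # -> 4

print(layers_before, layers_after)  # 2 4
```

With the fix, `enc_layers` and `dec_layers` can differ for RNN models, matching the behaviour the transformer, CNN, and mean encoder branches already had.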
qtile__qtile-1644
Can't use asyncio event loop in widgets
I am creating a widget that uses asyncio to run some external command (with `asyncio.create_subprocess_exec`). It doesn't work, and raises the `RuntimeError("Cannot add child handler, the child watcher does not have a loop attached")` exception instead. If my understanding of the code is correct, calling `set_event_loop` after `new_event_loop` should fix this issue, but I'm not sure whether it will cause other problems.
[ { "content": "import asyncio\nimport os\n\nfrom libqtile import ipc\nfrom libqtile.backend import base\nfrom libqtile.core.manager import Qtile\n\n\nclass SessionManager:\n def __init__(\n self, kore: base.Core, config, *, fname: str = None, no_spawn=False, state=None\n ) -> None:\n \"\"\"Manages a qtile session\n\n :param kore:\n The core backend to use for the session.\n :param config:\n The configuration to use for the qtile instance.\n :param fname:\n The file name to use as the qtile socket file.\n :param no_spawn:\n If the instance has already been started, then don't re-run the\n startup once hook.\n :param state:\n The state to restart the qtile instance with.\n \"\"\"\n eventloop = asyncio.new_event_loop()\n\n self.qtile = Qtile(kore, config, eventloop, no_spawn=no_spawn, state=state)\n\n if fname is None:\n # Dots might appear in the host part of the display name\n # during remote X sessions. Let's strip the host part first\n display_name = kore.display_name\n display_number = display_name.partition(\":\")[2]\n if \".\" not in display_number:\n display_name += \".0\"\n fname = ipc.find_sockfile(display_name)\n\n if os.path.exists(fname):\n os.unlink(fname)\n self.server = ipc.Server(fname, self.qtile.server.call, eventloop)\n\n def loop(self) -> None:\n \"\"\"Run the event loop\"\"\"\n with self.server:\n self.qtile.loop()\n", "path": "libqtile/core/session_manager.py" } ]
[ { "content": "import asyncio\nimport os\n\nfrom libqtile import ipc\nfrom libqtile.backend import base\nfrom libqtile.core.manager import Qtile\n\n\nclass SessionManager:\n def __init__(\n self, kore: base.Core, config, *, fname: str = None, no_spawn=False, state=None\n ) -> None:\n \"\"\"Manages a qtile session\n\n :param kore:\n The core backend to use for the session.\n :param config:\n The configuration to use for the qtile instance.\n :param fname:\n The file name to use as the qtile socket file.\n :param no_spawn:\n If the instance has already been started, then don't re-run the\n startup once hook.\n :param state:\n The state to restart the qtile instance with.\n \"\"\"\n eventloop = asyncio.new_event_loop()\n asyncio.set_event_loop(eventloop)\n\n self.qtile = Qtile(kore, config, eventloop, no_spawn=no_spawn, state=state)\n\n if fname is None:\n # Dots might appear in the host part of the display name\n # during remote X sessions. Let's strip the host part first\n display_name = kore.display_name\n display_number = display_name.partition(\":\")[2]\n if \".\" not in display_number:\n display_name += \".0\"\n fname = ipc.find_sockfile(display_name)\n\n if os.path.exists(fname):\n os.unlink(fname)\n self.server = ipc.Server(fname, self.qtile.server.call, eventloop)\n\n def loop(self) -> None:\n \"\"\"Run the event loop\"\"\"\n with self.server:\n self.qtile.loop()\n", "path": "libqtile/core/session_manager.py" } ]
diff --git a/libqtile/core/session_manager.py b/libqtile/core/session_manager.py index ce6eed19ca..0df811fe40 100644 --- a/libqtile/core/session_manager.py +++ b/libqtile/core/session_manager.py @@ -25,6 +25,7 @@ def __init__( The state to restart the qtile instance with. """ eventloop = asyncio.new_event_loop() + asyncio.set_event_loop(eventloop) self.qtile = Qtile(kore, config, eventloop, no_spawn=no_spawn, state=state)
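A standalone sketch (outside qtile, illustrative only) of the failure mode the issue describes: a loop created with `asyncio.new_event_loop()` but never registered via `asyncio.set_event_loop()` can make `asyncio.create_subprocess_exec` fail, because on some Python versions the Unix child watcher is attached to whatever loop is "current" for the thread. Whether the error actually reproduces depends on the Python version and platform; the one-line fix above simply registers the new loop as current before the Qtile instance is created.

```python
import asyncio


async def run_echo() -> bytes:
    # On Unix, spawning a subprocess goes through asyncio's child watcher.
    proc = await asyncio.create_subprocess_exec(
        "echo", "hello", stdout=asyncio.subprocess.PIPE
    )
    out, _ = await proc.communicate()
    return out


loop = asyncio.new_event_loop()
# Without this call, a child watcher that requires an attached loop can raise:
#   RuntimeError: Cannot add child handler, the child watcher does not have a loop attached
asyncio.set_event_loop(loop)

print(loop.run_until_complete(run_echo()))
loop.close()
```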
litestar-org__litestar-1229
Bug: LoggingMiddleware is sending obfuscated session id to client **Describe the bug** When using the logging middleware and session middleware together, the logging middleware's cookie obfuscation is overwriting the session name with "*****" and that name is being pushed down to the client. The initial set-cookie has the correct session id but subsequent requests do not. **To Reproduce** I created a test function in tests/middleware/test_logging_middleware.py which I believe confirms the bug: ```python def test_logging_with_session_middleware() -> None: @post("/") async def set_session(request: Request) -> None: request.set_session({"hello": "world"}) @get("/") async def get_session() -> None: pass logging_middleware_config = LoggingMiddlewareConfig() session_config = MemoryBackendConfig() with create_test_client( [set_session, get_session], logging_config=LoggingConfig(), middleware=[logging_middleware_config.middleware, session_config.middleware], ) as client: response = client.post("/") assert response.status_code == HTTP_201_CREATED assert len(client.cookies.get("session", "")) == 64 response = client.get("/") assert response.status_code == HTTP_200_OK assert len(client.cookies.get("session", "")) == 64 ``` The test results in the following exception: ``` > assert len(client.cookies.get("session", "")) == 64 E AssertionError: assert 5 == 64 E + where 5 = len('*****') E + where '*****' = <bound method Cookies.get of <Cookies[<Cookie session=***** for testserver.local />]>>('session', '') E + where <bound method Cookies.get of <Cookies[<Cookie session=***** for testserver.local />]>> = <Cookies[<Cookie session=***** for testserver.local />]>.get E + where <Cookies[<Cookie session=***** for testserver.local />]> = <starlite.testing.client.sync_client.TestClient object at 0x7f4cbf7bea40>.cookies ``` **Additional Context** Starlite version: 1.51.4
[ { "content": "from typing import (\n TYPE_CHECKING,\n Any,\n Callable,\n Coroutine,\n Dict,\n Literal,\n Optional,\n Set,\n Tuple,\n Union,\n cast,\n)\n\nfrom typing_extensions import TypedDict\n\nfrom starlite.connection.request import Request\nfrom starlite.datastructures.upload_file import UploadFile\nfrom starlite.enums import HttpMethod, RequestEncodingType\nfrom starlite.parsers import parse_cookie_string\n\nif TYPE_CHECKING:\n from starlite.connection import ASGIConnection\n from starlite.types import Method\n from starlite.types.asgi_types import HTTPResponseBodyEvent, HTTPResponseStartEvent\n\n\ndef obfuscate(values: Dict[str, Any], fields_to_obfuscate: Set[str]) -> Dict[str, Any]:\n \"\"\"Obfuscate values in a dictionary, replacing values with `******`\n\n Args:\n values: A dictionary of strings\n fields_to_obfuscate: keys to obfuscate\n\n Returns:\n A dictionary with obfuscated strings\n \"\"\"\n for key in values:\n if key.lower() in fields_to_obfuscate:\n values[key] = \"*****\"\n return values\n\n\nRequestExtractorField = Literal[\n \"path\", \"method\", \"content_type\", \"headers\", \"cookies\", \"query\", \"path_params\", \"body\", \"scheme\", \"client\"\n]\n\nResponseExtractorField = Literal[\"status_code\", \"headers\", \"body\", \"cookies\"]\n\n\nclass ExtractedRequestData(TypedDict, total=False):\n \"\"\"Dictionary representing extracted request data.\"\"\"\n\n body: Coroutine\n client: Tuple[str, int]\n content_type: Tuple[str, Dict[str, str]]\n cookies: Dict[str, str]\n headers: Dict[str, str]\n method: \"Method\"\n path: str\n path_params: Dict[str, Any]\n query: Union[bytes, Dict[str, Any]]\n scheme: str\n\n\nclass ConnectionDataExtractor:\n \"\"\"Utility class to extract data from an :class:`ASGIConnection <starlite.connection.ASGIConnection>`,\n :class:`Request <starlite.connection.Request>` or :class:`WebSocket <starlite.connection.WebSocket>` instance.\n \"\"\"\n\n __slots__ = (\n \"connection_extractors\",\n \"request_extractors\",\n \"parse_body\",\n \"parse_query\",\n \"obfuscate_headers\",\n \"obfuscate_cookies\",\n )\n\n def __init__(\n self,\n extract_body: bool = True,\n extract_client: bool = True,\n extract_content_type: bool = True,\n extract_cookies: bool = True,\n extract_headers: bool = True,\n extract_method: bool = True,\n extract_path: bool = True,\n extract_path_params: bool = True,\n extract_query: bool = True,\n extract_scheme: bool = True,\n obfuscate_cookies: Optional[Set[str]] = None,\n obfuscate_headers: Optional[Set[str]] = None,\n parse_body: bool = False,\n parse_query: bool = False,\n ):\n \"\"\"Initialize ``ConnectionDataExtractor``\n\n Args:\n extract_body: Whether to extract body, (for requests only).\n extract_client: Whether to extract the client (host, port) mapping.\n extract_content_type: Whether to extract the content type and any options.\n extract_cookies: Whether to extract cookies.\n extract_headers: Whether to extract headers.\n extract_method: Whether to extract the HTTP method, (for requests only).\n extract_path: Whether to extract the path.\n extract_path_params: Whether to extract path parameters.\n extract_query: Whether to extract query parameters.\n extract_scheme: Whether to extract the http scheme.\n obfuscate_headers: headers keys to obfuscate. Obfuscated values are replaced with '*****'.\n obfuscate_cookies: cookie keys to obfuscate. 
Obfuscated values are replaced with '*****'.\n parse_body: Whether to parse the body value or return the raw byte string, (for requests only).\n parse_query: Whether to parse query parameters or return the raw byte string.\n \"\"\"\n self.parse_body = parse_body\n self.parse_query = parse_query\n self.obfuscate_headers = {h.lower() for h in (obfuscate_headers or set())}\n self.obfuscate_cookies = {c.lower() for c in (obfuscate_cookies or set())}\n self.connection_extractors: Dict[str, Callable[[\"ASGIConnection[Any, Any, Any]\"], Any]] = {}\n self.request_extractors: Dict[RequestExtractorField, Callable[[\"Request[Any, Any]\"], Any]] = {}\n if extract_scheme:\n self.connection_extractors[\"scheme\"] = self.extract_scheme\n if extract_client:\n self.connection_extractors[\"client\"] = self.extract_client\n if extract_path:\n self.connection_extractors[\"path\"] = self.extract_path\n if extract_headers:\n self.connection_extractors[\"headers\"] = self.extract_headers\n if extract_cookies:\n self.connection_extractors[\"cookies\"] = self.extract_cookies\n if extract_query:\n self.connection_extractors[\"query\"] = self.extract_query\n if extract_path_params:\n self.connection_extractors[\"path_params\"] = self.extract_path_params\n if extract_method:\n self.request_extractors[\"method\"] = self.extract_method\n if extract_content_type:\n self.request_extractors[\"content_type\"] = self.extract_content_type\n if extract_body:\n self.request_extractors[\"body\"] = self.extract_body\n\n def __call__(self, connection: \"ASGIConnection[Any, Any, Any]\") -> ExtractedRequestData:\n \"\"\"Extract data from the connection, returning a dictionary of values.\n\n Notes:\n - The value for ``body`` - if present - is an unresolved Coroutine and as such should be awaited by the receiver.\n\n Args:\n connection: An ASGI connection or its subclasses.\n\n Returns:\n A string keyed dictionary of extracted values.\n \"\"\"\n extractors = (\n {**self.connection_extractors, **self.request_extractors} # type: ignore\n if isinstance(connection, Request)\n else self.connection_extractors\n )\n return cast(\"ExtractedRequestData\", {key: extractor(connection) for key, extractor in extractors.items()})\n\n @staticmethod\n def extract_scheme(connection: \"ASGIConnection[Any, Any, Any]\") -> str:\n \"\"\"Extract the scheme from an ``ASGIConnection``\n\n Args:\n connection: An :class:`ASGIConnection <starlite.connection.ASGIConnection>` instance.\n\n Returns:\n The connection's scope[\"scheme\"] value\n \"\"\"\n return connection.scope[\"scheme\"]\n\n @staticmethod\n def extract_client(connection: \"ASGIConnection[Any, Any, Any]\") -> Tuple[str, int]:\n \"\"\"Extract the client from an ``ASGIConnection``\n\n Args:\n connection: An :class:`ASGIConnection <starlite.connection.ASGIConnection>` instance.\n\n Returns:\n The connection's scope[\"client\"] value or a default value.\n \"\"\"\n return connection.scope.get(\"client\") or (\"\", 0)\n\n @staticmethod\n def extract_path(connection: \"ASGIConnection[Any, Any, Any]\") -> str:\n \"\"\"Extract the path from an ``ASGIConnection``\n\n Args:\n connection: An :class:`ASGIConnection <starlite.connection.ASGIConnection>` instance.\n\n Returns:\n The connection's scope[\"path\"] value\n \"\"\"\n return connection.scope[\"path\"]\n\n def extract_headers(self, connection: \"ASGIConnection[Any, Any, Any]\") -> Dict[str, str]:\n \"\"\"Extract headers from an ``ASGIConnection``\n\n Args:\n connection: An :class:`ASGIConnection <starlite.connection.ASGIConnection>` instance.\n\n 
Returns:\n A dictionary with the connection's headers.\n \"\"\"\n headers = {k.decode(\"latin-1\"): v.decode(\"latin-1\") for k, v in connection.scope[\"headers\"]}\n return obfuscate(headers, self.obfuscate_headers) if self.obfuscate_headers else headers\n\n def extract_cookies(self, connection: \"ASGIConnection[Any, Any, Any]\") -> Dict[str, str]:\n \"\"\"Extract cookies from an ``ASGIConnection``\n\n Args:\n connection: An :class:`ASGIConnection <starlite.connection.ASGIConnection>` instance.\n\n Returns:\n A dictionary with the connection's cookies.\n \"\"\"\n return obfuscate(connection.cookies, self.obfuscate_cookies) if self.obfuscate_cookies else connection.cookies\n\n def extract_query(self, connection: \"ASGIConnection[Any, Any, Any]\") -> Any:\n \"\"\"Extract query from an ``ASGIConnection``\n\n Args:\n connection: An :class:`ASGIConnection <starlite.connection.ASGIConnection>` instance.\n\n Returns:\n Either a dictionary with the connection's parsed query string or the raw query byte-string.\n \"\"\"\n return connection.query_params.dict() if self.parse_query else connection.scope.get(\"query_string\", b\"\")\n\n @staticmethod\n def extract_path_params(connection: \"ASGIConnection[Any, Any, Any]\") -> Dict[str, Any]:\n \"\"\"Extract the path parameters from an ``ASGIConnection``\n\n Args:\n connection: An :class:`ASGIConnection <starlite.connection.ASGIConnection>` instance.\n\n Returns:\n A dictionary with the connection's path parameters.\n \"\"\"\n return connection.path_params\n\n @staticmethod\n def extract_method(request: \"Request[Any, Any]\") -> \"Method\":\n \"\"\"Extract the method from an ``ASGIConnection``\n\n Args:\n request: A :class:`Request <starlite.connection.Request>` instance.\n\n Returns:\n The request's scope[\"method\"] value.\n \"\"\"\n return request.scope[\"method\"]\n\n @staticmethod\n def extract_content_type(request: \"Request[Any, Any]\") -> Tuple[str, Dict[str, str]]:\n \"\"\"Extract the content-type from an ``ASGIConnection``\n\n Args:\n request: A :class:`Request <starlite.connection.Request>` instance.\n\n Returns:\n A tuple containing the request's parsed 'Content-Type' header.\n \"\"\"\n return request.content_type\n\n async def extract_body(self, request: \"Request[Any, Any]\") -> Any:\n \"\"\"Extract the body from an ``ASGIConnection``\n\n Args:\n request: A :class:`Request <starlite.connection.Request>` instance.\n\n Returns:\n Either the parsed request body or the raw byte-string.\n \"\"\"\n if request.method != HttpMethod.GET:\n if not self.parse_body:\n return await request.body()\n request_encoding_type = request.content_type[0]\n if request_encoding_type == RequestEncodingType.JSON:\n return await request.json()\n form_data = await request.form()\n if request_encoding_type == RequestEncodingType.URL_ENCODED:\n return dict(form_data)\n return {\n key: repr(value) if isinstance(value, UploadFile) else value for key, value in form_data.multi_items()\n }\n return None\n\n\nclass ExtractedResponseData(TypedDict, total=False):\n \"\"\"Dictionary representing extracted response data.\"\"\"\n\n body: bytes\n status_code: int\n headers: Dict[str, str]\n cookies: Dict[str, str]\n\n\nclass ResponseDataExtractor:\n \"\"\"Utility class to extract data from a ``Message``\"\"\"\n\n __slots__ = (\"extractors\", \"parse_headers\", \"obfuscate_headers\", \"obfuscate_cookies\")\n\n def __init__(\n self,\n extract_body: bool = True,\n extract_cookies: bool = True,\n extract_headers: bool = True,\n extract_status_code: bool = True,\n obfuscate_cookies: 
Optional[Set[str]] = None,\n obfuscate_headers: Optional[Set[str]] = None,\n ):\n \"\"\"Initialize ``ResponseDataExtractor`` with options.\n\n Args:\n extract_body: Whether to extract the body.\n extract_cookies: Whether to extract the cookies.\n extract_headers: Whether to extract the headers.\n extract_status_code: Whether to extract the status code.\n obfuscate_cookies: cookie keys to obfuscate. Obfuscated values are replaced with '*****'.\n obfuscate_headers: headers keys to obfuscate. Obfuscated values are replaced with '*****'.\n \"\"\"\n self.obfuscate_headers = {h.lower() for h in (obfuscate_headers or set())}\n self.obfuscate_cookies = {c.lower() for c in (obfuscate_cookies or set())}\n self.extractors: Dict[\n ResponseExtractorField, Callable[[Tuple[\"HTTPResponseStartEvent\", \"HTTPResponseBodyEvent\"]], Any]\n ] = {}\n if extract_body:\n self.extractors[\"body\"] = self.extract_response_body\n if extract_status_code:\n self.extractors[\"status_code\"] = self.extract_status_code\n if extract_headers:\n self.extractors[\"headers\"] = self.extract_headers\n if extract_cookies:\n self.extractors[\"cookies\"] = self.extract_cookies\n\n def __call__(self, messages: Tuple[\"HTTPResponseStartEvent\", \"HTTPResponseBodyEvent\"]) -> ExtractedResponseData:\n \"\"\"Extract data from the response, returning a dictionary of values.\n\n Args:\n messages: A tuple containing\n :class:`HTTPResponseStartEvent <starlite.types.asgi_types.HTTPResponseStartEvent>`\n and :class:`HTTPResponseBodyEvent <starlite.types.asgi_types.HTTPResponseBodyEvent>`.\n\n Returns:\n A string keyed dictionary of extracted values.\n \"\"\"\n return cast(\"ExtractedResponseData\", {key: extractor(messages) for key, extractor in self.extractors.items()})\n\n @staticmethod\n def extract_response_body(messages: Tuple[\"HTTPResponseStartEvent\", \"HTTPResponseBodyEvent\"]) -> bytes:\n \"\"\"Extract the response body from a ``Message``\n\n Args:\n messages: A tuple containing\n :class:`HTTPResponseStartEvent <starlite.types.asgi_types.HTTPResponseStartEvent>`\n and :class:`HTTPResponseBodyEvent <starlite.types.asgi_types.HTTPResponseBodyEvent>`.\n\n Returns:\n The Response's body as a byte-string.\n \"\"\"\n return messages[1][\"body\"]\n\n @staticmethod\n def extract_status_code(messages: Tuple[\"HTTPResponseStartEvent\", \"HTTPResponseBodyEvent\"]) -> int:\n \"\"\"Extract a status code from a ``Message``\n\n Args:\n messages: A tuple containing\n :class:`HTTPResponseStartEvent <starlite.types.asgi_types.HTTPResponseStartEvent>`\n and :class:`HTTPResponseBodyEvent <starlite.types.asgi_types.HTTPResponseBodyEvent>`.\n\n Returns:\n The Response's status-code.\n \"\"\"\n return messages[0][\"status\"]\n\n def extract_headers(self, messages: Tuple[\"HTTPResponseStartEvent\", \"HTTPResponseBodyEvent\"]) -> Dict[str, str]:\n \"\"\"Extract headers from a ``Message``\n\n Args:\n messages: A tuple containing\n :class:`HTTPResponseStartEvent <starlite.types.asgi_types.HTTPResponseStartEvent>`\n and :class:`HTTPResponseBodyEvent <starlite.types.asgi_types.HTTPResponseBodyEvent>`.\n\n Returns:\n The Response's headers dict.\n \"\"\"\n headers = {\n key.decode(\"latin-1\"): value.decode(\"latin-1\")\n for key, value in filter(lambda x: x[0].lower() != b\"set-cookie\", messages[0][\"headers\"])\n }\n return (\n obfuscate(\n headers,\n self.obfuscate_headers,\n )\n if self.obfuscate_headers\n else headers\n )\n\n def extract_cookies(self, messages: Tuple[\"HTTPResponseStartEvent\", \"HTTPResponseBodyEvent\"]) -> Dict[str, str]:\n 
\"\"\"Extract cookies from a ``Message``\n\n Args:\n messages: A tuple containing\n :class:`HTTPResponseStartEvent <starlite.types.asgi_types.HTTPResponseStartEvent>`\n and :class:`HTTPResponseBodyEvent <starlite.types.asgi_types.HTTPResponseBodyEvent>`.\n\n Returns:\n The Response's cookies dict.\n \"\"\"\n cookie_string = \";\".join(\n list( # noqa: C417\n map(\n lambda x: x[1].decode(\"latin-1\"),\n filter(lambda x: x[0].lower() == b\"set-cookie\", messages[0][\"headers\"]),\n )\n )\n )\n if cookie_string:\n parsed_cookies = parse_cookie_string(cookie_string)\n return obfuscate(parsed_cookies, self.obfuscate_cookies) if self.obfuscate_cookies else parsed_cookies\n return {}\n", "path": "starlite/utils/extractors.py" } ]
[ { "content": "from typing import (\n TYPE_CHECKING,\n Any,\n Callable,\n Coroutine,\n Dict,\n Literal,\n Optional,\n Set,\n Tuple,\n Union,\n cast,\n)\n\nfrom typing_extensions import TypedDict\n\nfrom starlite.connection.request import Request\nfrom starlite.datastructures.upload_file import UploadFile\nfrom starlite.enums import HttpMethod, RequestEncodingType\nfrom starlite.parsers import parse_cookie_string\n\nif TYPE_CHECKING:\n from starlite.connection import ASGIConnection\n from starlite.types import Method\n from starlite.types.asgi_types import HTTPResponseBodyEvent, HTTPResponseStartEvent\n\n\ndef obfuscate(values: Dict[str, Any], fields_to_obfuscate: Set[str]) -> Dict[str, Any]:\n \"\"\"Obfuscate values in a dictionary, replacing values with `******`\n\n Args:\n values: A dictionary of strings\n fields_to_obfuscate: keys to obfuscate\n\n Returns:\n A dictionary with obfuscated strings\n \"\"\"\n return {key: \"*****\" if key.lower() in fields_to_obfuscate else value for key, value in values.items()}\n\n\nRequestExtractorField = Literal[\n \"path\", \"method\", \"content_type\", \"headers\", \"cookies\", \"query\", \"path_params\", \"body\", \"scheme\", \"client\"\n]\n\nResponseExtractorField = Literal[\"status_code\", \"headers\", \"body\", \"cookies\"]\n\n\nclass ExtractedRequestData(TypedDict, total=False):\n \"\"\"Dictionary representing extracted request data.\"\"\"\n\n body: Coroutine\n client: Tuple[str, int]\n content_type: Tuple[str, Dict[str, str]]\n cookies: Dict[str, str]\n headers: Dict[str, str]\n method: \"Method\"\n path: str\n path_params: Dict[str, Any]\n query: Union[bytes, Dict[str, Any]]\n scheme: str\n\n\nclass ConnectionDataExtractor:\n \"\"\"Utility class to extract data from an :class:`ASGIConnection <starlite.connection.ASGIConnection>`,\n :class:`Request <starlite.connection.Request>` or :class:`WebSocket <starlite.connection.WebSocket>` instance.\n \"\"\"\n\n __slots__ = (\n \"connection_extractors\",\n \"request_extractors\",\n \"parse_body\",\n \"parse_query\",\n \"obfuscate_headers\",\n \"obfuscate_cookies\",\n )\n\n def __init__(\n self,\n extract_body: bool = True,\n extract_client: bool = True,\n extract_content_type: bool = True,\n extract_cookies: bool = True,\n extract_headers: bool = True,\n extract_method: bool = True,\n extract_path: bool = True,\n extract_path_params: bool = True,\n extract_query: bool = True,\n extract_scheme: bool = True,\n obfuscate_cookies: Optional[Set[str]] = None,\n obfuscate_headers: Optional[Set[str]] = None,\n parse_body: bool = False,\n parse_query: bool = False,\n ):\n \"\"\"Initialize ``ConnectionDataExtractor``\n\n Args:\n extract_body: Whether to extract body, (for requests only).\n extract_client: Whether to extract the client (host, port) mapping.\n extract_content_type: Whether to extract the content type and any options.\n extract_cookies: Whether to extract cookies.\n extract_headers: Whether to extract headers.\n extract_method: Whether to extract the HTTP method, (for requests only).\n extract_path: Whether to extract the path.\n extract_path_params: Whether to extract path parameters.\n extract_query: Whether to extract query parameters.\n extract_scheme: Whether to extract the http scheme.\n obfuscate_headers: headers keys to obfuscate. Obfuscated values are replaced with '*****'.\n obfuscate_cookies: cookie keys to obfuscate. 
Obfuscated values are replaced with '*****'.\n parse_body: Whether to parse the body value or return the raw byte string, (for requests only).\n parse_query: Whether to parse query parameters or return the raw byte string.\n \"\"\"\n self.parse_body = parse_body\n self.parse_query = parse_query\n self.obfuscate_headers = {h.lower() for h in (obfuscate_headers or set())}\n self.obfuscate_cookies = {c.lower() for c in (obfuscate_cookies or set())}\n self.connection_extractors: Dict[str, Callable[[\"ASGIConnection[Any, Any, Any]\"], Any]] = {}\n self.request_extractors: Dict[RequestExtractorField, Callable[[\"Request[Any, Any]\"], Any]] = {}\n if extract_scheme:\n self.connection_extractors[\"scheme\"] = self.extract_scheme\n if extract_client:\n self.connection_extractors[\"client\"] = self.extract_client\n if extract_path:\n self.connection_extractors[\"path\"] = self.extract_path\n if extract_headers:\n self.connection_extractors[\"headers\"] = self.extract_headers\n if extract_cookies:\n self.connection_extractors[\"cookies\"] = self.extract_cookies\n if extract_query:\n self.connection_extractors[\"query\"] = self.extract_query\n if extract_path_params:\n self.connection_extractors[\"path_params\"] = self.extract_path_params\n if extract_method:\n self.request_extractors[\"method\"] = self.extract_method\n if extract_content_type:\n self.request_extractors[\"content_type\"] = self.extract_content_type\n if extract_body:\n self.request_extractors[\"body\"] = self.extract_body\n\n def __call__(self, connection: \"ASGIConnection[Any, Any, Any]\") -> ExtractedRequestData:\n \"\"\"Extract data from the connection, returning a dictionary of values.\n\n Notes:\n - The value for ``body`` - if present - is an unresolved Coroutine and as such should be awaited by the receiver.\n\n Args:\n connection: An ASGI connection or its subclasses.\n\n Returns:\n A string keyed dictionary of extracted values.\n \"\"\"\n extractors = (\n {**self.connection_extractors, **self.request_extractors} # type: ignore\n if isinstance(connection, Request)\n else self.connection_extractors\n )\n return cast(\"ExtractedRequestData\", {key: extractor(connection) for key, extractor in extractors.items()})\n\n @staticmethod\n def extract_scheme(connection: \"ASGIConnection[Any, Any, Any]\") -> str:\n \"\"\"Extract the scheme from an ``ASGIConnection``\n\n Args:\n connection: An :class:`ASGIConnection <starlite.connection.ASGIConnection>` instance.\n\n Returns:\n The connection's scope[\"scheme\"] value\n \"\"\"\n return connection.scope[\"scheme\"]\n\n @staticmethod\n def extract_client(connection: \"ASGIConnection[Any, Any, Any]\") -> Tuple[str, int]:\n \"\"\"Extract the client from an ``ASGIConnection``\n\n Args:\n connection: An :class:`ASGIConnection <starlite.connection.ASGIConnection>` instance.\n\n Returns:\n The connection's scope[\"client\"] value or a default value.\n \"\"\"\n return connection.scope.get(\"client\") or (\"\", 0)\n\n @staticmethod\n def extract_path(connection: \"ASGIConnection[Any, Any, Any]\") -> str:\n \"\"\"Extract the path from an ``ASGIConnection``\n\n Args:\n connection: An :class:`ASGIConnection <starlite.connection.ASGIConnection>` instance.\n\n Returns:\n The connection's scope[\"path\"] value\n \"\"\"\n return connection.scope[\"path\"]\n\n def extract_headers(self, connection: \"ASGIConnection[Any, Any, Any]\") -> Dict[str, str]:\n \"\"\"Extract headers from an ``ASGIConnection``\n\n Args:\n connection: An :class:`ASGIConnection <starlite.connection.ASGIConnection>` instance.\n\n 
Returns:\n A dictionary with the connection's headers.\n \"\"\"\n headers = {k.decode(\"latin-1\"): v.decode(\"latin-1\") for k, v in connection.scope[\"headers\"]}\n return obfuscate(headers, self.obfuscate_headers) if self.obfuscate_headers else headers\n\n def extract_cookies(self, connection: \"ASGIConnection[Any, Any, Any]\") -> Dict[str, str]:\n \"\"\"Extract cookies from an ``ASGIConnection``\n\n Args:\n connection: An :class:`ASGIConnection <starlite.connection.ASGIConnection>` instance.\n\n Returns:\n A dictionary with the connection's cookies.\n \"\"\"\n return obfuscate(connection.cookies, self.obfuscate_cookies) if self.obfuscate_cookies else connection.cookies\n\n def extract_query(self, connection: \"ASGIConnection[Any, Any, Any]\") -> Any:\n \"\"\"Extract query from an ``ASGIConnection``\n\n Args:\n connection: An :class:`ASGIConnection <starlite.connection.ASGIConnection>` instance.\n\n Returns:\n Either a dictionary with the connection's parsed query string or the raw query byte-string.\n \"\"\"\n return connection.query_params.dict() if self.parse_query else connection.scope.get(\"query_string\", b\"\")\n\n @staticmethod\n def extract_path_params(connection: \"ASGIConnection[Any, Any, Any]\") -> Dict[str, Any]:\n \"\"\"Extract the path parameters from an ``ASGIConnection``\n\n Args:\n connection: An :class:`ASGIConnection <starlite.connection.ASGIConnection>` instance.\n\n Returns:\n A dictionary with the connection's path parameters.\n \"\"\"\n return connection.path_params\n\n @staticmethod\n def extract_method(request: \"Request[Any, Any]\") -> \"Method\":\n \"\"\"Extract the method from an ``ASGIConnection``\n\n Args:\n request: A :class:`Request <starlite.connection.Request>` instance.\n\n Returns:\n The request's scope[\"method\"] value.\n \"\"\"\n return request.scope[\"method\"]\n\n @staticmethod\n def extract_content_type(request: \"Request[Any, Any]\") -> Tuple[str, Dict[str, str]]:\n \"\"\"Extract the content-type from an ``ASGIConnection``\n\n Args:\n request: A :class:`Request <starlite.connection.Request>` instance.\n\n Returns:\n A tuple containing the request's parsed 'Content-Type' header.\n \"\"\"\n return request.content_type\n\n async def extract_body(self, request: \"Request[Any, Any]\") -> Any:\n \"\"\"Extract the body from an ``ASGIConnection``\n\n Args:\n request: A :class:`Request <starlite.connection.Request>` instance.\n\n Returns:\n Either the parsed request body or the raw byte-string.\n \"\"\"\n if request.method != HttpMethod.GET:\n if not self.parse_body:\n return await request.body()\n request_encoding_type = request.content_type[0]\n if request_encoding_type == RequestEncodingType.JSON:\n return await request.json()\n form_data = await request.form()\n if request_encoding_type == RequestEncodingType.URL_ENCODED:\n return dict(form_data)\n return {\n key: repr(value) if isinstance(value, UploadFile) else value for key, value in form_data.multi_items()\n }\n return None\n\n\nclass ExtractedResponseData(TypedDict, total=False):\n \"\"\"Dictionary representing extracted response data.\"\"\"\n\n body: bytes\n status_code: int\n headers: Dict[str, str]\n cookies: Dict[str, str]\n\n\nclass ResponseDataExtractor:\n \"\"\"Utility class to extract data from a ``Message``\"\"\"\n\n __slots__ = (\"extractors\", \"parse_headers\", \"obfuscate_headers\", \"obfuscate_cookies\")\n\n def __init__(\n self,\n extract_body: bool = True,\n extract_cookies: bool = True,\n extract_headers: bool = True,\n extract_status_code: bool = True,\n obfuscate_cookies: 
Optional[Set[str]] = None,\n obfuscate_headers: Optional[Set[str]] = None,\n ):\n \"\"\"Initialize ``ResponseDataExtractor`` with options.\n\n Args:\n extract_body: Whether to extract the body.\n extract_cookies: Whether to extract the cookies.\n extract_headers: Whether to extract the headers.\n extract_status_code: Whether to extract the status code.\n obfuscate_cookies: cookie keys to obfuscate. Obfuscated values are replaced with '*****'.\n obfuscate_headers: headers keys to obfuscate. Obfuscated values are replaced with '*****'.\n \"\"\"\n self.obfuscate_headers = {h.lower() for h in (obfuscate_headers or set())}\n self.obfuscate_cookies = {c.lower() for c in (obfuscate_cookies or set())}\n self.extractors: Dict[\n ResponseExtractorField, Callable[[Tuple[\"HTTPResponseStartEvent\", \"HTTPResponseBodyEvent\"]], Any]\n ] = {}\n if extract_body:\n self.extractors[\"body\"] = self.extract_response_body\n if extract_status_code:\n self.extractors[\"status_code\"] = self.extract_status_code\n if extract_headers:\n self.extractors[\"headers\"] = self.extract_headers\n if extract_cookies:\n self.extractors[\"cookies\"] = self.extract_cookies\n\n def __call__(self, messages: Tuple[\"HTTPResponseStartEvent\", \"HTTPResponseBodyEvent\"]) -> ExtractedResponseData:\n \"\"\"Extract data from the response, returning a dictionary of values.\n\n Args:\n messages: A tuple containing\n :class:`HTTPResponseStartEvent <starlite.types.asgi_types.HTTPResponseStartEvent>`\n and :class:`HTTPResponseBodyEvent <starlite.types.asgi_types.HTTPResponseBodyEvent>`.\n\n Returns:\n A string keyed dictionary of extracted values.\n \"\"\"\n return cast(\"ExtractedResponseData\", {key: extractor(messages) for key, extractor in self.extractors.items()})\n\n @staticmethod\n def extract_response_body(messages: Tuple[\"HTTPResponseStartEvent\", \"HTTPResponseBodyEvent\"]) -> bytes:\n \"\"\"Extract the response body from a ``Message``\n\n Args:\n messages: A tuple containing\n :class:`HTTPResponseStartEvent <starlite.types.asgi_types.HTTPResponseStartEvent>`\n and :class:`HTTPResponseBodyEvent <starlite.types.asgi_types.HTTPResponseBodyEvent>`.\n\n Returns:\n The Response's body as a byte-string.\n \"\"\"\n return messages[1][\"body\"]\n\n @staticmethod\n def extract_status_code(messages: Tuple[\"HTTPResponseStartEvent\", \"HTTPResponseBodyEvent\"]) -> int:\n \"\"\"Extract a status code from a ``Message``\n\n Args:\n messages: A tuple containing\n :class:`HTTPResponseStartEvent <starlite.types.asgi_types.HTTPResponseStartEvent>`\n and :class:`HTTPResponseBodyEvent <starlite.types.asgi_types.HTTPResponseBodyEvent>`.\n\n Returns:\n The Response's status-code.\n \"\"\"\n return messages[0][\"status\"]\n\n def extract_headers(self, messages: Tuple[\"HTTPResponseStartEvent\", \"HTTPResponseBodyEvent\"]) -> Dict[str, str]:\n \"\"\"Extract headers from a ``Message``\n\n Args:\n messages: A tuple containing\n :class:`HTTPResponseStartEvent <starlite.types.asgi_types.HTTPResponseStartEvent>`\n and :class:`HTTPResponseBodyEvent <starlite.types.asgi_types.HTTPResponseBodyEvent>`.\n\n Returns:\n The Response's headers dict.\n \"\"\"\n headers = {\n key.decode(\"latin-1\"): value.decode(\"latin-1\")\n for key, value in filter(lambda x: x[0].lower() != b\"set-cookie\", messages[0][\"headers\"])\n }\n return (\n obfuscate(\n headers,\n self.obfuscate_headers,\n )\n if self.obfuscate_headers\n else headers\n )\n\n def extract_cookies(self, messages: Tuple[\"HTTPResponseStartEvent\", \"HTTPResponseBodyEvent\"]) -> Dict[str, str]:\n 
\"\"\"Extract cookies from a ``Message``\n\n Args:\n messages: A tuple containing\n :class:`HTTPResponseStartEvent <starlite.types.asgi_types.HTTPResponseStartEvent>`\n and :class:`HTTPResponseBodyEvent <starlite.types.asgi_types.HTTPResponseBodyEvent>`.\n\n Returns:\n The Response's cookies dict.\n \"\"\"\n cookie_string = \";\".join(\n list( # noqa: C417\n map(\n lambda x: x[1].decode(\"latin-1\"),\n filter(lambda x: x[0].lower() == b\"set-cookie\", messages[0][\"headers\"]),\n )\n )\n )\n if cookie_string:\n parsed_cookies = parse_cookie_string(cookie_string)\n return obfuscate(parsed_cookies, self.obfuscate_cookies) if self.obfuscate_cookies else parsed_cookies\n return {}\n", "path": "starlite/utils/extractors.py" } ]
diff --git a/starlite/utils/extractors.py b/starlite/utils/extractors.py index 6219352321..360130df8d 100644 --- a/starlite/utils/extractors.py +++ b/starlite/utils/extractors.py @@ -35,10 +35,7 @@ def obfuscate(values: Dict[str, Any], fields_to_obfuscate: Set[str]) -> Dict[str Returns: A dictionary with obfuscated strings """ - for key in values: - if key.lower() in fields_to_obfuscate: - values[key] = "*****" - return values + return {key: "*****" if key.lower() in fields_to_obfuscate else value for key, value in values.items()} RequestExtractorField = Literal[ diff --git a/tests/middleware/test_logging_middleware.py b/tests/middleware/test_logging_middleware.py index 9d1fe88ebe..b83dc336ef 100644 --- a/tests/middleware/test_logging_middleware.py +++ b/tests/middleware/test_logging_middleware.py @@ -6,8 +6,10 @@ from starlite import Cookie, LoggingConfig, Response, StructLoggingConfig, get, post from starlite.config.compression import CompressionConfig +from starlite.connection import Request from starlite.middleware import LoggingMiddlewareConfig -from starlite.status_codes import HTTP_200_OK +from starlite.middleware.session.memory_backend import MemoryBackendConfig +from starlite.status_codes import HTTP_200_OK, HTTP_201_CREATED from starlite.testing import create_test_client if TYPE_CHECKING: @@ -190,3 +192,34 @@ async def hello_world_handler() -> Dict[str, str]: response = client.get("/") assert response.status_code == HTTP_200_OK assert len(caplog.messages) == 2 + + +def test_logging_middleware_with_session_middleware() -> None: + # https://github.com/starlite-api/starlite/issues/1228 + + @post("/") + async def set_session(request: Request) -> None: + request.set_session({"hello": "world"}) + + @get("/") + async def get_session() -> None: + pass + + logging_middleware_config = LoggingMiddlewareConfig() + session_config = MemoryBackendConfig() + + with create_test_client( + [set_session, get_session], + logging_config=LoggingConfig(), + middleware=[logging_middleware_config.middleware, session_config.middleware], + ) as client: + response = client.post("/") + assert response.status_code == HTTP_201_CREATED + assert "session" in client.cookies + assert client.cookies["session"] != "*****" + session_id = client.cookies["session"] + + response = client.get("/") + assert response.status_code == HTTP_200_OK + assert "session" in client.cookies + assert client.cookies["session"] == session_id
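Illustrative only (the helper names below are not Starlite's API): the root cause is that `obfuscate()` mutated the dictionary it was given, and the cookies dictionary passed in is the connection's shared cookie mapping rather than a throwaway copy, so the masked value became visible to other middleware and, per the issue, to the client. Rebuilding the mapping into a fresh dict, as the diff above does, masks only the logged copy.

```python
from typing import Any, Dict, Set


def obfuscate_in_place(values: Dict[str, Any], fields: Set[str]) -> Dict[str, Any]:
    """Pre-fix behaviour: mutates the caller's dict and returns it."""
    for key in values:
        if key.lower() in fields:
            values[key] = "*****"
    return values


def obfuscate_copy(values: Dict[str, Any], fields: Set[str]) -> Dict[str, Any]:
    """Post-fix behaviour: builds a new dict, leaving the caller's dict intact."""
    return {k: "*****" if k.lower() in fields else v for k, v in values.items()}


cookies = {"session": "abc123"}
obfuscate_in_place(cookies, {"session"})
assert cookies["session"] == "*****"      # the shared mapping was clobbered

cookies = {"session": "abc123"}
logged = obfuscate_copy(cookies, {"session"})
assert cookies["session"] == "abc123"     # the shared mapping is untouched
assert logged["session"] == "*****"       # only the logged copy is masked
```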
frappe__frappe-20434
Enable Scheduler from desk
Feature to enable scheduler from desk.
[ { "content": "# Copyright (c) 2015, Frappe Technologies Pvt. Ltd. and Contributors\n# License: MIT. See LICENSE\n\"\"\"\nEvents:\n\talways\n\tdaily\n\tmonthly\n\tweekly\n\"\"\"\n\n# imports - standard imports\nimport os\nimport time\nfrom typing import NoReturn\n\n# imports - module imports\nimport frappe\nfrom frappe.installer import update_site_config\nfrom frappe.utils import cint, get_datetime, get_sites, now_datetime\nfrom frappe.utils.background_jobs import get_jobs\n\nDATETIME_FORMAT = \"%Y-%m-%d %H:%M:%S\"\n\n\ndef cprint(*args, **kwargs):\n\t\"\"\"Prints only if called from STDOUT\"\"\"\n\ttry:\n\t\tos.get_terminal_size()\n\t\tprint(*args, **kwargs)\n\texcept Exception:\n\t\tpass\n\n\ndef start_scheduler() -> NoReturn:\n\t\"\"\"Run enqueue_events_for_all_sites based on scheduler tick.\n\tSpecify scheduler_interval in seconds in common_site_config.json\"\"\"\n\n\ttick = cint(frappe.get_conf().scheduler_tick_interval) or 60\n\n\twhile True:\n\t\ttime.sleep(tick)\n\t\tenqueue_events_for_all_sites()\n\n\ndef enqueue_events_for_all_sites() -> None:\n\t\"\"\"Loop through sites and enqueue events that are not already queued\"\"\"\n\n\tif os.path.exists(os.path.join(\".\", \".restarting\")):\n\t\t# Don't add task to queue if webserver is in restart mode\n\t\treturn\n\n\twith frappe.init_site():\n\t\tsites = get_sites()\n\n\tfor site in sites:\n\t\ttry:\n\t\t\tenqueue_events_for_site(site=site)\n\t\texcept Exception:\n\t\t\tfrappe.logger(\"scheduler\").debug(f\"Failed to enqueue events for site: {site}\", exc_info=True)\n\n\ndef enqueue_events_for_site(site: str) -> None:\n\tdef log_exc():\n\t\tfrappe.logger(\"scheduler\").error(f\"Exception in Enqueue Events for Site {site}\", exc_info=True)\n\n\ttry:\n\t\tfrappe.init(site=site)\n\t\tfrappe.connect()\n\t\tif is_scheduler_inactive():\n\t\t\treturn\n\n\t\tenqueue_events(site=site)\n\n\t\tfrappe.logger(\"scheduler\").debug(f\"Queued events for site {site}\")\n\texcept Exception as e:\n\t\tif frappe.db.is_access_denied(e):\n\t\t\tfrappe.logger(\"scheduler\").debug(f\"Access denied for site {site}\")\n\t\tlog_exc()\n\n\tfinally:\n\t\tfrappe.destroy()\n\n\ndef enqueue_events(site: str) -> list[str] | None:\n\tif schedule_jobs_based_on_activity():\n\t\tenqueued_jobs = []\n\t\tfor job_type in frappe.get_all(\"Scheduled Job Type\", (\"name\", \"method\"), {\"stopped\": 0}):\n\t\t\tjob_type = frappe.get_cached_doc(\"Scheduled Job Type\", job_type.name)\n\t\t\tif _enqueued := job_type.enqueue():\n\t\t\t\tenqueued_jobs.append(job_type.method)\n\n\t\treturn enqueued_jobs\n\n\ndef is_scheduler_inactive(verbose=True) -> bool:\n\tif frappe.local.conf.maintenance_mode:\n\t\tif verbose:\n\t\t\tcprint(f\"{frappe.local.site}: Maintenance mode is ON\")\n\t\treturn True\n\n\tif frappe.local.conf.pause_scheduler:\n\t\tif verbose:\n\t\t\tcprint(f\"{frappe.local.site}: frappe.conf.pause_scheduler is SET\")\n\t\treturn True\n\n\tif is_scheduler_disabled(verbose=verbose):\n\t\treturn True\n\n\treturn False\n\n\ndef is_scheduler_disabled(verbose=True) -> bool:\n\tif frappe.conf.disable_scheduler:\n\t\tif verbose:\n\t\t\tcprint(f\"{frappe.local.site}: frappe.conf.disable_scheduler is SET\")\n\t\treturn True\n\n\tscheduler_disabled = not frappe.utils.cint(\n\t\tfrappe.db.get_single_value(\"System Settings\", \"enable_scheduler\")\n\t)\n\tif scheduler_disabled:\n\t\tif verbose:\n\t\t\tcprint(f\"{frappe.local.site}: SystemSettings.enable_scheduler is UNSET\")\n\treturn scheduler_disabled\n\n\ndef toggle_scheduler(enable):\n\tfrappe.db.set_single_value(\"System 
Settings\", \"enable_scheduler\", int(enable))\n\n\ndef enable_scheduler():\n\ttoggle_scheduler(True)\n\n\ndef disable_scheduler():\n\ttoggle_scheduler(False)\n\n\ndef schedule_jobs_based_on_activity(check_time=None):\n\t\"\"\"Returns True for active sites defined by Activity Log\n\tReturns True for inactive sites once in 24 hours\"\"\"\n\tif is_dormant(check_time=check_time):\n\t\t# ensure last job is one day old\n\t\tlast_job_timestamp = _get_last_modified_timestamp(\"Scheduled Job Log\")\n\t\tif not last_job_timestamp:\n\t\t\treturn True\n\t\telse:\n\t\t\tif ((check_time or now_datetime()) - last_job_timestamp).total_seconds() >= 86400:\n\t\t\t\t# one day is passed since jobs are run, so lets do this\n\t\t\t\treturn True\n\t\t\telse:\n\t\t\t\t# schedulers run in the last 24 hours, do nothing\n\t\t\t\treturn False\n\telse:\n\t\t# site active, lets run the jobs\n\t\treturn True\n\n\ndef is_dormant(check_time=None):\n\tlast_activity_log_timestamp = _get_last_modified_timestamp(\"Activity Log\")\n\tsince = (frappe.get_system_settings(\"dormant_days\") or 4) * 86400\n\tif not last_activity_log_timestamp:\n\t\treturn True\n\tif ((check_time or now_datetime()) - last_activity_log_timestamp).total_seconds() >= since:\n\t\treturn True\n\treturn False\n\n\ndef _get_last_modified_timestamp(doctype):\n\ttimestamp = frappe.db.get_value(\n\t\tdoctype, filters={}, fieldname=\"modified\", order_by=\"modified desc\"\n\t)\n\tif timestamp:\n\t\treturn get_datetime(timestamp)\n\n\[email protected]()\ndef activate_scheduler():\n\tif is_scheduler_disabled():\n\t\tenable_scheduler()\n\tif frappe.conf.pause_scheduler:\n\t\tupdate_site_config(\"pause_scheduler\", 0)\n\n\[email protected]()\ndef get_scheduler_status():\n\tif is_scheduler_inactive():\n\t\treturn {\"status\": \"inactive\"}\n\treturn {\"status\": \"active\"}\n", "path": "frappe/utils/scheduler.py" } ]
[ { "content": "# Copyright (c) 2015, Frappe Technologies Pvt. Ltd. and Contributors\n# License: MIT. See LICENSE\n\"\"\"\nEvents:\n\talways\n\tdaily\n\tmonthly\n\tweekly\n\"\"\"\n\n# imports - standard imports\nimport os\nimport time\nfrom typing import NoReturn\n\n# imports - module imports\nimport frappe\nfrom frappe.installer import update_site_config\nfrom frappe.utils import cint, get_datetime, get_sites, now_datetime\nfrom frappe.utils.background_jobs import get_jobs\n\nDATETIME_FORMAT = \"%Y-%m-%d %H:%M:%S\"\n\n\ndef cprint(*args, **kwargs):\n\t\"\"\"Prints only if called from STDOUT\"\"\"\n\ttry:\n\t\tos.get_terminal_size()\n\t\tprint(*args, **kwargs)\n\texcept Exception:\n\t\tpass\n\n\ndef start_scheduler() -> NoReturn:\n\t\"\"\"Run enqueue_events_for_all_sites based on scheduler tick.\n\tSpecify scheduler_interval in seconds in common_site_config.json\"\"\"\n\n\ttick = cint(frappe.get_conf().scheduler_tick_interval) or 60\n\n\twhile True:\n\t\ttime.sleep(tick)\n\t\tenqueue_events_for_all_sites()\n\n\ndef enqueue_events_for_all_sites() -> None:\n\t\"\"\"Loop through sites and enqueue events that are not already queued\"\"\"\n\n\tif os.path.exists(os.path.join(\".\", \".restarting\")):\n\t\t# Don't add task to queue if webserver is in restart mode\n\t\treturn\n\n\twith frappe.init_site():\n\t\tsites = get_sites()\n\n\tfor site in sites:\n\t\ttry:\n\t\t\tenqueue_events_for_site(site=site)\n\t\texcept Exception:\n\t\t\tfrappe.logger(\"scheduler\").debug(f\"Failed to enqueue events for site: {site}\", exc_info=True)\n\n\ndef enqueue_events_for_site(site: str) -> None:\n\tdef log_exc():\n\t\tfrappe.logger(\"scheduler\").error(f\"Exception in Enqueue Events for Site {site}\", exc_info=True)\n\n\ttry:\n\t\tfrappe.init(site=site)\n\t\tfrappe.connect()\n\t\tif is_scheduler_inactive():\n\t\t\treturn\n\n\t\tenqueue_events(site=site)\n\n\t\tfrappe.logger(\"scheduler\").debug(f\"Queued events for site {site}\")\n\texcept Exception as e:\n\t\tif frappe.db.is_access_denied(e):\n\t\t\tfrappe.logger(\"scheduler\").debug(f\"Access denied for site {site}\")\n\t\tlog_exc()\n\n\tfinally:\n\t\tfrappe.destroy()\n\n\ndef enqueue_events(site: str) -> list[str] | None:\n\tif schedule_jobs_based_on_activity():\n\t\tenqueued_jobs = []\n\t\tfor job_type in frappe.get_all(\"Scheduled Job Type\", (\"name\", \"method\"), {\"stopped\": 0}):\n\t\t\tjob_type = frappe.get_cached_doc(\"Scheduled Job Type\", job_type.name)\n\t\t\tif _enqueued := job_type.enqueue():\n\t\t\t\tenqueued_jobs.append(job_type.method)\n\n\t\treturn enqueued_jobs\n\n\ndef is_scheduler_inactive(verbose=True) -> bool:\n\tif frappe.local.conf.maintenance_mode:\n\t\tif verbose:\n\t\t\tcprint(f\"{frappe.local.site}: Maintenance mode is ON\")\n\t\treturn True\n\n\tif frappe.local.conf.pause_scheduler:\n\t\tif verbose:\n\t\t\tcprint(f\"{frappe.local.site}: frappe.conf.pause_scheduler is SET\")\n\t\treturn True\n\n\tif is_scheduler_disabled(verbose=verbose):\n\t\treturn True\n\n\treturn False\n\n\ndef is_scheduler_disabled(verbose=True) -> bool:\n\tif frappe.conf.disable_scheduler:\n\t\tif verbose:\n\t\t\tcprint(f\"{frappe.local.site}: frappe.conf.disable_scheduler is SET\")\n\t\treturn True\n\n\tscheduler_disabled = not frappe.utils.cint(\n\t\tfrappe.db.get_single_value(\"System Settings\", \"enable_scheduler\")\n\t)\n\tif scheduler_disabled:\n\t\tif verbose:\n\t\t\tcprint(f\"{frappe.local.site}: SystemSettings.enable_scheduler is UNSET\")\n\treturn scheduler_disabled\n\n\ndef toggle_scheduler(enable):\n\tfrappe.db.set_single_value(\"System 
Settings\", \"enable_scheduler\", int(enable))\n\n\ndef enable_scheduler():\n\ttoggle_scheduler(True)\n\n\ndef disable_scheduler():\n\ttoggle_scheduler(False)\n\n\ndef schedule_jobs_based_on_activity(check_time=None):\n\t\"\"\"Returns True for active sites defined by Activity Log\n\tReturns True for inactive sites once in 24 hours\"\"\"\n\tif is_dormant(check_time=check_time):\n\t\t# ensure last job is one day old\n\t\tlast_job_timestamp = _get_last_modified_timestamp(\"Scheduled Job Log\")\n\t\tif not last_job_timestamp:\n\t\t\treturn True\n\t\telse:\n\t\t\tif ((check_time or now_datetime()) - last_job_timestamp).total_seconds() >= 86400:\n\t\t\t\t# one day is passed since jobs are run, so lets do this\n\t\t\t\treturn True\n\t\t\telse:\n\t\t\t\t# schedulers run in the last 24 hours, do nothing\n\t\t\t\treturn False\n\telse:\n\t\t# site active, lets run the jobs\n\t\treturn True\n\n\ndef is_dormant(check_time=None):\n\tlast_activity_log_timestamp = _get_last_modified_timestamp(\"Activity Log\")\n\tsince = (frappe.get_system_settings(\"dormant_days\") or 4) * 86400\n\tif not last_activity_log_timestamp:\n\t\treturn True\n\tif ((check_time or now_datetime()) - last_activity_log_timestamp).total_seconds() >= since:\n\t\treturn True\n\treturn False\n\n\ndef _get_last_modified_timestamp(doctype):\n\ttimestamp = frappe.db.get_value(\n\t\tdoctype, filters={}, fieldname=\"modified\", order_by=\"modified desc\"\n\t)\n\tif timestamp:\n\t\treturn get_datetime(timestamp)\n\n\[email protected]()\ndef activate_scheduler():\n\tfrappe.only_for(\"Administrator\")\n\n\tif frappe.local.conf.maintenance_mode:\n\t\tfrappe.throw(frappe._(\"Scheduler can not be re-enabled when maintenance mode is active.\"))\n\n\tif is_scheduler_disabled():\n\t\tenable_scheduler()\n\tif frappe.conf.pause_scheduler:\n\t\tupdate_site_config(\"pause_scheduler\", 0)\n\n\[email protected]()\ndef get_scheduler_status():\n\tif is_scheduler_inactive():\n\t\treturn {\"status\": \"inactive\"}\n\treturn {\"status\": \"active\"}\n", "path": "frappe/utils/scheduler.py" } ]
diff --git a/frappe/core/doctype/rq_job/rq_job_list.js b/frappe/core/doctype/rq_job/rq_job_list.js index 5f6646cd6561..fed56a16fe01 100644 --- a/frappe/core/doctype/rq_job/rq_job_list.js +++ b/frappe/core/doctype/rq_job/rq_job_list.js @@ -4,11 +4,15 @@ frappe.listview_settings["RQ Job"] = { onload(listview) { if (!has_common(frappe.user_roles, ["Administrator", "System Manager"])) return; - listview.page.add_inner_button(__("Remove Failed Jobs"), () => { - frappe.confirm(__("Are you sure you want to remove all failed jobs?"), () => { - frappe.xcall("frappe.core.doctype.rq_job.rq_job.remove_failed_jobs"); - }); - }); + listview.page.add_inner_button( + __("Remove Failed Jobs"), + () => { + frappe.confirm(__("Are you sure you want to remove all failed jobs?"), () => { + frappe.xcall("frappe.core.doctype.rq_job.rq_job.remove_failed_jobs"); + }); + }, + __("Actions") + ); if (listview.list_view_settings) { listview.list_view_settings.disable_count = 1; @@ -20,6 +24,25 @@ frappe.listview_settings["RQ Job"] = { listview.page.set_indicator(__("Scheduler: Active"), "green"); } else { listview.page.set_indicator(__("Scheduler: Inactive"), "red"); + listview.page.add_inner_button( + __("Enable Scheduler"), + () => { + frappe.confirm(__("Are you sure you want to re-enable scheduler?"), () => { + frappe + .xcall("frappe.utils.scheduler.activate_scheduler") + .then(() => { + frappe.show_alert(__("Enabled Scheduler")); + }) + .catch((e) => { + frappe.show_alert({ + message: __("Failed to enable scheduler: {0}", e), + indicator: "error", + }); + }); + }); + }, + __("Actions") + ); } }); diff --git a/frappe/utils/scheduler.py b/frappe/utils/scheduler.py index 8cda71ee9a0e..529a3c7bf717 100755 --- a/frappe/utils/scheduler.py +++ b/frappe/utils/scheduler.py @@ -176,6 +176,11 @@ def _get_last_modified_timestamp(doctype): @frappe.whitelist() def activate_scheduler(): + frappe.only_for("Administrator") + + if frappe.local.conf.maintenance_mode: + frappe.throw(frappe._("Scheduler can not be re-enabled when maintenance mode is active.")) + if is_scheduler_disabled(): enable_scheduler() if frappe.conf.pause_scheduler:
pypi__warehouse-6426
Invalid HTML for select element
This HTML is generated by the Python form code.

template: https://github.com/pypa/warehouse/blob/master/warehouse/templates/manage/roles.html
field: `{{ form.role_name }}`

ERROR: The first child “option” element of a “select” element with a “required” attribute, and without a “multiple” attribute, and without a “size” attribute whose value is greater than “1”, must have either an empty “value” attribute, or must have no text content. Consider either adding a placeholder option label, or adding a “size” attribute with a value equal to the number of “option” elements. (433)

Reference: https://maxdesign.com.au/articles/select-required/
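The validator's own suggestion (a placeholder option with an empty value) can be expressed directly in the WTForms field definition. A minimal sketch, mirroring the fix shown in the files below; the form class name here is illustrative:

```python
import wtforms


class RoleForm(wtforms.Form):
    # The leading ("", "Select role") choice renders as
    # <option value="">Select role</option>, so the required <select>
    # starts with an empty-value placeholder and the HTML validates.
    role_name = wtforms.SelectField(
        "Select role",
        choices=[("", "Select role"), ("Maintainer", "Maintainer"), ("Owner", "Owner")],
        validators=[wtforms.validators.DataRequired(message="Select role")],
    )
```

Because the placeholder's value is the empty string, DataRequired still rejects a submission that leaves the dropdown on the placeholder.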
[ { "content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport json\n\nimport wtforms\n\nimport warehouse.utils.otp as otp\nimport warehouse.utils.webauthn as webauthn\n\nfrom warehouse import forms\nfrom warehouse.accounts.forms import (\n NewEmailMixin,\n NewPasswordMixin,\n PasswordMixin,\n TOTPValueMixin,\n WebAuthnCredentialMixin,\n)\n\n\nclass RoleNameMixin:\n\n role_name = wtforms.SelectField(\n \"Select role\",\n choices=[(\"Maintainer\", \"Maintainer\"), (\"Owner\", \"Owner\")],\n validators=[wtforms.validators.DataRequired(message=\"Select role\")],\n )\n\n\nclass UsernameMixin:\n\n username = wtforms.StringField(\n validators=[wtforms.validators.DataRequired(message=\"Specify username\")]\n )\n\n def validate_username(self, field):\n userid = self.user_service.find_userid(field.data)\n\n if userid is None:\n raise wtforms.validators.ValidationError(\n \"No user found with that username. Try again.\"\n )\n\n\nclass CreateRoleForm(RoleNameMixin, UsernameMixin, forms.Form):\n def __init__(self, *args, user_service, **kwargs):\n super().__init__(*args, **kwargs)\n self.user_service = user_service\n\n\nclass ChangeRoleForm(RoleNameMixin, forms.Form):\n pass\n\n\nclass SaveAccountForm(forms.Form):\n\n __params__ = [\"name\"]\n\n name = wtforms.StringField()\n\n\nclass AddEmailForm(NewEmailMixin, forms.Form):\n\n __params__ = [\"email\"]\n\n def __init__(self, *args, user_service, user_id, **kwargs):\n super().__init__(*args, **kwargs)\n self.user_service = user_service\n self.user_id = user_id\n\n\nclass ChangePasswordForm(PasswordMixin, NewPasswordMixin, forms.Form):\n\n __params__ = [\"password\", \"new_password\", \"password_confirm\"]\n\n def __init__(self, *args, user_service, **kwargs):\n super().__init__(*args, **kwargs)\n self.user_service = user_service\n\n\nclass DeleteTOTPForm(UsernameMixin, forms.Form):\n\n __params__ = [\"confirm_username\"]\n\n def __init__(self, *args, user_service, **kwargs):\n super().__init__(*args, **kwargs)\n self.user_service = user_service\n\n\nclass ProvisionTOTPForm(TOTPValueMixin, forms.Form):\n\n __params__ = [\"totp_value\"]\n\n def __init__(self, *args, totp_secret, **kwargs):\n super().__init__(*args, **kwargs)\n self.totp_secret = totp_secret\n\n def validate_totp_value(self, field):\n totp_value = field.data.encode(\"utf8\")\n if not otp.verify_totp(self.totp_secret, totp_value):\n raise wtforms.validators.ValidationError(\"Invalid TOTP code. 
Try again?\")\n\n\nclass DeleteWebAuthnForm(forms.Form):\n __params__ = [\"confirm_device_name\"]\n\n label = wtforms.StringField(\n validators=[\n wtforms.validators.DataRequired(message=\"Specify a device name\"),\n wtforms.validators.Length(\n max=64, message=(\"Label must be 64 characters or less\")\n ),\n ]\n )\n\n def __init__(self, *args, user_service, user_id, **kwargs):\n super().__init__(*args, **kwargs)\n self.user_service = user_service\n self.user_id = user_id\n\n def validate_label(self, field):\n label = field.data\n\n webauthn = self.user_service.get_webauthn_by_label(self.user_id, label)\n if webauthn is None:\n raise wtforms.validators.ValidationError(\"No WebAuthn key with given label\")\n self.webauthn = webauthn\n\n\nclass ProvisionWebAuthnForm(WebAuthnCredentialMixin, forms.Form):\n __params__ = [\"label\", \"credential\"]\n\n label = wtforms.StringField(\n validators=[\n wtforms.validators.DataRequired(message=\"Specify a label\"),\n wtforms.validators.Length(\n max=64, message=(\"Label must be 64 characters or less\")\n ),\n ]\n )\n\n def __init__(\n self, *args, user_service, user_id, challenge, rp_id, origin, **kwargs\n ):\n super().__init__(*args, **kwargs)\n self.user_service = user_service\n self.user_id = user_id\n self.challenge = challenge\n self.rp_id = rp_id\n self.origin = origin\n\n def validate_credential(self, field):\n try:\n credential_dict = json.loads(field.data.encode(\"utf8\"))\n except json.JSONDecodeError:\n raise wtforms.validators.ValidationError(\n \"Invalid WebAuthn credential: Bad payload\"\n )\n\n try:\n validated_credential = self.user_service.verify_webauthn_credential(\n credential_dict,\n challenge=self.challenge,\n rp_id=self.rp_id,\n origin=self.origin,\n )\n except webauthn.RegistrationRejectedException as e:\n raise wtforms.validators.ValidationError(str(e))\n\n self.validated_credential = validated_credential\n\n def validate_label(self, field):\n label = field.data\n\n if self.user_service.get_webauthn_by_label(self.user_id, label) is not None:\n raise wtforms.validators.ValidationError(f\"Label '{label}' already in use\")\n\n\nclass CreateMacaroonForm(forms.Form):\n __params__ = [\"description\", \"token_scope\"]\n\n def __init__(self, *args, user_id, macaroon_service, project_names, **kwargs):\n super().__init__(*args, **kwargs)\n self.user_id = user_id\n self.macaroon_service = macaroon_service\n self.project_names = project_names\n\n description = wtforms.StringField(\n validators=[\n wtforms.validators.DataRequired(message=\"Specify a token name\"),\n wtforms.validators.Length(\n max=100, message=\"Description must be 100 characters or less\"\n ),\n ]\n )\n\n token_scope = wtforms.StringField(\n validators=[wtforms.validators.DataRequired(message=\"Specify the token scope\")]\n )\n\n def validate_description(self, field):\n description = field.data\n\n if (\n self.macaroon_service.get_macaroon_by_description(self.user_id, description)\n is not None\n ):\n raise wtforms.validators.ValidationError(\"API token name already in use\")\n\n def validate_token_scope(self, field):\n scope = field.data\n\n try:\n _, scope_kind = scope.split(\":\", 1)\n except ValueError:\n raise wtforms.ValidationError(f\"Unknown token scope: {scope}\")\n\n if scope_kind == \"unspecified\":\n raise wtforms.ValidationError(f\"Specify the token scope\")\n\n if scope_kind == \"user\":\n self.validated_scope = scope_kind\n return\n\n try:\n scope_kind, scope_value = scope_kind.split(\":\", 1)\n except ValueError:\n raise 
wtforms.ValidationError(f\"Unknown token scope: {scope}\")\n\n if scope_kind != \"project\":\n raise wtforms.ValidationError(f\"Unknown token scope: {scope}\")\n if scope_value not in self.project_names:\n raise wtforms.ValidationError(\n f\"Unknown or invalid project name: {scope_value}\"\n )\n\n self.validated_scope = {\"projects\": [scope_value]}\n\n\nclass DeleteMacaroonForm(forms.Form):\n __params__ = [\"macaroon_id\"]\n\n macaroon_id = wtforms.StringField(\n validators=[wtforms.validators.DataRequired(message=\"Identifier required\")]\n )\n\n def __init__(self, *args, macaroon_service, **kwargs):\n super().__init__(*args, **kwargs)\n self.macaroon_service = macaroon_service\n\n def validate_macaroon_id(self, field):\n macaroon_id = field.data\n if self.macaroon_service.find_macaroon(macaroon_id) is None:\n raise wtforms.validators.ValidationError(\"No such macaroon\")\n", "path": "warehouse/manage/forms.py" } ]
[ { "content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport json\n\nimport wtforms\n\nimport warehouse.utils.otp as otp\nimport warehouse.utils.webauthn as webauthn\n\nfrom warehouse import forms\nfrom warehouse.accounts.forms import (\n NewEmailMixin,\n NewPasswordMixin,\n PasswordMixin,\n TOTPValueMixin,\n WebAuthnCredentialMixin,\n)\n\n\nclass RoleNameMixin:\n\n role_name = wtforms.SelectField(\n \"Select role\",\n choices=[(\"\", \"Select role\"), (\"Maintainer\", \"Maintainer\"), (\"Owner\", \"Owner\")],\n validators=[wtforms.validators.DataRequired(message=\"Select role\")],\n )\n\n\nclass UsernameMixin:\n\n username = wtforms.StringField(\n validators=[wtforms.validators.DataRequired(message=\"Specify username\")]\n )\n\n def validate_username(self, field):\n userid = self.user_service.find_userid(field.data)\n\n if userid is None:\n raise wtforms.validators.ValidationError(\n \"No user found with that username. Try again.\"\n )\n\n\nclass CreateRoleForm(RoleNameMixin, UsernameMixin, forms.Form):\n def __init__(self, *args, user_service, **kwargs):\n super().__init__(*args, **kwargs)\n self.user_service = user_service\n\n\nclass ChangeRoleForm(RoleNameMixin, forms.Form):\n pass\n\n\nclass SaveAccountForm(forms.Form):\n\n __params__ = [\"name\"]\n\n name = wtforms.StringField()\n\n\nclass AddEmailForm(NewEmailMixin, forms.Form):\n\n __params__ = [\"email\"]\n\n def __init__(self, *args, user_service, user_id, **kwargs):\n super().__init__(*args, **kwargs)\n self.user_service = user_service\n self.user_id = user_id\n\n\nclass ChangePasswordForm(PasswordMixin, NewPasswordMixin, forms.Form):\n\n __params__ = [\"password\", \"new_password\", \"password_confirm\"]\n\n def __init__(self, *args, user_service, **kwargs):\n super().__init__(*args, **kwargs)\n self.user_service = user_service\n\n\nclass DeleteTOTPForm(UsernameMixin, forms.Form):\n\n __params__ = [\"confirm_username\"]\n\n def __init__(self, *args, user_service, **kwargs):\n super().__init__(*args, **kwargs)\n self.user_service = user_service\n\n\nclass ProvisionTOTPForm(TOTPValueMixin, forms.Form):\n\n __params__ = [\"totp_value\"]\n\n def __init__(self, *args, totp_secret, **kwargs):\n super().__init__(*args, **kwargs)\n self.totp_secret = totp_secret\n\n def validate_totp_value(self, field):\n totp_value = field.data.encode(\"utf8\")\n if not otp.verify_totp(self.totp_secret, totp_value):\n raise wtforms.validators.ValidationError(\"Invalid TOTP code. 
Try again?\")\n\n\nclass DeleteWebAuthnForm(forms.Form):\n __params__ = [\"confirm_device_name\"]\n\n label = wtforms.StringField(\n validators=[\n wtforms.validators.DataRequired(message=\"Specify a device name\"),\n wtforms.validators.Length(\n max=64, message=(\"Label must be 64 characters or less\")\n ),\n ]\n )\n\n def __init__(self, *args, user_service, user_id, **kwargs):\n super().__init__(*args, **kwargs)\n self.user_service = user_service\n self.user_id = user_id\n\n def validate_label(self, field):\n label = field.data\n\n webauthn = self.user_service.get_webauthn_by_label(self.user_id, label)\n if webauthn is None:\n raise wtforms.validators.ValidationError(\"No WebAuthn key with given label\")\n self.webauthn = webauthn\n\n\nclass ProvisionWebAuthnForm(WebAuthnCredentialMixin, forms.Form):\n __params__ = [\"label\", \"credential\"]\n\n label = wtforms.StringField(\n validators=[\n wtforms.validators.DataRequired(message=\"Specify a label\"),\n wtforms.validators.Length(\n max=64, message=(\"Label must be 64 characters or less\")\n ),\n ]\n )\n\n def __init__(\n self, *args, user_service, user_id, challenge, rp_id, origin, **kwargs\n ):\n super().__init__(*args, **kwargs)\n self.user_service = user_service\n self.user_id = user_id\n self.challenge = challenge\n self.rp_id = rp_id\n self.origin = origin\n\n def validate_credential(self, field):\n try:\n credential_dict = json.loads(field.data.encode(\"utf8\"))\n except json.JSONDecodeError:\n raise wtforms.validators.ValidationError(\n \"Invalid WebAuthn credential: Bad payload\"\n )\n\n try:\n validated_credential = self.user_service.verify_webauthn_credential(\n credential_dict,\n challenge=self.challenge,\n rp_id=self.rp_id,\n origin=self.origin,\n )\n except webauthn.RegistrationRejectedException as e:\n raise wtforms.validators.ValidationError(str(e))\n\n self.validated_credential = validated_credential\n\n def validate_label(self, field):\n label = field.data\n\n if self.user_service.get_webauthn_by_label(self.user_id, label) is not None:\n raise wtforms.validators.ValidationError(f\"Label '{label}' already in use\")\n\n\nclass CreateMacaroonForm(forms.Form):\n __params__ = [\"description\", \"token_scope\"]\n\n def __init__(self, *args, user_id, macaroon_service, project_names, **kwargs):\n super().__init__(*args, **kwargs)\n self.user_id = user_id\n self.macaroon_service = macaroon_service\n self.project_names = project_names\n\n description = wtforms.StringField(\n validators=[\n wtforms.validators.DataRequired(message=\"Specify a token name\"),\n wtforms.validators.Length(\n max=100, message=\"Description must be 100 characters or less\"\n ),\n ]\n )\n\n token_scope = wtforms.StringField(\n validators=[wtforms.validators.DataRequired(message=\"Specify the token scope\")]\n )\n\n def validate_description(self, field):\n description = field.data\n\n if (\n self.macaroon_service.get_macaroon_by_description(self.user_id, description)\n is not None\n ):\n raise wtforms.validators.ValidationError(\"API token name already in use\")\n\n def validate_token_scope(self, field):\n scope = field.data\n\n try:\n _, scope_kind = scope.split(\":\", 1)\n except ValueError:\n raise wtforms.ValidationError(f\"Unknown token scope: {scope}\")\n\n if scope_kind == \"unspecified\":\n raise wtforms.ValidationError(f\"Specify the token scope\")\n\n if scope_kind == \"user\":\n self.validated_scope = scope_kind\n return\n\n try:\n scope_kind, scope_value = scope_kind.split(\":\", 1)\n except ValueError:\n raise 
wtforms.ValidationError(f\"Unknown token scope: {scope}\")\n\n if scope_kind != \"project\":\n raise wtforms.ValidationError(f\"Unknown token scope: {scope}\")\n if scope_value not in self.project_names:\n raise wtforms.ValidationError(\n f\"Unknown or invalid project name: {scope_value}\"\n )\n\n self.validated_scope = {\"projects\": [scope_value]}\n\n\nclass DeleteMacaroonForm(forms.Form):\n __params__ = [\"macaroon_id\"]\n\n macaroon_id = wtforms.StringField(\n validators=[wtforms.validators.DataRequired(message=\"Identifier required\")]\n )\n\n def __init__(self, *args, macaroon_service, **kwargs):\n super().__init__(*args, **kwargs)\n self.macaroon_service = macaroon_service\n\n def validate_macaroon_id(self, field):\n macaroon_id = field.data\n if self.macaroon_service.find_macaroon(macaroon_id) is None:\n raise wtforms.validators.ValidationError(\"No such macaroon\")\n", "path": "warehouse/manage/forms.py" } ]
diff --git a/warehouse/manage/forms.py b/warehouse/manage/forms.py index 25227ca778c4..64dcf92ee79e 100644 --- a/warehouse/manage/forms.py +++ b/warehouse/manage/forms.py @@ -31,7 +31,7 @@ class RoleNameMixin: role_name = wtforms.SelectField( "Select role", - choices=[("Maintainer", "Maintainer"), ("Owner", "Owner")], + choices=[("", "Select role"), ("Maintainer", "Maintainer"), ("Owner", "Owner")], validators=[wtforms.validators.DataRequired(message="Select role")], )
getmoto__moto-2114
Lambda publish_version returns wrong status code
In boto3, when a Lambda publish_version call succeeds, boto3 returns HTTP status code 201, but moto returns HTTP status code 200.

moto and boto versions:
```
boto3    1.9.71
botocore 1.12.71
moto     1.3.7
```
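A minimal way to observe the mismatch (a sketch only; it assumes moto's Lambda mock is active and that a function named "testFunction" has already been created):

```python
import boto3

# Assumes the moto Lambda mock is active and "testFunction" exists
# (e.g. created earlier via client.create_function).
client = boto3.client("lambda", region_name="us-west-2")

response = client.publish_version(FunctionName="testFunction")
# Against real AWS this is 201; moto currently reports 200.
print(response["ResponseMetadata"]["HTTPStatusCode"])
```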
[ { "content": "from __future__ import unicode_literals\n\nimport json\n\ntry:\n from urllib import unquote\nexcept ImportError:\n from urllib.parse import unquote\n\nfrom moto.core.utils import amz_crc32, amzn_request_id, path_url\nfrom moto.core.responses import BaseResponse\nfrom .models import lambda_backends\n\n\nclass LambdaResponse(BaseResponse):\n @property\n def json_body(self):\n \"\"\"\n :return: JSON\n :rtype: dict\n \"\"\"\n return json.loads(self.body)\n\n @property\n def lambda_backend(self):\n \"\"\"\n Get backend\n :return: Lambda Backend\n :rtype: moto.awslambda.models.LambdaBackend\n \"\"\"\n return lambda_backends[self.region]\n\n def root(self, request, full_url, headers):\n self.setup_class(request, full_url, headers)\n if request.method == 'GET':\n return self._list_functions(request, full_url, headers)\n elif request.method == 'POST':\n return self._create_function(request, full_url, headers)\n else:\n raise ValueError(\"Cannot handle request\")\n\n def function(self, request, full_url, headers):\n self.setup_class(request, full_url, headers)\n if request.method == 'GET':\n return self._get_function(request, full_url, headers)\n elif request.method == 'DELETE':\n return self._delete_function(request, full_url, headers)\n else:\n raise ValueError(\"Cannot handle request\")\n\n def versions(self, request, full_url, headers):\n self.setup_class(request, full_url, headers)\n if request.method == 'GET':\n # This is ListVersionByFunction\n\n path = request.path if hasattr(request, 'path') else path_url(request.url)\n function_name = path.split('/')[-2]\n return self._list_versions_by_function(function_name)\n\n elif request.method == 'POST':\n return self._publish_function(request, full_url, headers)\n else:\n raise ValueError(\"Cannot handle request\")\n\n @amz_crc32\n @amzn_request_id\n def invoke(self, request, full_url, headers):\n self.setup_class(request, full_url, headers)\n if request.method == 'POST':\n return self._invoke(request, full_url)\n else:\n raise ValueError(\"Cannot handle request\")\n\n @amz_crc32\n @amzn_request_id\n def invoke_async(self, request, full_url, headers):\n self.setup_class(request, full_url, headers)\n if request.method == 'POST':\n return self._invoke_async(request, full_url)\n else:\n raise ValueError(\"Cannot handle request\")\n\n def tag(self, request, full_url, headers):\n self.setup_class(request, full_url, headers)\n if request.method == 'GET':\n return self._list_tags(request, full_url)\n elif request.method == 'POST':\n return self._tag_resource(request, full_url)\n elif request.method == 'DELETE':\n return self._untag_resource(request, full_url)\n else:\n raise ValueError(\"Cannot handle {0} request\".format(request.method))\n\n def policy(self, request, full_url, headers):\n if request.method == 'GET':\n return self._get_policy(request, full_url, headers)\n if request.method == 'POST':\n return self._add_policy(request, full_url, headers)\n\n def _add_policy(self, request, full_url, headers):\n path = request.path if hasattr(request, 'path') else path_url(request.url)\n function_name = path.split('/')[-2]\n if self.lambda_backend.get_function(function_name):\n policy = request.body.decode('utf8')\n self.lambda_backend.add_policy(function_name, policy)\n return 200, {}, json.dumps(dict(Statement=policy))\n else:\n return 404, {}, \"{}\"\n\n def _get_policy(self, request, full_url, headers):\n path = request.path if hasattr(request, 'path') else path_url(request.url)\n function_name = path.split('/')[-2]\n if 
self.lambda_backend.get_function(function_name):\n lambda_function = self.lambda_backend.get_function(function_name)\n return 200, {}, json.dumps(dict(Policy=\"{\\\"Statement\\\":[\" + lambda_function.policy + \"]}\"))\n else:\n return 404, {}, \"{}\"\n\n def _invoke(self, request, full_url):\n response_headers = {}\n\n function_name = self.path.rsplit('/', 2)[-2]\n qualifier = self._get_param('qualifier')\n\n fn = self.lambda_backend.get_function(function_name, qualifier)\n if fn:\n payload = fn.invoke(self.body, self.headers, response_headers)\n response_headers['Content-Length'] = str(len(payload))\n return 202, response_headers, payload\n else:\n return 404, response_headers, \"{}\"\n\n def _invoke_async(self, request, full_url):\n response_headers = {}\n\n function_name = self.path.rsplit('/', 3)[-3]\n\n fn = self.lambda_backend.get_function(function_name, None)\n if fn:\n payload = fn.invoke(self.body, self.headers, response_headers)\n response_headers['Content-Length'] = str(len(payload))\n return 202, response_headers, payload\n else:\n return 404, response_headers, \"{}\"\n\n def _list_functions(self, request, full_url, headers):\n result = {\n 'Functions': []\n }\n\n for fn in self.lambda_backend.list_functions():\n json_data = fn.get_configuration()\n\n result['Functions'].append(json_data)\n\n return 200, {}, json.dumps(result)\n\n def _list_versions_by_function(self, function_name):\n result = {\n 'Versions': []\n }\n\n functions = self.lambda_backend.list_versions_by_function(function_name)\n if functions:\n for fn in functions:\n json_data = fn.get_configuration()\n result['Versions'].append(json_data)\n\n return 200, {}, json.dumps(result)\n\n def _create_function(self, request, full_url, headers):\n try:\n fn = self.lambda_backend.create_function(self.json_body)\n except ValueError as e:\n return 400, {}, json.dumps({\"Error\": {\"Code\": e.args[0], \"Message\": e.args[1]}})\n else:\n config = fn.get_configuration()\n return 201, {}, json.dumps(config)\n\n def _publish_function(self, request, full_url, headers):\n function_name = self.path.rsplit('/', 2)[-2]\n\n fn = self.lambda_backend.publish_function(function_name)\n if fn:\n config = fn.get_configuration()\n return 200, {}, json.dumps(config)\n else:\n return 404, {}, \"{}\"\n\n def _delete_function(self, request, full_url, headers):\n function_name = self.path.rsplit('/', 1)[-1]\n qualifier = self._get_param('Qualifier', None)\n\n if self.lambda_backend.delete_function(function_name, qualifier):\n return 204, {}, \"\"\n else:\n return 404, {}, \"{}\"\n\n def _get_function(self, request, full_url, headers):\n function_name = self.path.rsplit('/', 1)[-1]\n qualifier = self._get_param('Qualifier', None)\n\n fn = self.lambda_backend.get_function(function_name, qualifier)\n\n if fn:\n code = fn.get_code()\n\n return 200, {}, json.dumps(code)\n else:\n return 404, {}, \"{}\"\n\n def _get_aws_region(self, full_url):\n region = self.region_regex.search(full_url)\n if region:\n return region.group(1)\n else:\n return self.default_region\n\n def _list_tags(self, request, full_url):\n function_arn = unquote(self.path.rsplit('/', 1)[-1])\n\n fn = self.lambda_backend.get_function_by_arn(function_arn)\n if fn:\n return 200, {}, json.dumps({'Tags': fn.tags})\n else:\n return 404, {}, \"{}\"\n\n def _tag_resource(self, request, full_url):\n function_arn = unquote(self.path.rsplit('/', 1)[-1])\n\n if self.lambda_backend.tag_resource(function_arn, self.json_body['Tags']):\n return 200, {}, \"{}\"\n else:\n return 404, {}, \"{}\"\n\n 
def _untag_resource(self, request, full_url):\n function_arn = unquote(self.path.rsplit('/', 1)[-1])\n tag_keys = self.querystring['tagKeys']\n\n if self.lambda_backend.untag_resource(function_arn, tag_keys):\n return 204, {}, \"{}\"\n else:\n return 404, {}, \"{}\"\n", "path": "moto/awslambda/responses.py" } ]
[ { "content": "from __future__ import unicode_literals\n\nimport json\n\ntry:\n from urllib import unquote\nexcept ImportError:\n from urllib.parse import unquote\n\nfrom moto.core.utils import amz_crc32, amzn_request_id, path_url\nfrom moto.core.responses import BaseResponse\nfrom .models import lambda_backends\n\n\nclass LambdaResponse(BaseResponse):\n @property\n def json_body(self):\n \"\"\"\n :return: JSON\n :rtype: dict\n \"\"\"\n return json.loads(self.body)\n\n @property\n def lambda_backend(self):\n \"\"\"\n Get backend\n :return: Lambda Backend\n :rtype: moto.awslambda.models.LambdaBackend\n \"\"\"\n return lambda_backends[self.region]\n\n def root(self, request, full_url, headers):\n self.setup_class(request, full_url, headers)\n if request.method == 'GET':\n return self._list_functions(request, full_url, headers)\n elif request.method == 'POST':\n return self._create_function(request, full_url, headers)\n else:\n raise ValueError(\"Cannot handle request\")\n\n def function(self, request, full_url, headers):\n self.setup_class(request, full_url, headers)\n if request.method == 'GET':\n return self._get_function(request, full_url, headers)\n elif request.method == 'DELETE':\n return self._delete_function(request, full_url, headers)\n else:\n raise ValueError(\"Cannot handle request\")\n\n def versions(self, request, full_url, headers):\n self.setup_class(request, full_url, headers)\n if request.method == 'GET':\n # This is ListVersionByFunction\n\n path = request.path if hasattr(request, 'path') else path_url(request.url)\n function_name = path.split('/')[-2]\n return self._list_versions_by_function(function_name)\n\n elif request.method == 'POST':\n return self._publish_function(request, full_url, headers)\n else:\n raise ValueError(\"Cannot handle request\")\n\n @amz_crc32\n @amzn_request_id\n def invoke(self, request, full_url, headers):\n self.setup_class(request, full_url, headers)\n if request.method == 'POST':\n return self._invoke(request, full_url)\n else:\n raise ValueError(\"Cannot handle request\")\n\n @amz_crc32\n @amzn_request_id\n def invoke_async(self, request, full_url, headers):\n self.setup_class(request, full_url, headers)\n if request.method == 'POST':\n return self._invoke_async(request, full_url)\n else:\n raise ValueError(\"Cannot handle request\")\n\n def tag(self, request, full_url, headers):\n self.setup_class(request, full_url, headers)\n if request.method == 'GET':\n return self._list_tags(request, full_url)\n elif request.method == 'POST':\n return self._tag_resource(request, full_url)\n elif request.method == 'DELETE':\n return self._untag_resource(request, full_url)\n else:\n raise ValueError(\"Cannot handle {0} request\".format(request.method))\n\n def policy(self, request, full_url, headers):\n if request.method == 'GET':\n return self._get_policy(request, full_url, headers)\n if request.method == 'POST':\n return self._add_policy(request, full_url, headers)\n\n def _add_policy(self, request, full_url, headers):\n path = request.path if hasattr(request, 'path') else path_url(request.url)\n function_name = path.split('/')[-2]\n if self.lambda_backend.get_function(function_name):\n policy = request.body.decode('utf8')\n self.lambda_backend.add_policy(function_name, policy)\n return 200, {}, json.dumps(dict(Statement=policy))\n else:\n return 404, {}, \"{}\"\n\n def _get_policy(self, request, full_url, headers):\n path = request.path if hasattr(request, 'path') else path_url(request.url)\n function_name = path.split('/')[-2]\n if 
self.lambda_backend.get_function(function_name):\n lambda_function = self.lambda_backend.get_function(function_name)\n return 200, {}, json.dumps(dict(Policy=\"{\\\"Statement\\\":[\" + lambda_function.policy + \"]}\"))\n else:\n return 404, {}, \"{}\"\n\n def _invoke(self, request, full_url):\n response_headers = {}\n\n function_name = self.path.rsplit('/', 2)[-2]\n qualifier = self._get_param('qualifier')\n\n fn = self.lambda_backend.get_function(function_name, qualifier)\n if fn:\n payload = fn.invoke(self.body, self.headers, response_headers)\n response_headers['Content-Length'] = str(len(payload))\n return 202, response_headers, payload\n else:\n return 404, response_headers, \"{}\"\n\n def _invoke_async(self, request, full_url):\n response_headers = {}\n\n function_name = self.path.rsplit('/', 3)[-3]\n\n fn = self.lambda_backend.get_function(function_name, None)\n if fn:\n payload = fn.invoke(self.body, self.headers, response_headers)\n response_headers['Content-Length'] = str(len(payload))\n return 202, response_headers, payload\n else:\n return 404, response_headers, \"{}\"\n\n def _list_functions(self, request, full_url, headers):\n result = {\n 'Functions': []\n }\n\n for fn in self.lambda_backend.list_functions():\n json_data = fn.get_configuration()\n\n result['Functions'].append(json_data)\n\n return 200, {}, json.dumps(result)\n\n def _list_versions_by_function(self, function_name):\n result = {\n 'Versions': []\n }\n\n functions = self.lambda_backend.list_versions_by_function(function_name)\n if functions:\n for fn in functions:\n json_data = fn.get_configuration()\n result['Versions'].append(json_data)\n\n return 200, {}, json.dumps(result)\n\n def _create_function(self, request, full_url, headers):\n try:\n fn = self.lambda_backend.create_function(self.json_body)\n except ValueError as e:\n return 400, {}, json.dumps({\"Error\": {\"Code\": e.args[0], \"Message\": e.args[1]}})\n else:\n config = fn.get_configuration()\n return 201, {}, json.dumps(config)\n\n def _publish_function(self, request, full_url, headers):\n function_name = self.path.rsplit('/', 2)[-2]\n\n fn = self.lambda_backend.publish_function(function_name)\n if fn:\n config = fn.get_configuration()\n return 201, {}, json.dumps(config)\n else:\n return 404, {}, \"{}\"\n\n def _delete_function(self, request, full_url, headers):\n function_name = self.path.rsplit('/', 1)[-1]\n qualifier = self._get_param('Qualifier', None)\n\n if self.lambda_backend.delete_function(function_name, qualifier):\n return 204, {}, \"\"\n else:\n return 404, {}, \"{}\"\n\n def _get_function(self, request, full_url, headers):\n function_name = self.path.rsplit('/', 1)[-1]\n qualifier = self._get_param('Qualifier', None)\n\n fn = self.lambda_backend.get_function(function_name, qualifier)\n\n if fn:\n code = fn.get_code()\n\n return 200, {}, json.dumps(code)\n else:\n return 404, {}, \"{}\"\n\n def _get_aws_region(self, full_url):\n region = self.region_regex.search(full_url)\n if region:\n return region.group(1)\n else:\n return self.default_region\n\n def _list_tags(self, request, full_url):\n function_arn = unquote(self.path.rsplit('/', 1)[-1])\n\n fn = self.lambda_backend.get_function_by_arn(function_arn)\n if fn:\n return 200, {}, json.dumps({'Tags': fn.tags})\n else:\n return 404, {}, \"{}\"\n\n def _tag_resource(self, request, full_url):\n function_arn = unquote(self.path.rsplit('/', 1)[-1])\n\n if self.lambda_backend.tag_resource(function_arn, self.json_body['Tags']):\n return 200, {}, \"{}\"\n else:\n return 404, {}, \"{}\"\n\n 
def _untag_resource(self, request, full_url):\n function_arn = unquote(self.path.rsplit('/', 1)[-1])\n tag_keys = self.querystring['tagKeys']\n\n if self.lambda_backend.untag_resource(function_arn, tag_keys):\n return 204, {}, \"{}\"\n else:\n return 404, {}, \"{}\"\n", "path": "moto/awslambda/responses.py" } ]
diff --git a/moto/awslambda/responses.py b/moto/awslambda/responses.py index d4eb73bc3137..1c43ef84bcf1 100644 --- a/moto/awslambda/responses.py +++ b/moto/awslambda/responses.py @@ -183,7 +183,7 @@ def _publish_function(self, request, full_url, headers): fn = self.lambda_backend.publish_function(function_name) if fn: config = fn.get_configuration() - return 200, {}, json.dumps(config) + return 201, {}, json.dumps(config) else: return 404, {}, "{}" diff --git a/tests/test_awslambda/test_lambda.py b/tests/test_awslambda/test_lambda.py index 7f3b44b79555..479aaaa8a17c 100644 --- a/tests/test_awslambda/test_lambda.py +++ b/tests/test_awslambda/test_lambda.py @@ -471,7 +471,8 @@ def test_publish(): function_list['Functions'].should.have.length_of(1) latest_arn = function_list['Functions'][0]['FunctionArn'] - conn.publish_version(FunctionName='testFunction') + res = conn.publish_version(FunctionName='testFunction') + assert res['ResponseMetadata']['HTTPStatusCode'] == 201 function_list = conn.list_functions() function_list['Functions'].should.have.length_of(2) @@ -853,8 +854,8 @@ def test_list_versions_by_function(): Publish=True, ) - conn.publish_version(FunctionName='testFunction') - + res = conn.publish_version(FunctionName='testFunction') + assert res['ResponseMetadata']['HTTPStatusCode'] == 201 versions = conn.list_versions_by_function(FunctionName='testFunction') assert versions['Versions'][0]['FunctionArn'] == 'arn:aws:lambda:us-west-2:123456789012:function:testFunction:$LATEST'
aws__aws-cli-573
aws ec2 replace-network-acl-entry --protocol ?
How can I specify a protocol? When I specify --protocol tcp or --protocol udp, the command fails:

A client error (InvalidParameterValue) occurred when calling the ReplaceNetworkAclEntry operation: Invalid value 'tcp' for IP protocol. Unknown protocol.

A client error (InvalidParameterValue) occurred when calling the ReplaceNetworkAclEntry operation: Invalid value 'udp' for IP protocol. Unknown protocol.

The command create-network-acl-entry, by contrast, accepts --protocol tcp and --protocol udp.
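For context, the CLI already translates protocol names to numbers for create-network-acl-entry through a before-parameter-build handler; that handler is simply not registered for ReplaceNetworkAclEntry (see the ec2protocolarg customization below). A condensed, illustrative sketch of the translation, with hypothetical function and constant names:

```python
# Illustrative version of the name-to-number mapping the customization applies
# before the request parameters are built.
PROTOCOL_NUMBERS = {"tcp": "6", "udp": "17", "icmp": "1", "all": "-1"}


def fix_protocol_param(params):
    protocol = params.get("protocol")
    if protocol in PROTOCOL_NUMBERS:
        params["protocol"] = PROTOCOL_NUMBERS[protocol]
    return params


print(fix_protocol_param({"protocol": "tcp"}))  # {'protocol': '6'}
```

Until the handler is also registered for ReplaceNetworkAclEntry, passing the numeric protocol value directly (e.g. --protocol 6 for TCP, --protocol 17 for UDP) should work as a workaround, since numeric values pass through unchanged.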
[ { "content": "# Copyright 2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\"). You\n# may not use this file except in compliance with the License. A copy of\n# the License is located at\n#\n# http://aws.amazon.com/apache2.0/\n#\n# or in the \"license\" file accompanying this file. This file is\n# distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF\n# ANY KIND, either express or implied. See the License for the specific\n# language governing permissions and limitations under the License.\n\"\"\"\nThis customization allows the user to specify the values \"tcp\", \"udp\",\nor \"icmp\" as values for the --protocol parameter. The actual Protocol\nparameter of the operation accepts only integer protocol numbers.\n\"\"\"\n\ndef _fix_args(operation, endpoint, params, **kwargs):\n if 'protocol' in params:\n if params['protocol'] == 'tcp':\n params['protocol'] = '6'\n elif params['protocol'] == 'udp':\n params['protocol'] = '17'\n elif params['protocol'] == 'icmp':\n params['protocol'] = '1'\n elif params['protocol'] == 'all':\n params['protocol'] = '-1'\n\n\ndef register_protocol_args(cli):\n ('before-parameter-build.ec2.RunInstances', _fix_args),\n cli.register('before-parameter-build.ec2.CreateNetworkAclEntry',\n _fix_args)\n \n", "path": "awscli/customizations/ec2protocolarg.py" } ]
[ { "content": "# Copyright 2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\"). You\n# may not use this file except in compliance with the License. A copy of\n# the License is located at\n#\n# http://aws.amazon.com/apache2.0/\n#\n# or in the \"license\" file accompanying this file. This file is\n# distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF\n# ANY KIND, either express or implied. See the License for the specific\n# language governing permissions and limitations under the License.\n\"\"\"\nThis customization allows the user to specify the values \"tcp\", \"udp\",\nor \"icmp\" as values for the --protocol parameter. The actual Protocol\nparameter of the operation accepts only integer protocol numbers.\n\"\"\"\n\ndef _fix_args(operation, endpoint, params, **kwargs):\n if 'protocol' in params:\n if params['protocol'] == 'tcp':\n params['protocol'] = '6'\n elif params['protocol'] == 'udp':\n params['protocol'] = '17'\n elif params['protocol'] == 'icmp':\n params['protocol'] = '1'\n elif params['protocol'] == 'all':\n params['protocol'] = '-1'\n\n\ndef register_protocol_args(cli):\n cli.register('before-parameter-build.ec2.CreateNetworkAclEntry',\n _fix_args)\n cli.register('before-parameter-build.ec2.ReplaceNetworkAclEntry',\n _fix_args)\n \n", "path": "awscli/customizations/ec2protocolarg.py" } ]
diff --git a/awscli/customizations/ec2protocolarg.py b/awscli/customizations/ec2protocolarg.py index f1fb4d46d418..d598ad87a8a4 100644 --- a/awscli/customizations/ec2protocolarg.py +++ b/awscli/customizations/ec2protocolarg.py @@ -29,7 +29,8 @@ def _fix_args(operation, endpoint, params, **kwargs): def register_protocol_args(cli): - ('before-parameter-build.ec2.RunInstances', _fix_args), cli.register('before-parameter-build.ec2.CreateNetworkAclEntry', _fix_args) + cli.register('before-parameter-build.ec2.ReplaceNetworkAclEntry', + _fix_args) diff --git a/tests/unit/ec2/test_replace_network_acl_entry.py b/tests/unit/ec2/test_replace_network_acl_entry.py new file mode 100644 index 000000000000..6198011616d6 --- /dev/null +++ b/tests/unit/ec2/test_replace_network_acl_entry.py @@ -0,0 +1,120 @@ +#!/usr/bin/env python +# Copyright 2012-2013 Amazon.com, Inc. or its affiliates. All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"). You +# may not use this file except in compliance with the License. A copy of +# the License is located at +# +# http://aws.amazon.com/apache2.0/ +# +# or in the "license" file accompanying this file. This file is +# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF +# ANY KIND, either express or implied. See the License for the specific +# language governing permissions and limitations under the License. +from tests.unit import BaseAWSCommandParamsTest + + +class TestReplaceNetworkACLEntry(BaseAWSCommandParamsTest): + + prefix = 'ec2 replace-network-acl-entry' + + def test_tcp(self): + cmdline = self.prefix + cmdline += ' --network-acl-id acl-12345678' + cmdline += ' --rule-number 100' + cmdline += ' --protocol tcp' + cmdline += ' --rule-action allow' + cmdline += ' --ingress' + cmdline += ' --port-range From=22,To=22' + cmdline += ' --cidr-block 0.0.0.0/0' + result = {'NetworkAclId': 'acl-12345678', + 'RuleNumber': '100', + 'Protocol': '6', + 'RuleAction': 'allow', + 'Egress': 'false', + 'CidrBlock': '0.0.0.0/0', + 'PortRange.From': '22', + 'PortRange.To': '22' + } + self.assert_params_for_cmd(cmdline, result) + + def test_udp(self): + cmdline = self.prefix + cmdline += ' --network-acl-id acl-12345678' + cmdline += ' --rule-number 100' + cmdline += ' --protocol udp' + cmdline += ' --rule-action allow' + cmdline += ' --ingress' + cmdline += ' --port-range From=22,To=22' + cmdline += ' --cidr-block 0.0.0.0/0' + result = {'NetworkAclId': 'acl-12345678', + 'RuleNumber': '100', + 'Protocol': '17', + 'RuleAction': 'allow', + 'Egress': 'false', + 'CidrBlock': '0.0.0.0/0', + 'PortRange.From': '22', + 'PortRange.To': '22' + } + self.assert_params_for_cmd(cmdline, result) + + def test_icmp(self): + cmdline = self.prefix + cmdline += ' --network-acl-id acl-12345678' + cmdline += ' --rule-number 100' + cmdline += ' --protocol icmp' + cmdline += ' --rule-action allow' + cmdline += ' --ingress' + cmdline += ' --port-range From=22,To=22' + cmdline += ' --cidr-block 0.0.0.0/0' + result = {'NetworkAclId': 'acl-12345678', + 'RuleNumber': '100', + 'Protocol': '1', + 'RuleAction': 'allow', + 'Egress': 'false', + 'CidrBlock': '0.0.0.0/0', + 'PortRange.From': '22', + 'PortRange.To': '22' + } + self.assert_params_for_cmd(cmdline, result) + + def test_all(self): + cmdline = self.prefix + cmdline += ' --network-acl-id acl-12345678' + cmdline += ' --rule-number 100' + cmdline += ' --protocol all' + cmdline += ' --rule-action allow' + cmdline += ' --ingress' + cmdline += ' --port-range From=22,To=22' + cmdline += ' --cidr-block 0.0.0.0/0' + 
result = {'NetworkAclId': 'acl-12345678', + 'RuleNumber': '100', + 'Protocol': '-1', + 'RuleAction': 'allow', + 'Egress': 'false', + 'CidrBlock': '0.0.0.0/0', + 'PortRange.From': '22', + 'PortRange.To': '22' + } + self.assert_params_for_cmd(cmdline, result) + + def test_number(self): + cmdline = self.prefix + cmdline += ' --network-acl-id acl-12345678' + cmdline += ' --rule-number 100' + cmdline += ' --protocol 99' + cmdline += ' --rule-action allow' + cmdline += ' --ingress' + cmdline += ' --port-range From=22,To=22' + cmdline += ' --cidr-block 0.0.0.0/0' + result = {'NetworkAclId': 'acl-12345678', + 'RuleNumber': '100', + 'Protocol': '99', + 'RuleAction': 'allow', + 'Egress': 'false', + 'CidrBlock': '0.0.0.0/0', + 'PortRange.From': '22', + 'PortRange.To': '22' + } + self.assert_params_for_cmd(cmdline, result) +
bentoml__BentoML-922
Yatai does not handle the STS assumed role in operator.get_arn_role_from_current_aws_user() **Describe the bug** Yatai causes the error: ``` Error: sagemaker deploy failed: INTERNAL:Not supported role type assumed-role; sts arn is arn:aws:sts::103365315157:assumed-role/<rolename>/<username> ``` because [bentoml/yatai/deployment/sagemaker/operator.py](https://github.com/bentoml/BentoML/blob/master/bentoml/yatai/deployment/sagemaker/operator.py) does not handle assumed role by checking type_role[0] is either "user", "root", or "role". ``` def get_arn_role_from_current_aws_user(): sts_client = boto3.client("sts") identity = sts_client.get_caller_identity() sts_arn = identity["Arn"] sts_arn_list = sts_arn.split(":") type_role = sts_arn_list[-1].split("/") iam_client = boto3.client("iam") if type_role[0] in ("user", "root"): role_list = iam_client.list_roles() arn = None for role in role_list["Roles"]: policy_document = role["AssumeRolePolicyDocument"] statement = policy_document["Statement"][0] if ( statement["Effect"] == "Allow" and statement["Principal"].get("Service", None) == "sagemaker.amazonaws.com" ): arn = role["Arn"] if arn is None: raise YataiDeploymentException( "Can't find proper Arn role for Sagemaker, please create one and try " "again" ) return arn elif type_role[0] == "role": role_response = iam_client.get_role(RoleName=type_role[1]) return role_response["Role"]["Arn"] raise YataiDeploymentException( "Not supported role type {}; sts arn is {}".format(type_role[0], sts_arn) # <----- ) ``` However, as in [Boto3 STS get_caller_identity] (https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/sts.html#STS.Client.get_caller_identity), type_role[0] can be "assumed-role". Which is common in AWS when a user need to switch roles depending on the environment. ``` response = client.get_caller_identity( ) print(response) ----- Expected Output: { 'Account': '123456789012', 'Arn': 'arn:aws:sts::123456789012:assumed-role/my-role-name/my-role-session-name', <----- "assumed-role" 'UserId': 'AKIAI44QH8DHBEXAMPLE:my-role-session-name', 'ResponseMetadata': { '...': '...', }, } ``` **To Reproduce** Steps to reproduce the behavior: 1. With AWS CLI, assume role (https://docs.aws.amazon.com/cli/latest/reference/sts/assume-role.html) 2. Run Sagemaker deployment. ``` $ bentoml sagemaker deploy my-first-sagemaker-deployment -b IrisClassifier:20200713155751_57AC77 --api-name predict ``` 3. See error **Expected behavior** Assumed role situation is handled and causes no error at deployment. **Screenshots/Logs** If applicable, add screenshots, logs or error outputs to help explain your problem. ``` $ bentoml sagemaker deploy my-first-sagemaker-deployment -b IrisClassifier:20200713155751_57AC77 --api-name predict ... ==> WARNING: A newer version of conda exists. 
<== current version: 4.8.2 latest version: 4.8.3 Please update conda by running $ conda update -n base -c defaults conda [2020-07-14 10:47:28,586] INFO - # # To activate this environment, use # # $ conda activate base # # To deactivate an active environment, use # # $ conda deactivate |[2020-07-14 10:47:28,921] INFO - + pip install -r ./requirements.txt /[2020-07-14 10:47:29,645] INFO - Collecting scikit-learn |[2020-07-14 10:47:29,694] INFO - Downloading scikit_learn-0.23.1-cp37-cp37m-manylinux1_x86_64.whl (6.8 MB) \[2020-07-14 10:47:31,479] INFO - Collecting pandas [2020-07-14 10:47:31,494] INFO - Downloading pandas-1.0.5-cp37-cp37m-manylinux1_x86_64.whl (10.1 MB) /[2020-07-14 10:47:33,745] INFO - Requirement already satisfied: bentoml==0.8.3 in /opt/conda/lib/python3.7/site-packages (from -r ./requirements.txt (line 3)) (0.8.3) \[2020-07-14 10:47:33,969] INFO - Collecting threadpoolctl>=2.0.0 [2020-07-14 10:47:33,981] INFO - Downloading threadpoolctl-2.1.0-py3-none-any.whl (12 kB) |[2020-07-14 10:47:34,294] INFO - Collecting scipy>=0.19.1 [2020-07-14 10:47:34,308] INFO - Downloading scipy-1.5.1-cp37-cp37m-manylinux1_x86_64.whl (25.9 MB) |[2020-07-14 10:47:40,018] INFO - Collecting joblib>=0.11 [2020-07-14 10:47:40,031] INFO - Downloading joblib-0.16.0-py3-none-any.whl (300 kB) \[2020-07-14 10:47:40,134] INFO - Requirement already satisfied: numpy>=1.13.3 in /opt/conda/lib/python3.7/site-packages (from scikit-learn->-r ./requirements.txt (line 1)) (1.19.0) [2020-07-14 10:47:40,136] INFO - Requirement already satisfied: python-dateutil>=2.6.1 in /opt/conda/lib/python3.7/site-packages (from pandas->-r ./requirements.txt (line 2)) (2.8.0) /[2020-07-14 10:47:40,325] INFO - Collecting pytz>=2017.2 [2020-07-14 10:47:40,339] INFO - Downloading pytz-2020.1-py2.py3-none-any.whl (510 kB) \[2020-07-14 10:47:40,544] INFO - Requirement already satisfied: gunicorn in /opt/conda/lib/python3.7/site-packages (from bentoml==0.8.3->-r ./requirements.txt (line 3)) (20.0.4) [2020-07-14 10:47:40,551] INFO - Requirement already satisfied: prometheus-client in /opt/conda/lib/python3.7/site-packages (from bentoml==0.8.3->-r ./requirements.txt (line 3)) (0.8.0) [2020-07-14 10:47:40,555] INFO - Requirement already satisfied: psutil in /opt/conda/lib/python3.7/site-packages (from bentoml==0.8.3->-r ./requirements.txt (line 3)) (5.7.0) [2020-07-14 10:47:40,559] INFO - Requirement already satisfied: alembic in /opt/conda/lib/python3.7/site-packages (from bentoml==0.8.3->-r ./requirements.txt (line 3)) (1.4.2) [2020-07-14 10:47:40,564] INFO - Requirement already satisfied: ruamel.yaml>=0.15.0 in /opt/conda/lib/python3.7/site-packages (from bentoml==0.8.3->-r ./requirements.txt (line 3)) (0.16.10) [2020-07-14 10:47:40,575] INFO - Requirement already satisfied: tabulate in /opt/conda/lib/python3.7/site-packages (from bentoml==0.8.3->-r ./requirements.txt (line 3)) (0.8.7) [2020-07-14 10:47:40,579] INFO - Requirement already satisfied: aiohttp in /opt/conda/lib/python3.7/site-packages (from bentoml==0.8.3->-r ./requirements.txt (line 3)) (3.6.2) [2020-07-14 10:47:40,592] INFO - Requirement already satisfied: certifi in /opt/conda/lib/python3.7/site-packages (from bentoml==0.8.3->-r ./requirements.txt (line 3)) (2020.6.20) [2020-07-14 10:47:40,594] INFO - Requirement already satisfied: python-json-logger in /opt/conda/lib/python3.7/site-packages (from bentoml==0.8.3->-r ./requirements.txt (line 3)) (0.1.11) [2020-07-14 10:47:40,595] INFO - Requirement already satisfied: requests in /opt/conda/lib/python3.7/site-packages 
(from bentoml==0.8.3->-r ./requirements.txt (line 3)) (2.22.0) [2020-07-14 10:47:40,611] INFO - Requirement already satisfied: sqlalchemy-utils in /opt/conda/lib/python3.7/site-packages (from bentoml==0.8.3->-r ./requirements.txt (line 3)) (0.36.7) -[2020-07-14 10:47:40,674] INFO - Requirement already satisfied: sqlalchemy>=1.3.0 in /opt/conda/lib/python3.7/site-packages (from bentoml==0.8.3->-r ./requirements.txt (line 3)) (1.3.18) [2020-07-14 10:47:40,688] INFO - Requirement already satisfied: grpcio<=1.27.2 in /opt/conda/lib/python3.7/site-packages (from bentoml==0.8.3->-r ./requirements.txt (line 3)) (1.27.2) [2020-07-14 10:47:40,692] INFO - Requirement already satisfied: click>=7.0 in /opt/conda/lib/python3.7/site-packages (from bentoml==0.8.3->-r ./requirements.txt (line 3)) (7.1.2) [2020-07-14 10:47:40,693] INFO - Requirement already satisfied: py-zipkin in /opt/conda/lib/python3.7/site-packages (from bentoml==0.8.3->-r ./requirements.txt (line 3)) (0.20.0) [2020-07-14 10:47:40,698] INFO - Requirement already satisfied: configparser in /opt/conda/lib/python3.7/site-packages (from bentoml==0.8.3->-r ./requirements.txt (line 3)) (5.0.0) [2020-07-14 10:47:40,707] INFO - Requirement already satisfied: flask in /opt/conda/lib/python3.7/site-packages (from bentoml==0.8.3->-r ./requirements.txt (line 3)) (1.1.2) [2020-07-14 10:47:40,724] INFO - Requirement already satisfied: packaging in /opt/conda/lib/python3.7/site-packages (from bentoml==0.8.3->-r ./requirements.txt (line 3)) (20.4) [2020-07-14 10:47:40,728] INFO - Requirement already satisfied: humanfriendly in /opt/conda/lib/python3.7/site-packages (from bentoml==0.8.3->-r ./requirements.txt (line 3)) (8.2) [2020-07-14 10:47:40,733] INFO - Requirement already satisfied: docker in /opt/conda/lib/python3.7/site-packages (from bentoml==0.8.3->-r ./requirements.txt (line 3)) (4.2.2) /[2020-07-14 10:47:40,747] INFO - Requirement already satisfied: boto3 in /opt/conda/lib/python3.7/site-packages (from bentoml==0.8.3->-r ./requirements.txt (line 3)) (1.14.17) [2020-07-14 10:47:40,751] INFO - Requirement already satisfied: cerberus in /opt/conda/lib/python3.7/site-packages (from bentoml==0.8.3->-r ./requirements.txt (line 3)) (1.3.2) [2020-07-14 10:47:40,754] INFO - Requirement already satisfied: protobuf>=3.6.0 in /opt/conda/lib/python3.7/site-packages (from bentoml==0.8.3->-r ./requirements.txt (line 3)) (3.12.2) [2020-07-14 10:47:40,757] INFO - Requirement already satisfied: six>=1.5 in /opt/conda/lib/python3.7/site-packages (from python-dateutil>=2.6.1->pandas->-r ./requirements.txt (line 2)) (1.14.0) [2020-07-14 10:47:40,759] INFO - Requirement already satisfied: setuptools>=3.0 in /opt/conda/lib/python3.7/site-packages (from gunicorn->bentoml==0.8.3->-r ./requirements.txt (line 3)) (45.2.0.post20200210) [2020-07-14 10:47:40,770] INFO - Requirement already satisfied: python-editor>=0.3 in /opt/conda/lib/python3.7/site-packages (from alembic->bentoml==0.8.3->-r ./requirements.txt (line 3)) (1.0.4) [2020-07-14 10:47:40,771] INFO - Requirement already satisfied: Mako in /opt/conda/lib/python3.7/site-packages (from alembic->bentoml==0.8.3->-r ./requirements.txt (line 3)) (1.1.3) [2020-07-14 10:47:40,777] INFO - Requirement already satisfied: ruamel.yaml.clib>=0.1.2; platform_python_implementation == "CPython" and python_version < "3.9" in /opt/conda/lib/python3.7/site-packages (from ruamel.yaml>=0.15.0->bentoml==0.8.3->-r ./requirements.txt (line 3)) (0.2.0) [2020-07-14 10:47:40,779] INFO - Requirement already satisfied: chardet<4.0,>=2.0 in 
/opt/conda/lib/python3.7/site-packages (from aiohttp->bentoml==0.8.3->-r ./requirements.txt (line 3)) (3.0.4) [2020-07-14 10:47:40,781] INFO - Requirement already satisfied: multidict<5.0,>=4.5 in /opt/conda/lib/python3.7/site-packages (from aiohttp->bentoml==0.8.3->-r ./requirements.txt (line 3)) (4.7.6) [2020-07-14 10:47:40,784] INFO - Requirement already satisfied: attrs>=17.3.0 in /opt/conda/lib/python3.7/site-packages (from aiohttp->bentoml==0.8.3->-r ./requirements.txt (line 3)) (19.3.0) [2020-07-14 10:47:40,827] INFO - Requirement already satisfied: async-timeout<4.0,>=3.0 in /opt/conda/lib/python3.7/site-packages (from aiohttp->bentoml==0.8.3->-r ./requirements.txt (line 3)) (3.0.1) [2020-07-14 10:47:40,829] INFO - Requirement already satisfied: yarl<2.0,>=1.0 in /opt/conda/lib/python3.7/site-packages (from aiohttp->bentoml==0.8.3->-r ./requirements.txt (line 3)) (1.4.2) [2020-07-14 10:47:40,836] INFO - Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /opt/conda/lib/python3.7/site-packages (from requests->bentoml==0.8.3->-r ./requirements.txt (line 3)) (1.25.8) |[2020-07-14 10:47:40,850] INFO - Requirement already satisfied: idna<2.9,>=2.5 in /opt/conda/lib/python3.7/site-packages (from requests->bentoml==0.8.3->-r ./requirements.txt (line 3)) (2.8) [2020-07-14 10:47:40,853] INFO - Requirement already satisfied: thriftpy2>=0.4.0 in /opt/conda/lib/python3.7/site-packages (from py-zipkin->bentoml==0.8.3->-r ./requirements.txt (line 3)) (0.4.11) [2020-07-14 10:47:40,866] INFO - Requirement already satisfied: Werkzeug>=0.15 in /opt/conda/lib/python3.7/site-packages (from flask->bentoml==0.8.3->-r ./requirements.txt (line 3)) (1.0.1) [2020-07-14 10:47:40,878] INFO - Requirement already satisfied: itsdangerous>=0.24 in /opt/conda/lib/python3.7/site-packages (from flask->bentoml==0.8.3->-r ./requirements.txt (line 3)) (1.1.0) [2020-07-14 10:47:40,880] INFO - Requirement already satisfied: Jinja2>=2.10.1 in /opt/conda/lib/python3.7/site-packages (from flask->bentoml==0.8.3->-r ./requirements.txt (line 3)) (2.11.2) [2020-07-14 10:47:40,885] INFO - Requirement already satisfied: pyparsing>=2.0.2 in /opt/conda/lib/python3.7/site-packages (from packaging->bentoml==0.8.3->-r ./requirements.txt (line 3)) (2.4.7) [2020-07-14 10:47:40,887] INFO - Requirement already satisfied: websocket-client>=0.32.0 in /opt/conda/lib/python3.7/site-packages (from docker->bentoml==0.8.3->-r ./requirements.txt (line 3)) (0.57.0) [2020-07-14 10:47:40,891] INFO - Requirement already satisfied: jmespath<1.0.0,>=0.7.1 in /opt/conda/lib/python3.7/site-packages (from boto3->bentoml==0.8.3->-r ./requirements.txt (line 3)) (0.10.0) [2020-07-14 10:47:40,893] INFO - Requirement already satisfied: botocore<1.18.0,>=1.17.17 in /opt/conda/lib/python3.7/site-packages (from boto3->bentoml==0.8.3->-r ./requirements.txt (line 3)) (1.17.17) [2020-07-14 10:47:40,903] INFO - Requirement already satisfied: s3transfer<0.4.0,>=0.3.0 in /opt/conda/lib/python3.7/site-packages (from boto3->bentoml==0.8.3->-r ./requirements.txt (line 3)) (0.3.3) [2020-07-14 10:47:40,907] INFO - Requirement already satisfied: MarkupSafe>=0.9.2 in /opt/conda/lib/python3.7/site-packages (from Mako->alembic->bentoml==0.8.3->-r ./requirements.txt (line 3)) (1.1.1) [2020-07-14 10:47:40,910] INFO - Requirement already satisfied: ply<4.0,>=3.4 in /opt/conda/lib/python3.7/site-packages (from thriftpy2>=0.4.0->py-zipkin->bentoml==0.8.3->-r ./requirements.txt (line 3)) (3.11) [2020-07-14 10:47:40,912] INFO - Requirement already satisfied: 
docutils<0.16,>=0.10 in /opt/conda/lib/python3.7/site-packages (from botocore<1.18.0,>=1.17.17->boto3->bentoml==0.8.3->-r ./requirements.txt (line 3)) (0.15.2) \[2020-07-14 10:47:41,027] INFO - Installing collected packages: threadpoolctl, scipy, joblib, scikit-learn, pytz, pandas -[2020-07-14 10:47:49,756] INFO - Successfully installed joblib-0.16.0 pandas-1.0.5 pytz-2020.1 scikit-learn-0.23.1 scipy-1.5.1 threadpoolctl-2.1.0 |[2020-07-14 10:47:49,994] INFO - + for filename in ./bundled_pip_dependencies/*.tar.gz + '[' -e './bundled_pip_dependencies/*.tar.gz' ']' + continue |[2020-07-14 10:47:53,748] INFO - ---> 895e4a390376 [2020-07-14 10:47:53,749] INFO - Step 9/9 : ENV PATH="/bento:$PATH" [2020-07-14 10:47:53,749] INFO - \[2020-07-14 10:47:53,797] INFO - ---> Running in 2c5c72de1601 -[2020-07-14 10:47:53,890] INFO - ---> 05b6fd2ed048 [2020-07-14 10:47:53,892] INFO - Successfully built 05b6fd2ed048 [2020-07-14 10:47:53,898] INFO - Successfully tagged 103365315157.dkr.ecr.ap-southeast-2.amazonaws.com/irisclassifier-sagemaker:20200713155751_57AC77 Error: sagemaker deploy failed: INTERNAL:Not supported role type assumed-role; sts arn is arn:aws:sts::103365315157:assumed-role/<rolename>/<username> ``` To give us more information for diagnosing the issue, make sure to enable debug logging: Add the following lines to your Python code before invoking BentoML: ```python import bentoml import logging bentoml.config().set('core', 'debug', 'true') bentoml.configure_logging(logging.DEBUG) ``` And use the `--verbose` option when running `bentoml` CLI command, e.g.: ```bash bentoml get IrisClassifier --verbose ``` **Environment:** - OS: [e.g. MacOS 10.15.5] - Python/BentoML Version [e.g. Python 3.7.6, BentoML-0.8.3]
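The failure at the end of the log comes from `get_arn_role_from_current_aws_user` in the operator source reproduced below: it derives the caller's role type by splitting the STS ARN, and only accepts `user`, `root`, and `role`, so a session running under an assumed role falls through to the "Not supported role type" exception. A minimal sketch of that parsing, using the redacted placeholder ARN from the error message (not a real account), illustrates the problem; the one-line change in the diff further down simply accepts `assumed-role` as well:

```python
# Sketch of the ARN handling in get_arn_role_from_current_aws_user
# (bentoml/yatai/deployment/sagemaker/operator.py); the ARN is the redacted
# placeholder from the error above, not a real account.
sts_arn = "arn:aws:sts::103365315157:assumed-role/<rolename>/<username>"

# Take the last ":"-separated segment, then split on "/" to get the role type.
type_role = sts_arn.split(":")[-1].split("/")
print(type_role[0])  # -> "assumed-role"

# Only IAM users ("user"/"root") and plain roles ("role") are handled, so an
# assumed-role caller hits the "Not supported role type ..." branch.
if type_role[0] in ("user", "root"):
    print("would search IAM roles trusted by sagemaker.amazonaws.com")
elif type_role[0] == "role":
    print("would call iam.get_role(RoleName=%r)" % type_role[1])
else:
    print("unsupported role type: %s" % type_role[0])
```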
[ { "content": "import base64\nimport json\nimport logging\nimport os\nimport shutil\nfrom urllib.parse import urlparse\n\nimport boto3\nimport docker\nfrom botocore.exceptions import ClientError\n\nfrom bentoml.exceptions import (\n YataiDeploymentException,\n AWSServiceError,\n InvalidArgument,\n BentoMLException,\n)\nfrom bentoml.saved_bundle import loader\nfrom bentoml.utils.tempdir import TempDirectory\nfrom bentoml.yatai.deployment.operator import DeploymentOperatorBase\nfrom bentoml.yatai.deployment.utils import (\n process_docker_api_line,\n generate_aws_compatible_string,\n get_default_aws_region,\n ensure_docker_available_or_raise,\n raise_if_api_names_not_found_in_bento_service_metadata,\n)\nfrom bentoml.yatai.proto.deployment_pb2 import (\n DeploymentState,\n ApplyDeploymentResponse,\n DeleteDeploymentResponse,\n DescribeDeploymentResponse,\n)\nfrom bentoml.yatai.proto.repository_pb2 import GetBentoRequest, BentoUri\nfrom bentoml.yatai.status import Status\n\nlogger = logging.getLogger(__name__)\n\n\nBENTO_SERVICE_SAGEMAKER_DOCKERFILE = \"\"\"\\\nFROM {docker_base_image}\n\n# the env var $PORT is required by heroku container runtime\nENV PORT 8080\nEXPOSE $PORT\n\nRUN apt-get update --fix-missing && \\\n apt-get install -y nginx && \\\n apt-get clean\n\n# gevent required by AWS Sagemaker\nRUN pip install gevent==1.4\n\n# copy over model files\nCOPY . /bento\nWORKDIR /bento\n\nRUN if [ -f /bento/bentoml-init.sh ]; then bash -c /bento/bentoml-init.sh; fi\n\nENV PATH=\"/bento:$PATH\"\n\"\"\" # noqa: E501\n\n\ndef strip_scheme(url):\n \"\"\" Stripe url's schema\n e.g. http://some.url/path -> some.url/path\n :param url: String\n :return: String\n \"\"\"\n parsed = urlparse(url)\n scheme = \"%s://\" % parsed.scheme\n return parsed.geturl().replace(scheme, \"\", 1)\n\n\ndef get_arn_role_from_current_aws_user():\n sts_client = boto3.client(\"sts\")\n identity = sts_client.get_caller_identity()\n sts_arn = identity[\"Arn\"]\n sts_arn_list = sts_arn.split(\":\")\n type_role = sts_arn_list[-1].split(\"/\")\n iam_client = boto3.client(\"iam\")\n if type_role[0] in (\"user\", \"root\"):\n role_list = iam_client.list_roles()\n arn = None\n for role in role_list[\"Roles\"]:\n policy_document = role[\"AssumeRolePolicyDocument\"]\n statement = policy_document[\"Statement\"][0]\n if (\n statement[\"Effect\"] == \"Allow\"\n and statement[\"Principal\"].get(\"Service\", None)\n == \"sagemaker.amazonaws.com\"\n ):\n arn = role[\"Arn\"]\n if arn is None:\n raise YataiDeploymentException(\n \"Can't find proper Arn role for Sagemaker, please create one and try \"\n \"again\"\n )\n return arn\n elif type_role[0] == \"role\":\n role_response = iam_client.get_role(RoleName=type_role[1])\n return role_response[\"Role\"][\"Arn\"]\n\n raise YataiDeploymentException(\n \"Not supported role type {}; sts arn is {}\".format(type_role[0], sts_arn)\n )\n\n\ndef create_and_push_docker_image_to_ecr(\n region, bento_name, bento_version, snapshot_path\n):\n \"\"\"Create BentoService sagemaker image and push to AWS ECR\n\n Example: https://github.com/awslabs/amazon-sagemaker-examples/blob/\\\n master/advanced_functionality/scikit_bring_your_own/container/build_and_push.sh\n 1. get aws account info and login ecr\n 2. create ecr repository, if not exist\n 3. 
build tag and push docker image\n\n Args:\n region(String)\n bento_name(String)\n bento_version(String)\n snapshot_path(Path)\n\n Returns:\n str: AWS ECR Tag\n \"\"\"\n ecr_client = boto3.client(\"ecr\", region)\n token = ecr_client.get_authorization_token()\n logger.debug(\"Getting docker login info from AWS\")\n username, password = (\n base64.b64decode(token[\"authorizationData\"][0][\"authorizationToken\"])\n .decode(\"utf-8\")\n .split(\":\")\n )\n registry_url = token[\"authorizationData\"][0][\"proxyEndpoint\"]\n auth_config_payload = {\"username\": username, \"password\": password}\n\n docker_api = docker.APIClient()\n\n image_name = bento_name.lower() + \"-sagemaker\"\n ecr_tag = strip_scheme(\n \"{registry_url}/{image_name}:{version}\".format(\n registry_url=registry_url, image_name=image_name, version=bento_version\n )\n )\n\n logger.debug(\"Building docker image: %s\", ecr_tag)\n for line in docker_api.build(\n path=snapshot_path, dockerfile=\"Dockerfile-sagemaker\", tag=ecr_tag\n ):\n process_docker_api_line(line)\n\n try:\n ecr_client.describe_repositories(repositoryNames=[image_name])[\"repositories\"]\n except ecr_client.exceptions.RepositoryNotFoundException:\n ecr_client.create_repository(repositoryName=image_name)\n\n logger.debug(\"Pushing image to AWS ECR at %s\", ecr_tag)\n for line in docker_api.push(ecr_tag, stream=True, auth_config=auth_config_payload):\n process_docker_api_line(line)\n logger.debug(\"Finished pushing image: %s\", ecr_tag)\n return ecr_tag\n\n\n# Sagemaker response status: 'OutOfService'|'Creating'|'Updating'|\n# 'SystemUpdating'|'RollingBack'|'InService'|\n# 'Deleting'|'Failed'\nENDPOINT_STATUS_TO_STATE = {\n \"InService\": DeploymentState.RUNNING,\n \"Deleting\": DeploymentState.INACTIVATED,\n \"Creating\": DeploymentState.PENDING,\n \"Updating\": DeploymentState.PENDING,\n \"RollingBack\": DeploymentState.PENDING,\n \"SystemUpdating\": DeploymentState.PENDING,\n \"OutOfService\": DeploymentState.INACTIVATED,\n \"Failed\": DeploymentState.ERROR,\n}\n\n\ndef _aws_client_error_to_bentoml_exception(e, message_prefix=None):\n \"\"\"parse botocore.exceptions.ClientError into Bento StatusProto\n\n We handle two most common errors when deploying to Sagemaker.\n 1. Authenication issue/invalid access(InvalidSignatureException)\n 2. 
resources not found (ValidationException)\n It will return correlated StatusProto(NOT_FOUND, UNAUTHENTICATED)\n\n Args:\n e: ClientError from botocore.exceptions\n Returns:\n StatusProto\n \"\"\"\n error_response = e.response.get('Error', {})\n error_code = error_response.get('Code', 'Unknown')\n error_message = error_response.get('Message', 'Unknown')\n error_log_message = (\n f'AWS ClientError - operation: {e.operation_name}, '\n f'code: {error_code}, message: {error_message}'\n )\n if message_prefix:\n error_log_message = f'{message_prefix}; {error_log_message}'\n logger.error(error_log_message)\n return AWSServiceError(error_log_message)\n\n\ndef _get_sagemaker_resource_names(deployment_pb):\n sagemaker_model_name = generate_aws_compatible_string(\n (deployment_pb.namespace, 10),\n (deployment_pb.name, 12),\n (deployment_pb.spec.bento_name, 20),\n (deployment_pb.spec.bento_version, 18),\n )\n sagemaker_endpoint_config_name = generate_aws_compatible_string(\n (deployment_pb.namespace, 10),\n (deployment_pb.name, 12),\n (deployment_pb.spec.bento_name, 20),\n (deployment_pb.spec.bento_version, 18),\n )\n sagemaker_endpoint_name = generate_aws_compatible_string(\n deployment_pb.namespace, deployment_pb.name\n )\n return sagemaker_model_name, sagemaker_endpoint_config_name, sagemaker_endpoint_name\n\n\ndef _delete_sagemaker_model_if_exist(sagemaker_client, sagemaker_model_name):\n try:\n delete_model_response = sagemaker_client.delete_model(\n ModelName=sagemaker_model_name\n )\n logger.debug(\"AWS delete model response: %s\", delete_model_response)\n except ClientError as e:\n error_response = e.response.get('Error', {})\n error_code = error_response.get('Code', 'Unknown')\n error_message = error_response.get('Message', 'Unknown')\n if (\n error_code == 'ValidationException'\n and \"Could not find model\" in error_message\n ):\n # sagemaker model does not exist\n return\n\n raise _aws_client_error_to_bentoml_exception(\n e, f\"Failed to cleanup sagemaker model '{sagemaker_model_name}'\"\n )\n\n return\n\n\ndef _delete_sagemaker_endpoint_config_if_exist(\n sagemaker_client, sagemaker_endpoint_config_name\n):\n try:\n delete_endpoint_config_response = sagemaker_client.delete_endpoint_config(\n EndpointConfigName=sagemaker_endpoint_config_name\n )\n logger.debug(\n \"AWS delete endpoint config response: %s\", delete_endpoint_config_response\n )\n except ClientError as e:\n error_response = e.response.get('Error', {})\n error_code = error_response.get('Code', 'Unknown')\n error_message = error_response.get('Message', 'Unknown')\n if (\n error_code == 'ValidationException'\n and \"Could not find endpoint configuration\" in error_message\n ):\n # endpoint config does not exist\n return\n\n raise _aws_client_error_to_bentoml_exception(\n e,\n f\"Failed to cleanup sagemaker endpoint config \"\n f\"'{sagemaker_endpoint_config_name}' after creation failed\",\n )\n return\n\n\ndef _delete_sagemaker_endpoint_if_exist(sagemaker_client, sagemaker_endpoint_name):\n try:\n delete_endpoint_response = sagemaker_client.delete_endpoint(\n EndpointName=sagemaker_endpoint_name\n )\n logger.debug(\"AWS delete endpoint response: %s\", delete_endpoint_response)\n except ClientError as e:\n error_response = e.response.get('Error', {})\n error_code = error_response.get('Code', 'Unknown')\n error_message = error_response.get('Message', 'Unknown')\n if (\n error_code == 'ValidationException'\n and \"Could not find endpoint\" in error_message\n ):\n # sagemaker endpoint does not exist\n return\n\n raise 
_aws_client_error_to_bentoml_exception(\n e, f\"Failed to delete sagemaker endpoint '{sagemaker_endpoint_name}'\"\n )\n\n\ndef delete_sagemaker_deployment_resources_if_exist(deployment_pb):\n sagemaker_config = deployment_pb.spec.sagemaker_operator_config\n sagemaker_client = boto3.client('sagemaker', sagemaker_config.region)\n\n (\n sagemaker_model_name,\n sagemaker_endpoint_config_name,\n sagemaker_endpoint_name,\n ) = _get_sagemaker_resource_names(deployment_pb)\n\n _delete_sagemaker_model_if_exist(sagemaker_client, sagemaker_model_name)\n _delete_sagemaker_endpoint_config_if_exist(\n sagemaker_client, sagemaker_endpoint_config_name\n )\n _delete_sagemaker_endpoint_if_exist(sagemaker_client, sagemaker_endpoint_name)\n\n\ndef _init_sagemaker_project(sagemaker_project_dir, bento_path, docker_base_image):\n shutil.copytree(bento_path, sagemaker_project_dir)\n\n with open(os.path.join(sagemaker_project_dir, 'Dockerfile-sagemaker'), \"w\") as f:\n f.write(\n BENTO_SERVICE_SAGEMAKER_DOCKERFILE.format(\n docker_base_image=docker_base_image\n )\n )\n\n nginx_conf_path = os.path.join(os.path.dirname(__file__), 'nginx.conf')\n shutil.copy(nginx_conf_path, os.path.join(sagemaker_project_dir, 'nginx.conf'))\n\n wsgi_py_path = os.path.join(os.path.dirname(__file__), 'wsgi.py')\n shutil.copy(wsgi_py_path, os.path.join(sagemaker_project_dir, 'wsgi.py'))\n\n serve_file_path = os.path.join(os.path.dirname(__file__), 'serve')\n shutil.copy(serve_file_path, os.path.join(sagemaker_project_dir, 'serve'))\n\n # permission 755 is required for entry script 'serve'\n os.chmod(os.path.join(sagemaker_project_dir, \"serve\"), 0o755)\n return sagemaker_project_dir\n\n\ndef _create_sagemaker_model(\n sagemaker_client, sagemaker_model_name, ecr_image_path, spec\n):\n execution_role_arn = get_arn_role_from_current_aws_user()\n\n sagemaker_model_info = {\n \"ModelName\": sagemaker_model_name,\n \"PrimaryContainer\": {\n \"ContainerHostname\": sagemaker_model_name,\n \"Image\": ecr_image_path,\n \"Environment\": {\n \"API_NAME\": spec.api_name,\n 'BENTOML_GUNICORN_TIMEOUT': str(spec.timeout),\n },\n },\n \"ExecutionRoleArn\": execution_role_arn,\n }\n\n # Will set envvar, if user defined gunicorn workers per instance. 
EnvVar needs\n # to be string instead of the int.\n if spec.num_of_gunicorn_workers_per_instance:\n sagemaker_model_info['PrimaryContainer']['Environment'][\n 'BENTOML_GUNICORN_NUM_OF_WORKERS'\n ] = str(spec.num_of_gunicorn_workers_per_instance)\n\n try:\n create_model_response = sagemaker_client.create_model(**sagemaker_model_info)\n except ClientError as e:\n raise _aws_client_error_to_bentoml_exception(\n e, \"Failed to create sagemaker model\"\n )\n logger.debug(\"AWS create model response: %s\", create_model_response)\n\n\ndef _create_sagemaker_endpoint_config(\n sagemaker_client, sagemaker_model_name, endpoint_config_name, sagemaker_config\n):\n production_variants = [\n {\n \"VariantName\": sagemaker_model_name,\n \"ModelName\": sagemaker_model_name,\n \"InitialInstanceCount\": sagemaker_config.instance_count,\n \"InstanceType\": sagemaker_config.instance_type,\n }\n ]\n\n logger.debug(\"Creating Sagemaker endpoint %s configuration\", endpoint_config_name)\n try:\n create_config_response = sagemaker_client.create_endpoint_config(\n EndpointConfigName=endpoint_config_name,\n ProductionVariants=production_variants,\n )\n except ClientError as e:\n raise _aws_client_error_to_bentoml_exception(\n e, \"Failed to create sagemaker endpoint config\"\n )\n logger.debug(\"AWS create endpoint config response: %s\", create_config_response)\n\n\ndef _create_sagemaker_endpoint(sagemaker_client, endpoint_name, endpoint_config_name):\n try:\n logger.debug(\"Creating sagemaker endpoint %s\", endpoint_name)\n create_endpoint_response = sagemaker_client.create_endpoint(\n EndpointName=endpoint_name, EndpointConfigName=endpoint_config_name\n )\n logger.debug(\"AWS create endpoint response: %s\", create_endpoint_response)\n except ClientError as e:\n raise _aws_client_error_to_bentoml_exception(\n e, \"Failed to create sagemaker endpoint\"\n )\n\n\ndef _update_sagemaker_endpoint(sagemaker_client, endpoint_name, endpoint_config_name):\n try:\n logger.debug(\"Updating sagemaker endpoint %s\", endpoint_name)\n update_endpoint_response = sagemaker_client.update_endpoint(\n EndpointName=endpoint_name, EndpointConfigName=endpoint_config_name\n )\n logger.debug(\"AWS update endpoint response: %s\", str(update_endpoint_response))\n except ClientError as e:\n raise _aws_client_error_to_bentoml_exception(\n e, \"Failed to update sagemaker endpoint\"\n )\n\n\nclass SageMakerDeploymentOperator(DeploymentOperatorBase):\n def add(self, deployment_pb):\n try:\n deployment_spec = deployment_pb.spec\n sagemaker_config = deployment_spec.sagemaker_operator_config\n sagemaker_config.region = (\n sagemaker_config.region or get_default_aws_region()\n )\n if not sagemaker_config.region:\n raise InvalidArgument('AWS region is missing')\n\n ensure_docker_available_or_raise()\n if sagemaker_config is None:\n raise YataiDeploymentException('Sagemaker configuration is missing.')\n\n bento_pb = self.yatai_service.GetBento(\n GetBentoRequest(\n bento_name=deployment_spec.bento_name,\n bento_version=deployment_spec.bento_version,\n )\n )\n if bento_pb.bento.uri.type not in (BentoUri.LOCAL, BentoUri.S3):\n raise BentoMLException(\n 'BentoML currently not support {} repository'.format(\n BentoUri.StorageType.Name(bento_pb.bento.uri.type)\n )\n )\n return self._add(deployment_pb, bento_pb, bento_pb.bento.uri.uri)\n\n except BentoMLException as error:\n deployment_pb.state.state = DeploymentState.ERROR\n deployment_pb.state.error_message = (\n f'Error creating SageMaker deployment: {str(error)}'\n )\n return ApplyDeploymentResponse(\n 
status=error.status_proto, deployment=deployment_pb\n )\n\n def _add(self, deployment_pb, bento_pb, bento_path):\n if loader._is_remote_path(bento_path):\n with loader._resolve_remote_bundle_path(bento_path) as local_path:\n return self._add(deployment_pb, bento_pb, local_path)\n\n deployment_spec = deployment_pb.spec\n sagemaker_config = deployment_spec.sagemaker_operator_config\n\n raise_if_api_names_not_found_in_bento_service_metadata(\n bento_pb.bento.bento_service_metadata, [sagemaker_config.api_name]\n )\n\n sagemaker_client = boto3.client('sagemaker', sagemaker_config.region)\n\n with TempDirectory() as temp_dir:\n sagemaker_project_dir = os.path.join(temp_dir, deployment_spec.bento_name)\n _init_sagemaker_project(\n sagemaker_project_dir,\n bento_path,\n bento_pb.bento.bento_service_metadata.env.docker_base_image,\n )\n ecr_image_path = create_and_push_docker_image_to_ecr(\n sagemaker_config.region,\n deployment_spec.bento_name,\n deployment_spec.bento_version,\n sagemaker_project_dir,\n )\n\n try:\n (\n sagemaker_model_name,\n sagemaker_endpoint_config_name,\n sagemaker_endpoint_name,\n ) = _get_sagemaker_resource_names(deployment_pb)\n\n _create_sagemaker_model(\n sagemaker_client, sagemaker_model_name, ecr_image_path, sagemaker_config\n )\n _create_sagemaker_endpoint_config(\n sagemaker_client,\n sagemaker_model_name,\n sagemaker_endpoint_config_name,\n sagemaker_config,\n )\n _create_sagemaker_endpoint(\n sagemaker_client,\n sagemaker_endpoint_name,\n sagemaker_endpoint_config_name,\n )\n except AWSServiceError as e:\n delete_sagemaker_deployment_resources_if_exist(deployment_pb)\n raise e\n\n return ApplyDeploymentResponse(status=Status.OK(), deployment=deployment_pb)\n\n def update(self, deployment_pb, previous_deployment):\n try:\n ensure_docker_available_or_raise()\n deployment_spec = deployment_pb.spec\n bento_pb = self.yatai_service.GetBento(\n GetBentoRequest(\n bento_name=deployment_spec.bento_name,\n bento_version=deployment_spec.bento_version,\n )\n )\n if bento_pb.bento.uri.type not in (BentoUri.LOCAL, BentoUri.S3):\n raise BentoMLException(\n 'BentoML currently not support {} repository'.format(\n BentoUri.StorageType.Name(bento_pb.bento.uri.type)\n )\n )\n return self._update(\n deployment_pb, previous_deployment, bento_pb, bento_pb.bento.uri.uri\n )\n except BentoMLException as error:\n deployment_pb.state.state = DeploymentState.ERROR\n deployment_pb.state.error_message = (\n f'Error updating SageMaker deployment: {str(error)}'\n )\n return ApplyDeploymentResponse(\n status=error.status_proto, deployment=deployment_pb\n )\n\n def _update(self, deployment_pb, current_deployment, bento_pb, bento_path):\n if loader._is_remote_path(bento_path):\n with loader._resolve_remote_bundle_path(bento_path) as local_path:\n return self._update(\n deployment_pb, current_deployment, bento_pb, local_path\n )\n updated_deployment_spec = deployment_pb.spec\n updated_sagemaker_config = updated_deployment_spec.sagemaker_operator_config\n sagemaker_client = boto3.client('sagemaker', updated_sagemaker_config.region)\n\n try:\n raise_if_api_names_not_found_in_bento_service_metadata(\n bento_pb.bento.bento_service_metadata,\n [updated_sagemaker_config.api_name],\n )\n describe_latest_deployment_state = self.describe(deployment_pb)\n current_deployment_spec = current_deployment.spec\n current_sagemaker_config = current_deployment_spec.sagemaker_operator_config\n latest_deployment_state = json.loads(\n describe_latest_deployment_state.state.info_json\n )\n\n current_ecr_image_tag = 
latest_deployment_state['ProductionVariants'][0][\n 'DeployedImages'\n ][0]['SpecifiedImage']\n if (\n updated_deployment_spec.bento_name != current_deployment_spec.bento_name\n or updated_deployment_spec.bento_version\n != current_deployment_spec.bento_version\n ):\n logger.debug(\n 'BentoService tag is different from current deployment, '\n 'creating new docker image and push to ECR'\n )\n with TempDirectory() as temp_dir:\n sagemaker_project_dir = os.path.join(\n temp_dir, updated_deployment_spec.bento_name\n )\n _init_sagemaker_project(\n sagemaker_project_dir,\n bento_path,\n bento_pb.bento.bento_service_metadata.env.docker_base_image,\n )\n ecr_image_path = create_and_push_docker_image_to_ecr(\n updated_sagemaker_config.region,\n updated_deployment_spec.bento_name,\n updated_deployment_spec.bento_version,\n sagemaker_project_dir,\n )\n else:\n logger.debug('Using existing ECR image for Sagemaker model')\n ecr_image_path = current_ecr_image_tag\n\n (\n updated_sagemaker_model_name,\n updated_sagemaker_endpoint_config_name,\n sagemaker_endpoint_name,\n ) = _get_sagemaker_resource_names(deployment_pb)\n (\n current_sagemaker_model_name,\n current_sagemaker_endpoint_config_name,\n _,\n ) = _get_sagemaker_resource_names(current_deployment)\n\n if (\n updated_sagemaker_config.api_name != current_sagemaker_config.api_name\n or updated_sagemaker_config.num_of_gunicorn_workers_per_instance\n != current_sagemaker_config.num_of_gunicorn_workers_per_instance\n or ecr_image_path != current_ecr_image_tag\n ):\n logger.debug(\n 'Sagemaker model requires update. Delete current sagemaker model %s'\n 'and creating new model %s',\n current_sagemaker_model_name,\n updated_sagemaker_model_name,\n )\n _delete_sagemaker_model_if_exist(\n sagemaker_client, current_sagemaker_model_name\n )\n _create_sagemaker_model(\n sagemaker_client,\n updated_sagemaker_model_name,\n ecr_image_path,\n updated_sagemaker_config,\n )\n # When bento service tag is not changed, we need to delete the current\n # endpoint configuration in order to create new one to avoid name collation\n if (\n current_sagemaker_endpoint_config_name\n == updated_sagemaker_endpoint_config_name\n ):\n logger.debug(\n 'Current sagemaker config name %s is same as updated one, '\n 'delete it before create new endpoint config',\n current_sagemaker_endpoint_config_name,\n )\n _delete_sagemaker_endpoint_config_if_exist(\n sagemaker_client, current_sagemaker_endpoint_config_name\n )\n logger.debug(\n 'Create new endpoint configuration %s',\n updated_sagemaker_endpoint_config_name,\n )\n _create_sagemaker_endpoint_config(\n sagemaker_client,\n updated_sagemaker_model_name,\n updated_sagemaker_endpoint_config_name,\n updated_sagemaker_config,\n )\n logger.debug(\n 'Updating endpoint to new endpoint configuration %s',\n updated_sagemaker_endpoint_config_name,\n )\n _update_sagemaker_endpoint(\n sagemaker_client,\n sagemaker_endpoint_name,\n updated_sagemaker_endpoint_config_name,\n )\n logger.debug(\n 'Delete old sagemaker endpoint config %s',\n current_sagemaker_endpoint_config_name,\n )\n _delete_sagemaker_endpoint_config_if_exist(\n sagemaker_client, current_sagemaker_endpoint_config_name\n )\n except AWSServiceError as e:\n delete_sagemaker_deployment_resources_if_exist(deployment_pb)\n raise e\n\n return ApplyDeploymentResponse(status=Status.OK(), deployment=deployment_pb)\n\n def delete(self, deployment_pb):\n try:\n deployment_spec = deployment_pb.spec\n sagemaker_config = deployment_spec.sagemaker_operator_config\n sagemaker_config.region = (\n 
sagemaker_config.region or get_default_aws_region()\n )\n if not sagemaker_config.region:\n raise InvalidArgument('AWS region is missing')\n\n delete_sagemaker_deployment_resources_if_exist(deployment_pb)\n\n return DeleteDeploymentResponse(status=Status.OK())\n except BentoMLException as error:\n return DeleteDeploymentResponse(status=error.status_proto)\n\n def describe(self, deployment_pb):\n try:\n deployment_spec = deployment_pb.spec\n sagemaker_config = deployment_spec.sagemaker_operator_config\n sagemaker_config.region = (\n sagemaker_config.region or get_default_aws_region()\n )\n if not sagemaker_config.region:\n raise InvalidArgument('AWS region is missing')\n sagemaker_client = boto3.client('sagemaker', sagemaker_config.region)\n _, _, sagemaker_endpoint_name = _get_sagemaker_resource_names(deployment_pb)\n\n try:\n endpoint_status_response = sagemaker_client.describe_endpoint(\n EndpointName=sagemaker_endpoint_name\n )\n except ClientError as e:\n raise _aws_client_error_to_bentoml_exception(\n e,\n f\"Failed to fetch current status of sagemaker endpoint \"\n f\"'{sagemaker_endpoint_name}'\",\n )\n\n logger.debug(\"AWS describe endpoint response: %s\", endpoint_status_response)\n endpoint_status = endpoint_status_response[\"EndpointStatus\"]\n\n service_state = ENDPOINT_STATUS_TO_STATE[endpoint_status]\n\n deployment_state = DeploymentState(\n state=service_state,\n info_json=json.dumps(endpoint_status_response, default=str),\n )\n deployment_state.timestamp.GetCurrentTime()\n\n return DescribeDeploymentResponse(\n state=deployment_state, status=Status.OK()\n )\n except BentoMLException as error:\n return DescribeDeploymentResponse(status=error.status_proto)\n", "path": "bentoml/yatai/deployment/sagemaker/operator.py" } ]
[ { "content": "import base64\nimport json\nimport logging\nimport os\nimport shutil\nfrom urllib.parse import urlparse\n\nimport boto3\nimport docker\nfrom botocore.exceptions import ClientError\n\nfrom bentoml.exceptions import (\n YataiDeploymentException,\n AWSServiceError,\n InvalidArgument,\n BentoMLException,\n)\nfrom bentoml.saved_bundle import loader\nfrom bentoml.utils.tempdir import TempDirectory\nfrom bentoml.yatai.deployment.operator import DeploymentOperatorBase\nfrom bentoml.yatai.deployment.utils import (\n process_docker_api_line,\n generate_aws_compatible_string,\n get_default_aws_region,\n ensure_docker_available_or_raise,\n raise_if_api_names_not_found_in_bento_service_metadata,\n)\nfrom bentoml.yatai.proto.deployment_pb2 import (\n DeploymentState,\n ApplyDeploymentResponse,\n DeleteDeploymentResponse,\n DescribeDeploymentResponse,\n)\nfrom bentoml.yatai.proto.repository_pb2 import GetBentoRequest, BentoUri\nfrom bentoml.yatai.status import Status\n\nlogger = logging.getLogger(__name__)\n\n\nBENTO_SERVICE_SAGEMAKER_DOCKERFILE = \"\"\"\\\nFROM {docker_base_image}\n\n# the env var $PORT is required by heroku container runtime\nENV PORT 8080\nEXPOSE $PORT\n\nRUN apt-get update --fix-missing && \\\n apt-get install -y nginx && \\\n apt-get clean\n\n# gevent required by AWS Sagemaker\nRUN pip install gevent==1.4\n\n# copy over model files\nCOPY . /bento\nWORKDIR /bento\n\nRUN if [ -f /bento/bentoml-init.sh ]; then bash -c /bento/bentoml-init.sh; fi\n\nENV PATH=\"/bento:$PATH\"\n\"\"\" # noqa: E501\n\n\ndef strip_scheme(url):\n \"\"\" Stripe url's schema\n e.g. http://some.url/path -> some.url/path\n :param url: String\n :return: String\n \"\"\"\n parsed = urlparse(url)\n scheme = \"%s://\" % parsed.scheme\n return parsed.geturl().replace(scheme, \"\", 1)\n\n\ndef get_arn_role_from_current_aws_user():\n sts_client = boto3.client(\"sts\")\n identity = sts_client.get_caller_identity()\n sts_arn = identity[\"Arn\"]\n sts_arn_list = sts_arn.split(\":\")\n type_role = sts_arn_list[-1].split(\"/\")\n iam_client = boto3.client(\"iam\")\n if type_role[0] in (\"user\", \"root\"):\n role_list = iam_client.list_roles()\n arn = None\n for role in role_list[\"Roles\"]:\n policy_document = role[\"AssumeRolePolicyDocument\"]\n statement = policy_document[\"Statement\"][0]\n if (\n statement[\"Effect\"] == \"Allow\"\n and statement[\"Principal\"].get(\"Service\", None)\n == \"sagemaker.amazonaws.com\"\n ):\n arn = role[\"Arn\"]\n if arn is None:\n raise YataiDeploymentException(\n \"Can't find proper Arn role for Sagemaker, please create one and try \"\n \"again\"\n )\n return arn\n elif type_role[0] in [\"role\", \"assumed-role\"]:\n role_response = iam_client.get_role(RoleName=type_role[1])\n return role_response[\"Role\"][\"Arn\"]\n\n raise YataiDeploymentException(\n \"Not supported role type {}; sts arn is {}\".format(type_role[0], sts_arn)\n )\n\n\ndef create_and_push_docker_image_to_ecr(\n region, bento_name, bento_version, snapshot_path\n):\n \"\"\"Create BentoService sagemaker image and push to AWS ECR\n\n Example: https://github.com/awslabs/amazon-sagemaker-examples/blob/\\\n master/advanced_functionality/scikit_bring_your_own/container/build_and_push.sh\n 1. get aws account info and login ecr\n 2. create ecr repository, if not exist\n 3. 
build tag and push docker image\n\n Args:\n region(String)\n bento_name(String)\n bento_version(String)\n snapshot_path(Path)\n\n Returns:\n str: AWS ECR Tag\n \"\"\"\n ecr_client = boto3.client(\"ecr\", region)\n token = ecr_client.get_authorization_token()\n logger.debug(\"Getting docker login info from AWS\")\n username, password = (\n base64.b64decode(token[\"authorizationData\"][0][\"authorizationToken\"])\n .decode(\"utf-8\")\n .split(\":\")\n )\n registry_url = token[\"authorizationData\"][0][\"proxyEndpoint\"]\n auth_config_payload = {\"username\": username, \"password\": password}\n\n docker_api = docker.APIClient()\n\n image_name = bento_name.lower() + \"-sagemaker\"\n ecr_tag = strip_scheme(\n \"{registry_url}/{image_name}:{version}\".format(\n registry_url=registry_url, image_name=image_name, version=bento_version\n )\n )\n\n logger.debug(\"Building docker image: %s\", ecr_tag)\n for line in docker_api.build(\n path=snapshot_path, dockerfile=\"Dockerfile-sagemaker\", tag=ecr_tag\n ):\n process_docker_api_line(line)\n\n try:\n ecr_client.describe_repositories(repositoryNames=[image_name])[\"repositories\"]\n except ecr_client.exceptions.RepositoryNotFoundException:\n ecr_client.create_repository(repositoryName=image_name)\n\n logger.debug(\"Pushing image to AWS ECR at %s\", ecr_tag)\n for line in docker_api.push(ecr_tag, stream=True, auth_config=auth_config_payload):\n process_docker_api_line(line)\n logger.debug(\"Finished pushing image: %s\", ecr_tag)\n return ecr_tag\n\n\n# Sagemaker response status: 'OutOfService'|'Creating'|'Updating'|\n# 'SystemUpdating'|'RollingBack'|'InService'|\n# 'Deleting'|'Failed'\nENDPOINT_STATUS_TO_STATE = {\n \"InService\": DeploymentState.RUNNING,\n \"Deleting\": DeploymentState.INACTIVATED,\n \"Creating\": DeploymentState.PENDING,\n \"Updating\": DeploymentState.PENDING,\n \"RollingBack\": DeploymentState.PENDING,\n \"SystemUpdating\": DeploymentState.PENDING,\n \"OutOfService\": DeploymentState.INACTIVATED,\n \"Failed\": DeploymentState.ERROR,\n}\n\n\ndef _aws_client_error_to_bentoml_exception(e, message_prefix=None):\n \"\"\"parse botocore.exceptions.ClientError into Bento StatusProto\n\n We handle two most common errors when deploying to Sagemaker.\n 1. Authenication issue/invalid access(InvalidSignatureException)\n 2. 
resources not found (ValidationException)\n It will return correlated StatusProto(NOT_FOUND, UNAUTHENTICATED)\n\n Args:\n e: ClientError from botocore.exceptions\n Returns:\n StatusProto\n \"\"\"\n error_response = e.response.get('Error', {})\n error_code = error_response.get('Code', 'Unknown')\n error_message = error_response.get('Message', 'Unknown')\n error_log_message = (\n f'AWS ClientError - operation: {e.operation_name}, '\n f'code: {error_code}, message: {error_message}'\n )\n if message_prefix:\n error_log_message = f'{message_prefix}; {error_log_message}'\n logger.error(error_log_message)\n return AWSServiceError(error_log_message)\n\n\ndef _get_sagemaker_resource_names(deployment_pb):\n sagemaker_model_name = generate_aws_compatible_string(\n (deployment_pb.namespace, 10),\n (deployment_pb.name, 12),\n (deployment_pb.spec.bento_name, 20),\n (deployment_pb.spec.bento_version, 18),\n )\n sagemaker_endpoint_config_name = generate_aws_compatible_string(\n (deployment_pb.namespace, 10),\n (deployment_pb.name, 12),\n (deployment_pb.spec.bento_name, 20),\n (deployment_pb.spec.bento_version, 18),\n )\n sagemaker_endpoint_name = generate_aws_compatible_string(\n deployment_pb.namespace, deployment_pb.name\n )\n return sagemaker_model_name, sagemaker_endpoint_config_name, sagemaker_endpoint_name\n\n\ndef _delete_sagemaker_model_if_exist(sagemaker_client, sagemaker_model_name):\n try:\n delete_model_response = sagemaker_client.delete_model(\n ModelName=sagemaker_model_name\n )\n logger.debug(\"AWS delete model response: %s\", delete_model_response)\n except ClientError as e:\n error_response = e.response.get('Error', {})\n error_code = error_response.get('Code', 'Unknown')\n error_message = error_response.get('Message', 'Unknown')\n if (\n error_code == 'ValidationException'\n and \"Could not find model\" in error_message\n ):\n # sagemaker model does not exist\n return\n\n raise _aws_client_error_to_bentoml_exception(\n e, f\"Failed to cleanup sagemaker model '{sagemaker_model_name}'\"\n )\n\n return\n\n\ndef _delete_sagemaker_endpoint_config_if_exist(\n sagemaker_client, sagemaker_endpoint_config_name\n):\n try:\n delete_endpoint_config_response = sagemaker_client.delete_endpoint_config(\n EndpointConfigName=sagemaker_endpoint_config_name\n )\n logger.debug(\n \"AWS delete endpoint config response: %s\", delete_endpoint_config_response\n )\n except ClientError as e:\n error_response = e.response.get('Error', {})\n error_code = error_response.get('Code', 'Unknown')\n error_message = error_response.get('Message', 'Unknown')\n if (\n error_code == 'ValidationException'\n and \"Could not find endpoint configuration\" in error_message\n ):\n # endpoint config does not exist\n return\n\n raise _aws_client_error_to_bentoml_exception(\n e,\n f\"Failed to cleanup sagemaker endpoint config \"\n f\"'{sagemaker_endpoint_config_name}' after creation failed\",\n )\n return\n\n\ndef _delete_sagemaker_endpoint_if_exist(sagemaker_client, sagemaker_endpoint_name):\n try:\n delete_endpoint_response = sagemaker_client.delete_endpoint(\n EndpointName=sagemaker_endpoint_name\n )\n logger.debug(\"AWS delete endpoint response: %s\", delete_endpoint_response)\n except ClientError as e:\n error_response = e.response.get('Error', {})\n error_code = error_response.get('Code', 'Unknown')\n error_message = error_response.get('Message', 'Unknown')\n if (\n error_code == 'ValidationException'\n and \"Could not find endpoint\" in error_message\n ):\n # sagemaker endpoint does not exist\n return\n\n raise 
_aws_client_error_to_bentoml_exception(\n e, f\"Failed to delete sagemaker endpoint '{sagemaker_endpoint_name}'\"\n )\n\n\ndef delete_sagemaker_deployment_resources_if_exist(deployment_pb):\n sagemaker_config = deployment_pb.spec.sagemaker_operator_config\n sagemaker_client = boto3.client('sagemaker', sagemaker_config.region)\n\n (\n sagemaker_model_name,\n sagemaker_endpoint_config_name,\n sagemaker_endpoint_name,\n ) = _get_sagemaker_resource_names(deployment_pb)\n\n _delete_sagemaker_model_if_exist(sagemaker_client, sagemaker_model_name)\n _delete_sagemaker_endpoint_config_if_exist(\n sagemaker_client, sagemaker_endpoint_config_name\n )\n _delete_sagemaker_endpoint_if_exist(sagemaker_client, sagemaker_endpoint_name)\n\n\ndef _init_sagemaker_project(sagemaker_project_dir, bento_path, docker_base_image):\n shutil.copytree(bento_path, sagemaker_project_dir)\n\n with open(os.path.join(sagemaker_project_dir, 'Dockerfile-sagemaker'), \"w\") as f:\n f.write(\n BENTO_SERVICE_SAGEMAKER_DOCKERFILE.format(\n docker_base_image=docker_base_image\n )\n )\n\n nginx_conf_path = os.path.join(os.path.dirname(__file__), 'nginx.conf')\n shutil.copy(nginx_conf_path, os.path.join(sagemaker_project_dir, 'nginx.conf'))\n\n wsgi_py_path = os.path.join(os.path.dirname(__file__), 'wsgi.py')\n shutil.copy(wsgi_py_path, os.path.join(sagemaker_project_dir, 'wsgi.py'))\n\n serve_file_path = os.path.join(os.path.dirname(__file__), 'serve')\n shutil.copy(serve_file_path, os.path.join(sagemaker_project_dir, 'serve'))\n\n # permission 755 is required for entry script 'serve'\n os.chmod(os.path.join(sagemaker_project_dir, \"serve\"), 0o755)\n return sagemaker_project_dir\n\n\ndef _create_sagemaker_model(\n sagemaker_client, sagemaker_model_name, ecr_image_path, spec\n):\n execution_role_arn = get_arn_role_from_current_aws_user()\n\n sagemaker_model_info = {\n \"ModelName\": sagemaker_model_name,\n \"PrimaryContainer\": {\n \"ContainerHostname\": sagemaker_model_name,\n \"Image\": ecr_image_path,\n \"Environment\": {\n \"API_NAME\": spec.api_name,\n 'BENTOML_GUNICORN_TIMEOUT': str(spec.timeout),\n },\n },\n \"ExecutionRoleArn\": execution_role_arn,\n }\n\n # Will set envvar, if user defined gunicorn workers per instance. 
EnvVar needs\n # to be string instead of the int.\n if spec.num_of_gunicorn_workers_per_instance:\n sagemaker_model_info['PrimaryContainer']['Environment'][\n 'BENTOML_GUNICORN_NUM_OF_WORKERS'\n ] = str(spec.num_of_gunicorn_workers_per_instance)\n\n try:\n create_model_response = sagemaker_client.create_model(**sagemaker_model_info)\n except ClientError as e:\n raise _aws_client_error_to_bentoml_exception(\n e, \"Failed to create sagemaker model\"\n )\n logger.debug(\"AWS create model response: %s\", create_model_response)\n\n\ndef _create_sagemaker_endpoint_config(\n sagemaker_client, sagemaker_model_name, endpoint_config_name, sagemaker_config\n):\n production_variants = [\n {\n \"VariantName\": sagemaker_model_name,\n \"ModelName\": sagemaker_model_name,\n \"InitialInstanceCount\": sagemaker_config.instance_count,\n \"InstanceType\": sagemaker_config.instance_type,\n }\n ]\n\n logger.debug(\"Creating Sagemaker endpoint %s configuration\", endpoint_config_name)\n try:\n create_config_response = sagemaker_client.create_endpoint_config(\n EndpointConfigName=endpoint_config_name,\n ProductionVariants=production_variants,\n )\n except ClientError as e:\n raise _aws_client_error_to_bentoml_exception(\n e, \"Failed to create sagemaker endpoint config\"\n )\n logger.debug(\"AWS create endpoint config response: %s\", create_config_response)\n\n\ndef _create_sagemaker_endpoint(sagemaker_client, endpoint_name, endpoint_config_name):\n try:\n logger.debug(\"Creating sagemaker endpoint %s\", endpoint_name)\n create_endpoint_response = sagemaker_client.create_endpoint(\n EndpointName=endpoint_name, EndpointConfigName=endpoint_config_name\n )\n logger.debug(\"AWS create endpoint response: %s\", create_endpoint_response)\n except ClientError as e:\n raise _aws_client_error_to_bentoml_exception(\n e, \"Failed to create sagemaker endpoint\"\n )\n\n\ndef _update_sagemaker_endpoint(sagemaker_client, endpoint_name, endpoint_config_name):\n try:\n logger.debug(\"Updating sagemaker endpoint %s\", endpoint_name)\n update_endpoint_response = sagemaker_client.update_endpoint(\n EndpointName=endpoint_name, EndpointConfigName=endpoint_config_name\n )\n logger.debug(\"AWS update endpoint response: %s\", str(update_endpoint_response))\n except ClientError as e:\n raise _aws_client_error_to_bentoml_exception(\n e, \"Failed to update sagemaker endpoint\"\n )\n\n\nclass SageMakerDeploymentOperator(DeploymentOperatorBase):\n def add(self, deployment_pb):\n try:\n deployment_spec = deployment_pb.spec\n sagemaker_config = deployment_spec.sagemaker_operator_config\n sagemaker_config.region = (\n sagemaker_config.region or get_default_aws_region()\n )\n if not sagemaker_config.region:\n raise InvalidArgument('AWS region is missing')\n\n ensure_docker_available_or_raise()\n if sagemaker_config is None:\n raise YataiDeploymentException('Sagemaker configuration is missing.')\n\n bento_pb = self.yatai_service.GetBento(\n GetBentoRequest(\n bento_name=deployment_spec.bento_name,\n bento_version=deployment_spec.bento_version,\n )\n )\n if bento_pb.bento.uri.type not in (BentoUri.LOCAL, BentoUri.S3):\n raise BentoMLException(\n 'BentoML currently not support {} repository'.format(\n BentoUri.StorageType.Name(bento_pb.bento.uri.type)\n )\n )\n return self._add(deployment_pb, bento_pb, bento_pb.bento.uri.uri)\n\n except BentoMLException as error:\n deployment_pb.state.state = DeploymentState.ERROR\n deployment_pb.state.error_message = (\n f'Error creating SageMaker deployment: {str(error)}'\n )\n return ApplyDeploymentResponse(\n 
status=error.status_proto, deployment=deployment_pb\n )\n\n def _add(self, deployment_pb, bento_pb, bento_path):\n if loader._is_remote_path(bento_path):\n with loader._resolve_remote_bundle_path(bento_path) as local_path:\n return self._add(deployment_pb, bento_pb, local_path)\n\n deployment_spec = deployment_pb.spec\n sagemaker_config = deployment_spec.sagemaker_operator_config\n\n raise_if_api_names_not_found_in_bento_service_metadata(\n bento_pb.bento.bento_service_metadata, [sagemaker_config.api_name]\n )\n\n sagemaker_client = boto3.client('sagemaker', sagemaker_config.region)\n\n with TempDirectory() as temp_dir:\n sagemaker_project_dir = os.path.join(temp_dir, deployment_spec.bento_name)\n _init_sagemaker_project(\n sagemaker_project_dir,\n bento_path,\n bento_pb.bento.bento_service_metadata.env.docker_base_image,\n )\n ecr_image_path = create_and_push_docker_image_to_ecr(\n sagemaker_config.region,\n deployment_spec.bento_name,\n deployment_spec.bento_version,\n sagemaker_project_dir,\n )\n\n try:\n (\n sagemaker_model_name,\n sagemaker_endpoint_config_name,\n sagemaker_endpoint_name,\n ) = _get_sagemaker_resource_names(deployment_pb)\n\n _create_sagemaker_model(\n sagemaker_client, sagemaker_model_name, ecr_image_path, sagemaker_config\n )\n _create_sagemaker_endpoint_config(\n sagemaker_client,\n sagemaker_model_name,\n sagemaker_endpoint_config_name,\n sagemaker_config,\n )\n _create_sagemaker_endpoint(\n sagemaker_client,\n sagemaker_endpoint_name,\n sagemaker_endpoint_config_name,\n )\n except AWSServiceError as e:\n delete_sagemaker_deployment_resources_if_exist(deployment_pb)\n raise e\n\n return ApplyDeploymentResponse(status=Status.OK(), deployment=deployment_pb)\n\n def update(self, deployment_pb, previous_deployment):\n try:\n ensure_docker_available_or_raise()\n deployment_spec = deployment_pb.spec\n bento_pb = self.yatai_service.GetBento(\n GetBentoRequest(\n bento_name=deployment_spec.bento_name,\n bento_version=deployment_spec.bento_version,\n )\n )\n if bento_pb.bento.uri.type not in (BentoUri.LOCAL, BentoUri.S3):\n raise BentoMLException(\n 'BentoML currently not support {} repository'.format(\n BentoUri.StorageType.Name(bento_pb.bento.uri.type)\n )\n )\n return self._update(\n deployment_pb, previous_deployment, bento_pb, bento_pb.bento.uri.uri\n )\n except BentoMLException as error:\n deployment_pb.state.state = DeploymentState.ERROR\n deployment_pb.state.error_message = (\n f'Error updating SageMaker deployment: {str(error)}'\n )\n return ApplyDeploymentResponse(\n status=error.status_proto, deployment=deployment_pb\n )\n\n def _update(self, deployment_pb, current_deployment, bento_pb, bento_path):\n if loader._is_remote_path(bento_path):\n with loader._resolve_remote_bundle_path(bento_path) as local_path:\n return self._update(\n deployment_pb, current_deployment, bento_pb, local_path\n )\n updated_deployment_spec = deployment_pb.spec\n updated_sagemaker_config = updated_deployment_spec.sagemaker_operator_config\n sagemaker_client = boto3.client('sagemaker', updated_sagemaker_config.region)\n\n try:\n raise_if_api_names_not_found_in_bento_service_metadata(\n bento_pb.bento.bento_service_metadata,\n [updated_sagemaker_config.api_name],\n )\n describe_latest_deployment_state = self.describe(deployment_pb)\n current_deployment_spec = current_deployment.spec\n current_sagemaker_config = current_deployment_spec.sagemaker_operator_config\n latest_deployment_state = json.loads(\n describe_latest_deployment_state.state.info_json\n )\n\n current_ecr_image_tag = 
latest_deployment_state['ProductionVariants'][0][\n 'DeployedImages'\n ][0]['SpecifiedImage']\n if (\n updated_deployment_spec.bento_name != current_deployment_spec.bento_name\n or updated_deployment_spec.bento_version\n != current_deployment_spec.bento_version\n ):\n logger.debug(\n 'BentoService tag is different from current deployment, '\n 'creating new docker image and push to ECR'\n )\n with TempDirectory() as temp_dir:\n sagemaker_project_dir = os.path.join(\n temp_dir, updated_deployment_spec.bento_name\n )\n _init_sagemaker_project(\n sagemaker_project_dir,\n bento_path,\n bento_pb.bento.bento_service_metadata.env.docker_base_image,\n )\n ecr_image_path = create_and_push_docker_image_to_ecr(\n updated_sagemaker_config.region,\n updated_deployment_spec.bento_name,\n updated_deployment_spec.bento_version,\n sagemaker_project_dir,\n )\n else:\n logger.debug('Using existing ECR image for Sagemaker model')\n ecr_image_path = current_ecr_image_tag\n\n (\n updated_sagemaker_model_name,\n updated_sagemaker_endpoint_config_name,\n sagemaker_endpoint_name,\n ) = _get_sagemaker_resource_names(deployment_pb)\n (\n current_sagemaker_model_name,\n current_sagemaker_endpoint_config_name,\n _,\n ) = _get_sagemaker_resource_names(current_deployment)\n\n if (\n updated_sagemaker_config.api_name != current_sagemaker_config.api_name\n or updated_sagemaker_config.num_of_gunicorn_workers_per_instance\n != current_sagemaker_config.num_of_gunicorn_workers_per_instance\n or ecr_image_path != current_ecr_image_tag\n ):\n logger.debug(\n 'Sagemaker model requires update. Delete current sagemaker model %s'\n 'and creating new model %s',\n current_sagemaker_model_name,\n updated_sagemaker_model_name,\n )\n _delete_sagemaker_model_if_exist(\n sagemaker_client, current_sagemaker_model_name\n )\n _create_sagemaker_model(\n sagemaker_client,\n updated_sagemaker_model_name,\n ecr_image_path,\n updated_sagemaker_config,\n )\n # When bento service tag is not changed, we need to delete the current\n # endpoint configuration in order to create new one to avoid name collation\n if (\n current_sagemaker_endpoint_config_name\n == updated_sagemaker_endpoint_config_name\n ):\n logger.debug(\n 'Current sagemaker config name %s is same as updated one, '\n 'delete it before create new endpoint config',\n current_sagemaker_endpoint_config_name,\n )\n _delete_sagemaker_endpoint_config_if_exist(\n sagemaker_client, current_sagemaker_endpoint_config_name\n )\n logger.debug(\n 'Create new endpoint configuration %s',\n updated_sagemaker_endpoint_config_name,\n )\n _create_sagemaker_endpoint_config(\n sagemaker_client,\n updated_sagemaker_model_name,\n updated_sagemaker_endpoint_config_name,\n updated_sagemaker_config,\n )\n logger.debug(\n 'Updating endpoint to new endpoint configuration %s',\n updated_sagemaker_endpoint_config_name,\n )\n _update_sagemaker_endpoint(\n sagemaker_client,\n sagemaker_endpoint_name,\n updated_sagemaker_endpoint_config_name,\n )\n logger.debug(\n 'Delete old sagemaker endpoint config %s',\n current_sagemaker_endpoint_config_name,\n )\n _delete_sagemaker_endpoint_config_if_exist(\n sagemaker_client, current_sagemaker_endpoint_config_name\n )\n except AWSServiceError as e:\n delete_sagemaker_deployment_resources_if_exist(deployment_pb)\n raise e\n\n return ApplyDeploymentResponse(status=Status.OK(), deployment=deployment_pb)\n\n def delete(self, deployment_pb):\n try:\n deployment_spec = deployment_pb.spec\n sagemaker_config = deployment_spec.sagemaker_operator_config\n sagemaker_config.region = (\n 
sagemaker_config.region or get_default_aws_region()\n )\n if not sagemaker_config.region:\n raise InvalidArgument('AWS region is missing')\n\n delete_sagemaker_deployment_resources_if_exist(deployment_pb)\n\n return DeleteDeploymentResponse(status=Status.OK())\n except BentoMLException as error:\n return DeleteDeploymentResponse(status=error.status_proto)\n\n def describe(self, deployment_pb):\n try:\n deployment_spec = deployment_pb.spec\n sagemaker_config = deployment_spec.sagemaker_operator_config\n sagemaker_config.region = (\n sagemaker_config.region or get_default_aws_region()\n )\n if not sagemaker_config.region:\n raise InvalidArgument('AWS region is missing')\n sagemaker_client = boto3.client('sagemaker', sagemaker_config.region)\n _, _, sagemaker_endpoint_name = _get_sagemaker_resource_names(deployment_pb)\n\n try:\n endpoint_status_response = sagemaker_client.describe_endpoint(\n EndpointName=sagemaker_endpoint_name\n )\n except ClientError as e:\n raise _aws_client_error_to_bentoml_exception(\n e,\n f\"Failed to fetch current status of sagemaker endpoint \"\n f\"'{sagemaker_endpoint_name}'\",\n )\n\n logger.debug(\"AWS describe endpoint response: %s\", endpoint_status_response)\n endpoint_status = endpoint_status_response[\"EndpointStatus\"]\n\n service_state = ENDPOINT_STATUS_TO_STATE[endpoint_status]\n\n deployment_state = DeploymentState(\n state=service_state,\n info_json=json.dumps(endpoint_status_response, default=str),\n )\n deployment_state.timestamp.GetCurrentTime()\n\n return DescribeDeploymentResponse(\n state=deployment_state, status=Status.OK()\n )\n except BentoMLException as error:\n return DescribeDeploymentResponse(status=error.status_proto)\n", "path": "bentoml/yatai/deployment/sagemaker/operator.py" } ]
diff --git a/bentoml/yatai/deployment/sagemaker/operator.py b/bentoml/yatai/deployment/sagemaker/operator.py index 0529a8e83db..39cbfe268b9 100644 --- a/bentoml/yatai/deployment/sagemaker/operator.py +++ b/bentoml/yatai/deployment/sagemaker/operator.py @@ -97,7 +97,7 @@ def get_arn_role_from_current_aws_user(): "again" ) return arn - elif type_role[0] == "role": + elif type_role[0] in ["role", "assumed-role"]: role_response = iam_client.get_role(RoleName=type_role[1]) return role_response["Role"]["Arn"]
ros__ros_comm-1835
rosparam still uses the unsafe `yaml.load`: https://github.com/ros/ros_comm/blob/5da095d06bccbea708394b399215d8a066797266/tools/rosparam/src/rosparam/__init__.py#L371
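The linked line is in `set_param`, which passes a user-supplied YAML string to `yaml.load` with PyYAML's unsafe default loader, so a crafted value can instantiate arbitrary Python objects; the rest of the module (see `load_str` in the source below) already uses `safe_load_all`, and its custom `binary` constructor is registered on `SafeLoader` too. A minimal sketch of the kind of change the issue implies (not necessarily the exact upstream patch) is to switch that one call to `yaml.safe_load`; the demo values are illustrative only:

```python
import yaml

# An untrusted YAML string carrying a Python-object tag: the unsafe default
# loader can construct the object, while safe_load rejects it with an error.
malicious = "!!python/object/apply:os.getcwd []"
try:
    yaml.safe_load(malicious)
except yaml.YAMLError as exc:
    print("rejected by safe_load:", exc)

# Ordinary parameter values parse the same way under either loader.
print(yaml.safe_load("robot_radius: 0.25"))  # {'robot_radius': 0.25}

# Sketch of the one-line change in set_param (set_param_raw is the existing
# helper in rosparam/__init__.py; only the YAML call changes):
#     set_param_raw(param, yaml.safe_load(value), verbose=verbose)
```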
[ { "content": "# Software License Agreement (BSD License)\n#\n# Copyright (c) 2008, Willow Garage, Inc.\n# All rights reserved.\n#\n# Redistribution and use in source and binary forms, with or without\n# modification, are permitted provided that the following conditions\n# are met:\n#\n# * Redistributions of source code must retain the above copyright\n# notice, this list of conditions and the following disclaimer.\n# * Redistributions in binary form must reproduce the above\n# copyright notice, this list of conditions and the following\n# disclaimer in the documentation and/or other materials provided\n# with the distribution.\n# * Neither the name of Willow Garage, Inc. nor the names of its\n# contributors may be used to endorse or promote products derived\n# from this software without specific prior written permission.\n#\n# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n# \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\n# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS\n# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE\n# COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,\n# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,\n# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;\n# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER\n# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT\n# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN\n# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE\n# POSSIBILITY OF SUCH DAMAGE.\n#\n# Revision $Id: rosparam 1641 2008-07-28 21:39:33Z sfkwc $\n\n\"\"\"\nImplementation of the rosparam as well as a library for modifying the\nstate of the ROS Parameter Server using YAML files.\n\"\"\"\n\nfrom __future__ import print_function\n\nNAME = 'rosparam'\n\n## namespace key. Use of this in a YAML document specifies the\n## namespace of all the params. NOTE: phasing out most use of this\n## key. It's still useful in corner cases, but most of its\n## functionality can be achieved with command-line arguments.\nNS = '_ns'\n\nimport base64\nimport math\nimport os\nimport re\nimport sys\nimport socket\ntry:\n from xmlrpc.client import Binary\nexcept ImportError:\n from xmlrpclib import Binary\n\nfrom optparse import OptionParser\n\nimport yaml\n\nimport rosgraph\nfrom rosgraph.names import script_resolve_name, ns_join, get_ros_namespace, make_caller_id, make_global_ns, GLOBALNS\n\nclass RosParamException(Exception):\n \"\"\"\n rosparam base exception type\n \"\"\"\n pass\nclass RosParamIOException(RosParamException):\n \"\"\"\n Exception for communication-based (i/o) errors.\n \"\"\"\n pass\n\n# pyyaml customizations for binary and angle data\n\ndef represent_xml_binary(loader, data):\n \"\"\"\n Adds a pyyaml serializer to handle xmlrpclib.Binary objects\n \"\"\"\n data = base64.b64encode(data.data)\n return loader.represent_scalar(u'tag:yaml.org,2002:binary', data, style='|')\n\ndef represent_foo(loader, data):\n return loader.represent_scalar(u'#', data)\n \ndef construct_yaml_binary(loader, node):\n \"\"\"\n Overrides pyaml's constructor for binary data. 
Wraps binary data in\n xmlrpclib.Binary container instead of straight string\n representation.\n \"\"\"\n return Binary(loader.construct_yaml_binary(node))\n \n# register the (de)serializers with pyyaml\nyaml.add_representer(Binary,represent_xml_binary)\nyaml.add_constructor(u'tag:yaml.org,2002:binary', construct_yaml_binary)\nyaml.SafeLoader.add_constructor(u'tag:yaml.org,2002:binary', construct_yaml_binary)\n\ndef construct_angle_radians(loader, node):\n \"\"\"\n python-yaml utility for converting rad(num) into float value\n \"\"\"\n value = loader.construct_scalar(node).strip()\n exprvalue = value.replace('pi', 'math.pi')\n if exprvalue.startswith(\"rad(\"):\n exprvalue = exprvalue[4:-1]\n try:\n return float(eval(exprvalue))\n except SyntaxError as e:\n raise RosParamException(\"invalid radian expression: %s\"%value)\n\ndef construct_angle_degrees(loader, node):\n \"\"\"\n python-yaml utility for converting deg(num) into float value\n \"\"\"\n value = loader.construct_scalar(node)\n exprvalue = value\n if exprvalue.startswith(\"deg(\"):\n exprvalue = exprvalue.strip()[4:-1]\n try:\n return float(exprvalue) * math.pi / 180.0\n except ValueError:\n raise RosParamException(\"invalid degree value: %s\"%value)\n\n\n# utilities\n\ndef _get_caller_id():\n \"\"\"\n :returns: caller ID for rosparam ROS client calls, ``str``\n \"\"\"\n return make_caller_id('rosparam-%s'%os.getpid())\n\ndef print_params(params, ns):\n \"\"\"\n Print contents of param dictionary to screen\n \"\"\"\n if type(params) == dict:\n for k, v in params.items():\n if type(v) == dict:\n print_params(v, ns_join(ns, k))\n else:\n print(\"%s=%s\"%(ns_join(ns, k), v))\n else:\n print(params)\n \n# yaml processing\n\ndef load_file(filename, default_namespace=None, verbose=False):\n \"\"\"\n Load the YAML document from the specified file\n \n :param filename: name of filename, ``str``\n :param default_namespace: namespace to load filename into, ``str``\n :returns [(dict, str)...]: list of parameter dictionary and\n corresponding namespaces for each YAML document in the file\n :raises: :exc:`RosParamException`: if unable to load contents of filename\n \"\"\"\n if not filename or filename == '-':\n f = sys.stdin\n if verbose:\n print(\"reading parameters from stdin\")\n return load_str(f.read(), filename, default_namespace=default_namespace, verbose=verbose)\n else:\n if not os.path.isfile(filename):\n raise RosParamException(\"file [%s] does not exist\"%filename)\n if verbose:\n print(\"reading parameters from [%s]\"%filename)\n with open(filename, 'r') as f:\n return load_str(f.read(), filename, default_namespace=default_namespace, verbose=verbose)\n \ndef load_str(str, filename, default_namespace=None, verbose=False):\n \"\"\"\n Load the YAML document as a string\n \n :param filename: name of filename, only used for debugging, ``str``\n :param default_namespace: namespace to load filename into, ``str``\n :param str: YAML text, ``str``\n :returns: list of parameter dictionary and\n corresponding namespaces for each YAML document in the file, ``[(dict, str)...]``\n \"\"\"\n paramlist = []\n default_namespace = default_namespace or get_ros_namespace()\n for doc in yaml.safe_load_all(str):\n if NS in doc:\n ns = ns_join(default_namespace, doc.get(NS, None))\n if verbose:\n print(\"reading parameters into namespace [%s]\"%ns)\n del doc[NS]\n else:\n ns = default_namespace\n paramlist.append((doc, ns))\n return paramlist\n\n\n# DUMP/GET\n\ndef get_param_server():\n return rosgraph.Master(_get_caller_id())\n\ndef get_param(param):\n 
\"\"\"\n Download a parameter from Parameter Server\n\n :param param: parameter name to retrieve from parameter\n server. If param is a parameter namespace, entire parameter\n subtree will be downloaded, ``str``\n \"\"\"\n try:\n return get_param_server().getParam(param)\n except socket.error:\n raise RosParamIOException(\"Unable to communicate with master!\")\n \n# #698\ndef _pretty_print(value, indent=''):\n \"\"\"\n Pretty print get value\n :param value: value to print\n :param indent: indent level, used for recursive calls, ``str``\n \"\"\"\n keys = list(value.keys())\n keys.sort()\n for k in keys:\n v = value[k]\n if type(v) == dict:\n print(\"%s%s:\"%(indent, k))\n _pretty_print(v, indent+' ')\n elif type(v) == str:\n if '\\n' in v:\n print(indent+'%s: |'%k)\n for l in v.split('\\n'):\n print(indent+' '+l)\n else:\n print(\"%s%s: %s\"%(indent, k, v))\n else:\n dump = yaml.dump(v)\n # #1617\n # newer versions of python-yaml append the '...' document end\n # syntax. as YAML functions fine w/o it, and as it is\n # confusing to users who are just getting a single scalar, we\n # strip it\n if dump.endswith('\\n...\\n'):\n dump = dump[:-4]\n \n sys.stdout.write(\"%s%s: %s\"%(indent, k, dump))\n \ndef _rosparam_cmd_get_param(param, pretty=False, verbose=False):\n \"\"\"\n Download a parameter tree and print to screen\n :param param: parameter name to retrieve from Parameter\n Server. If param is a parameter namespace, entire parameter\n subtree will be downloaded, ``str``\n \"\"\"\n # yaml.dump has a \\n at the end, so use stdout.write instead of print\n if verbose:\n print(\"getting parameter [%s]\"%param)\n try:\n val = get_param(param)\n except rosgraph.masterapi.Error as e:\n raise RosParamException(str(e))\n if pretty and type(val) in [dict, str]:\n if type(val) == dict:\n _pretty_print(val)\n else:\n print(val)\n else:\n dump = yaml.dump(val)\n # #1617\n # newer versions of python-yaml append the '...' document end\n # syntax. as YAML functions fine w/o it, and as it is\n # confusing to users who are just getting a single scalar, we\n # strip it\n if dump.endswith('\\n...\\n'):\n dump = dump[:-5]\n\n # #3761 add newline in output\n sys.stdout.write(\"%s\\n\"%(dump))\n\ndef dump_params(filename, param, verbose=False):\n \"\"\"\n Download a parameter tree from the Parameter Server and store in a yaml file\n\n :param filename: name of file to save YAML representation, ``str``\n :param param: name of parameter/namespace to dump, ``str``\n :param verbose: print verbose output for debugging, ``bool``\n \"\"\"\n tree = get_param(param)\n if verbose:\n print_params(tree, param)\n if not filename:\n f = sys.stdout\n yaml.dump(tree, f)\n else:\n f = open(filename, 'w')\n try:\n yaml.dump(tree, f)\n finally:\n f.close()\n\n\ndef delete_param(param, verbose=False):\n \"\"\"\n Delete a parameter from the Parameter Server\n\n :param param: parameter name, ``str``\n :param verbose: print verbose output for debugging, ``bool``\n \"\"\"\n try:\n if param == GLOBALNS:\n # not allowed to delete the root of the tree as it must always\n # have a value. 
the equivalent command is setting the root to an\n # empty dictionary\n get_param_server().setParam(GLOBALNS, {})\n if verbose:\n print(\"deleted ENTIRE parameter server\")\n else:\n get_param_server().deleteParam(param)\n if verbose:\n print(\"deleted parameter [%s]\"%param)\n except socket.error:\n raise RosParamIOException(\"Unable to communicate with master!\")\n \n# LOAD/SET\n\ndef set_param_raw(param, value, verbose=False):\n \"\"\"\n Set param on the Parameter Server. Unlike L{set_param()}, this\n takes in a Python value to set instead of YAML.\n \n :param param: parameter name, ``str``\n :param value XmlRpcLegalValue: value to upload, ``XmlRpcLegalValue``\n \"\"\"\n if type(value) == dict:\n # #1098 changing dictionary behavior to be an update, rather\n # than replace behavior.\n for k, v in value.items():\n # dictionary keys must be non-unicode strings\n if isinstance(k, str):\n set_param_raw(ns_join(param, k), v, verbose=verbose)\n else:\n raise RosParamException(\"YAML dictionaries must have string keys. Invalid dictionary is:\\n%s\"%value)\n else:\n try:\n expected_type = long\n except NameError :\n expected_type = int\n \n if type(value) == expected_type:\n if value > sys.maxsize:\n raise RosParamException(\"Overflow: Parameter Server integers must be 32-bit signed integers:\\n\\t-%s <= value <= %s\"%(maxint - 1, maxint))\n \n try:\n get_param_server().setParam(param, value)\n except socket.error:\n raise RosParamIOException(\"Unable to communicate with master!\")\n if verbose:\n print(\"set parameter [%s] to [%s]\"%(param, value))\n\ndef set_param(param, value, verbose=False):\n \"\"\"\n Set param on the ROS parameter server using a YAML value.\n \n :param param: parameter name, ``str``\n :param value: yaml-encoded value, ``str``\n \"\"\"\n set_param_raw(param, yaml.load(value), verbose=verbose)\n\ndef upload_params(ns, values, verbose=False):\n \"\"\"\n Upload params to the Parameter Server\n :param values: key/value dictionary, where keys are parameter names and values are parameter values, ``dict``\n :param ns: namespace to load parameters into, ``str``\n \"\"\"\n if ns == '/' and not type(values) == dict:\n raise RosParamException(\"global / can only be set to a dictionary\")\n if verbose:\n print_params(values, ns)\n set_param_raw(ns, values)\n\n\n# LIST\n\ndef list_params(ns):\n \"\"\"\n Get list of parameters in ns\n\n :param ns: namespace to match, ``str``\n \"\"\"\n try:\n ns = make_global_ns(ns)\n names = get_param_server().getParamNames()\n names.sort()\n return [n for n in names if n.startswith(ns)]\n except socket.error:\n raise RosParamIOException(\"Unable to communicate with master!\")\n\n# COMMAND-LINE PARSING\n \ndef _rosparam_cmd_get_dump(cmd, argv):\n \"\"\"\n Process command line for rosparam get/dump, e.g.::\n rosparam get param\n rosparam dump file.yaml [namespace]\n\n :param cmd: command ('get' or 'dump'), ``str``\n :param argv: command-line args, ``str``\n \"\"\"\n # get and dump are equivalent functionality, just different arguments\n if cmd == 'dump':\n parser = OptionParser(usage=\"usage: %prog dump [options] file [namespace]\", prog=NAME)\n elif cmd == 'get':\n parser = OptionParser(usage=\"usage: %prog get [options] parameter\", prog=NAME) \n parser.add_option(\"-p\", dest=\"pretty\", default=False,\n action=\"store_true\", help=\"pretty print. 
WARNING: not YAML-safe\")\n\n parser.add_option(\"-v\", dest=\"verbose\", default=False,\n action=\"store_true\", help=\"turn on verbose output\")\n options, args = parser.parse_args(argv[2:])\n\n arg = None\n ns = ''\n \n if len(args) == 0:\n if cmd == 'get':\n parser.error(\"invalid arguments. Please specify a parameter name\")\n elif len(args) == 1:\n arg = args[0]\n elif len(args) == 2 and cmd == 'dump':\n arg = args[0]\n ns = args[1]\n else:\n parser.error(\"too many arguments\")\n\n if cmd == 'get':\n _rosparam_cmd_get_param(script_resolve_name(NAME, arg), pretty=options.pretty, verbose=options.verbose)\n else:\n if options.verbose:\n print(\"dumping namespace [%s] to file [%s]\"%(ns, arg))\n dump_params(arg, script_resolve_name(NAME, ns), verbose=options.verbose)\n\ndef _set_optparse_neg_args(parser, argv):\n # we don't use optparse to parse actual arguments, just options,\n # due to the fact that optparse doesn't handle negative numbers as\n # arguments. This parsing is complicated by the fact that we still\n # need to respect argument-bearing options like --textfile.\n args = []\n optparse_args = []\n skip = False\n for s in argv[2:]:\n if s.startswith('-'):\n if s in ['-t', '--textfile', '-b', '--binfile']:\n skip = True\n optparse_args.append(s)\n elif skip:\n parser.error(\"-t and --textfile options require an argument\")\n elif len(s) > 1 and ord(s[1]) >= ord('0') and ord(s[1]) <= ord('9'):\n args.append(s)\n else:\n optparse_args.append(s)\n else:\n if skip:\n skip = False\n optparse_args.append(s) \n else:\n args.append(s)\n options, _ = parser.parse_args(optparse_args)\n return options, args\n\n# TODO: break this into separate routines, has gotten too ugly to multiplex\ndef _rosparam_cmd_set_load(cmd, argv):\n \"\"\"\n Process command line for rosparam set/load, e.g.::\n rosparam load file.yaml [namespace]\n rosparam set param value\n\n :param cmd: command name, ``str``\n :param argv: command-line args, ``str``\n \"\"\"\n if cmd == 'load':\n parser = OptionParser(usage=\"usage: %prog load [options] file [namespace]\", prog=NAME)\n elif cmd == 'set':\n parser = OptionParser(usage=\"usage: %prog set [options] parameter value\", prog=NAME)\n parser.add_option(\"-t\", \"--textfile\", dest=\"text_file\", default=None,\n metavar=\"TEXT_FILE\", help=\"set parameters to contents of text file\")\n parser.add_option(\"-b\", \"--binfile\", dest=\"bin_file\", default=None,\n metavar=\"BINARY_FILE\", help=\"set parameters to contents of binary file\")\n\n parser.add_option(\"-v\", dest=\"verbose\", default=False,\n action=\"store_true\", help=\"turn on verbose output\")\n if cmd == 'set':\n options, args = _set_optparse_neg_args(parser, argv)\n if options.text_file and options.bin_file:\n parser.error(\"you may only specify one of --textfile or --binfile\")\n else:\n options, args = parser.parse_args(argv[2:])\n\n arg2 = None\n if len(args) == 0:\n if cmd == 'load':\n parser.error(\"invalid arguments. Please specify a file name or - for stdin\")\n elif cmd == 'set':\n parser.error(\"invalid arguments. Please specify a parameter name\")\n elif len(args) == 1:\n arg = args[0]\n if cmd == 'set' and not (options.text_file or options.bin_file):\n parser.error(\"invalid arguments. 
Please specify a parameter value\")\n elif len(args) == 2:\n arg = args[0]\n arg2 = args[1]\n else:\n parser.error(\"too many arguments\")\n\n if cmd == 'set':\n name = script_resolve_name(NAME, arg)\n # #2647\n if options.text_file:\n if not os.path.isfile(options.text_file):\n parser.error(\"file '%s' does not exist\"%(options.text_file))\n with open(options.text_file) as f:\n arg2 = f.read()\n set_param_raw(name, arg2, verbose=options.verbose) \n elif options.bin_file:\n with open(options.bin_file, 'rb') as f:\n arg2 = Binary(f.read())\n set_param_raw(name, arg2, verbose=options.verbose) \n else:\n # #2237: the empty string is really hard to specify on the\n # command-line due to bash quoting rules. We cheat here and\n # let an empty Python string be an empty YAML string (instead\n # of YAML null, which has no meaning to the Parameter Server\n # anyway).\n if arg2 == '':\n arg2 = '!!str'\n set_param(name, arg2, verbose=options.verbose)\n else:\n paramlist = load_file(arg, default_namespace=script_resolve_name(NAME, arg2), verbose=options.verbose)\n for params,ns in paramlist:\n upload_params(ns, params, verbose=options.verbose)\n\ndef _rosparam_cmd_list(argv):\n \"\"\"\n Process command line for rosparam set/load, e.g.::\n rosparam load file.yaml [namespace]\n rosparam set param value\n\n :param argv: command-line args, ``str``\n \"\"\"\n parser = OptionParser(usage=\"usage: %prog list [namespace]\", prog=NAME)\n options, args = parser.parse_args(argv[2:])\n\n ns = GLOBALNS\n if len(args) == 1:\n ns = script_resolve_name(NAME, args[0])\n elif len(args) == 2:\n parser.error(\"too many arguments\")\n\n print('\\n'.join(list_params(ns)))\n\n\ndef _rosparam_cmd_delete(argv):\n \"\"\"\n Process command line for rosparam delete, e.g.::\n rosparam delete param \n\n :param cmd: command name, ``str``\n :param argv: command-line args, ``str``\n \"\"\"\n parser = OptionParser(usage=\"usage: %prog delete [options] parameter\", prog=NAME)\n parser.add_option(\"-v\", dest=\"verbose\", default=False,\n action=\"store_true\", help=\"turn on verbose output\")\n options, args = parser.parse_args(argv[2:])\n\n arg2 = None\n if len(args) == 0:\n parser.error(\"invalid arguments. Please specify a parameter name\")\n elif len(args) == 1:\n arg = args[0]\n else:\n parser.error(\"too many arguments\")\n\n try:\n delete_param(script_resolve_name(NAME, arg), verbose=options.verbose)\n except rosgraph.masterapi.Error as e:\n raise RosParamException(str(e))\n\ndef _fullusage():\n \"\"\"\n Prints rosparam usage\n \"\"\"\n print(\"\"\"rosparam is a command-line tool for getting, setting, and deleting parameters from the ROS Parameter Server.\n\nCommands:\n\\trosparam set\\tset parameter\n\\trosparam get\\tget parameter\n\\trosparam load\\tload parameters from file\n\\trosparam dump\\tdump parameters to file\n\\trosparam delete\\tdelete parameter\n\\trosparam list\\tlist parameter names\n\"\"\")\n sys.exit(0)\n\ndef yamlmain(argv=None):\n \"\"\"\n Command-line main routine. 
Loads in one or more input files\n \n :param argv: command-line arguments or None to use sys.argv, ``[str]``\n \"\"\"\n if argv is None:\n argv = sys.argv\n if len(argv) == 1:\n _fullusage()\n try:\n command = argv[1]\n if command in ['get', 'dump']:\n _rosparam_cmd_get_dump(command, argv)\n elif command in ['set', 'load']:\n _rosparam_cmd_set_load(command, argv)\n elif command in ['delete']:\n _rosparam_cmd_delete(argv)\n elif command == 'list':\n _rosparam_cmd_list(argv)\n else:\n _fullusage()\n except RosParamException as e:\n print(\"ERROR: \"+str(e), file=sys.stderr)\n sys.exit(1)\n\n# YAML configuration. Doxygen does not like these being higher up in the code\n\nyaml.add_constructor(u'!radians', construct_angle_radians)\nyaml.add_constructor(u'!degrees', construct_angle_degrees)\nyaml.SafeLoader.add_constructor(u'!radians', construct_angle_radians)\nyaml.SafeLoader.add_constructor(u'!degrees', construct_angle_degrees)\n\n# allow both !degrees 180, !radians 2*pi\npattern = re.compile(r'^deg\\([^\\)]*\\)$')\nyaml.add_implicit_resolver(u'!degrees', pattern, first=\"deg(\")\nyaml.SafeLoader.add_implicit_resolver(u'!degrees', pattern, first=\"deg(\")\npattern = re.compile(r'^rad\\([^\\)]*\\)$')\nyaml.add_implicit_resolver(u'!radians', pattern, first=\"rad(\")\nyaml.SafeLoader.add_implicit_resolver(u'!radians', pattern, first=\"rad(\")\n\n", "path": "tools/rosparam/src/rosparam/__init__.py" } ]
[ { "content": "# Software License Agreement (BSD License)\n#\n# Copyright (c) 2008, Willow Garage, Inc.\n# All rights reserved.\n#\n# Redistribution and use in source and binary forms, with or without\n# modification, are permitted provided that the following conditions\n# are met:\n#\n# * Redistributions of source code must retain the above copyright\n# notice, this list of conditions and the following disclaimer.\n# * Redistributions in binary form must reproduce the above\n# copyright notice, this list of conditions and the following\n# disclaimer in the documentation and/or other materials provided\n# with the distribution.\n# * Neither the name of Willow Garage, Inc. nor the names of its\n# contributors may be used to endorse or promote products derived\n# from this software without specific prior written permission.\n#\n# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n# \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\n# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS\n# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE\n# COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,\n# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,\n# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;\n# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER\n# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT\n# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN\n# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE\n# POSSIBILITY OF SUCH DAMAGE.\n#\n# Revision $Id: rosparam 1641 2008-07-28 21:39:33Z sfkwc $\n\n\"\"\"\nImplementation of the rosparam as well as a library for modifying the\nstate of the ROS Parameter Server using YAML files.\n\"\"\"\n\nfrom __future__ import print_function\n\nNAME = 'rosparam'\n\n## namespace key. Use of this in a YAML document specifies the\n## namespace of all the params. NOTE: phasing out most use of this\n## key. It's still useful in corner cases, but most of its\n## functionality can be achieved with command-line arguments.\nNS = '_ns'\n\nimport base64\nimport math\nimport os\nimport re\nimport sys\nimport socket\ntry:\n from xmlrpc.client import Binary\nexcept ImportError:\n from xmlrpclib import Binary\n\nfrom optparse import OptionParser\n\nimport yaml\n\nimport rosgraph\nfrom rosgraph.names import script_resolve_name, ns_join, get_ros_namespace, make_caller_id, make_global_ns, GLOBALNS\n\nclass RosParamException(Exception):\n \"\"\"\n rosparam base exception type\n \"\"\"\n pass\nclass RosParamIOException(RosParamException):\n \"\"\"\n Exception for communication-based (i/o) errors.\n \"\"\"\n pass\n\n# pyyaml customizations for binary and angle data\n\ndef represent_xml_binary(loader, data):\n \"\"\"\n Adds a pyyaml serializer to handle xmlrpclib.Binary objects\n \"\"\"\n data = base64.b64encode(data.data)\n return loader.represent_scalar(u'tag:yaml.org,2002:binary', data, style='|')\n\ndef represent_foo(loader, data):\n return loader.represent_scalar(u'#', data)\n \ndef construct_yaml_binary(loader, node):\n \"\"\"\n Overrides pyaml's constructor for binary data. 
Wraps binary data in\n xmlrpclib.Binary container instead of straight string\n representation.\n \"\"\"\n return Binary(loader.construct_yaml_binary(node))\n \n# register the (de)serializers with pyyaml\nyaml.add_representer(Binary,represent_xml_binary)\nyaml.add_constructor(u'tag:yaml.org,2002:binary', construct_yaml_binary)\nyaml.SafeLoader.add_constructor(u'tag:yaml.org,2002:binary', construct_yaml_binary)\n\ndef construct_angle_radians(loader, node):\n \"\"\"\n python-yaml utility for converting rad(num) into float value\n \"\"\"\n value = loader.construct_scalar(node).strip()\n exprvalue = value.replace('pi', 'math.pi')\n if exprvalue.startswith(\"rad(\"):\n exprvalue = exprvalue[4:-1]\n try:\n return float(eval(exprvalue))\n except SyntaxError as e:\n raise RosParamException(\"invalid radian expression: %s\"%value)\n\ndef construct_angle_degrees(loader, node):\n \"\"\"\n python-yaml utility for converting deg(num) into float value\n \"\"\"\n value = loader.construct_scalar(node)\n exprvalue = value\n if exprvalue.startswith(\"deg(\"):\n exprvalue = exprvalue.strip()[4:-1]\n try:\n return float(exprvalue) * math.pi / 180.0\n except ValueError:\n raise RosParamException(\"invalid degree value: %s\"%value)\n\n\n# utilities\n\ndef _get_caller_id():\n \"\"\"\n :returns: caller ID for rosparam ROS client calls, ``str``\n \"\"\"\n return make_caller_id('rosparam-%s'%os.getpid())\n\ndef print_params(params, ns):\n \"\"\"\n Print contents of param dictionary to screen\n \"\"\"\n if type(params) == dict:\n for k, v in params.items():\n if type(v) == dict:\n print_params(v, ns_join(ns, k))\n else:\n print(\"%s=%s\"%(ns_join(ns, k), v))\n else:\n print(params)\n \n# yaml processing\n\ndef load_file(filename, default_namespace=None, verbose=False):\n \"\"\"\n Load the YAML document from the specified file\n \n :param filename: name of filename, ``str``\n :param default_namespace: namespace to load filename into, ``str``\n :returns [(dict, str)...]: list of parameter dictionary and\n corresponding namespaces for each YAML document in the file\n :raises: :exc:`RosParamException`: if unable to load contents of filename\n \"\"\"\n if not filename or filename == '-':\n f = sys.stdin\n if verbose:\n print(\"reading parameters from stdin\")\n return load_str(f.read(), filename, default_namespace=default_namespace, verbose=verbose)\n else:\n if not os.path.isfile(filename):\n raise RosParamException(\"file [%s] does not exist\"%filename)\n if verbose:\n print(\"reading parameters from [%s]\"%filename)\n with open(filename, 'r') as f:\n return load_str(f.read(), filename, default_namespace=default_namespace, verbose=verbose)\n \ndef load_str(str, filename, default_namespace=None, verbose=False):\n \"\"\"\n Load the YAML document as a string\n \n :param filename: name of filename, only used for debugging, ``str``\n :param default_namespace: namespace to load filename into, ``str``\n :param str: YAML text, ``str``\n :returns: list of parameter dictionary and\n corresponding namespaces for each YAML document in the file, ``[(dict, str)...]``\n \"\"\"\n paramlist = []\n default_namespace = default_namespace or get_ros_namespace()\n for doc in yaml.safe_load_all(str):\n if NS in doc:\n ns = ns_join(default_namespace, doc.get(NS, None))\n if verbose:\n print(\"reading parameters into namespace [%s]\"%ns)\n del doc[NS]\n else:\n ns = default_namespace\n paramlist.append((doc, ns))\n return paramlist\n\n\n# DUMP/GET\n\ndef get_param_server():\n return rosgraph.Master(_get_caller_id())\n\ndef get_param(param):\n 
\"\"\"\n Download a parameter from Parameter Server\n\n :param param: parameter name to retrieve from parameter\n server. If param is a parameter namespace, entire parameter\n subtree will be downloaded, ``str``\n \"\"\"\n try:\n return get_param_server().getParam(param)\n except socket.error:\n raise RosParamIOException(\"Unable to communicate with master!\")\n \n# #698\ndef _pretty_print(value, indent=''):\n \"\"\"\n Pretty print get value\n :param value: value to print\n :param indent: indent level, used for recursive calls, ``str``\n \"\"\"\n keys = list(value.keys())\n keys.sort()\n for k in keys:\n v = value[k]\n if type(v) == dict:\n print(\"%s%s:\"%(indent, k))\n _pretty_print(v, indent+' ')\n elif type(v) == str:\n if '\\n' in v:\n print(indent+'%s: |'%k)\n for l in v.split('\\n'):\n print(indent+' '+l)\n else:\n print(\"%s%s: %s\"%(indent, k, v))\n else:\n dump = yaml.dump(v)\n # #1617\n # newer versions of python-yaml append the '...' document end\n # syntax. as YAML functions fine w/o it, and as it is\n # confusing to users who are just getting a single scalar, we\n # strip it\n if dump.endswith('\\n...\\n'):\n dump = dump[:-4]\n \n sys.stdout.write(\"%s%s: %s\"%(indent, k, dump))\n \ndef _rosparam_cmd_get_param(param, pretty=False, verbose=False):\n \"\"\"\n Download a parameter tree and print to screen\n :param param: parameter name to retrieve from Parameter\n Server. If param is a parameter namespace, entire parameter\n subtree will be downloaded, ``str``\n \"\"\"\n # yaml.dump has a \\n at the end, so use stdout.write instead of print\n if verbose:\n print(\"getting parameter [%s]\"%param)\n try:\n val = get_param(param)\n except rosgraph.masterapi.Error as e:\n raise RosParamException(str(e))\n if pretty and type(val) in [dict, str]:\n if type(val) == dict:\n _pretty_print(val)\n else:\n print(val)\n else:\n dump = yaml.dump(val)\n # #1617\n # newer versions of python-yaml append the '...' document end\n # syntax. as YAML functions fine w/o it, and as it is\n # confusing to users who are just getting a single scalar, we\n # strip it\n if dump.endswith('\\n...\\n'):\n dump = dump[:-5]\n\n # #3761 add newline in output\n sys.stdout.write(\"%s\\n\"%(dump))\n\ndef dump_params(filename, param, verbose=False):\n \"\"\"\n Download a parameter tree from the Parameter Server and store in a yaml file\n\n :param filename: name of file to save YAML representation, ``str``\n :param param: name of parameter/namespace to dump, ``str``\n :param verbose: print verbose output for debugging, ``bool``\n \"\"\"\n tree = get_param(param)\n if verbose:\n print_params(tree, param)\n if not filename:\n f = sys.stdout\n yaml.dump(tree, f)\n else:\n f = open(filename, 'w')\n try:\n yaml.dump(tree, f)\n finally:\n f.close()\n\n\ndef delete_param(param, verbose=False):\n \"\"\"\n Delete a parameter from the Parameter Server\n\n :param param: parameter name, ``str``\n :param verbose: print verbose output for debugging, ``bool``\n \"\"\"\n try:\n if param == GLOBALNS:\n # not allowed to delete the root of the tree as it must always\n # have a value. 
the equivalent command is setting the root to an\n # empty dictionary\n get_param_server().setParam(GLOBALNS, {})\n if verbose:\n print(\"deleted ENTIRE parameter server\")\n else:\n get_param_server().deleteParam(param)\n if verbose:\n print(\"deleted parameter [%s]\"%param)\n except socket.error:\n raise RosParamIOException(\"Unable to communicate with master!\")\n \n# LOAD/SET\n\ndef set_param_raw(param, value, verbose=False):\n \"\"\"\n Set param on the Parameter Server. Unlike L{set_param()}, this\n takes in a Python value to set instead of YAML.\n \n :param param: parameter name, ``str``\n :param value XmlRpcLegalValue: value to upload, ``XmlRpcLegalValue``\n \"\"\"\n if type(value) == dict:\n # #1098 changing dictionary behavior to be an update, rather\n # than replace behavior.\n for k, v in value.items():\n # dictionary keys must be non-unicode strings\n if isinstance(k, str):\n set_param_raw(ns_join(param, k), v, verbose=verbose)\n else:\n raise RosParamException(\"YAML dictionaries must have string keys. Invalid dictionary is:\\n%s\"%value)\n else:\n try:\n expected_type = long\n except NameError :\n expected_type = int\n \n if type(value) == expected_type:\n if value > sys.maxsize:\n raise RosParamException(\"Overflow: Parameter Server integers must be 32-bit signed integers:\\n\\t-%s <= value <= %s\"%(maxint - 1, maxint))\n \n try:\n get_param_server().setParam(param, value)\n except socket.error:\n raise RosParamIOException(\"Unable to communicate with master!\")\n if verbose:\n print(\"set parameter [%s] to [%s]\"%(param, value))\n\ndef set_param(param, value, verbose=False):\n \"\"\"\n Set param on the ROS parameter server using a YAML value.\n \n :param param: parameter name, ``str``\n :param value: yaml-encoded value, ``str``\n \"\"\"\n set_param_raw(param, yaml.safe_load(value), verbose=verbose)\n\ndef upload_params(ns, values, verbose=False):\n \"\"\"\n Upload params to the Parameter Server\n :param values: key/value dictionary, where keys are parameter names and values are parameter values, ``dict``\n :param ns: namespace to load parameters into, ``str``\n \"\"\"\n if ns == '/' and not type(values) == dict:\n raise RosParamException(\"global / can only be set to a dictionary\")\n if verbose:\n print_params(values, ns)\n set_param_raw(ns, values)\n\n\n# LIST\n\ndef list_params(ns):\n \"\"\"\n Get list of parameters in ns\n\n :param ns: namespace to match, ``str``\n \"\"\"\n try:\n ns = make_global_ns(ns)\n names = get_param_server().getParamNames()\n names.sort()\n return [n for n in names if n.startswith(ns)]\n except socket.error:\n raise RosParamIOException(\"Unable to communicate with master!\")\n\n# COMMAND-LINE PARSING\n \ndef _rosparam_cmd_get_dump(cmd, argv):\n \"\"\"\n Process command line for rosparam get/dump, e.g.::\n rosparam get param\n rosparam dump file.yaml [namespace]\n\n :param cmd: command ('get' or 'dump'), ``str``\n :param argv: command-line args, ``str``\n \"\"\"\n # get and dump are equivalent functionality, just different arguments\n if cmd == 'dump':\n parser = OptionParser(usage=\"usage: %prog dump [options] file [namespace]\", prog=NAME)\n elif cmd == 'get':\n parser = OptionParser(usage=\"usage: %prog get [options] parameter\", prog=NAME) \n parser.add_option(\"-p\", dest=\"pretty\", default=False,\n action=\"store_true\", help=\"pretty print. 
WARNING: not YAML-safe\")\n\n parser.add_option(\"-v\", dest=\"verbose\", default=False,\n action=\"store_true\", help=\"turn on verbose output\")\n options, args = parser.parse_args(argv[2:])\n\n arg = None\n ns = ''\n \n if len(args) == 0:\n if cmd == 'get':\n parser.error(\"invalid arguments. Please specify a parameter name\")\n elif len(args) == 1:\n arg = args[0]\n elif len(args) == 2 and cmd == 'dump':\n arg = args[0]\n ns = args[1]\n else:\n parser.error(\"too many arguments\")\n\n if cmd == 'get':\n _rosparam_cmd_get_param(script_resolve_name(NAME, arg), pretty=options.pretty, verbose=options.verbose)\n else:\n if options.verbose:\n print(\"dumping namespace [%s] to file [%s]\"%(ns, arg))\n dump_params(arg, script_resolve_name(NAME, ns), verbose=options.verbose)\n\ndef _set_optparse_neg_args(parser, argv):\n # we don't use optparse to parse actual arguments, just options,\n # due to the fact that optparse doesn't handle negative numbers as\n # arguments. This parsing is complicated by the fact that we still\n # need to respect argument-bearing options like --textfile.\n args = []\n optparse_args = []\n skip = False\n for s in argv[2:]:\n if s.startswith('-'):\n if s in ['-t', '--textfile', '-b', '--binfile']:\n skip = True\n optparse_args.append(s)\n elif skip:\n parser.error(\"-t and --textfile options require an argument\")\n elif len(s) > 1 and ord(s[1]) >= ord('0') and ord(s[1]) <= ord('9'):\n args.append(s)\n else:\n optparse_args.append(s)\n else:\n if skip:\n skip = False\n optparse_args.append(s) \n else:\n args.append(s)\n options, _ = parser.parse_args(optparse_args)\n return options, args\n\n# TODO: break this into separate routines, has gotten too ugly to multiplex\ndef _rosparam_cmd_set_load(cmd, argv):\n \"\"\"\n Process command line for rosparam set/load, e.g.::\n rosparam load file.yaml [namespace]\n rosparam set param value\n\n :param cmd: command name, ``str``\n :param argv: command-line args, ``str``\n \"\"\"\n if cmd == 'load':\n parser = OptionParser(usage=\"usage: %prog load [options] file [namespace]\", prog=NAME)\n elif cmd == 'set':\n parser = OptionParser(usage=\"usage: %prog set [options] parameter value\", prog=NAME)\n parser.add_option(\"-t\", \"--textfile\", dest=\"text_file\", default=None,\n metavar=\"TEXT_FILE\", help=\"set parameters to contents of text file\")\n parser.add_option(\"-b\", \"--binfile\", dest=\"bin_file\", default=None,\n metavar=\"BINARY_FILE\", help=\"set parameters to contents of binary file\")\n\n parser.add_option(\"-v\", dest=\"verbose\", default=False,\n action=\"store_true\", help=\"turn on verbose output\")\n if cmd == 'set':\n options, args = _set_optparse_neg_args(parser, argv)\n if options.text_file and options.bin_file:\n parser.error(\"you may only specify one of --textfile or --binfile\")\n else:\n options, args = parser.parse_args(argv[2:])\n\n arg2 = None\n if len(args) == 0:\n if cmd == 'load':\n parser.error(\"invalid arguments. Please specify a file name or - for stdin\")\n elif cmd == 'set':\n parser.error(\"invalid arguments. Please specify a parameter name\")\n elif len(args) == 1:\n arg = args[0]\n if cmd == 'set' and not (options.text_file or options.bin_file):\n parser.error(\"invalid arguments. 
Please specify a parameter value\")\n elif len(args) == 2:\n arg = args[0]\n arg2 = args[1]\n else:\n parser.error(\"too many arguments\")\n\n if cmd == 'set':\n name = script_resolve_name(NAME, arg)\n # #2647\n if options.text_file:\n if not os.path.isfile(options.text_file):\n parser.error(\"file '%s' does not exist\"%(options.text_file))\n with open(options.text_file) as f:\n arg2 = f.read()\n set_param_raw(name, arg2, verbose=options.verbose) \n elif options.bin_file:\n with open(options.bin_file, 'rb') as f:\n arg2 = Binary(f.read())\n set_param_raw(name, arg2, verbose=options.verbose) \n else:\n # #2237: the empty string is really hard to specify on the\n # command-line due to bash quoting rules. We cheat here and\n # let an empty Python string be an empty YAML string (instead\n # of YAML null, which has no meaning to the Parameter Server\n # anyway).\n if arg2 == '':\n arg2 = '!!str'\n set_param(name, arg2, verbose=options.verbose)\n else:\n paramlist = load_file(arg, default_namespace=script_resolve_name(NAME, arg2), verbose=options.verbose)\n for params,ns in paramlist:\n upload_params(ns, params, verbose=options.verbose)\n\ndef _rosparam_cmd_list(argv):\n \"\"\"\n Process command line for rosparam set/load, e.g.::\n rosparam load file.yaml [namespace]\n rosparam set param value\n\n :param argv: command-line args, ``str``\n \"\"\"\n parser = OptionParser(usage=\"usage: %prog list [namespace]\", prog=NAME)\n options, args = parser.parse_args(argv[2:])\n\n ns = GLOBALNS\n if len(args) == 1:\n ns = script_resolve_name(NAME, args[0])\n elif len(args) == 2:\n parser.error(\"too many arguments\")\n\n print('\\n'.join(list_params(ns)))\n\n\ndef _rosparam_cmd_delete(argv):\n \"\"\"\n Process command line for rosparam delete, e.g.::\n rosparam delete param \n\n :param cmd: command name, ``str``\n :param argv: command-line args, ``str``\n \"\"\"\n parser = OptionParser(usage=\"usage: %prog delete [options] parameter\", prog=NAME)\n parser.add_option(\"-v\", dest=\"verbose\", default=False,\n action=\"store_true\", help=\"turn on verbose output\")\n options, args = parser.parse_args(argv[2:])\n\n arg2 = None\n if len(args) == 0:\n parser.error(\"invalid arguments. Please specify a parameter name\")\n elif len(args) == 1:\n arg = args[0]\n else:\n parser.error(\"too many arguments\")\n\n try:\n delete_param(script_resolve_name(NAME, arg), verbose=options.verbose)\n except rosgraph.masterapi.Error as e:\n raise RosParamException(str(e))\n\ndef _fullusage():\n \"\"\"\n Prints rosparam usage\n \"\"\"\n print(\"\"\"rosparam is a command-line tool for getting, setting, and deleting parameters from the ROS Parameter Server.\n\nCommands:\n\\trosparam set\\tset parameter\n\\trosparam get\\tget parameter\n\\trosparam load\\tload parameters from file\n\\trosparam dump\\tdump parameters to file\n\\trosparam delete\\tdelete parameter\n\\trosparam list\\tlist parameter names\n\"\"\")\n sys.exit(0)\n\ndef yamlmain(argv=None):\n \"\"\"\n Command-line main routine. 
Loads in one or more input files\n \n :param argv: command-line arguments or None to use sys.argv, ``[str]``\n \"\"\"\n if argv is None:\n argv = sys.argv\n if len(argv) == 1:\n _fullusage()\n try:\n command = argv[1]\n if command in ['get', 'dump']:\n _rosparam_cmd_get_dump(command, argv)\n elif command in ['set', 'load']:\n _rosparam_cmd_set_load(command, argv)\n elif command in ['delete']:\n _rosparam_cmd_delete(argv)\n elif command == 'list':\n _rosparam_cmd_list(argv)\n else:\n _fullusage()\n except RosParamException as e:\n print(\"ERROR: \"+str(e), file=sys.stderr)\n sys.exit(1)\n\n# YAML configuration. Doxygen does not like these being higher up in the code\n\nyaml.add_constructor(u'!radians', construct_angle_radians)\nyaml.add_constructor(u'!degrees', construct_angle_degrees)\nyaml.SafeLoader.add_constructor(u'!radians', construct_angle_radians)\nyaml.SafeLoader.add_constructor(u'!degrees', construct_angle_degrees)\n\n# allow both !degrees 180, !radians 2*pi\npattern = re.compile(r'^deg\\([^\\)]*\\)$')\nyaml.add_implicit_resolver(u'!degrees', pattern, first=\"deg(\")\nyaml.SafeLoader.add_implicit_resolver(u'!degrees', pattern, first=\"deg(\")\npattern = re.compile(r'^rad\\([^\\)]*\\)$')\nyaml.add_implicit_resolver(u'!radians', pattern, first=\"rad(\")\nyaml.SafeLoader.add_implicit_resolver(u'!radians', pattern, first=\"rad(\")\n\n", "path": "tools/rosparam/src/rosparam/__init__.py" } ]
diff --git a/tools/rosparam/src/rosparam/__init__.py b/tools/rosparam/src/rosparam/__init__.py
index 3279ab97d5..fd8b0569f3 100644
--- a/tools/rosparam/src/rosparam/__init__.py
+++ b/tools/rosparam/src/rosparam/__init__.py
@@ -368,7 +368,7 @@ def set_param(param, value, verbose=False):
     :param param: parameter name, ``str``
     :param value: yaml-encoded value, ``str``
     """
-    set_param_raw(param, yaml.load(value), verbose=verbose)
+    set_param_raw(param, yaml.safe_load(value), verbose=verbose)
 
 def upload_params(ns, values, verbose=False):
     """
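A brief aside on the patch above (an added illustration, not part of the rosparam sources): `yaml.safe_load` only builds plain scalars, lists and dicts, whereas the default loader will construct arbitrary Python objects from tagged YAML. Because the module registers its binary, `!radians` and `!degrees` constructors on `yaml.SafeLoader` as well as on the default loader, those custom tags keep working after the switch.

```python
import yaml

doc = "!!python/tuple [1, 2]"

# The default (unsafe) loader happily builds Python objects from tagged YAML:
print(yaml.load(doc, Loader=yaml.Loader))   # -> (1, 2)

# safe_load rejects such tags and only produces plain YAML types:
try:
    yaml.safe_load(doc)
except yaml.constructor.ConstructorError as exc:
    print("rejected:", exc)
```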
mitmproxy__mitmproxy-2793
MITM Proxy Crashes on when editing a "form" Stack trace: ``` Traceback (most recent call last): File "/Users/ograff/mitmproxy/mitmproxy/tools/console/master.py", line 202, in run self.loop.run() File "/Users/ograff/mitmproxy/venv/lib/python3.6/site-packages/urwid/main_loop.py", line 278, in run self._run() File "/Users/ograff/mitmproxy/venv/lib/python3.6/site-packages/urwid/main_loop.py", line 376, in _run self.event_loop.run() File "/Users/ograff/mitmproxy/venv/lib/python3.6/site-packages/urwid/main_loop.py", line 682, in run self._loop() File "/Users/ograff/mitmproxy/venv/lib/python3.6/site-packages/urwid/main_loop.py", line 719, in _loop self._watch_files[fd]() File "/Users/ograff/mitmproxy/venv/lib/python3.6/site-packages/urwid/raw_display.py", line 393, in <lambda> event_loop, callback, self.get_available_raw_input()) File "/Users/ograff/mitmproxy/venv/lib/python3.6/site-packages/urwid/raw_display.py", line 493, in parse_input callback(processed, processed_codes) File "/Users/ograff/mitmproxy/venv/lib/python3.6/site-packages/urwid/main_loop.py", line 403, in _update self.process_input(keys) File "/Users/ograff/mitmproxy/venv/lib/python3.6/site-packages/urwid/main_loop.py", line 503, in process_input k = self._topmost_widget.keypress(self.screen_size, k) File "/Users/ograff/mitmproxy/mitmproxy/tools/console/window.py", line 274, in keypress k = fs.keypress(size, k) File "/Users/ograff/mitmproxy/venv/lib/python3.6/site-packages/urwid/container.py", line 592, in keypress *self.calculate_padding_filler(size, True)), key) File "/Users/ograff/mitmproxy/mitmproxy/tools/console/overlay.py", line 117, in keypress key = self.master.keymap.handle("chooser", key) File "/Users/ograff/mitmproxy/mitmproxy/tools/console/keymap.py", line 123, in handle return self.executor(b.command) File "/Users/ograff/mitmproxy/mitmproxy/tools/console/commandeditor.py", line 24, in __call__ ret = self.master.commands.call(cmd) File "/Users/ograff/mitmproxy/mitmproxy/command.py", line 144, in call return self.call_args(parts[0], parts[1:]) File "/Users/ograff/mitmproxy/mitmproxy/command.py", line 135, in call_args return self.commands[path].call(args) File "/Users/ograff/mitmproxy/mitmproxy/command.py", line 106, in call ret = self.func(*pargs) File "/Users/ograff/mitmproxy/mitmproxy/command.py", line 197, in wrapper return function(*args, **kwargs) File "/Users/ograff/mitmproxy/mitmproxy/tools/console/consoleaddons.py", line 125, in nav_select self.master.inject_key("m_select") File "/Users/ograff/mitmproxy/mitmproxy/tools/console/master.py", line 178, in inject_key self.loop.process_input([key]) File "/Users/ograff/mitmproxy/venv/lib/python3.6/site-packages/urwid/main_loop.py", line 503, in process_input k = self._topmost_widget.keypress(self.screen_size, k) File "/Users/ograff/mitmproxy/mitmproxy/tools/console/window.py", line 274, in keypress k = fs.keypress(size, k) File "/Users/ograff/mitmproxy/venv/lib/python3.6/site-packages/urwid/container.py", line 592, in keypress *self.calculate_padding_filler(size, True)), key) File "/Users/ograff/mitmproxy/mitmproxy/tools/console/overlay.py", line 120, in keypress signals.pop_view_state.send(self) File "/Users/ograff/mitmproxy/venv/lib/python3.6/site-packages/blinker/base.py", line 267, in send for receiver in self.receivers_for(sender)] File "/Users/ograff/mitmproxy/venv/lib/python3.6/site-packages/blinker/base.py", line 267, in <listcomp> for receiver in self.receivers_for(sender)] File "/Users/ograff/mitmproxy/mitmproxy/tools/console/window.py", line 207, in pop if 
self.focus_stack().pop(): File "/Users/ograff/mitmproxy/mitmproxy/tools/console/window.py", line 81, in pop self.call("layout_popping") File "/Users/ograff/mitmproxy/mitmproxy/tools/console/window.py", line 93, in call getattr(self.top_window(), name)(*args, **kwargs) File "/Users/ograff/mitmproxy/mitmproxy/tools/console/grideditor/base.py", line 463, in layout_popping self.call(self._w, "layout_popping") File "/Users/ograff/mitmproxy/mitmproxy/tools/console/grideditor/base.py", line 441, in call f(*args, **kwargs) File "/Users/ograff/mitmproxy/mitmproxy/tools/console/grideditor/base.py", line 313, in layout_popping self.callback(self.data_out(res), *self.cb_args, **self.cb_kwargs) File "/Users/ograff/mitmproxy/mitmproxy/tools/console/grideditor/base.py", line 456, in set_data_update self.set_data(vals, flow) File "/Users/ograff/mitmproxy/mitmproxy/tools/console/grideditor/editors.py", line 64, in set_data flow.request.urlencoded_form = vals File "/Users/ograff/mitmproxy/mitmproxy/net/http/request.py", line 462, in urlencoded_form self._set_urlencoded_form(value) File "/Users/ograff/mitmproxy/mitmproxy/net/http/request.py", line 444, in _set_urlencoded_form self.content = mitmproxy.net.http.url.encode(form_data, self.content.decode()).encode() File "/Users/ograff/mitmproxy/mitmproxy/net/http/url.py", line 81, in encode if encoded[-1] == '=': IndexError: string index out of range ``` mitmproxy has crashed! Please lodge a bug report at: https://github.com/mitmproxy/mitmproxy Shutting down... ##### Steps to reproduce the problem: 1. Enter a flow 2. Press "e" 3. Select "form" ##### Any other comments? What have you tried so far? `encoded` is an empty string but probably shouldn't be. Changing `mitmproxy/net/http/url.py` to just check for an empty string and not index into it if its empty results in an empty request body after exiting the editor. ##### System information ``` Mitmproxy version: 3.0.0 (2.0.0dev0631-0x30927468) Python version: 3.6.0 Platform: Darwin-16.5.0-x86_64-i386-64bit SSL version: OpenSSL 1.1.0f 25 May 2017 Mac version: 10.12.4 ('', '', '') x86_64 ```
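A minimal reproduction of the reported crash, based on the `url.encode` implementation included in the file below (a hypothetical interactive session, not taken from the report):

```python
from mitmproxy.net.http import url

# Leaving the form editor with no rows hands encode() an empty list. Any
# `similar_to` body that contains a parameter without '=' sets
# remove_trailing_equal=True, and the pre-patch code then indexes into the
# empty encoded string:
url.encode([], similar_to="justatext")
# IndexError: string index out of range   (raised at `if encoded[-1] == '=':`)
```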
[ { "content": "import urllib.parse\nfrom typing import Sequence\nfrom typing import Tuple\n\nfrom mitmproxy.net import check\n\n\ndef parse(url):\n \"\"\"\n URL-parsing function that checks that\n - port is an integer 0-65535\n - host is a valid IDNA-encoded hostname with no null-bytes\n - path is valid ASCII\n\n Args:\n A URL (as bytes or as unicode)\n\n Returns:\n A (scheme, host, port, path) tuple\n\n Raises:\n ValueError, if the URL is not properly formatted.\n \"\"\"\n parsed = urllib.parse.urlparse(url)\n\n if not parsed.hostname:\n raise ValueError(\"No hostname given\")\n\n if isinstance(url, bytes):\n host = parsed.hostname\n\n # this should not raise a ValueError,\n # but we try to be very forgiving here and accept just everything.\n else:\n host = parsed.hostname.encode(\"idna\")\n if isinstance(parsed, urllib.parse.ParseResult):\n parsed = parsed.encode(\"ascii\")\n\n port = parsed.port # Returns None if port number invalid in Py3.5. Will throw ValueError in Py3.6\n if not port:\n port = 443 if parsed.scheme == b\"https\" else 80\n\n full_path = urllib.parse.urlunparse(\n (b\"\", b\"\", parsed.path, parsed.params, parsed.query, parsed.fragment)\n )\n if not full_path.startswith(b\"/\"):\n full_path = b\"/\" + full_path\n\n if not check.is_valid_host(host):\n raise ValueError(\"Invalid Host\")\n\n return parsed.scheme, host, port, full_path\n\n\ndef unparse(scheme, host, port, path=\"\"):\n \"\"\"\n Returns a URL string, constructed from the specified components.\n\n Args:\n All args must be str.\n \"\"\"\n if path == \"*\":\n path = \"\"\n return \"%s://%s%s\" % (scheme, hostport(scheme, host, port), path)\n\n\ndef encode(s: Sequence[Tuple[str, str]], similar_to: str=None) -> str:\n \"\"\"\n Takes a list of (key, value) tuples and returns a urlencoded string.\n If similar_to is passed, the output is formatted similar to the provided urlencoded string.\n \"\"\"\n\n remove_trailing_equal = False\n if similar_to:\n remove_trailing_equal = any(\"=\" not in param for param in similar_to.split(\"&\"))\n\n encoded = urllib.parse.urlencode(s, False, errors=\"surrogateescape\")\n\n if remove_trailing_equal:\n encoded = encoded.replace(\"=&\", \"&\")\n if encoded[-1] == '=':\n encoded = encoded[:-1]\n\n return encoded\n\n\ndef decode(s):\n \"\"\"\n Takes a urlencoded string and returns a list of surrogate-escaped (key, value) tuples.\n \"\"\"\n return urllib.parse.parse_qsl(s, keep_blank_values=True, errors='surrogateescape')\n\n\ndef quote(b: str, safe: str=\"/\") -> str:\n \"\"\"\n Returns:\n An ascii-encodable str.\n \"\"\"\n return urllib.parse.quote(b, safe=safe, errors=\"surrogateescape\")\n\n\ndef unquote(s: str) -> str:\n \"\"\"\n Args:\n s: A surrogate-escaped str\n Returns:\n A surrogate-escaped str\n \"\"\"\n return urllib.parse.unquote(s, errors=\"surrogateescape\")\n\n\ndef hostport(scheme, host, port):\n \"\"\"\n Returns the host component, with a port specifcation if needed.\n \"\"\"\n if (port, scheme) in [(80, \"http\"), (443, \"https\"), (80, b\"http\"), (443, b\"https\")]:\n return host\n else:\n if isinstance(host, bytes):\n return b\"%s:%d\" % (host, port)\n else:\n return \"%s:%d\" % (host, port)\n", "path": "mitmproxy/net/http/url.py" } ]
[ { "content": "import urllib.parse\nfrom typing import Sequence\nfrom typing import Tuple\n\nfrom mitmproxy.net import check\n\n\ndef parse(url):\n \"\"\"\n URL-parsing function that checks that\n - port is an integer 0-65535\n - host is a valid IDNA-encoded hostname with no null-bytes\n - path is valid ASCII\n\n Args:\n A URL (as bytes or as unicode)\n\n Returns:\n A (scheme, host, port, path) tuple\n\n Raises:\n ValueError, if the URL is not properly formatted.\n \"\"\"\n parsed = urllib.parse.urlparse(url)\n\n if not parsed.hostname:\n raise ValueError(\"No hostname given\")\n\n if isinstance(url, bytes):\n host = parsed.hostname\n\n # this should not raise a ValueError,\n # but we try to be very forgiving here and accept just everything.\n else:\n host = parsed.hostname.encode(\"idna\")\n if isinstance(parsed, urllib.parse.ParseResult):\n parsed = parsed.encode(\"ascii\")\n\n port = parsed.port # Returns None if port number invalid in Py3.5. Will throw ValueError in Py3.6\n if not port:\n port = 443 if parsed.scheme == b\"https\" else 80\n\n full_path = urllib.parse.urlunparse(\n (b\"\", b\"\", parsed.path, parsed.params, parsed.query, parsed.fragment)\n )\n if not full_path.startswith(b\"/\"):\n full_path = b\"/\" + full_path\n\n if not check.is_valid_host(host):\n raise ValueError(\"Invalid Host\")\n\n return parsed.scheme, host, port, full_path\n\n\ndef unparse(scheme, host, port, path=\"\"):\n \"\"\"\n Returns a URL string, constructed from the specified components.\n\n Args:\n All args must be str.\n \"\"\"\n if path == \"*\":\n path = \"\"\n return \"%s://%s%s\" % (scheme, hostport(scheme, host, port), path)\n\n\ndef encode(s: Sequence[Tuple[str, str]], similar_to: str=None) -> str:\n \"\"\"\n Takes a list of (key, value) tuples and returns a urlencoded string.\n If similar_to is passed, the output is formatted similar to the provided urlencoded string.\n \"\"\"\n\n remove_trailing_equal = False\n if similar_to:\n remove_trailing_equal = any(\"=\" not in param for param in similar_to.split(\"&\"))\n\n encoded = urllib.parse.urlencode(s, False, errors=\"surrogateescape\")\n\n if encoded and remove_trailing_equal:\n encoded = encoded.replace(\"=&\", \"&\")\n if encoded[-1] == '=':\n encoded = encoded[:-1]\n\n return encoded\n\n\ndef decode(s):\n \"\"\"\n Takes a urlencoded string and returns a list of surrogate-escaped (key, value) tuples.\n \"\"\"\n return urllib.parse.parse_qsl(s, keep_blank_values=True, errors='surrogateescape')\n\n\ndef quote(b: str, safe: str=\"/\") -> str:\n \"\"\"\n Returns:\n An ascii-encodable str.\n \"\"\"\n return urllib.parse.quote(b, safe=safe, errors=\"surrogateescape\")\n\n\ndef unquote(s: str) -> str:\n \"\"\"\n Args:\n s: A surrogate-escaped str\n Returns:\n A surrogate-escaped str\n \"\"\"\n return urllib.parse.unquote(s, errors=\"surrogateescape\")\n\n\ndef hostport(scheme, host, port):\n \"\"\"\n Returns the host component, with a port specifcation if needed.\n \"\"\"\n if (port, scheme) in [(80, \"http\"), (443, \"https\"), (80, b\"http\"), (443, b\"https\")]:\n return host\n else:\n if isinstance(host, bytes):\n return b\"%s:%d\" % (host, port)\n else:\n return \"%s:%d\" % (host, port)\n", "path": "mitmproxy/net/http/url.py" } ]
diff --git a/mitmproxy/net/http/url.py b/mitmproxy/net/http/url.py
index 86f65cfdc8..f938cb12d4 100644
--- a/mitmproxy/net/http/url.py
+++ b/mitmproxy/net/http/url.py
@@ -76,7 +76,7 @@ def encode(s: Sequence[Tuple[str, str]], similar_to: str=None) -> str:
 
     encoded = urllib.parse.urlencode(s, False, errors="surrogateescape")
 
-    if remove_trailing_equal:
+    if encoded and remove_trailing_equal:
         encoded = encoded.replace("=&", "&")
         if encoded[-1] == '=':
             encoded = encoded[:-1]
diff --git a/test/mitmproxy/net/http/test_url.py b/test/mitmproxy/net/http/test_url.py
index 2064aab8d1..c9f61fafdf 100644
--- a/test/mitmproxy/net/http/test_url.py
+++ b/test/mitmproxy/net/http/test_url.py
@@ -108,6 +108,7 @@ def test_empty_key_trailing_equal_sign():
 def test_encode():
     assert url.encode([('foo', 'bar')])
     assert url.encode([('foo', surrogates)])
+    assert not url.encode([], similar_to="justatext")
 
 
 def test_decode():
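A quick sanity check of the patched behaviour; the first assertion mirrors the regression test added in the diff above, the second is an extra illustration and not part of the patch:

```python
from mitmproxy.net.http import url

assert url.encode([], similar_to="justatext") == ""                       # no longer raises
assert url.encode([("foo", "bar")], similar_to="justatext") == "foo=bar"  # unchanged behaviour
```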
StackStorm__st2-4234
Missing [workflow_engine] in st2.conf.sample
##### SUMMARY
https://github.com/StackStorm/st2/blob/master/conf/st2.conf.sample is missing a new section for `[workflow_engine]`.

Also, shouldn't this section be named `[workflowengine]` to go along with the "style" of the other sections like `[resultstracker]`, `[garbagecollector]`, etc.?

##### ISSUE TYPE
- Bug Report
- Feature Idea

##### STACKSTORM VERSION
2.8

##### EXPECTED RESULTS
https://github.com/StackStorm/st2/blob/master/conf/st2.conf.sample contains a section for `[workflow_engine]`.
[ { "content": "#!/usr/bin/env python\n# Licensed to the StackStorm, Inc ('StackStorm') under one or more\n# contributor license agreements. See the NOTICE file distributed with\n# this work for additional information regarding copyright ownership.\n# The ASF licenses this file to You under the Apache License, Version 2.0\n# (the \"License\"); you may not use this file except in compliance with\n# the License. You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom __future__ import absolute_import\nimport collections\nimport importlib\nimport six\nimport sys\nimport traceback\n\nfrom oslo_config import cfg\n\n\nCONFIGS = ['st2actions.config',\n 'st2actions.notifier.config',\n 'st2actions.resultstracker.config',\n 'st2api.config',\n 'st2stream.config',\n 'st2auth.config',\n 'st2common.config',\n 'st2exporter.config',\n 'st2reactor.rules.config',\n 'st2reactor.sensor.config',\n 'st2reactor.garbage_collector.config']\n\nSKIP_GROUPS = ['api_pecan', 'rbac', 'results_tracker']\n\n# We group auth options together to nake it a bit more clear what applies where\nAUTH_OPTIONS = {\n 'common': [\n 'enable',\n 'mode',\n 'logging',\n 'api_url',\n 'token_ttl',\n 'service_token_ttl',\n 'debug'\n ],\n 'standalone': [\n 'host',\n 'port',\n 'use_ssl',\n 'cert',\n 'key',\n 'backend',\n 'backend_kwargs'\n ]\n}\n\n# Some of the config values change depenending on the environment where this script is ran so we\n# set them to static values to ensure consistent and stable output\nSTATIC_OPTION_VALUES = {\n 'actionrunner': {\n 'virtualenv_binary': '/usr/bin/virtualenv',\n 'python_binary': '/usr/bin/python',\n 'python3_binary': '/usr/bin/python3'\n },\n 'webui': {\n 'webui_base_url': 'https://localhost'\n }\n}\n\nCOMMON_AUTH_OPTIONS_COMMENT = \"\"\"\n# Common option - options below apply in both scenarios - when auth service is running as a WSGI\n# service (e.g. 
under Apache or Nginx) and when it's running in the standalone mode.\n\"\"\".strip()\n\nSTANDALONE_AUTH_OPTIONS_COMMENT = \"\"\"\n# Standalone mode options - options below only apply when auth service is running in the standalone\n# mode.\n\"\"\".strip()\n\n\ndef _import_config(config):\n try:\n return importlib.import_module(config)\n except:\n traceback.print_exc()\n return None\n\n\ndef _read_current_config(opt_groups):\n for k, v in six.iteritems(cfg.CONF._groups):\n if k in SKIP_GROUPS:\n continue\n if k not in opt_groups:\n opt_groups[k] = v\n return opt_groups\n\n\ndef _clear_config():\n cfg.CONF.reset()\n\n\ndef _read_group(opt_group):\n all_options = list(opt_group._opts.values())\n\n if opt_group.name == 'auth':\n print(COMMON_AUTH_OPTIONS_COMMENT)\n print('')\n common_options = [option for option in all_options if option['opt'].name in\n AUTH_OPTIONS['common']]\n _print_options(opt_group=opt_group, options=common_options)\n\n print('')\n print(STANDALONE_AUTH_OPTIONS_COMMENT)\n print('')\n standalone_options = [option for option in all_options if option['opt'].name in\n AUTH_OPTIONS['standalone']]\n _print_options(opt_group=opt_group, options=standalone_options)\n\n if len(common_options) + len(standalone_options) != len(all_options):\n msg = ('Not all options are declared in AUTH_OPTIONS dict, please update it')\n raise Exception(msg)\n else:\n options = all_options\n _print_options(opt_group=opt_group, options=options)\n\n\ndef _read_groups(opt_groups):\n opt_groups = collections.OrderedDict(sorted(opt_groups.items()))\n for name, opt_group in six.iteritems(opt_groups):\n print('[%s]' % name)\n _read_group(opt_group)\n print('')\n\n\ndef _print_options(opt_group, options):\n for opt in options:\n opt = opt['opt']\n\n # Special case for options which could change during this script run\n static_option_value = STATIC_OPTION_VALUES.get(opt_group.name, {}).get(opt.name, None)\n if static_option_value:\n opt.default = static_option_value\n\n # Special handling for list options\n if isinstance(opt, cfg.ListOpt):\n if opt.default:\n value = ','.join(opt.default)\n else:\n value = ''\n\n value += ' # comma separated list allowed here.'\n else:\n value = opt.default\n\n print('# %s' % opt.help)\n print('%s = %s' % (opt.name, value))\n\n\ndef main(args):\n opt_groups = {}\n for config in CONFIGS:\n mod = _import_config(config)\n mod.register_opts()\n _read_current_config(opt_groups)\n _clear_config()\n _read_groups(opt_groups)\n\n\nif __name__ == '__main__':\n main(sys.argv)\n", "path": "tools/config_gen.py" } ]
[ { "content": "#!/usr/bin/env python\n# Licensed to the StackStorm, Inc ('StackStorm') under one or more\n# contributor license agreements. See the NOTICE file distributed with\n# this work for additional information regarding copyright ownership.\n# The ASF licenses this file to You under the Apache License, Version 2.0\n# (the \"License\"); you may not use this file except in compliance with\n# the License. You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom __future__ import absolute_import\nimport collections\nimport importlib\nimport six\nimport sys\nimport traceback\n\nfrom oslo_config import cfg\n\n\nCONFIGS = ['st2actions.config',\n 'st2actions.notifier.config',\n 'st2actions.resultstracker.config',\n 'st2actions.workflows.config',\n 'st2api.config',\n 'st2stream.config',\n 'st2auth.config',\n 'st2common.config',\n 'st2exporter.config',\n 'st2reactor.rules.config',\n 'st2reactor.sensor.config',\n 'st2reactor.garbage_collector.config']\n\nSKIP_GROUPS = ['api_pecan', 'rbac', 'results_tracker']\n\n# We group auth options together to nake it a bit more clear what applies where\nAUTH_OPTIONS = {\n 'common': [\n 'enable',\n 'mode',\n 'logging',\n 'api_url',\n 'token_ttl',\n 'service_token_ttl',\n 'debug'\n ],\n 'standalone': [\n 'host',\n 'port',\n 'use_ssl',\n 'cert',\n 'key',\n 'backend',\n 'backend_kwargs'\n ]\n}\n\n# Some of the config values change depenending on the environment where this script is ran so we\n# set them to static values to ensure consistent and stable output\nSTATIC_OPTION_VALUES = {\n 'actionrunner': {\n 'virtualenv_binary': '/usr/bin/virtualenv',\n 'python_binary': '/usr/bin/python',\n 'python3_binary': '/usr/bin/python3'\n },\n 'webui': {\n 'webui_base_url': 'https://localhost'\n }\n}\n\nCOMMON_AUTH_OPTIONS_COMMENT = \"\"\"\n# Common option - options below apply in both scenarios - when auth service is running as a WSGI\n# service (e.g. 
under Apache or Nginx) and when it's running in the standalone mode.\n\"\"\".strip()\n\nSTANDALONE_AUTH_OPTIONS_COMMENT = \"\"\"\n# Standalone mode options - options below only apply when auth service is running in the standalone\n# mode.\n\"\"\".strip()\n\n\ndef _import_config(config):\n try:\n return importlib.import_module(config)\n except:\n traceback.print_exc()\n return None\n\n\ndef _read_current_config(opt_groups):\n for k, v in six.iteritems(cfg.CONF._groups):\n if k in SKIP_GROUPS:\n continue\n if k not in opt_groups:\n opt_groups[k] = v\n return opt_groups\n\n\ndef _clear_config():\n cfg.CONF.reset()\n\n\ndef _read_group(opt_group):\n all_options = list(opt_group._opts.values())\n\n if opt_group.name == 'auth':\n print(COMMON_AUTH_OPTIONS_COMMENT)\n print('')\n common_options = [option for option in all_options if option['opt'].name in\n AUTH_OPTIONS['common']]\n _print_options(opt_group=opt_group, options=common_options)\n\n print('')\n print(STANDALONE_AUTH_OPTIONS_COMMENT)\n print('')\n standalone_options = [option for option in all_options if option['opt'].name in\n AUTH_OPTIONS['standalone']]\n _print_options(opt_group=opt_group, options=standalone_options)\n\n if len(common_options) + len(standalone_options) != len(all_options):\n msg = ('Not all options are declared in AUTH_OPTIONS dict, please update it')\n raise Exception(msg)\n else:\n options = all_options\n _print_options(opt_group=opt_group, options=options)\n\n\ndef _read_groups(opt_groups):\n opt_groups = collections.OrderedDict(sorted(opt_groups.items()))\n for name, opt_group in six.iteritems(opt_groups):\n print('[%s]' % name)\n _read_group(opt_group)\n print('')\n\n\ndef _print_options(opt_group, options):\n for opt in options:\n opt = opt['opt']\n\n # Special case for options which could change during this script run\n static_option_value = STATIC_OPTION_VALUES.get(opt_group.name, {}).get(opt.name, None)\n if static_option_value:\n opt.default = static_option_value\n\n # Special handling for list options\n if isinstance(opt, cfg.ListOpt):\n if opt.default:\n value = ','.join(opt.default)\n else:\n value = ''\n\n value += ' # comma separated list allowed here.'\n else:\n value = opt.default\n\n print('# %s' % opt.help)\n print('%s = %s' % (opt.name, value))\n\n\ndef main(args):\n opt_groups = {}\n for config in CONFIGS:\n mod = _import_config(config)\n mod.register_opts()\n _read_current_config(opt_groups)\n _clear_config()\n _read_groups(opt_groups)\n\n\nif __name__ == '__main__':\n main(sys.argv)\n", "path": "tools/config_gen.py" } ]
diff --git a/conf/st2.conf.sample b/conf/st2.conf.sample index 73fdf0d8bd..27645c77a4 100644 --- a/conf/st2.conf.sample +++ b/conf/st2.conf.sample @@ -324,3 +324,7 @@ local_timezone = America/Los_Angeles # Base https URL to access st2 Web UI. This is used to construct history URLs that are sent out when chatops is used to kick off executions. webui_base_url = https://localhost +[workflow_engine] +# Location of the logging configuration file. +logging = conf/logging.workflowengine.conf + diff --git a/tools/config_gen.py b/tools/config_gen.py index 7c54fec30b..89ad584ffd 100755 --- a/tools/config_gen.py +++ b/tools/config_gen.py @@ -27,6 +27,7 @@ CONFIGS = ['st2actions.config', 'st2actions.notifier.config', 'st2actions.resultstracker.config', + 'st2actions.workflows.config', 'st2api.config', 'st2stream.config', 'st2auth.config',
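As background for the record above: `config_gen.py` imports every module listed in `CONFIGS`, calls its `register_opts()`, and then renders the registered option groups into `st2.conf.sample`. The sketch below is a hypothetical stand-in for such a module (it is not the actual `st2actions.workflows.config` source); its group and option simply mirror the `workflow_engine` section added in the diff.

```python
# Illustrative stand-in for a module listed in CONFIGS (not the real
# st2actions.workflows.config). config_gen.py imports the module, calls
# register_opts() with no arguments, and then walks cfg.CONF's groups to
# render every registered option into the sample config file.
from oslo_config import cfg


def register_opts():
    group = cfg.OptGroup(name='workflow_engine', title='Workflow engine options')
    opts = [
        cfg.StrOpt(
            'logging',
            default='conf/logging.workflowengine.conf',
            help='Location of the logging configuration file.',
        ),
    ]
    cfg.CONF.register_group(group)
    cfg.CONF.register_opts(opts, group=group)
```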
coala__coala-3908
Fail to install and py.test on docker environment.

When I try to install with `python setup.py install`, it fails with this message:

`UnicodeEncodeError: 'ascii' codec can't encode character '\xfc' in position 15224: ordinal not in range(128)`

The same error happens when I try to run the unit tests locally. It needs to be fixed.
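The failure mode is easy to reproduce outside of coala. The snippet below is a minimal, hypothetical illustration (the container setup is an assumption; the only detail taken from the codebase is the maintainer name embedded in `setup.py`): when the process locale is unset or ASCII-only, as in many slim Docker images, encoding text containing `'\xfc'` raises exactly this error. The patched `setup.py` recorded further down in this row guards against that by switching to a UTF-8 locale when none is configured.

```python
# Hypothetical reproduction of the UnicodeEncodeError above: with an ASCII
# codec in effect (e.g. LANG/LC_ALL unset inside the container), encoding a
# string that contains U+00FC ("ü") fails.
text = 'Mischa Kr\xfcger'  # maintainer name embedded in coala's setup.py
try:
    text.encode('ascii')
except UnicodeEncodeError as exc:
    print('Reproduced:', exc)
    # "'ascii' codec can't encode character '\xfc' in position 9: ..."
```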
[ { "content": "#!/usr/bin/env python3\n\nimport datetime\nimport locale\nimport platform\nimport sys\nfrom os import getenv\nfrom subprocess import call\n\nimport setuptools.command.build_py\nfrom setuptools import find_packages, setup\nfrom setuptools.command.test import test as TestCommand\n\nfrom coalib import VERSION, assert_supported_version, get_version\nfrom coalib.misc.BuildManPage import BuildManPage\n\ntry:\n locale.getlocale()\nexcept (ValueError, UnicodeError):\n locale.setlocale(locale.LC_ALL, 'en_US.UTF-8')\n\n\nassert_supported_version()\n\n\nclass BuildPyCommand(setuptools.command.build_py.build_py):\n\n def run(self):\n if platform.system() != 'Windows':\n self.run_command('build_manpage')\n setuptools.command.build_py.build_py.run(self)\n\n\nclass PyTestCommand(TestCommand):\n\n def run_tests(self):\n # import here, cause outside the eggs aren't loaded\n import pytest\n errno = pytest.main([])\n sys.exit(errno)\n\n\nclass BuildDocsCommand(setuptools.command.build_py.build_py):\n apidoc_command = (\n 'sphinx-apidoc', '-f', '-o', 'docs', '--no-toc', 'coalib'\n )\n doc_command = ('make', '-C', 'docs', 'html', 'SPHINXOPTS=-W')\n\n def run(self):\n errOne = call(self.apidoc_command)\n errTwo = call(self.doc_command)\n sys.exit(errOne or errTwo)\n\n\n# Generate API documentation only if we are running on readthedocs.io\non_rtd = getenv('READTHEDOCS', None) is not None\nif on_rtd:\n call(BuildDocsCommand.apidoc_command)\n if 'dev' in VERSION:\n current_version = datetime.datetime.now().strftime('%Y%m%d%H%M%S')\n call(['python3', '.misc/adjust_version_number.py', 'coalib/VERSION',\n '-b {}'.format(current_version)])\n VERSION = get_version()\n\nwith open('requirements.txt') as requirements:\n required = requirements.read().splitlines()\n\nwith open('test-requirements.txt') as requirements:\n test_required = requirements.read().splitlines()\n\nwith open('README.rst') as readme:\n long_description = readme.read()\n\n\nif __name__ == '__main__':\n if platform.system() != 'Windows':\n data_files = [('.', ['coala.1'])]\n else:\n data_files = [('.', [])]\n\n setup(name='coala',\n version=VERSION,\n description='Linting and Fixing Code for All Languages',\n author='The coala developers',\n author_email='[email protected]',\n maintainer='Lasse Schuirmann, Fabian Neuschmidt, Mischa Kr\\xfcger'\n if not on_rtd else 'L.S., F.N., M.K.',\n maintainer_email=('[email protected], '\n '[email protected], '\n '[email protected]'),\n url='http://coala.io/',\n platforms='any',\n packages=find_packages(exclude=['build.*', 'tests', 'tests.*']),\n install_requires=required,\n tests_require=test_required,\n package_data={'coalib': ['default_coafile', 'VERSION',\n 'bearlib/languages/documentation/*.coalang']\n },\n license='AGPL-3.0',\n data_files=data_files,\n long_description=long_description,\n entry_points={\n 'console_scripts': [\n 'coala = coalib.coala:main',\n 'coala-ci = coalib.coala_ci:main',\n 'coala-json = coalib.coala_json:main',\n 'coala-format = coalib.coala_format:main',\n 'coala-delete-orig = coalib.coala_delete_orig:main']},\n # from http://pypi.python.org/pypi?%3Aaction=list_classifiers\n classifiers=[\n 'Development Status :: 4 - Beta',\n\n 'Environment :: Console',\n 'Environment :: MacOS X',\n 'Environment :: Win32 (MS Windows)',\n 'Environment :: X11 Applications :: Gnome',\n\n 'Intended Audience :: Science/Research',\n 'Intended Audience :: Developers',\n\n 'License :: OSI Approved :: GNU Affero General Public License '\n 'v3 or later (AGPLv3+)',\n\n 'Operating System :: OS 
Independent',\n\n 'Programming Language :: Python :: Implementation :: CPython',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3 :: Only',\n\n 'Topic :: Scientific/Engineering :: Information Analysis',\n 'Topic :: Software Development :: Quality Assurance',\n 'Topic :: Text Processing :: Linguistic'],\n cmdclass={'build_manpage': BuildManPage,\n 'build_py': BuildPyCommand,\n 'docs': BuildDocsCommand,\n 'test': PyTestCommand})\n", "path": "setup.py" } ]
[ { "content": "#!/usr/bin/env python3\n\nimport datetime\nimport locale\nimport platform\nimport sys\nfrom os import getenv\nfrom subprocess import call\n\nimport setuptools.command.build_py\nfrom setuptools import find_packages, setup\nfrom setuptools.command.test import test as TestCommand\n\nfrom coalib import VERSION, assert_supported_version, get_version\nfrom coalib.misc.BuildManPage import BuildManPage\n\ntry:\n lc = locale.getlocale()\n pf = platform.system()\n if pf != 'Windows' and lc == (None, None):\n locale.setlocale(locale.LC_ALL, 'C.UTF-8')\nexcept (ValueError, UnicodeError):\n locale.setlocale(locale.LC_ALL, 'en_US.UTF-8')\n\n\nassert_supported_version()\n\n\nclass BuildPyCommand(setuptools.command.build_py.build_py):\n\n def run(self):\n if platform.system() != 'Windows':\n self.run_command('build_manpage')\n setuptools.command.build_py.build_py.run(self)\n\n\nclass PyTestCommand(TestCommand):\n\n def run_tests(self):\n # import here, cause outside the eggs aren't loaded\n import pytest\n errno = pytest.main([])\n sys.exit(errno)\n\n\nclass BuildDocsCommand(setuptools.command.build_py.build_py):\n apidoc_command = (\n 'sphinx-apidoc', '-f', '-o', 'docs', '--no-toc', 'coalib'\n )\n doc_command = ('make', '-C', 'docs', 'html', 'SPHINXOPTS=-W')\n\n def run(self):\n errOne = call(self.apidoc_command)\n errTwo = call(self.doc_command)\n sys.exit(errOne or errTwo)\n\n\n# Generate API documentation only if we are running on readthedocs.io\non_rtd = getenv('READTHEDOCS', None) is not None\nif on_rtd:\n call(BuildDocsCommand.apidoc_command)\n if 'dev' in VERSION:\n current_version = datetime.datetime.now().strftime('%Y%m%d%H%M%S')\n call(['python3', '.misc/adjust_version_number.py', 'coalib/VERSION',\n '-b {}'.format(current_version)])\n VERSION = get_version()\n\nwith open('requirements.txt') as requirements:\n required = requirements.read().splitlines()\n\nwith open('test-requirements.txt') as requirements:\n test_required = requirements.read().splitlines()\n\nwith open('README.rst') as readme:\n long_description = readme.read()\n\n\nif __name__ == '__main__':\n if platform.system() != 'Windows':\n data_files = [('.', ['coala.1'])]\n else:\n data_files = [('.', [])]\n\n setup(name='coala',\n version=VERSION,\n description='Linting and Fixing Code for All Languages',\n author='The coala developers',\n author_email='[email protected]',\n maintainer='Lasse Schuirmann, Fabian Neuschmidt, Mischa Kr\\xfcger'\n if not on_rtd else 'L.S., F.N., M.K.',\n maintainer_email=('[email protected], '\n '[email protected], '\n '[email protected]'),\n url='http://coala.io/',\n platforms='any',\n packages=find_packages(exclude=['build.*', 'tests', 'tests.*']),\n install_requires=required,\n tests_require=test_required,\n package_data={'coalib': ['default_coafile', 'VERSION',\n 'bearlib/languages/documentation/*.coalang']\n },\n license='AGPL-3.0',\n data_files=data_files,\n long_description=long_description,\n entry_points={\n 'console_scripts': [\n 'coala = coalib.coala:main',\n 'coala-ci = coalib.coala_ci:main',\n 'coala-json = coalib.coala_json:main',\n 'coala-format = coalib.coala_format:main',\n 'coala-delete-orig = coalib.coala_delete_orig:main']},\n # from http://pypi.python.org/pypi?%3Aaction=list_classifiers\n classifiers=[\n 'Development Status :: 4 - Beta',\n\n 'Environment :: Console',\n 'Environment :: MacOS X',\n 'Environment :: Win32 (MS Windows)',\n 'Environment :: X11 Applications :: Gnome',\n\n 'Intended Audience :: Science/Research',\n 'Intended Audience :: Developers',\n\n 
'License :: OSI Approved :: GNU Affero General Public License '\n 'v3 or later (AGPLv3+)',\n\n 'Operating System :: OS Independent',\n\n 'Programming Language :: Python :: Implementation :: CPython',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3 :: Only',\n\n 'Topic :: Scientific/Engineering :: Information Analysis',\n 'Topic :: Software Development :: Quality Assurance',\n 'Topic :: Text Processing :: Linguistic'],\n cmdclass={'build_manpage': BuildManPage,\n 'build_py': BuildPyCommand,\n 'docs': BuildDocsCommand,\n 'test': PyTestCommand})\n", "path": "setup.py" } ]
diff --git a/setup.py b/setup.py index 7ff6d8d1c2..76cd0c43c7 100755 --- a/setup.py +++ b/setup.py @@ -15,7 +15,10 @@ from coalib.misc.BuildManPage import BuildManPage try: - locale.getlocale() + lc = locale.getlocale() + pf = platform.system() + if pf != 'Windows' and lc == (None, None): + locale.setlocale(locale.LC_ALL, 'C.UTF-8') except (ValueError, UnicodeError): locale.setlocale(locale.LC_ALL, 'en_US.UTF-8') diff --git a/tests/__init__.py b/tests/__init__.py index e69de29bb2..2b212be3e6 100644 --- a/tests/__init__.py +++ b/tests/__init__.py @@ -0,0 +1,11 @@ +import locale +import platform + + +try: + lc = locale.getlocale() + ps = platform.system() + if ps != 'Windows' and lc == (None, None): + locale.setlocale(locale.LC_ALL, 'C.UTF-8') +except (ValueError, UnicodeError): + locale.setlocale(locale.LC_ALL, 'en_US.UTF-8')
django-oscar__django-oscar-3214
Request not being passed to authenticate method

### Issue Summary

django-oscar's [`EmailAuthenticationForm`](https://github.com/django-oscar/django-oscar/blob/master/src/oscar/apps/customer/forms.py#L76) is a subclass of Django's [`AuthenticationForm`](https://github.com/django/django/blob/master/django/contrib/auth/forms.py#L163). When its `clean` method is called, Django's `authenticate` is called without the `request` object. Looking [at the code](https://github.com/django-oscar/django-oscar/blob/master/src/oscar/apps/customer/views.py#L136), `AccountAuthView` does not pass the `request` object to the form. Some custom backends, like django-axes's, require it in order to work properly. That's why [this issue](https://github.com/django-oscar/django-oscar/issues/2111) was happening.

### Steps to Reproduce

1. Create a simple authentication backend that requires the `request` object, and put it before django-oscar's backend:

```
# project/backends.py
from django.contrib.auth.backends import ModelBackend

class CustomBackend(ModelBackend):
    def authenticate(self, request, username: str = None, password: str = None, **kwargs: dict):
        if request is None:
            raise Exception('CustomBackend requires a request as an argument to authenticate')
        # some logic

# On settings
AUTHENTICATION_BACKENDS = [
    'project.backends.CustomBackend',
    'oscar.apps.customer.auth_backends.EmailBackend',
]
```

2. Try to log in
3. Observe `Exception('CustomBackend requires a request as an argument to authenticate')` is raised

### Technical details

```
$ python --version
Python 3.7.4
$ pip freeze | grep Django
Django==2.2.6
$ pip freeze | grep django-oscar
django-oscar==2.0.3
```
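For context on why passing the request matters: Django's `AuthenticationForm` stores the request it is constructed with and forwards it to `authenticate()` from its `clean()` method, so a form built without one calls the backends with `request=None`. The sketch below shows the change as a hypothetical project-level override rather than the upstream patch (the actual fix appears in the diff at the end of this row).

```python
# Hypothetical override in a project that forks Oscar's customer app: pass the
# current request into the login form kwargs so that
# AuthenticationForm.clean() calls authenticate(self.request, ...), which
# request-aware backends such as django-axes need.
from oscar.apps.customer.views import AccountAuthView as CoreAccountAuthView


class AccountAuthView(CoreAccountAuthView):
    def get_login_form_kwargs(self, bind_data=False):
        kwargs = super().get_login_form_kwargs(bind_data)
        kwargs['request'] = self.request
        return kwargs
```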
[ { "content": "from django import http\nfrom django.conf import settings\nfrom django.contrib import messages\nfrom django.contrib.auth import login as auth_login\nfrom django.contrib.auth import logout as auth_logout\nfrom django.contrib.auth import update_session_auth_hash\nfrom django.contrib.auth.forms import PasswordChangeForm\nfrom django.contrib.sites.shortcuts import get_current_site\nfrom django.core.exceptions import ObjectDoesNotExist\nfrom django.shortcuts import get_object_or_404, redirect\nfrom django.urls import reverse, reverse_lazy\nfrom django.utils.translation import gettext_lazy as _\nfrom django.views import generic\n\nfrom oscar.apps.customer.utils import get_password_reset_url\nfrom oscar.core.compat import get_user_model\nfrom oscar.core.loading import (\n get_class, get_classes, get_model, get_profile_class)\nfrom oscar.core.utils import safe_referrer\nfrom oscar.views.generic import PostActionMixin\n\nfrom . import signals\n\nPageTitleMixin, RegisterUserMixin = get_classes(\n 'customer.mixins', ['PageTitleMixin', 'RegisterUserMixin'])\nDispatcher = get_class('customer.utils', 'Dispatcher')\nEmailAuthenticationForm, EmailUserCreationForm, OrderSearchForm = get_classes(\n 'customer.forms', ['EmailAuthenticationForm', 'EmailUserCreationForm',\n 'OrderSearchForm'])\nProfileForm, ConfirmPasswordForm = get_classes(\n 'customer.forms', ['ProfileForm', 'ConfirmPasswordForm'])\nUserAddressForm = get_class('address.forms', 'UserAddressForm')\nOrder = get_model('order', 'Order')\nLine = get_model('basket', 'Line')\nBasket = get_model('basket', 'Basket')\nUserAddress = get_model('address', 'UserAddress')\nEmail = get_model('customer', 'Email')\nCommunicationEventType = get_model('customer', 'CommunicationEventType')\n\nUser = get_user_model()\n\n\n# =======\n# Account\n# =======\n\n\nclass AccountSummaryView(generic.RedirectView):\n \"\"\"\n View that exists for legacy reasons and customisability. It commonly gets\n called when the user clicks on \"Account\" in the navbar.\n\n Oscar defaults to just redirecting to the profile summary page (and\n that redirect can be configured via OSCAR_ACCOUNT_REDIRECT_URL), but\n it's also likely you want to display an 'account overview' page or\n such like. 
The presence of this view allows just that, without\n having to change a lot of templates.\n \"\"\"\n pattern_name = settings.OSCAR_ACCOUNTS_REDIRECT_URL\n permanent = False\n\n\nclass AccountRegistrationView(RegisterUserMixin, generic.FormView):\n form_class = EmailUserCreationForm\n template_name = 'oscar/customer/registration.html'\n redirect_field_name = 'next'\n\n def get(self, request, *args, **kwargs):\n if request.user.is_authenticated:\n return redirect(settings.LOGIN_REDIRECT_URL)\n return super().get(\n request, *args, **kwargs)\n\n def get_logged_in_redirect(self):\n return reverse('customer:summary')\n\n def get_form_kwargs(self):\n kwargs = super().get_form_kwargs()\n kwargs['initial'] = {\n 'email': self.request.GET.get('email', ''),\n 'redirect_url': self.request.GET.get(self.redirect_field_name, '')\n }\n kwargs['host'] = self.request.get_host()\n return kwargs\n\n def get_context_data(self, *args, **kwargs):\n ctx = super().get_context_data(\n *args, **kwargs)\n ctx['cancel_url'] = safe_referrer(self.request, '')\n return ctx\n\n def form_valid(self, form):\n self.register_user(form)\n return redirect(form.cleaned_data['redirect_url'])\n\n\nclass AccountAuthView(RegisterUserMixin, generic.TemplateView):\n \"\"\"\n This is actually a slightly odd double form view that allows a customer to\n either login or register.\n \"\"\"\n template_name = 'oscar/customer/login_registration.html'\n login_prefix, registration_prefix = 'login', 'registration'\n login_form_class = EmailAuthenticationForm\n registration_form_class = EmailUserCreationForm\n redirect_field_name = 'next'\n\n def get(self, request, *args, **kwargs):\n if request.user.is_authenticated:\n return redirect(settings.LOGIN_REDIRECT_URL)\n return super().get(\n request, *args, **kwargs)\n\n def get_context_data(self, *args, **kwargs):\n ctx = super().get_context_data(*args, **kwargs)\n if 'login_form' not in kwargs:\n ctx['login_form'] = self.get_login_form()\n if 'registration_form' not in kwargs:\n ctx['registration_form'] = self.get_registration_form()\n return ctx\n\n def post(self, request, *args, **kwargs):\n # Use the name of the submit button to determine which form to validate\n if 'login_submit' in request.POST:\n return self.validate_login_form()\n elif 'registration_submit' in request.POST:\n return self.validate_registration_form()\n return http.HttpResponseBadRequest()\n\n # LOGIN\n\n def get_login_form(self, bind_data=False):\n return self.login_form_class(\n **self.get_login_form_kwargs(bind_data))\n\n def get_login_form_kwargs(self, bind_data=False):\n kwargs = {}\n kwargs['host'] = self.request.get_host()\n kwargs['prefix'] = self.login_prefix\n kwargs['initial'] = {\n 'redirect_url': self.request.GET.get(self.redirect_field_name, ''),\n }\n if bind_data and self.request.method in ('POST', 'PUT'):\n kwargs.update({\n 'data': self.request.POST,\n 'files': self.request.FILES,\n })\n return kwargs\n\n def validate_login_form(self):\n form = self.get_login_form(bind_data=True)\n if form.is_valid():\n user = form.get_user()\n\n # Grab a reference to the session ID before logging in\n old_session_key = self.request.session.session_key\n\n auth_login(self.request, form.get_user())\n\n # Raise signal robustly (we don't want exceptions to crash the\n # request handling). 
We use a custom signal as we want to track the\n # session key before calling login (which cycles the session ID).\n signals.user_logged_in.send_robust(\n sender=self, request=self.request, user=user,\n old_session_key=old_session_key)\n\n msg = self.get_login_success_message(form)\n if msg:\n messages.success(self.request, msg)\n\n return redirect(self.get_login_success_url(form))\n\n ctx = self.get_context_data(login_form=form)\n return self.render_to_response(ctx)\n\n def get_login_success_message(self, form):\n return _(\"Welcome back\")\n\n def get_login_success_url(self, form):\n redirect_url = form.cleaned_data['redirect_url']\n if redirect_url:\n return redirect_url\n\n # Redirect staff members to dashboard as that's the most likely place\n # they'll want to visit if they're logging in.\n if self.request.user.is_staff:\n return reverse('dashboard:index')\n\n return settings.LOGIN_REDIRECT_URL\n\n # REGISTRATION\n\n def get_registration_form(self, bind_data=False):\n return self.registration_form_class(\n **self.get_registration_form_kwargs(bind_data))\n\n def get_registration_form_kwargs(self, bind_data=False):\n kwargs = {}\n kwargs['host'] = self.request.get_host()\n kwargs['prefix'] = self.registration_prefix\n kwargs['initial'] = {\n 'redirect_url': self.request.GET.get(self.redirect_field_name, ''),\n }\n if bind_data and self.request.method in ('POST', 'PUT'):\n kwargs.update({\n 'data': self.request.POST,\n 'files': self.request.FILES,\n })\n return kwargs\n\n def validate_registration_form(self):\n form = self.get_registration_form(bind_data=True)\n if form.is_valid():\n self.register_user(form)\n\n msg = self.get_registration_success_message(form)\n messages.success(self.request, msg)\n\n return redirect(self.get_registration_success_url(form))\n\n ctx = self.get_context_data(registration_form=form)\n return self.render_to_response(ctx)\n\n def get_registration_success_message(self, form):\n return _(\"Thanks for registering!\")\n\n def get_registration_success_url(self, form):\n redirect_url = form.cleaned_data['redirect_url']\n if redirect_url:\n return redirect_url\n\n return settings.LOGIN_REDIRECT_URL\n\n\nclass LogoutView(generic.RedirectView):\n url = settings.OSCAR_HOMEPAGE\n permanent = False\n\n def get(self, request, *args, **kwargs):\n auth_logout(request)\n response = super().get(request, *args, **kwargs)\n\n for cookie in settings.OSCAR_COOKIES_DELETE_ON_LOGOUT:\n response.delete_cookie(cookie)\n\n return response\n\n\n# =============\n# Profile\n# =============\n\n\nclass ProfileView(PageTitleMixin, generic.TemplateView):\n template_name = 'oscar/customer/profile/profile.html'\n page_title = _('Profile')\n active_tab = 'profile'\n\n def get_context_data(self, **kwargs):\n ctx = super().get_context_data(**kwargs)\n ctx['profile_fields'] = self.get_profile_fields(self.request.user)\n return ctx\n\n def get_profile_fields(self, user):\n field_data = []\n\n # Check for custom user model\n for field_name in User._meta.additional_fields:\n field_data.append(\n self.get_model_field_data(user, field_name))\n\n # Check for profile class\n profile_class = get_profile_class()\n if profile_class:\n try:\n profile = profile_class.objects.get(user=user)\n except ObjectDoesNotExist:\n profile = profile_class(user=user)\n\n field_names = [f.name for f in profile._meta.local_fields]\n for field_name in field_names:\n if field_name in ('user', 'id'):\n continue\n field_data.append(\n self.get_model_field_data(profile, field_name))\n\n return field_data\n\n def 
get_model_field_data(self, model_class, field_name):\n \"\"\"\n Extract the verbose name and value for a model's field value\n \"\"\"\n field = model_class._meta.get_field(field_name)\n if field.choices:\n value = getattr(model_class, 'get_%s_display' % field_name)()\n else:\n value = getattr(model_class, field_name)\n return {\n 'name': getattr(field, 'verbose_name'),\n 'value': value,\n }\n\n\nclass ProfileUpdateView(PageTitleMixin, generic.FormView):\n form_class = ProfileForm\n template_name = 'oscar/customer/profile/profile_form.html'\n communication_type_code = 'EMAIL_CHANGED'\n page_title = _('Edit Profile')\n active_tab = 'profile'\n success_url = reverse_lazy('customer:profile-view')\n\n def get_form_kwargs(self):\n kwargs = super().get_form_kwargs()\n kwargs['user'] = self.request.user\n return kwargs\n\n def form_valid(self, form):\n # Grab current user instance before we save form. We may need this to\n # send a warning email if the email address is changed.\n try:\n old_user = User.objects.get(id=self.request.user.id)\n except User.DoesNotExist:\n old_user = None\n\n form.save()\n\n # We have to look up the email address from the form's\n # cleaned data because the object created by form.save() can\n # either be a user or profile instance depending whether a profile\n # class has been specified by the AUTH_PROFILE_MODULE setting.\n new_email = form.cleaned_data.get('email')\n if new_email and old_user and new_email != old_user.email:\n # Email address has changed - send a confirmation email to the old\n # address including a password reset link in case this is a\n # suspicious change.\n ctx = {\n 'user': self.request.user,\n 'site': get_current_site(self.request),\n 'reset_url': get_password_reset_url(old_user),\n 'new_email': new_email,\n }\n msgs = CommunicationEventType.objects.get_and_render(\n code=self.communication_type_code, context=ctx)\n Dispatcher().dispatch_user_messages(old_user, msgs)\n\n messages.success(self.request, _(\"Profile updated\"))\n return redirect(self.get_success_url())\n\n\nclass ProfileDeleteView(PageTitleMixin, generic.FormView):\n form_class = ConfirmPasswordForm\n template_name = 'oscar/customer/profile/profile_delete.html'\n page_title = _('Delete profile')\n active_tab = 'profile'\n success_url = settings.OSCAR_HOMEPAGE\n\n def get_form_kwargs(self):\n kwargs = super().get_form_kwargs()\n kwargs['user'] = self.request.user\n return kwargs\n\n def form_valid(self, form):\n self.request.user.delete()\n messages.success(\n self.request,\n _(\"Your profile has now been deleted. 
Thanks for using the site.\"))\n return redirect(self.get_success_url())\n\n\nclass ChangePasswordView(PageTitleMixin, generic.FormView):\n form_class = PasswordChangeForm\n template_name = 'oscar/customer/profile/change_password_form.html'\n communication_type_code = 'PASSWORD_CHANGED'\n page_title = _('Change Password')\n active_tab = 'profile'\n success_url = reverse_lazy('customer:profile-view')\n\n def get_form_kwargs(self):\n kwargs = super().get_form_kwargs()\n kwargs['user'] = self.request.user\n return kwargs\n\n def form_valid(self, form):\n form.save()\n update_session_auth_hash(self.request, self.request.user)\n messages.success(self.request, _(\"Password updated\"))\n\n ctx = {\n 'user': self.request.user,\n 'site': get_current_site(self.request),\n 'reset_url': get_password_reset_url(self.request.user),\n }\n msgs = CommunicationEventType.objects.get_and_render(\n code=self.communication_type_code, context=ctx)\n Dispatcher().dispatch_user_messages(self.request.user, msgs)\n\n return redirect(self.get_success_url())\n\n\n# =============\n# Email history\n# =============\n\nclass EmailHistoryView(PageTitleMixin, generic.ListView):\n context_object_name = \"emails\"\n template_name = 'oscar/customer/email/email_list.html'\n paginate_by = settings.OSCAR_EMAILS_PER_PAGE\n page_title = _('Email History')\n active_tab = 'emails'\n\n def get_queryset(self):\n \"\"\"\n Return Queryset of :py:class:`Email <oscar.apps.customer.abstract_models.AbstractEmail>`\n instances, that has been sent to the currently authenticated user.\n \"\"\" # noqa\n return Email._default_manager.filter(user=self.request.user)\n\n\nclass EmailDetailView(PageTitleMixin, generic.DetailView):\n \"\"\"Customer email\"\"\"\n template_name = \"oscar/customer/email/email_detail.html\"\n context_object_name = 'email'\n active_tab = 'emails'\n\n def get_object(self, queryset=None):\n return get_object_or_404(Email, user=self.request.user,\n id=self.kwargs['email_id'])\n\n def get_page_title(self):\n \"\"\"Append email subject to page title\"\"\"\n return '%s: %s' % (_('Email'), self.object.subject)\n\n\n# =============\n# Order history\n# =============\n\nclass OrderHistoryView(PageTitleMixin, generic.ListView):\n \"\"\"\n Customer order history\n \"\"\"\n context_object_name = \"orders\"\n template_name = 'oscar/customer/order/order_list.html'\n paginate_by = settings.OSCAR_ORDERS_PER_PAGE\n model = Order\n form_class = OrderSearchForm\n page_title = _('Order History')\n active_tab = 'orders'\n\n def get(self, request, *args, **kwargs):\n if 'date_from' in request.GET:\n self.form = self.form_class(self.request.GET)\n if not self.form.is_valid():\n self.object_list = self.get_queryset()\n ctx = self.get_context_data(object_list=self.object_list)\n return self.render_to_response(ctx)\n data = self.form.cleaned_data\n\n # If the user has just entered an order number, try and look it up\n # and redirect immediately to the order detail page.\n if data['order_number'] and not (data['date_to']\n or data['date_from']):\n try:\n order = Order.objects.get(\n number=data['order_number'], user=self.request.user)\n except Order.DoesNotExist:\n pass\n else:\n return redirect(\n 'customer:order', order_number=order.number)\n else:\n self.form = self.form_class()\n return super().get(request, *args, **kwargs)\n\n def get_queryset(self):\n \"\"\"\n Return Queryset of :py:class:`Order <oscar.apps.order.abstract_models.AbstractOrder>`\n instances for the currently authenticated user.\n \"\"\" # noqa\n qs = 
self.model._default_manager.filter(user=self.request.user)\n if self.form.is_bound and self.form.is_valid():\n qs = qs.filter(**self.form.get_filters())\n return qs\n\n def get_context_data(self, *args, **kwargs):\n ctx = super().get_context_data(*args, **kwargs)\n ctx['form'] = self.form\n return ctx\n\n\nclass OrderDetailView(PageTitleMixin, PostActionMixin, generic.DetailView):\n model = Order\n active_tab = 'orders'\n\n def get_template_names(self):\n return [\"oscar/customer/order/order_detail.html\"]\n\n def get_page_title(self):\n \"\"\"\n Order number as page title\n \"\"\"\n return '%s #%s' % (_('Order'), self.object.number)\n\n def get_object(self, queryset=None):\n return get_object_or_404(self.model, user=self.request.user,\n number=self.kwargs['order_number'])\n\n def do_reorder(self, order): # noqa (too complex (10))\n \"\"\"\n 'Re-order' a previous order.\n\n This puts the contents of the previous order into your basket\n \"\"\"\n # Collect lines to be added to the basket and any warnings for lines\n # that are no longer available.\n basket = self.request.basket\n lines_to_add = []\n warnings = []\n for line in order.lines.all():\n is_available, reason = line.is_available_to_reorder(\n basket, self.request.strategy)\n if is_available:\n lines_to_add.append(line)\n else:\n warnings.append(reason)\n\n # Check whether the number of items in the basket won't exceed the\n # maximum.\n total_quantity = sum([line.quantity for line in lines_to_add])\n is_quantity_allowed, reason = basket.is_quantity_allowed(\n total_quantity)\n if not is_quantity_allowed:\n messages.warning(self.request, reason)\n self.response = redirect('customer:order-list')\n return\n\n # Add any warnings\n for warning in warnings:\n messages.warning(self.request, warning)\n\n for line in lines_to_add:\n options = []\n for attribute in line.attributes.all():\n if attribute.option:\n options.append({\n 'option': attribute.option,\n 'value': attribute.value})\n basket.add_product(line.product, line.quantity, options)\n\n if len(lines_to_add) > 0:\n self.response = redirect('basket:summary')\n messages.info(\n self.request,\n _(\"All available lines from order %(number)s \"\n \"have been added to your basket\") % {'number': order.number})\n else:\n self.response = redirect('customer:order-list')\n messages.warning(\n self.request,\n _(\"It is not possible to re-order order %(number)s \"\n \"as none of its lines are available to purchase\") %\n {'number': order.number})\n\n\nclass OrderLineView(PostActionMixin, generic.DetailView):\n \"\"\"Customer order line\"\"\"\n\n def get_object(self, queryset=None):\n order = get_object_or_404(Order, user=self.request.user,\n number=self.kwargs['order_number'])\n return order.lines.get(id=self.kwargs['line_id'])\n\n def do_reorder(self, line):\n self.response = redirect('customer:order', self.kwargs['order_number'])\n basket = self.request.basket\n\n line_available_to_reorder, reason = line.is_available_to_reorder(\n basket, self.request.strategy)\n\n if not line_available_to_reorder:\n messages.warning(self.request, reason)\n return\n\n # We need to pass response to the get_or_create... 
method\n # as a new basket might need to be created\n self.response = redirect('basket:summary')\n\n # Convert line attributes into basket options\n options = []\n for attribute in line.attributes.all():\n if attribute.option:\n options.append({'option': attribute.option,\n 'value': attribute.value})\n basket.add_product(line.product, line.quantity, options)\n\n if line.quantity > 1:\n msg = _(\"%(qty)d copies of '%(product)s' have been added to your\"\n \" basket\") % {\n 'qty': line.quantity, 'product': line.product}\n else:\n msg = _(\"'%s' has been added to your basket\") % line.product\n\n messages.info(self.request, msg)\n\n\nclass AnonymousOrderDetailView(generic.DetailView):\n model = Order\n template_name = \"oscar/customer/anon_order.html\"\n\n def get_object(self, queryset=None):\n # Check URL hash matches that for order to prevent spoof attacks\n order = get_object_or_404(self.model, user=None,\n number=self.kwargs['order_number'])\n if not order.check_verification_hash(self.kwargs['hash']):\n raise http.Http404()\n return order\n\n\n# ------------\n# Address book\n# ------------\n\nclass AddressListView(PageTitleMixin, generic.ListView):\n \"\"\"Customer address book\"\"\"\n context_object_name = \"addresses\"\n template_name = 'oscar/customer/address/address_list.html'\n paginate_by = settings.OSCAR_ADDRESSES_PER_PAGE\n active_tab = 'addresses'\n page_title = _('Address Book')\n\n def get_queryset(self):\n \"\"\"Return customer's addresses\"\"\"\n return UserAddress._default_manager.filter(user=self.request.user)\n\n\nclass AddressCreateView(PageTitleMixin, generic.CreateView):\n form_class = UserAddressForm\n model = UserAddress\n template_name = 'oscar/customer/address/address_form.html'\n active_tab = 'addresses'\n page_title = _('Add a new address')\n success_url = reverse_lazy('customer:address-list')\n\n def get_form_kwargs(self):\n kwargs = super().get_form_kwargs()\n kwargs['user'] = self.request.user\n return kwargs\n\n def get_context_data(self, **kwargs):\n ctx = super().get_context_data(**kwargs)\n ctx['title'] = _('Add a new address')\n return ctx\n\n def get_success_url(self):\n messages.success(self.request,\n _(\"Address '%s' created\") % self.object.summary)\n return super().get_success_url()\n\n\nclass AddressUpdateView(PageTitleMixin, generic.UpdateView):\n form_class = UserAddressForm\n model = UserAddress\n template_name = 'oscar/customer/address/address_form.html'\n active_tab = 'addresses'\n page_title = _('Edit address')\n success_url = reverse_lazy('customer:address-list')\n\n def get_form_kwargs(self):\n kwargs = super().get_form_kwargs()\n kwargs['user'] = self.request.user\n return kwargs\n\n def get_context_data(self, **kwargs):\n ctx = super().get_context_data(**kwargs)\n ctx['title'] = _('Edit address')\n return ctx\n\n def get_queryset(self):\n return self.request.user.addresses.all()\n\n def get_success_url(self):\n messages.success(self.request,\n _(\"Address '%s' updated\") % self.object.summary)\n return super().get_success_url()\n\n\nclass AddressDeleteView(PageTitleMixin, generic.DeleteView):\n model = UserAddress\n template_name = \"oscar/customer/address/address_delete.html\"\n page_title = _('Delete address?')\n active_tab = 'addresses'\n context_object_name = 'address'\n success_url = reverse_lazy('customer:address-list')\n\n def get_queryset(self):\n return UserAddress._default_manager.filter(user=self.request.user)\n\n def get_success_url(self):\n messages.success(self.request,\n _(\"Address '%s' deleted\") % self.object.summary)\n 
return super().get_success_url()\n\n\nclass AddressChangeStatusView(generic.RedirectView):\n \"\"\"\n Sets an address as default_for_(billing|shipping)\n \"\"\"\n url = reverse_lazy('customer:address-list')\n permanent = False\n\n def get(self, request, pk=None, action=None, *args, **kwargs):\n address = get_object_or_404(UserAddress, user=self.request.user,\n pk=pk)\n # We don't want the user to set an address as the default shipping\n # address, though they should be able to set it as their billing\n # address.\n if address.country.is_shipping_country:\n setattr(address, 'is_%s' % action, True)\n elif action == 'default_for_billing':\n setattr(address, 'is_default_for_billing', True)\n else:\n messages.error(request, _('We do not ship to this country'))\n address.save()\n return super().get(\n request, *args, **kwargs)\n", "path": "src/oscar/apps/customer/views.py" } ]
[ { "content": "from django import http\nfrom django.conf import settings\nfrom django.contrib import messages\nfrom django.contrib.auth import login as auth_login\nfrom django.contrib.auth import logout as auth_logout\nfrom django.contrib.auth import update_session_auth_hash\nfrom django.contrib.auth.forms import PasswordChangeForm\nfrom django.contrib.sites.shortcuts import get_current_site\nfrom django.core.exceptions import ObjectDoesNotExist\nfrom django.shortcuts import get_object_or_404, redirect\nfrom django.urls import reverse, reverse_lazy\nfrom django.utils.translation import gettext_lazy as _\nfrom django.views import generic\n\nfrom oscar.apps.customer.utils import get_password_reset_url\nfrom oscar.core.compat import get_user_model\nfrom oscar.core.loading import (\n get_class, get_classes, get_model, get_profile_class)\nfrom oscar.core.utils import safe_referrer\nfrom oscar.views.generic import PostActionMixin\n\nfrom . import signals\n\nPageTitleMixin, RegisterUserMixin = get_classes(\n 'customer.mixins', ['PageTitleMixin', 'RegisterUserMixin'])\nDispatcher = get_class('customer.utils', 'Dispatcher')\nEmailAuthenticationForm, EmailUserCreationForm, OrderSearchForm = get_classes(\n 'customer.forms', ['EmailAuthenticationForm', 'EmailUserCreationForm',\n 'OrderSearchForm'])\nProfileForm, ConfirmPasswordForm = get_classes(\n 'customer.forms', ['ProfileForm', 'ConfirmPasswordForm'])\nUserAddressForm = get_class('address.forms', 'UserAddressForm')\nOrder = get_model('order', 'Order')\nLine = get_model('basket', 'Line')\nBasket = get_model('basket', 'Basket')\nUserAddress = get_model('address', 'UserAddress')\nEmail = get_model('customer', 'Email')\nCommunicationEventType = get_model('customer', 'CommunicationEventType')\n\nUser = get_user_model()\n\n\n# =======\n# Account\n# =======\n\n\nclass AccountSummaryView(generic.RedirectView):\n \"\"\"\n View that exists for legacy reasons and customisability. It commonly gets\n called when the user clicks on \"Account\" in the navbar.\n\n Oscar defaults to just redirecting to the profile summary page (and\n that redirect can be configured via OSCAR_ACCOUNT_REDIRECT_URL), but\n it's also likely you want to display an 'account overview' page or\n such like. 
The presence of this view allows just that, without\n having to change a lot of templates.\n \"\"\"\n pattern_name = settings.OSCAR_ACCOUNTS_REDIRECT_URL\n permanent = False\n\n\nclass AccountRegistrationView(RegisterUserMixin, generic.FormView):\n form_class = EmailUserCreationForm\n template_name = 'oscar/customer/registration.html'\n redirect_field_name = 'next'\n\n def get(self, request, *args, **kwargs):\n if request.user.is_authenticated:\n return redirect(settings.LOGIN_REDIRECT_URL)\n return super().get(\n request, *args, **kwargs)\n\n def get_logged_in_redirect(self):\n return reverse('customer:summary')\n\n def get_form_kwargs(self):\n kwargs = super().get_form_kwargs()\n kwargs['initial'] = {\n 'email': self.request.GET.get('email', ''),\n 'redirect_url': self.request.GET.get(self.redirect_field_name, '')\n }\n kwargs['host'] = self.request.get_host()\n return kwargs\n\n def get_context_data(self, *args, **kwargs):\n ctx = super().get_context_data(\n *args, **kwargs)\n ctx['cancel_url'] = safe_referrer(self.request, '')\n return ctx\n\n def form_valid(self, form):\n self.register_user(form)\n return redirect(form.cleaned_data['redirect_url'])\n\n\nclass AccountAuthView(RegisterUserMixin, generic.TemplateView):\n \"\"\"\n This is actually a slightly odd double form view that allows a customer to\n either login or register.\n \"\"\"\n template_name = 'oscar/customer/login_registration.html'\n login_prefix, registration_prefix = 'login', 'registration'\n login_form_class = EmailAuthenticationForm\n registration_form_class = EmailUserCreationForm\n redirect_field_name = 'next'\n\n def get(self, request, *args, **kwargs):\n if request.user.is_authenticated:\n return redirect(settings.LOGIN_REDIRECT_URL)\n return super().get(\n request, *args, **kwargs)\n\n def get_context_data(self, *args, **kwargs):\n ctx = super().get_context_data(*args, **kwargs)\n if 'login_form' not in kwargs:\n ctx['login_form'] = self.get_login_form()\n if 'registration_form' not in kwargs:\n ctx['registration_form'] = self.get_registration_form()\n return ctx\n\n def post(self, request, *args, **kwargs):\n # Use the name of the submit button to determine which form to validate\n if 'login_submit' in request.POST:\n return self.validate_login_form()\n elif 'registration_submit' in request.POST:\n return self.validate_registration_form()\n return http.HttpResponseBadRequest()\n\n # LOGIN\n\n def get_login_form(self, bind_data=False):\n return self.login_form_class(\n **self.get_login_form_kwargs(bind_data))\n\n def get_login_form_kwargs(self, bind_data=False):\n kwargs = {}\n kwargs['request'] = self.request\n kwargs['host'] = self.request.get_host()\n kwargs['prefix'] = self.login_prefix\n kwargs['initial'] = {\n 'redirect_url': self.request.GET.get(self.redirect_field_name, ''),\n }\n if bind_data and self.request.method in ('POST', 'PUT'):\n kwargs.update({\n 'data': self.request.POST,\n 'files': self.request.FILES,\n })\n return kwargs\n\n def validate_login_form(self):\n form = self.get_login_form(bind_data=True)\n if form.is_valid():\n user = form.get_user()\n\n # Grab a reference to the session ID before logging in\n old_session_key = self.request.session.session_key\n\n auth_login(self.request, form.get_user())\n\n # Raise signal robustly (we don't want exceptions to crash the\n # request handling). 
We use a custom signal as we want to track the\n # session key before calling login (which cycles the session ID).\n signals.user_logged_in.send_robust(\n sender=self, request=self.request, user=user,\n old_session_key=old_session_key)\n\n msg = self.get_login_success_message(form)\n if msg:\n messages.success(self.request, msg)\n\n return redirect(self.get_login_success_url(form))\n\n ctx = self.get_context_data(login_form=form)\n return self.render_to_response(ctx)\n\n def get_login_success_message(self, form):\n return _(\"Welcome back\")\n\n def get_login_success_url(self, form):\n redirect_url = form.cleaned_data['redirect_url']\n if redirect_url:\n return redirect_url\n\n # Redirect staff members to dashboard as that's the most likely place\n # they'll want to visit if they're logging in.\n if self.request.user.is_staff:\n return reverse('dashboard:index')\n\n return settings.LOGIN_REDIRECT_URL\n\n # REGISTRATION\n\n def get_registration_form(self, bind_data=False):\n return self.registration_form_class(\n **self.get_registration_form_kwargs(bind_data))\n\n def get_registration_form_kwargs(self, bind_data=False):\n kwargs = {}\n kwargs['host'] = self.request.get_host()\n kwargs['prefix'] = self.registration_prefix\n kwargs['initial'] = {\n 'redirect_url': self.request.GET.get(self.redirect_field_name, ''),\n }\n if bind_data and self.request.method in ('POST', 'PUT'):\n kwargs.update({\n 'data': self.request.POST,\n 'files': self.request.FILES,\n })\n return kwargs\n\n def validate_registration_form(self):\n form = self.get_registration_form(bind_data=True)\n if form.is_valid():\n self.register_user(form)\n\n msg = self.get_registration_success_message(form)\n messages.success(self.request, msg)\n\n return redirect(self.get_registration_success_url(form))\n\n ctx = self.get_context_data(registration_form=form)\n return self.render_to_response(ctx)\n\n def get_registration_success_message(self, form):\n return _(\"Thanks for registering!\")\n\n def get_registration_success_url(self, form):\n redirect_url = form.cleaned_data['redirect_url']\n if redirect_url:\n return redirect_url\n\n return settings.LOGIN_REDIRECT_URL\n\n\nclass LogoutView(generic.RedirectView):\n url = settings.OSCAR_HOMEPAGE\n permanent = False\n\n def get(self, request, *args, **kwargs):\n auth_logout(request)\n response = super().get(request, *args, **kwargs)\n\n for cookie in settings.OSCAR_COOKIES_DELETE_ON_LOGOUT:\n response.delete_cookie(cookie)\n\n return response\n\n\n# =============\n# Profile\n# =============\n\n\nclass ProfileView(PageTitleMixin, generic.TemplateView):\n template_name = 'oscar/customer/profile/profile.html'\n page_title = _('Profile')\n active_tab = 'profile'\n\n def get_context_data(self, **kwargs):\n ctx = super().get_context_data(**kwargs)\n ctx['profile_fields'] = self.get_profile_fields(self.request.user)\n return ctx\n\n def get_profile_fields(self, user):\n field_data = []\n\n # Check for custom user model\n for field_name in User._meta.additional_fields:\n field_data.append(\n self.get_model_field_data(user, field_name))\n\n # Check for profile class\n profile_class = get_profile_class()\n if profile_class:\n try:\n profile = profile_class.objects.get(user=user)\n except ObjectDoesNotExist:\n profile = profile_class(user=user)\n\n field_names = [f.name for f in profile._meta.local_fields]\n for field_name in field_names:\n if field_name in ('user', 'id'):\n continue\n field_data.append(\n self.get_model_field_data(profile, field_name))\n\n return field_data\n\n def 
get_model_field_data(self, model_class, field_name):\n \"\"\"\n Extract the verbose name and value for a model's field value\n \"\"\"\n field = model_class._meta.get_field(field_name)\n if field.choices:\n value = getattr(model_class, 'get_%s_display' % field_name)()\n else:\n value = getattr(model_class, field_name)\n return {\n 'name': getattr(field, 'verbose_name'),\n 'value': value,\n }\n\n\nclass ProfileUpdateView(PageTitleMixin, generic.FormView):\n form_class = ProfileForm\n template_name = 'oscar/customer/profile/profile_form.html'\n communication_type_code = 'EMAIL_CHANGED'\n page_title = _('Edit Profile')\n active_tab = 'profile'\n success_url = reverse_lazy('customer:profile-view')\n\n def get_form_kwargs(self):\n kwargs = super().get_form_kwargs()\n kwargs['user'] = self.request.user\n return kwargs\n\n def form_valid(self, form):\n # Grab current user instance before we save form. We may need this to\n # send a warning email if the email address is changed.\n try:\n old_user = User.objects.get(id=self.request.user.id)\n except User.DoesNotExist:\n old_user = None\n\n form.save()\n\n # We have to look up the email address from the form's\n # cleaned data because the object created by form.save() can\n # either be a user or profile instance depending whether a profile\n # class has been specified by the AUTH_PROFILE_MODULE setting.\n new_email = form.cleaned_data.get('email')\n if new_email and old_user and new_email != old_user.email:\n # Email address has changed - send a confirmation email to the old\n # address including a password reset link in case this is a\n # suspicious change.\n ctx = {\n 'user': self.request.user,\n 'site': get_current_site(self.request),\n 'reset_url': get_password_reset_url(old_user),\n 'new_email': new_email,\n }\n msgs = CommunicationEventType.objects.get_and_render(\n code=self.communication_type_code, context=ctx)\n Dispatcher().dispatch_user_messages(old_user, msgs)\n\n messages.success(self.request, _(\"Profile updated\"))\n return redirect(self.get_success_url())\n\n\nclass ProfileDeleteView(PageTitleMixin, generic.FormView):\n form_class = ConfirmPasswordForm\n template_name = 'oscar/customer/profile/profile_delete.html'\n page_title = _('Delete profile')\n active_tab = 'profile'\n success_url = settings.OSCAR_HOMEPAGE\n\n def get_form_kwargs(self):\n kwargs = super().get_form_kwargs()\n kwargs['user'] = self.request.user\n return kwargs\n\n def form_valid(self, form):\n self.request.user.delete()\n messages.success(\n self.request,\n _(\"Your profile has now been deleted. 
Thanks for using the site.\"))\n return redirect(self.get_success_url())\n\n\nclass ChangePasswordView(PageTitleMixin, generic.FormView):\n form_class = PasswordChangeForm\n template_name = 'oscar/customer/profile/change_password_form.html'\n communication_type_code = 'PASSWORD_CHANGED'\n page_title = _('Change Password')\n active_tab = 'profile'\n success_url = reverse_lazy('customer:profile-view')\n\n def get_form_kwargs(self):\n kwargs = super().get_form_kwargs()\n kwargs['user'] = self.request.user\n return kwargs\n\n def form_valid(self, form):\n form.save()\n update_session_auth_hash(self.request, self.request.user)\n messages.success(self.request, _(\"Password updated\"))\n\n ctx = {\n 'user': self.request.user,\n 'site': get_current_site(self.request),\n 'reset_url': get_password_reset_url(self.request.user),\n }\n msgs = CommunicationEventType.objects.get_and_render(\n code=self.communication_type_code, context=ctx)\n Dispatcher().dispatch_user_messages(self.request.user, msgs)\n\n return redirect(self.get_success_url())\n\n\n# =============\n# Email history\n# =============\n\nclass EmailHistoryView(PageTitleMixin, generic.ListView):\n context_object_name = \"emails\"\n template_name = 'oscar/customer/email/email_list.html'\n paginate_by = settings.OSCAR_EMAILS_PER_PAGE\n page_title = _('Email History')\n active_tab = 'emails'\n\n def get_queryset(self):\n \"\"\"\n Return Queryset of :py:class:`Email <oscar.apps.customer.abstract_models.AbstractEmail>`\n instances, that has been sent to the currently authenticated user.\n \"\"\" # noqa\n return Email._default_manager.filter(user=self.request.user)\n\n\nclass EmailDetailView(PageTitleMixin, generic.DetailView):\n \"\"\"Customer email\"\"\"\n template_name = \"oscar/customer/email/email_detail.html\"\n context_object_name = 'email'\n active_tab = 'emails'\n\n def get_object(self, queryset=None):\n return get_object_or_404(Email, user=self.request.user,\n id=self.kwargs['email_id'])\n\n def get_page_title(self):\n \"\"\"Append email subject to page title\"\"\"\n return '%s: %s' % (_('Email'), self.object.subject)\n\n\n# =============\n# Order history\n# =============\n\nclass OrderHistoryView(PageTitleMixin, generic.ListView):\n \"\"\"\n Customer order history\n \"\"\"\n context_object_name = \"orders\"\n template_name = 'oscar/customer/order/order_list.html'\n paginate_by = settings.OSCAR_ORDERS_PER_PAGE\n model = Order\n form_class = OrderSearchForm\n page_title = _('Order History')\n active_tab = 'orders'\n\n def get(self, request, *args, **kwargs):\n if 'date_from' in request.GET:\n self.form = self.form_class(self.request.GET)\n if not self.form.is_valid():\n self.object_list = self.get_queryset()\n ctx = self.get_context_data(object_list=self.object_list)\n return self.render_to_response(ctx)\n data = self.form.cleaned_data\n\n # If the user has just entered an order number, try and look it up\n # and redirect immediately to the order detail page.\n if data['order_number'] and not (data['date_to']\n or data['date_from']):\n try:\n order = Order.objects.get(\n number=data['order_number'], user=self.request.user)\n except Order.DoesNotExist:\n pass\n else:\n return redirect(\n 'customer:order', order_number=order.number)\n else:\n self.form = self.form_class()\n return super().get(request, *args, **kwargs)\n\n def get_queryset(self):\n \"\"\"\n Return Queryset of :py:class:`Order <oscar.apps.order.abstract_models.AbstractOrder>`\n instances for the currently authenticated user.\n \"\"\" # noqa\n qs = 
self.model._default_manager.filter(user=self.request.user)\n if self.form.is_bound and self.form.is_valid():\n qs = qs.filter(**self.form.get_filters())\n return qs\n\n def get_context_data(self, *args, **kwargs):\n ctx = super().get_context_data(*args, **kwargs)\n ctx['form'] = self.form\n return ctx\n\n\nclass OrderDetailView(PageTitleMixin, PostActionMixin, generic.DetailView):\n model = Order\n active_tab = 'orders'\n\n def get_template_names(self):\n return [\"oscar/customer/order/order_detail.html\"]\n\n def get_page_title(self):\n \"\"\"\n Order number as page title\n \"\"\"\n return '%s #%s' % (_('Order'), self.object.number)\n\n def get_object(self, queryset=None):\n return get_object_or_404(self.model, user=self.request.user,\n number=self.kwargs['order_number'])\n\n def do_reorder(self, order): # noqa (too complex (10))\n \"\"\"\n 'Re-order' a previous order.\n\n This puts the contents of the previous order into your basket\n \"\"\"\n # Collect lines to be added to the basket and any warnings for lines\n # that are no longer available.\n basket = self.request.basket\n lines_to_add = []\n warnings = []\n for line in order.lines.all():\n is_available, reason = line.is_available_to_reorder(\n basket, self.request.strategy)\n if is_available:\n lines_to_add.append(line)\n else:\n warnings.append(reason)\n\n # Check whether the number of items in the basket won't exceed the\n # maximum.\n total_quantity = sum([line.quantity for line in lines_to_add])\n is_quantity_allowed, reason = basket.is_quantity_allowed(\n total_quantity)\n if not is_quantity_allowed:\n messages.warning(self.request, reason)\n self.response = redirect('customer:order-list')\n return\n\n # Add any warnings\n for warning in warnings:\n messages.warning(self.request, warning)\n\n for line in lines_to_add:\n options = []\n for attribute in line.attributes.all():\n if attribute.option:\n options.append({\n 'option': attribute.option,\n 'value': attribute.value})\n basket.add_product(line.product, line.quantity, options)\n\n if len(lines_to_add) > 0:\n self.response = redirect('basket:summary')\n messages.info(\n self.request,\n _(\"All available lines from order %(number)s \"\n \"have been added to your basket\") % {'number': order.number})\n else:\n self.response = redirect('customer:order-list')\n messages.warning(\n self.request,\n _(\"It is not possible to re-order order %(number)s \"\n \"as none of its lines are available to purchase\") %\n {'number': order.number})\n\n\nclass OrderLineView(PostActionMixin, generic.DetailView):\n \"\"\"Customer order line\"\"\"\n\n def get_object(self, queryset=None):\n order = get_object_or_404(Order, user=self.request.user,\n number=self.kwargs['order_number'])\n return order.lines.get(id=self.kwargs['line_id'])\n\n def do_reorder(self, line):\n self.response = redirect('customer:order', self.kwargs['order_number'])\n basket = self.request.basket\n\n line_available_to_reorder, reason = line.is_available_to_reorder(\n basket, self.request.strategy)\n\n if not line_available_to_reorder:\n messages.warning(self.request, reason)\n return\n\n # We need to pass response to the get_or_create... 
method\n # as a new basket might need to be created\n self.response = redirect('basket:summary')\n\n # Convert line attributes into basket options\n options = []\n for attribute in line.attributes.all():\n if attribute.option:\n options.append({'option': attribute.option,\n 'value': attribute.value})\n basket.add_product(line.product, line.quantity, options)\n\n if line.quantity > 1:\n msg = _(\"%(qty)d copies of '%(product)s' have been added to your\"\n \" basket\") % {\n 'qty': line.quantity, 'product': line.product}\n else:\n msg = _(\"'%s' has been added to your basket\") % line.product\n\n messages.info(self.request, msg)\n\n\nclass AnonymousOrderDetailView(generic.DetailView):\n model = Order\n template_name = \"oscar/customer/anon_order.html\"\n\n def get_object(self, queryset=None):\n # Check URL hash matches that for order to prevent spoof attacks\n order = get_object_or_404(self.model, user=None,\n number=self.kwargs['order_number'])\n if not order.check_verification_hash(self.kwargs['hash']):\n raise http.Http404()\n return order\n\n\n# ------------\n# Address book\n# ------------\n\nclass AddressListView(PageTitleMixin, generic.ListView):\n \"\"\"Customer address book\"\"\"\n context_object_name = \"addresses\"\n template_name = 'oscar/customer/address/address_list.html'\n paginate_by = settings.OSCAR_ADDRESSES_PER_PAGE\n active_tab = 'addresses'\n page_title = _('Address Book')\n\n def get_queryset(self):\n \"\"\"Return customer's addresses\"\"\"\n return UserAddress._default_manager.filter(user=self.request.user)\n\n\nclass AddressCreateView(PageTitleMixin, generic.CreateView):\n form_class = UserAddressForm\n model = UserAddress\n template_name = 'oscar/customer/address/address_form.html'\n active_tab = 'addresses'\n page_title = _('Add a new address')\n success_url = reverse_lazy('customer:address-list')\n\n def get_form_kwargs(self):\n kwargs = super().get_form_kwargs()\n kwargs['user'] = self.request.user\n return kwargs\n\n def get_context_data(self, **kwargs):\n ctx = super().get_context_data(**kwargs)\n ctx['title'] = _('Add a new address')\n return ctx\n\n def get_success_url(self):\n messages.success(self.request,\n _(\"Address '%s' created\") % self.object.summary)\n return super().get_success_url()\n\n\nclass AddressUpdateView(PageTitleMixin, generic.UpdateView):\n form_class = UserAddressForm\n model = UserAddress\n template_name = 'oscar/customer/address/address_form.html'\n active_tab = 'addresses'\n page_title = _('Edit address')\n success_url = reverse_lazy('customer:address-list')\n\n def get_form_kwargs(self):\n kwargs = super().get_form_kwargs()\n kwargs['user'] = self.request.user\n return kwargs\n\n def get_context_data(self, **kwargs):\n ctx = super().get_context_data(**kwargs)\n ctx['title'] = _('Edit address')\n return ctx\n\n def get_queryset(self):\n return self.request.user.addresses.all()\n\n def get_success_url(self):\n messages.success(self.request,\n _(\"Address '%s' updated\") % self.object.summary)\n return super().get_success_url()\n\n\nclass AddressDeleteView(PageTitleMixin, generic.DeleteView):\n model = UserAddress\n template_name = \"oscar/customer/address/address_delete.html\"\n page_title = _('Delete address?')\n active_tab = 'addresses'\n context_object_name = 'address'\n success_url = reverse_lazy('customer:address-list')\n\n def get_queryset(self):\n return UserAddress._default_manager.filter(user=self.request.user)\n\n def get_success_url(self):\n messages.success(self.request,\n _(\"Address '%s' deleted\") % self.object.summary)\n 
return super().get_success_url()\n\n\nclass AddressChangeStatusView(generic.RedirectView):\n \"\"\"\n Sets an address as default_for_(billing|shipping)\n \"\"\"\n url = reverse_lazy('customer:address-list')\n permanent = False\n\n def get(self, request, pk=None, action=None, *args, **kwargs):\n address = get_object_or_404(UserAddress, user=self.request.user,\n pk=pk)\n # We don't want the user to set an address as the default shipping\n # address, though they should be able to set it as their billing\n # address.\n if address.country.is_shipping_country:\n setattr(address, 'is_%s' % action, True)\n elif action == 'default_for_billing':\n setattr(address, 'is_default_for_billing', True)\n else:\n messages.error(request, _('We do not ship to this country'))\n address.save()\n return super().get(\n request, *args, **kwargs)\n", "path": "src/oscar/apps/customer/views.py" } ]
diff --git a/src/oscar/apps/customer/views.py b/src/oscar/apps/customer/views.py index e04ccf81b58..56e6fb62adb 100644 --- a/src/oscar/apps/customer/views.py +++ b/src/oscar/apps/customer/views.py @@ -135,6 +135,7 @@ def get_login_form(self, bind_data=False): def get_login_form_kwargs(self, bind_data=False): kwargs = {} + kwargs['request'] = self.request kwargs['host'] = self.request.get_host() kwargs['prefix'] = self.login_prefix kwargs['initial'] = { diff --git a/tests/unit/customer/test_views.py b/tests/unit/customer/test_views.py new file mode 100644 index 00000000000..41775878ede --- /dev/null +++ b/tests/unit/customer/test_views.py @@ -0,0 +1,28 @@ +from unittest.mock import Mock, patch + +from django.test import Client, TestCase +from django.urls import reverse + +from oscar.apps.customer.forms import EmailAuthenticationForm + + +class TestAccountAuthView(TestCase): + + def setUp(self): + self.client = Client() + + def test_request_is_passed_to_form(self): + form_class = Mock(wraps=EmailAuthenticationForm) + data = {"login_submit": ["1"]} + initial = {'redirect_url': ''} + with patch("oscar.apps.customer.views.AccountAuthView.login_form_class", new=form_class): + response = self.client.post(reverse("customer:login"), data=data) + assert form_class.called + form_class.assert_called_with( + data=data, + files={}, + host="testserver", + initial=initial, + prefix='login', + request=response.wsgi_request, + )
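For context, a hedged sketch (not Oscar's actual implementation) of why the login form benefits from receiving the request: Django's `AuthenticationForm` accepts a `request` argument and forwards it to `authenticate()`, which some authentication backends need (for example, request-based rate limiting). The subclass name below is illustrative only.

```python
# Minimal sketch, assuming a standard Django setup; not Oscar's code.
from django.contrib.auth.forms import AuthenticationForm


class RequestAwareLoginForm(AuthenticationForm):
    """Hypothetical subclass showing that ``self.request`` becomes available
    once the view passes ``request=...`` into the form kwargs, as the diff
    above does."""

    def confirm_login_allowed(self, user):
        # ``self.request`` was stored by AuthenticationForm.__init__ and can
        # be used here for request-dependent checks before login succeeds.
        super().confirm_login_allowed(user)


# View-side usage mirroring the change above (illustrative):
# form = RequestAwareLoginForm(request=request, data=request.POST)
```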
ansible__ansible-43500
Task name is overridden by include_role, and is not evaluated in output <!--- Verify first that your issue/request is not already reported on GitHub. THIS FORM WILL BE READ BY A MACHINE, COMPLETE ALL SECTIONS AS DESCRIBED. Also test if the latest release, and devel branch are affected too. ALWAYS add information AFTER (OUTSIDE) these html comments. Otherwise it may end up being automatically closed by our bot. --> ##### SUMMARY <!--- Explain the problem briefly --> When using `include_role`, the `name` parameter given to it appears to override the `name` parameter given to the task itself. Additionally, if jinja2 was being used to determine which role to include, that is not evaluated and is printed raw, which is not useful to an observer. ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME <!--- Insert, BELOW THIS COMMENT, the name of the module, plugin, task or feature. Do not include extra details here, e.g. "vyos_command" not "the network module vyos_command" or the full path--> include_role ##### ANSIBLE VERSION <!--- Paste, BELOW THIS COMMENT, verbatim output from "ansible --version" between quotes below --> ``` ansible 2.6.1 ``` ##### CONFIGURATION <!--- If using Ansible 2.4 or above, paste, BELOW THIS COMMENT, the results of "ansible-config dump --only-changed" Otherwise, mention any settings you have changed/added/removed in ansible.cfg (or using the ANSIBLE_* environment variables).--> ##### OS / ENVIRONMENT <!--- Mention, BELOW THIS COMMENT, the OS you are running Ansible from, and the OS you are managing, or say "N/A" for anything that is not platform-specific. Also mention the specific version of what you are trying to control, e.g. if this is a network bug the version of firmware on the network device.--> Red Hat Enterprise Linux Server release 7.4 (Maipo) ##### STEPS TO REPRODUCE <!--- For bugs, show exactly how to reproduce the problem, using a minimal test-case. For new features, show how the feature would be used. --> <!--- Paste example playbooks or commands between quotes below --> Playbook ```yaml - name: "test" hosts: localhost gather_facts: no connection: local vars: role_type: a tasks: - name: Role inclusion test include_role: name: "role-{{ role_type }}" ``` roles/role-a/tasks/main.yml ```yaml --- - debug: msg="This is Role A" ``` <!--- You can also paste gist.github.com links for larger files --> ##### EXPECTED RESULTS ``` PLAY [test] ******************************************************************** TASK [Role inclusion test] ***************************************************** TASK [role-a : debug] ********************************************************** [...] ``` Or, less preferably: ``` PLAY [test] ******************************************************************** TASK [include_role : role-a ************************************* TASK [role-a : debug] ********************************************************** [...] ``` ##### ACTUAL RESULTS ``` PLAY [test] ******************************************************************** TASK [include_role : role-{{ role_type }}] ************************************* TASK [role-a : debug] ********************************************************** [...] ```
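For reference, the behaviour comes down to how the task's display name is computed. Below is a minimal, self-contained sketch (a stand-in, not the actual Ansible source, which follows in the files below) of the fallback that resolves this: prefer the user-supplied task `name` and only fall back to the `action : role` form when no name was given.

```python
# Illustrative sketch of the naming fallback discussed in this issue.
# IncludeRoleStub is a hypothetical stand-in for Ansible's IncludeRole class.
class IncludeRoleStub:
    def __init__(self, name, action, role_name):
        self.name = name              # task-level "name:" from the playbook
        self.action = action          # e.g. "include_role"
        self._role_name = role_name   # may still be raw, e.g. "role-{{ role_type }}"

    def get_name(self):
        # Before the fix: always "<action> : <raw role name>".
        # After the fix: prefer the explicit task name when one is set.
        return self.name or "%s : %s" % (self.action, self._role_name)


task = IncludeRoleStub("Role inclusion test", "include_role", "role-{{ role_type }}")
print(task.get_name())  # -> "Role inclusion test"
```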
[ { "content": "\n#\n# This file is part of Ansible\n#\n# Ansible is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# Ansible is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with Ansible. If not, see <http://www.gnu.org/licenses/>.\n\n# Make coding more python3-ish\nfrom __future__ import (absolute_import, division, print_function)\n__metaclass__ = type\n\nfrom os.path import basename\n\nfrom ansible.errors import AnsibleParserError\nfrom ansible.playbook.attribute import FieldAttribute\nfrom ansible.playbook.block import Block\nfrom ansible.playbook.task_include import TaskInclude\nfrom ansible.playbook.role import Role\nfrom ansible.playbook.role.include import RoleInclude\n\ntry:\n from __main__ import display\nexcept ImportError:\n from ansible.utils.display import Display\n display = Display()\n\n__all__ = ['IncludeRole']\n\n\nclass IncludeRole(TaskInclude):\n\n \"\"\"\n A Role include is derived from a regular role to handle the special\n circumstances related to the `- include_role: ...`\n \"\"\"\n\n BASE = ('name', 'role') # directly assigned\n FROM_ARGS = ('tasks_from', 'vars_from', 'defaults_from') # used to populate from dict in role\n OTHER_ARGS = ('apply', 'private', 'public', 'allow_duplicates') # assigned to matching property\n VALID_ARGS = tuple(frozenset(BASE + FROM_ARGS + OTHER_ARGS)) # all valid args\n\n # =================================================================================\n # ATTRIBUTES\n\n # private as this is a 'module options' vs a task property\n _allow_duplicates = FieldAttribute(isa='bool', default=True, private=True)\n _private = FieldAttribute(isa='bool', default=None, private=True)\n _public = FieldAttribute(isa='bool', default=False, private=True)\n\n def __init__(self, block=None, role=None, task_include=None):\n\n super(IncludeRole, self).__init__(block=block, role=role, task_include=task_include)\n\n self._from_files = {}\n self._parent_role = role\n self._role_name = None\n self._role_path = None\n\n def get_name(self):\n ''' return the name of the task '''\n return \"%s : %s\" % (self.action, self._role_name)\n\n def get_block_list(self, play=None, variable_manager=None, loader=None):\n\n # only need play passed in when dynamic\n if play is None:\n myplay = self._parent._play\n else:\n myplay = play\n\n ri = RoleInclude.load(self._role_name, play=myplay, variable_manager=variable_manager, loader=loader)\n ri.vars.update(self.vars)\n\n # build role\n actual_role = Role.load(ri, myplay, parent_role=self._parent_role, from_files=self._from_files,\n from_include=True)\n actual_role._metadata.allow_duplicates = self.allow_duplicates\n\n if self.statically_loaded or self.public:\n myplay.roles.append(actual_role)\n\n # save this for later use\n self._role_path = actual_role._role_path\n\n # compile role with parent roles as dependencies to ensure they inherit\n # variables\n if not self._parent_role:\n dep_chain = []\n else:\n dep_chain = list(self._parent_role._parents)\n dep_chain.append(self._parent_role)\n\n blocks = actual_role.compile(play=myplay, dep_chain=dep_chain)\n for b in blocks:\n b._parent 
= self\n\n # updated available handlers in play\n handlers = actual_role.get_handler_blocks(play=myplay)\n for h in handlers:\n h._parent = self\n myplay.handlers = myplay.handlers + handlers\n return blocks, handlers\n\n @staticmethod\n def load(data, block=None, role=None, task_include=None, variable_manager=None, loader=None):\n\n ir = IncludeRole(block, role, task_include=task_include).load_data(data, variable_manager=variable_manager, loader=loader)\n\n # Validate options\n my_arg_names = frozenset(ir.args.keys())\n\n # name is needed, or use role as alias\n ir._role_name = ir.args.get('name', ir.args.get('role'))\n if ir._role_name is None:\n raise AnsibleParserError(\"'name' is a required field for %s.\" % ir.action, obj=data)\n\n if 'public' in ir.args and ir.action != 'include_role':\n raise AnsibleParserError('Invalid options for %s: private' % ir.action, obj=data)\n\n if 'private' in ir.args:\n display.deprecated(\n msg='Supplying \"private\" for \"%s\" is a no op, and is deprecated' % ir.action,\n version='2.8'\n )\n\n # validate bad args, otherwise we silently ignore\n bad_opts = my_arg_names.difference(IncludeRole.VALID_ARGS)\n if bad_opts:\n raise AnsibleParserError('Invalid options for %s: %s' % (ir.action, ','.join(list(bad_opts))), obj=data)\n\n # build options for role includes\n for key in my_arg_names.intersection(IncludeRole.FROM_ARGS):\n from_key = key.replace('_from', '')\n ir._from_files[from_key] = basename(ir.args.get(key))\n\n apply_attrs = ir.args.pop('apply', {})\n if apply_attrs and ir.action != 'include_role':\n raise AnsibleParserError('Invalid options for %s: apply' % ir.action, obj=data)\n elif apply_attrs:\n apply_attrs['block'] = []\n p_block = Block.load(\n apply_attrs,\n play=block._play,\n parent_block=block,\n role=role,\n task_include=task_include,\n use_handlers=block._use_handlers,\n variable_manager=variable_manager,\n loader=loader,\n )\n ir._parent = p_block\n\n # manual list as otherwise the options would set other task parameters we don't want.\n for option in my_arg_names.intersection(IncludeRole.OTHER_ARGS):\n setattr(ir, option, ir.args.get(option))\n\n return ir\n\n def copy(self, exclude_parent=False, exclude_tasks=False):\n\n new_me = super(IncludeRole, self).copy(exclude_parent=exclude_parent, exclude_tasks=exclude_tasks)\n new_me.statically_loaded = self.statically_loaded\n new_me._from_files = self._from_files.copy()\n new_me._parent_role = self._parent_role\n new_me._role_name = self._role_name\n new_me._role_path = self._role_path\n\n return new_me\n\n def get_include_params(self):\n v = super(IncludeRole, self).get_include_params()\n if self._parent_role:\n v.update(self._parent_role.get_role_params())\n return v\n", "path": "lib/ansible/playbook/role_include.py" } ]
[ { "content": "\n#\n# This file is part of Ansible\n#\n# Ansible is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# Ansible is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with Ansible. If not, see <http://www.gnu.org/licenses/>.\n\n# Make coding more python3-ish\nfrom __future__ import (absolute_import, division, print_function)\n__metaclass__ = type\n\nfrom os.path import basename\n\nfrom ansible.errors import AnsibleParserError\nfrom ansible.playbook.attribute import FieldAttribute\nfrom ansible.playbook.block import Block\nfrom ansible.playbook.task_include import TaskInclude\nfrom ansible.playbook.role import Role\nfrom ansible.playbook.role.include import RoleInclude\n\ntry:\n from __main__ import display\nexcept ImportError:\n from ansible.utils.display import Display\n display = Display()\n\n__all__ = ['IncludeRole']\n\n\nclass IncludeRole(TaskInclude):\n\n \"\"\"\n A Role include is derived from a regular role to handle the special\n circumstances related to the `- include_role: ...`\n \"\"\"\n\n BASE = ('name', 'role') # directly assigned\n FROM_ARGS = ('tasks_from', 'vars_from', 'defaults_from') # used to populate from dict in role\n OTHER_ARGS = ('apply', 'private', 'public', 'allow_duplicates') # assigned to matching property\n VALID_ARGS = tuple(frozenset(BASE + FROM_ARGS + OTHER_ARGS)) # all valid args\n\n # =================================================================================\n # ATTRIBUTES\n\n # private as this is a 'module options' vs a task property\n _allow_duplicates = FieldAttribute(isa='bool', default=True, private=True)\n _private = FieldAttribute(isa='bool', default=None, private=True)\n _public = FieldAttribute(isa='bool', default=False, private=True)\n\n def __init__(self, block=None, role=None, task_include=None):\n\n super(IncludeRole, self).__init__(block=block, role=role, task_include=task_include)\n\n self._from_files = {}\n self._parent_role = role\n self._role_name = None\n self._role_path = None\n\n def get_name(self):\n ''' return the name of the task '''\n return self.name or \"%s : %s\" % (self.action, self._role_name)\n\n def get_block_list(self, play=None, variable_manager=None, loader=None):\n\n # only need play passed in when dynamic\n if play is None:\n myplay = self._parent._play\n else:\n myplay = play\n\n ri = RoleInclude.load(self._role_name, play=myplay, variable_manager=variable_manager, loader=loader)\n ri.vars.update(self.vars)\n\n # build role\n actual_role = Role.load(ri, myplay, parent_role=self._parent_role, from_files=self._from_files,\n from_include=True)\n actual_role._metadata.allow_duplicates = self.allow_duplicates\n\n if self.statically_loaded or self.public:\n myplay.roles.append(actual_role)\n\n # save this for later use\n self._role_path = actual_role._role_path\n\n # compile role with parent roles as dependencies to ensure they inherit\n # variables\n if not self._parent_role:\n dep_chain = []\n else:\n dep_chain = list(self._parent_role._parents)\n dep_chain.append(self._parent_role)\n\n blocks = actual_role.compile(play=myplay, dep_chain=dep_chain)\n for b in 
blocks:\n b._parent = self\n\n # updated available handlers in play\n handlers = actual_role.get_handler_blocks(play=myplay)\n for h in handlers:\n h._parent = self\n myplay.handlers = myplay.handlers + handlers\n return blocks, handlers\n\n @staticmethod\n def load(data, block=None, role=None, task_include=None, variable_manager=None, loader=None):\n\n ir = IncludeRole(block, role, task_include=task_include).load_data(data, variable_manager=variable_manager, loader=loader)\n\n # Validate options\n my_arg_names = frozenset(ir.args.keys())\n\n # name is needed, or use role as alias\n ir._role_name = ir.args.get('name', ir.args.get('role'))\n if ir._role_name is None:\n raise AnsibleParserError(\"'name' is a required field for %s.\" % ir.action, obj=data)\n\n if 'public' in ir.args and ir.action != 'include_role':\n raise AnsibleParserError('Invalid options for %s: private' % ir.action, obj=data)\n\n if 'private' in ir.args:\n display.deprecated(\n msg='Supplying \"private\" for \"%s\" is a no op, and is deprecated' % ir.action,\n version='2.8'\n )\n\n # validate bad args, otherwise we silently ignore\n bad_opts = my_arg_names.difference(IncludeRole.VALID_ARGS)\n if bad_opts:\n raise AnsibleParserError('Invalid options for %s: %s' % (ir.action, ','.join(list(bad_opts))), obj=data)\n\n # build options for role includes\n for key in my_arg_names.intersection(IncludeRole.FROM_ARGS):\n from_key = key.replace('_from', '')\n ir._from_files[from_key] = basename(ir.args.get(key))\n\n apply_attrs = ir.args.pop('apply', {})\n if apply_attrs and ir.action != 'include_role':\n raise AnsibleParserError('Invalid options for %s: apply' % ir.action, obj=data)\n elif apply_attrs:\n apply_attrs['block'] = []\n p_block = Block.load(\n apply_attrs,\n play=block._play,\n parent_block=block,\n role=role,\n task_include=task_include,\n use_handlers=block._use_handlers,\n variable_manager=variable_manager,\n loader=loader,\n )\n ir._parent = p_block\n\n # manual list as otherwise the options would set other task parameters we don't want.\n for option in my_arg_names.intersection(IncludeRole.OTHER_ARGS):\n setattr(ir, option, ir.args.get(option))\n\n return ir\n\n def copy(self, exclude_parent=False, exclude_tasks=False):\n\n new_me = super(IncludeRole, self).copy(exclude_parent=exclude_parent, exclude_tasks=exclude_tasks)\n new_me.statically_loaded = self.statically_loaded\n new_me._from_files = self._from_files.copy()\n new_me._parent_role = self._parent_role\n new_me._role_name = self._role_name\n new_me._role_path = self._role_path\n\n return new_me\n\n def get_include_params(self):\n v = super(IncludeRole, self).get_include_params()\n if self._parent_role:\n v.update(self._parent_role.get_role_params())\n return v\n", "path": "lib/ansible/playbook/role_include.py" } ]
diff --git a/lib/ansible/playbook/role_include.py b/lib/ansible/playbook/role_include.py index 37fc0ad68363a0..bef6ca65a363bd 100644 --- a/lib/ansible/playbook/role_include.py +++ b/lib/ansible/playbook/role_include.py @@ -68,7 +68,7 @@ def __init__(self, block=None, role=None, task_include=None): def get_name(self): ''' return the name of the task ''' - return "%s : %s" % (self.action, self._role_name) + return self.name or "%s : %s" % (self.action, self._role_name) def get_block_list(self, play=None, variable_manager=None, loader=None):
vacanza__python-holidays-1775
Mississippi Holiday - Confederate Memorial Day - Calculation incorrect

Library version: holidays 0.47

The date for Confederate Memorial Day in Mississippi this year is 2024-04-29, but the python-holidays library calculates it as 2024-04-22 (the day this was reported).

https://www.sos.ms.gov/communications-publications/state-holidays
https://law.justia.com/codes/mississippi/2020/title-3/chapter-3/section-3-3-7/

Per the statute, the holiday falls on "the last Monday of April (Confederate Memorial Day)".

```python
import holidays

# days = holidays.USA(years=[2024], subdiv='MS')
days = holidays.country_holidays('USA', subdiv='MS', years=[2024])
print(days)
```

Output:

{datetime.date(2024, 1, 1): "New Year's Day", datetime.date(2024, 5, 27): 'Memorial Day', datetime.date(2024, 6, 19): 'Juneteenth National Independence Day', datetime.date(2024, 7, 4): 'Independence Day', datetime.date(2024, 9, 2): 'Labor Day', datetime.date(2024, 11, 11): 'Veterans Day', datetime.date(2024, 11, 28): 'Thanksgiving', datetime.date(2024, 12, 25): 'Christmas Day', datetime.date(2024, 2, 19): "Washington's Birthday", datetime.date(2024, 1, 15): "Dr. Martin Luther King Jr. and Robert E. Lee's Birthdays", **datetime.date(2024, 4, 22): 'Confederate Memorial Day'**}
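To make the off-by-one-week error concrete, here is a small standalone check (plain `datetime`, independent of the holidays library) showing that in April 2024 the fourth Monday is the 22nd, while the last Monday, which the Mississippi statute prescribes, is the 29th:

```python
from datetime import date, timedelta

year = 2024

# Fourth Monday of April: start from the first Monday, add three weeks.
d = date(year, 4, 1)
while d.weekday() != 0:  # 0 == Monday
    d += timedelta(days=1)
fourth_monday = d + timedelta(weeks=3)

# Last Monday of April: walk backwards from April 30.
d = date(year, 4, 30)
while d.weekday() != 0:
    d -= timedelta(days=1)
last_monday = d

print(fourth_monday)  # 2024-04-22 -- what the library currently returns
print(last_monday)    # 2024-04-29 -- what the statute prescribes
```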
[ { "content": "# holidays\n# --------\n# A fast, efficient Python library for generating country, province and state\n# specific sets of holidays on the fly. It aims to make determining whether a\n# specific date is a holiday as fast and flexible as possible.\n#\n# Authors: Vacanza Team and individual contributors (see AUTHORS file)\n# dr-prodigy <[email protected]> (c) 2017-2023\n# ryanss <[email protected]> (c) 2014-2017\n# Website: https://github.com/vacanza/python-holidays\n# License: MIT (see LICENSE file)\n\nfrom typing import Tuple, Union\n\nfrom holidays.calendars.gregorian import DEC\nfrom holidays.constants import PUBLIC, UNOFFICIAL\nfrom holidays.groups import ChristianHolidays, InternationalHolidays\nfrom holidays.observed_holiday_base import (\n ObservedHolidayBase,\n MON_TO_NEXT_TUE,\n FRI_TO_PREV_THU,\n SAT_TO_PREV_FRI,\n SUN_TO_NEXT_MON,\n SAT_SUN_TO_PREV_FRI,\n SAT_SUN_TO_NEXT_MON,\n)\n\n\nclass UnitedStates(ObservedHolidayBase, ChristianHolidays, InternationalHolidays):\n \"\"\"\n https://en.wikipedia.org/wiki/Public_holidays_in_the_United_States\n\n For Northern Mariana Islands (subdivision MP):\n - https://governor.gov.mp/archived-news/executive-actions-archive/memorandum-2022-legal-holidays/ # noqa: E501\n - https://webcache.googleusercontent.com/search?q=cache:C17_7FBgPtQJ:https://governor.gov.mp/archived-news/executive-actions-archive/memorandum-2022-legal-holidays/&hl=en&gl=sg&strip=1&vwsrc=0 # noqa: E501\n\n Columbus Day / Indigenous Peoples' Day history:\n - https://www.pewresearch.org/short-reads/2023/10/05/working-on-columbus-day-or-indigenous-peoples-day-it-depends-on-where-your-job-is/ # noqa: E501\n - https://www.officeholidays.com/holidays/usa/columbus-day-state-guide\n - https://en.wikipedia.org/wiki/Indigenous_Peoples%27_Day_(United_States)\n - https://www.sos.ri.gov/divisions/civics-and-education/reference-desk/ri-state-holidays\n - https://web.archive.org/web/20080831103521/http://www.dpa.ca.gov/personnel-policies/holidays.htm # noqa: E501\n\n \"\"\"\n\n country = \"US\"\n supported_categories = (PUBLIC, UNOFFICIAL)\n observed_label = \"%s (observed)\"\n subdivisions: Union[Tuple[()], Tuple[str, ...]] = (\n \"AK\", # Alaska.\n \"AL\", # Alabama.\n \"AR\", # Arkansas.\n \"AS\", # American Samoa.\n \"AZ\", # Arizona.\n \"CA\", # California.\n \"CO\", # Colorado.\n \"CT\", # Connecticut.\n \"DC\", # District of Columbia.\n \"DE\", # Delaware.\n \"FL\", # Florida.\n \"GA\", # Georgia.\n \"GU\", # Guam.\n \"HI\", # Hawaii.\n \"IA\", # Iowa.\n \"ID\", # Idaho.\n \"IL\", # Illinois.\n \"IN\", # Indiana.\n \"KS\", # Kansas.\n \"KY\", # Kentucky.\n \"LA\", # Louisiana.\n \"MA\", # Massachusetts.\n \"MD\", # Maryland.\n \"ME\", # Maine.\n \"MI\", # Michigan.\n \"MN\", # Minnesota.\n \"MO\", # Missouri.\n \"MP\", # Northern Mariana Islands.\n \"MS\", # Mississippi.\n \"MT\", # Montana.\n \"NC\", # North Carolina.\n \"ND\", # North Dakota.\n \"NE\", # Nebraska.\n \"NH\", # New Hampshire.\n \"NJ\", # New Jersey.\n \"NM\", # New Mexico.\n \"NV\", # Nevada.\n \"NY\", # New York.\n \"OH\", # Ohio.\n \"OK\", # Oklahoma.\n \"OR\", # Oregon.\n \"PA\", # Pennsylvania.\n \"PR\", # Puerto Rico.\n \"RI\", # Rhode Island.\n \"SC\", # South Carolina.\n \"SD\", # South Dakota.\n \"TN\", # Tennessee.\n \"TX\", # Texas.\n \"UM\", # United States Minor Outlying Islands.\n \"UT\", # Utah.\n \"VA\", # Virginia.\n \"VI\", # Virgin Islands, U.S..\n \"VT\", # Vermont.\n \"WA\", # Washington.\n \"WI\", # Wisconsin.\n \"WV\", # West Virginia.\n \"WY\", # Wyoming.\n )\n\n 
_deprecated_subdivisions = (\n \"FM\",\n \"MH\",\n \"PW\",\n )\n\n def __init__(self, *args, **kwargs):\n ChristianHolidays.__init__(self)\n InternationalHolidays.__init__(self)\n kwargs.setdefault(\"observed_rule\", SAT_TO_PREV_FRI + SUN_TO_NEXT_MON)\n super().__init__(*args, **kwargs)\n\n def _populate_public_holidays(self):\n # New Year's Day\n if self._year >= 1871:\n name = \"New Year's Day\"\n self._add_observed(self._add_new_years_day(name))\n self._add_observed(self._next_year_new_years_day, name=name)\n\n # Memorial Day\n if self._year >= 1888:\n name = \"Memorial Day\"\n if self._year >= 1971:\n self._add_holiday_last_mon_of_may(name)\n else:\n self._add_holiday_may_30(name)\n\n # Juneteenth Day\n if self._year >= 2021:\n self._add_observed(self._add_holiday_jun_19(\"Juneteenth National Independence Day\"))\n\n # Independence Day\n if self._year >= 1871:\n self._add_observed(self._add_holiday_jul_4(\"Independence Day\"))\n\n # Labor Day\n if self._year >= 1894:\n self._add_holiday_1st_mon_of_sep(\"Labor Day\")\n\n # Veterans Day\n if self._year >= 1938:\n name = \"Veterans Day\" if self._year >= 1954 else \"Armistice Day\"\n if 1971 <= self._year <= 1977:\n self._add_holiday_4th_mon_of_oct(name)\n else:\n self._add_observed(self._add_remembrance_day(name))\n\n # Thanksgiving\n if self._year >= 1871:\n self._add_holiday_4th_thu_of_nov(\"Thanksgiving\")\n\n # Christmas Day\n if self._year >= 1871:\n self._add_observed(self._add_christmas_day(\"Christmas Day\"))\n\n def _add_christmas_eve_holiday(self):\n # Christmas Eve\n # If on Friday, observed on Thursday\n # If on Saturday or Sunday, observed on Friday\n name = \"Christmas Eve\"\n self._add_observed(\n self._add_christmas_eve(name), name=name, rule=FRI_TO_PREV_THU + SAT_SUN_TO_PREV_FRI\n )\n\n def _populate_subdiv_holidays(self):\n if PUBLIC not in self.categories:\n return None\n\n # Martin Luther King Jr. Day\n if self._year >= 1986 and self.subdiv not in {\"AL\", \"AR\", \"AZ\", \"GA\", \"ID\", \"MS\", \"NH\"}:\n self._add_holiday_3rd_mon_of_jan(\"Martin Luther King Jr. Day\")\n\n # Washington's Birthday\n if self._year >= 1879 and self.subdiv not in {\n \"AL\",\n \"AR\",\n \"DE\",\n \"FL\",\n \"GA\",\n \"NM\",\n \"PR\",\n \"VI\",\n }:\n name = \"Washington's Birthday\"\n if self._year >= 1971:\n self._add_holiday_3rd_mon_of_feb(name)\n else:\n self._add_holiday_feb_22(name)\n\n # Columbus Day\n if self._year >= 1937 and (\n self.subdiv is None\n or self.subdiv\n in {\n \"AS\",\n \"AZ\",\n \"CT\",\n \"GA\",\n \"ID\",\n \"IL\",\n \"IN\",\n \"MA\",\n \"MD\",\n \"MO\",\n \"MT\",\n \"NJ\",\n \"NY\",\n \"OH\",\n \"PA\",\n \"UT\",\n \"WV\",\n }\n ):\n name = \"Columbus Day\"\n if self._year >= 1971:\n self._add_holiday_2nd_mon_of_oct(name)\n else:\n self._add_columbus_day(name)\n\n super()._populate_subdiv_holidays()\n\n def _populate_subdiv_ak_public_holidays(self):\n # Seward's Day\n if self._year >= 1918:\n name = \"Seward's Day\"\n if self._year >= 1955:\n self._add_holiday_last_mon_of_mar(name)\n else:\n self._add_holiday_mar_30(name)\n\n # Indigenous Peoples' Day\n if self._year >= 1971:\n self._add_holiday_2nd_mon_of_oct(\n \"Indigenous Peoples' Day\" if self._year >= 2015 else \"Columbus Day\"\n )\n\n # Alaska Day\n if self._year >= 1867:\n self._add_observed(self._add_holiday_oct_18(\"Alaska Day\"))\n\n def _populate_subdiv_al_public_holidays(self):\n # Martin Luther King Jr. Day\n if self._year >= 1986:\n self._add_holiday_3rd_mon_of_jan(\"Martin Luther King, Jr & Robert E. 
Lee's Birthday\")\n\n # Washington's Birthday\n name = \"George Washington & Thomas Jefferson's Birthday\"\n if self._year >= 1971:\n self._add_holiday_3rd_mon_of_feb(name)\n else:\n self._add_holiday_feb_22(name)\n\n # Confederate Memorial Day\n if self._year >= 1866:\n self._add_holiday_4th_mon_of_apr(\"Confederate Memorial Day\")\n\n # Jefferson Davis Birthday\n if self._year >= 1890:\n self._add_holiday_1st_mon_of_jun(\"Jefferson Davis Birthday\")\n\n # Columbus Day / American Indian Heritage Day / Fraternal Day\n if self._year >= 1971:\n self._add_holiday_2nd_mon_of_oct(\n \"Columbus Day / American Indian Heritage Day / Fraternal Day\"\n if self._year >= 2000\n else \"Columbus Day / Fraternal Day\"\n )\n\n def _populate_subdiv_ar_public_holidays(self):\n # Martin Luther King Jr. Day\n if self._year >= 1986:\n name = (\n \"Martin Luther King Jr. Day\"\n if self._year >= 2018\n else \"Dr. Martin Luther King Jr. and Robert E. Lee's Birthdays\"\n )\n self._add_holiday_3rd_mon_of_jan(name)\n\n # Washington's Birthday\n name = \"George Washington's Birthday and Daisy Gatson Bates Day\"\n if self._year >= 1971:\n self._add_holiday_3rd_mon_of_feb(name)\n else:\n self._add_holiday_feb_22(name)\n\n def _populate_subdiv_as_public_holidays(self):\n # Christmas Eve\n self._add_christmas_eve_holiday()\n\n def _populate_subdiv_az_public_holidays(self):\n # Martin Luther King Jr. Day\n if self._year >= 1986:\n self._add_holiday_3rd_mon_of_jan(\"Dr. Martin Luther King Jr. / Civil Rights Day\")\n\n def _populate_subdiv_ca_public_holidays(self):\n # Lincoln's Birthday\n if 1971 <= self._year <= 2009:\n self._add_observed(self._add_holiday_feb_12(\"Lincoln's Birthday\"))\n\n # Susan B. Anthony Day\n if self._year >= 2014:\n self._add_holiday_feb_15(\"Susan B. Anthony Day\")\n\n # Cesar Chavez Day\n if self._year >= 1995:\n self._add_observed(self._add_holiday_mar_31(\"Cesar Chavez Day\"), rule=SUN_TO_NEXT_MON)\n\n # Columbus Day\n if 1971 <= self._year <= 2008:\n self._add_holiday_2nd_mon_of_oct(\"Columbus Day\")\n\n # Day After Thanksgiving\n if self._year >= 1975:\n self._add_holiday_1_day_past_4th_thu_of_nov(\"Day After Thanksgiving\")\n\n def _populate_subdiv_co_public_holidays(self):\n # Cesar Chavez Day\n if self._year >= 2001:\n self._add_holiday_mar_31(\"Cesar Chavez Day\")\n\n def _populate_subdiv_ct_public_holidays(self):\n # Lincoln's Birthday\n if self._year >= 1971:\n self._add_observed(self._add_holiday_feb_12(\"Lincoln's Birthday\"))\n\n # Good Friday\n self._add_good_friday(\"Good Friday\")\n\n def _populate_subdiv_dc_public_holidays(self):\n # Inauguration Day\n if self._year >= 1789 and (self._year - 1789) % 4 == 0:\n name = \"Inauguration Day\"\n self._add_observed(\n self._add_holiday_jan_20(name)\n if self._year >= 1937\n else self._add_holiday_mar_4(name),\n rule=SUN_TO_NEXT_MON,\n )\n\n # Emancipation Day\n if self._year >= 2005:\n self._add_observed(self._add_holiday_apr_16(\"Emancipation Day\"))\n\n # Indigenous Peoples' Day\n if self._year >= 1971:\n self._add_holiday_2nd_mon_of_oct(\n \"Indigenous Peoples' Day\" if self._year >= 2019 else \"Columbus Day\"\n )\n\n def _populate_subdiv_de_public_holidays(self):\n # Good Friday\n self._add_good_friday(\"Good Friday\")\n\n # Election Day\n if self._year >= 2008 and self._year % 2 == 0:\n self._add_holiday_1_day_past_1st_mon_of_nov(\"Election Day\")\n\n # Day After Thanksgiving\n if self._year >= 1975:\n self._add_holiday_1_day_past_4th_thu_of_nov(\"Day After Thanksgiving\")\n\n def _populate_subdiv_fl_public_holidays(self):\n # 
Susan B. Anthony Day\n if self._year >= 2011:\n self._add_holiday_feb_15(\"Susan B. Anthony Day\")\n\n # Friday After Thanksgiving\n if self._year >= 1975:\n self._add_holiday_1_day_past_4th_thu_of_nov(\"Friday After Thanksgiving\")\n\n def _populate_subdiv_ga_public_holidays(self):\n # Martin Luther King Jr. Day\n if self._year >= 1986:\n self._add_holiday_3rd_mon_of_jan(\n \"Martin Luther King Jr. Day\" if self._year >= 2012 else \"Robert E. Lee's Birthday\"\n )\n\n # Confederate Memorial Day\n if self._year >= 1866:\n name = \"State Holiday\" if self._year >= 2016 else \"Confederate Memorial Day\"\n if self._year == 2020:\n self._add_holiday_apr_10(name)\n else:\n self._add_holiday_4th_mon_of_apr(name)\n\n # Robert E. Lee's Birthday\n if self._year >= 1986:\n self._add_holiday_1_day_past_4th_thu_of_nov(\n \"State Holiday\" if self._year >= 2016 else \"Robert E. Lee's Birthday\"\n )\n\n # Washington's Birthday\n name = \"Washington's Birthday\"\n if self._is_wednesday(DEC, 24):\n self._add_holiday_dec_26(name)\n else:\n self._add_holiday_dec_24(name)\n\n def _populate_subdiv_gu_public_holidays(self):\n # Guam Discovery Day\n if self._year >= 1970:\n self._add_holiday_1st_mon_of_mar(\"Guam Discovery Day\")\n\n # Good Friday\n self._add_good_friday(\"Good Friday\")\n\n # Liberation Day (Guam)\n if self._year >= 1945:\n self._add_holiday_jul_21(\"Liberation Day (Guam)\")\n\n # All Souls' Day\n self._add_all_souls_day(\"All Souls' Day\")\n\n # Lady of Camarin Day\n self._add_immaculate_conception_day(\"Lady of Camarin Day\")\n\n def _populate_subdiv_hi_public_holidays(self):\n # Prince Jonah Kuhio Kalanianaole Day\n if self._year >= 1949:\n self._add_observed(self._add_holiday_mar_26(\"Prince Jonah Kuhio Kalanianaole Day\"))\n\n # Kamehameha Day\n if self._year >= 1872:\n jun_11 = self._add_holiday_jun_11(\"Kamehameha Day\")\n if self._year >= 2011:\n self._add_observed(jun_11)\n\n # Statehood Day\n if self._year >= 1959:\n self._add_holiday_3rd_fri_of_aug(\"Statehood Day\")\n\n # Election Day\n if self._year >= 2008 and self._year % 2 == 0:\n self._add_holiday_1_day_past_1st_mon_of_nov(\"Election Day\")\n\n def _populate_subdiv_ia_public_holidays(self):\n # Lincoln's Birthday\n if self._year >= 1971:\n self._add_observed(self._add_holiday_feb_12(\"Lincoln's Birthday\"))\n\n def _populate_subdiv_id_public_holidays(self):\n # Martin Luther King Jr. Day\n if self._year >= 1986:\n self._add_holiday_3rd_mon_of_jan(\n \"Martin Luther King Jr. / Idaho Human Rights Day\"\n if self._year >= 2006\n else \"Martin Luther King Jr. 
Day\",\n )\n\n def _populate_subdiv_il_public_holidays(self):\n # Lincoln's Birthday\n if self._year >= 1971:\n self._add_observed(self._add_holiday_feb_12(\"Lincoln's Birthday\"))\n\n # Casimir Pulaski Day\n if self._year >= 1978:\n self._add_holiday_1st_mon_of_mar(\"Casimir Pulaski Day\")\n\n # Election Day\n if self._year >= 2008 and self._year % 2 == 0:\n self._add_holiday_1_day_past_1st_mon_of_nov(\"Election Day\")\n\n def _populate_subdiv_in_public_holidays(self):\n # Good Friday\n self._add_good_friday(\"Good Friday\")\n\n # Primary Election Day\n if self._year >= 2015 or (self._year >= 2006 and self._year % 2 == 0):\n self._add_holiday_1_day_past_1st_mon_of_may(\"Primary Election Day\")\n\n # Election Day\n if self._year >= 2015 or (self._year >= 2008 and self._year % 2 == 0):\n self._add_holiday_1_day_past_1st_mon_of_nov(\"Election Day\")\n\n # Lincoln's Birthday\n if self._year >= 2010:\n self._add_holiday_1_day_past_4th_thu_of_nov(\"Lincoln's Birthday\")\n\n def _populate_subdiv_ks_public_holidays(self):\n # Christmas Eve\n if self._year >= 2013:\n self._add_christmas_eve_holiday()\n\n def _populate_subdiv_ky_public_holidays(self):\n # Good Friday\n self._add_good_friday(\"Good Friday\")\n\n # New Year's Eve\n if self._year >= 2013:\n self._add_observed(self._add_new_years_eve(\"New Year's Eve\"))\n\n def _populate_subdiv_la_public_holidays(self):\n # Inauguration Day\n if self._year >= 1789 and (self._year - 1789) % 4 == 0:\n name = \"Inauguration Day\"\n self._add_observed(\n self._add_holiday_jan_20(name)\n if self._year >= 1937\n else self._add_holiday_mar_4(name),\n rule=SUN_TO_NEXT_MON,\n )\n\n # Mardi Gras\n if self._year >= 1857:\n self._add_carnival_tuesday(\"Mardi Gras\")\n\n # Good Friday\n self._add_good_friday(\"Good Friday\")\n\n # Election Day\n if self._year >= 2008 and self._year % 2 == 0:\n self._add_holiday_1_day_past_1st_mon_of_nov(\"Election Day\")\n\n def _populate_subdiv_ma_public_holidays(self):\n # Evacuation Day\n if self._year >= 1901:\n self._add_observed(\n self._add_holiday_mar_17(\"Evacuation Day\"), rule=SAT_SUN_TO_NEXT_MON\n )\n\n # Patriots' Day\n if self._year >= 1894:\n name = \"Patriots' Day\"\n if self._year >= 1969:\n self._add_holiday_3rd_mon_of_apr(name)\n else:\n self._add_holiday_apr_19(name)\n\n def _populate_subdiv_md_public_holidays(self):\n if self._year >= 1789 and (self._year - 1789) % 4 == 0:\n # Inauguration Day\n name = \"Inauguration Day\"\n self._add_observed(\n self._add_holiday_jan_20(name)\n if self._year >= 1937\n else self._add_holiday_mar_4(name),\n rule=SUN_TO_NEXT_MON,\n )\n\n # American Indian Heritage Day\n if self._year >= 2008:\n self._add_holiday_1_day_past_4th_thu_of_nov(\"American Indian Heritage Day\")\n\n def _populate_subdiv_me_public_holidays(self):\n # Patriots' Day\n if self._year >= 1894:\n name = \"Patriots' Day\"\n if self._year >= 1969:\n self._add_holiday_3rd_mon_of_apr(\"Patriots' Day\")\n else:\n self._add_holiday_apr_19(name)\n\n # Indigenous Peoples' Day\n if self._year >= 1971:\n self._add_holiday_2nd_mon_of_oct(\n \"Indigenous Peoples' Day\" if self._year >= 2019 else \"Columbus Day\"\n )\n\n def _populate_subdiv_mi_public_holidays(self):\n if self._year >= 2013:\n # Christmas Eve\n self._add_christmas_eve_holiday()\n\n # New Year's Eve\n self._add_observed(self._add_new_years_eve(\"New Year's Eve\"))\n\n def _populate_subdiv_mn_public_holidays(self):\n pass\n\n def _populate_subdiv_mo_public_holidays(self):\n # Truman Day\n if self._year >= 1949:\n 
self._add_observed(self._add_holiday_may_8(\"Truman Day\"))\n\n def _populate_subdiv_mp_public_holidays(self):\n # Commonwealth Covenant Day in Northern Mariana Islands\n self._add_observed(self._add_holiday_mar_24(\"Commonwealth Covenant Day\"))\n\n # Good Friday\n self._add_good_friday(\"Good Friday\")\n\n # Commonwealth Cultural Day in Northern Mariana Islands\n self._add_holiday_2nd_mon_of_oct(\"Commonwealth Cultural Day\")\n\n # Election Day\n if self._year >= 2008 and self._year % 2 == 0:\n self._add_holiday_1_day_past_1st_mon_of_nov(\"Election Day\")\n\n # Citizenship Day in Northern Mariana Islands\n self._add_observed(self._add_holiday_nov_4(\"Citizenship Day\"))\n\n # Constitution Day in Northern Mariana Islands\n self._add_observed(self._add_holiday_dec_8(\"Constitution Day\"))\n\n def _populate_subdiv_ms_public_holidays(self):\n # Martin Luther King Jr. Day\n if self._year >= 1986:\n self._add_holiday_3rd_mon_of_jan(\n \"Dr. Martin Luther King Jr. and Robert E. Lee's Birthdays\",\n )\n\n # Confederate Memorial Day\n if self._year >= 1866:\n self._add_holiday_4th_mon_of_apr(\"Confederate Memorial Day\")\n\n def _populate_subdiv_mt_public_holidays(self):\n # Election Day\n if self._year >= 2008 and self._year % 2 == 0:\n self._add_holiday_1_day_past_1st_mon_of_nov(\"Election Day\")\n\n def _populate_subdiv_nc_public_holidays(self):\n # Good Friday\n self._add_good_friday(\"Good Friday\")\n\n # Day After Thanksgiving\n if self._year >= 1975:\n self._add_holiday_1_day_past_4th_thu_of_nov(\"Day After Thanksgiving\")\n\n # Christmas Eve\n if self._year >= 2013:\n self._add_christmas_eve_holiday()\n\n # Day After Christmas\n if self._year >= 2013:\n # If on Saturday or Sunday, observed on Monday\n # If on Monday, observed on Tuesday\n name = \"Day After Christmas\"\n self._add_observed(\n self._add_christmas_day_two(name),\n name=name,\n rule=MON_TO_NEXT_TUE + SAT_SUN_TO_NEXT_MON,\n )\n\n def _populate_subdiv_nd_public_holidays(self):\n pass\n\n def _populate_subdiv_ne_public_holidays(self):\n # Arbor Day\n if self._year >= 1875:\n name = \"Arbor Day\"\n if self._year >= 1989:\n self._add_holiday_last_fri_of_apr(name)\n else:\n self._add_holiday_apr_22(name)\n\n # Indigenous Peoples' Day\n if self._year >= 1971:\n self._add_holiday_2nd_mon_of_oct(\n \"Indigenous Peoples' Day\" if self._year >= 2020 else \"Columbus Day\"\n )\n\n def _populate_subdiv_nh_public_holidays(self):\n # Martin Luther King Jr. Day\n if self._year >= 1986:\n self._add_holiday_3rd_mon_of_jan(\"Dr. Martin Luther King Jr. 
/ Civil Rights Day\")\n\n # Election Day\n if self._year >= 2008 and self._year % 2 == 0:\n self._add_holiday_1_day_past_1st_mon_of_nov(\"Election Day\")\n\n # Day After Thanksgiving\n if self._year >= 1975:\n self._add_holiday_1_day_past_4th_thu_of_nov(\"Day After Thanksgiving\")\n\n def _populate_subdiv_nj_public_holidays(self):\n # Lincoln's Birthday\n if self._year >= 1971:\n self._add_observed(self._add_holiday_feb_12(\"Lincoln's Birthday\"))\n\n # Good Friday\n self._add_good_friday(\"Good Friday\")\n\n # Election Day\n if self._year >= 2008 and self._year % 2 == 0:\n self._add_holiday_1_day_past_1st_mon_of_nov(\"Election Day\")\n\n def _populate_subdiv_nm_public_holidays(self):\n # Indigenous Peoples' Day\n if self._year >= 1971:\n self._add_holiday_2nd_mon_of_oct(\n \"Indigenous Peoples' Day\" if self._year >= 2019 else \"Columbus Day\"\n )\n\n # Presidents' Day\n self._add_holiday_1_day_past_4th_thu_of_nov(\"Presidents' Day\")\n\n def _populate_subdiv_nv_public_holidays(self):\n # Nevada Day\n if self._year >= 1933:\n name = \"Nevada Day\"\n self._add_observed(\n self._add_holiday_last_fri_of_oct(name)\n if self._year >= 2000\n else self._add_holiday_oct_31(name)\n )\n\n # Family Day\n self._add_holiday_1_day_past_4th_thu_of_nov(\"Family Day\")\n\n def _populate_subdiv_ny_public_holidays(self):\n # Lincoln's Birthday\n if self._year >= 1971:\n self._add_observed(self._add_holiday_feb_12(\"Lincoln's Birthday\"))\n\n # Susan B. Anthony Day\n if self._year >= 2004:\n self._add_holiday_feb_15(\"Susan B. Anthony Day\")\n\n # Election Day\n if self._year >= 2015 or (self._year >= 2008 and self._year % 2 == 0):\n self._add_holiday_1_day_past_1st_mon_of_nov(\"Election Day\")\n\n def _populate_subdiv_oh_public_holidays(self):\n pass\n\n def _populate_subdiv_ok_public_holidays(self):\n # Day After Thanksgiving\n if self._year >= 1975:\n self._add_holiday_1_day_past_4th_thu_of_nov(\"Day After Thanksgiving\")\n\n def _populate_subdiv_or_public_holidays(self):\n pass\n\n def _populate_subdiv_pa_public_holidays(self):\n # Day After Thanksgiving\n self._add_holiday_1_day_past_4th_thu_of_nov(\"Day After Thanksgiving\")\n\n def _populate_subdiv_pr_public_holidays(self):\n # Epiphany\n self._add_epiphany_day(\"Epiphany\")\n\n # Washington's Birthday\n self._add_holiday_3rd_mon_of_feb(\"Presidents' Day\")\n\n # Emancipation Day\n self._add_observed(self._add_holiday_mar_22(\"Emancipation Day\"), rule=SUN_TO_NEXT_MON)\n\n # Good Friday\n self._add_good_friday(\"Good Friday\")\n\n # Constitution Day\n self._add_observed(self._add_holiday_jul_25(\"Constitution Day\"), rule=SUN_TO_NEXT_MON)\n\n # Discovery Day\n self._add_observed(self._add_holiday_nov_19(\"Discovery Day\"), rule=SUN_TO_NEXT_MON)\n\n def _populate_subdiv_ri_public_holidays(self):\n # Victory Day\n if self._year >= 1948:\n self._add_holiday_2nd_mon_of_aug(\"Victory Day\")\n\n # Indigenous Peoples' Day\n if self._year >= 1971:\n self._add_holiday_2nd_mon_of_oct(\n \"Indigenous Peoples' Day / Columbus Day\" if self._year >= 2022 else \"Columbus Day\"\n )\n\n def _populate_subdiv_sc_public_holidays(self):\n # Confederate Memorial Day\n if self._year >= 1866:\n self._add_holiday_4th_mon_of_apr(\"Confederate Memorial Day\")\n\n def _populate_subdiv_sd_public_holidays(self):\n # Native Americans' Day / Columbus Day\n if self._year >= 1937:\n name = \"Native Americans' Day\" if self._year >= 1990 else \"Columbus Day\"\n if self._year >= 1970:\n self._add_holiday_2nd_mon_of_oct(name)\n else:\n self._add_columbus_day(name)\n\n def 
_populate_subdiv_tn_public_holidays(self):\n # Good Friday\n self._add_good_friday(\"Good Friday\")\n\n def _populate_subdiv_tx_public_holidays(self):\n # Confederate Memorial Day\n if self._year >= 1931:\n self._add_holiday_jan_19(\"Confederate Memorial Day\")\n\n # Texas Independence Day\n if self._year >= 1874:\n self._add_holiday_mar_2(\"Texas Independence Day\")\n\n # Cesar Chavez Day\n if self._year >= 2000:\n self._add_holiday_mar_31(\"Cesar Chavez Day\")\n\n # Good Friday\n self._add_good_friday(\"Good Friday\")\n\n # San Jacinto Day\n if self._year >= 1875:\n self._add_holiday_apr_21(\"San Jacinto Day\")\n\n # Emancipation Day In Texas\n if self._year >= 1980:\n self._add_holiday_jun_19(\"Emancipation Day In Texas\")\n\n # Lyndon Baines Johnson Day\n if self._year >= 1973:\n self._add_holiday_aug_27(\"Lyndon Baines Johnson Day\")\n\n # Friday After Thanksgiving\n if self._year >= 1975:\n self._add_holiday_1_day_past_4th_thu_of_nov(\"Friday After Thanksgiving\")\n\n # Christmas Eve\n if self._year >= 1981:\n self._add_christmas_eve_holiday()\n\n # Day After Christmas\n if self._year >= 1981:\n self._add_christmas_day_two(\"Day After Christmas\")\n\n def _populate_subdiv_um_public_holidays(self):\n pass\n\n def _populate_subdiv_ut_public_holidays(self):\n # Pioneer Day\n if self._year >= 1849:\n self._add_observed(self._add_holiday_jul_24(\"Pioneer Day\"))\n\n def _populate_subdiv_va_public_holidays(self):\n # Lee Jackson Day\n if 1889 <= self._year <= 2020:\n name = \"Lee Jackson Day\"\n if self._year >= 2000:\n self._add_holiday_3_days_prior_3rd_mon_of_jan(name)\n elif self._year >= 1983:\n self._add_holiday_3rd_mon_of_jan(name)\n else:\n self._add_holiday_jan_19(name)\n\n # Inauguration Day\n if self._year >= 1789 and (self._year - 1789) % 4 == 0:\n name = \"Inauguration Day\"\n self._add_observed(\n self._add_holiday_jan_20(name)\n if self._year >= 1937\n else self._add_holiday_mar_4(name),\n rule=SUN_TO_NEXT_MON,\n )\n # Indigenous Peoples' Day\n if self._year >= 1971:\n self._add_holiday_2nd_mon_of_oct(\n \"Indigenous Peoples' Day\" if self._year >= 2020 else \"Columbus Day\"\n )\n\n def _populate_subdiv_vi_public_holidays(self):\n # Three Kings Day\n self._add_epiphany_day(\"Three Kings Day\")\n\n # Washington's Birthday\n name = \"Presidents' Day\"\n if self._year >= 1971:\n self._add_holiday_3rd_mon_of_feb(name)\n else:\n self._add_holiday_feb_22(name)\n\n # Transfer Day\n self._add_holiday_mar_31(\"Transfer Day\")\n\n # Holy Thursday\n self._add_holy_thursday(\"Holy Thursday\")\n\n # Good Friday\n self._add_good_friday(\"Good Friday\")\n\n # Easter Monday\n self._add_easter_monday(\"Easter Monday\")\n\n # Emancipation Day in US Virgin Islands\n self._add_holiday_jul_3(\"Emancipation Day\")\n\n # Columbus Day\n if self._year >= 1937:\n name = \"Columbus Day and Puerto Rico Friendship Day\"\n if self._year >= 1970:\n self._add_holiday_2nd_mon_of_oct(name)\n else:\n self._add_columbus_day(name)\n\n # Liberty Day\n self._add_holiday_nov_1(\"Liberty Day\")\n\n # Christmas Second Day\n self._add_christmas_day_two(\"Christmas Second Day\")\n\n def _populate_subdiv_vt_public_holidays(self):\n # Town Meeting Day\n if self._year >= 1800:\n self._add_holiday_1st_tue_of_mar(\"Town Meeting Day\")\n\n # Bennington Battle Day\n if self._year >= 1778:\n self._add_observed(self._add_holiday_aug_16(\"Bennington Battle Day\"))\n\n def _populate_subdiv_wa_public_holidays(self):\n pass\n\n def _populate_subdiv_wi_public_holidays(self):\n # Susan B. 
Anthony Day\n if self._year >= 1976:\n self._add_holiday_feb_15(\"Susan B. Anthony Day\")\n\n if self._year >= 2012:\n # Christmas Eve\n self._add_christmas_eve_holiday()\n\n # New Year's Eve\n self._add_observed(self._add_new_years_eve(\"New Year's Eve\"))\n\n def _populate_subdiv_wv_public_holidays(self):\n # West Virginia Day\n if self._year >= 1927:\n self._add_observed(self._add_holiday_jun_20(\"West Virginia Day\"))\n\n # Election Day\n if self._year >= 2008 and self._year % 2 == 0:\n self._add_holiday_1_day_past_1st_mon_of_nov(\"Election Day\")\n\n # Day After Thanksgiving\n if self._year >= 1975:\n self._add_holiday_1_day_past_4th_thu_of_nov(\"Day After Thanksgiving\")\n\n def _populate_subdiv_wy_public_holidays(self):\n pass\n\n def _populate_unofficial_holidays(self):\n # Very common celebrated cultural days, but no official observance.\n # Due to its nature, no in-lieus are observed.\n\n # Valentine's Day\n # While the modern iteration of Valentine's Day has started in the UK in 1797,\n # it wasn't until 1847 in the US that this started to be observed here.\n\n if self._year >= 1847:\n self._add_holiday_feb_14(\"Valentine's Day\")\n\n # St. Patrick's Day\n # Started in Boston in 1737 for the US.\n\n self._add_holiday_mar_17(\"St. Patrick's Day\")\n\n # Halloween\n # Halloween began in the US sometime around the 19th century.\n\n self._add_holiday_oct_31(\"Halloween\")\n\n # Continental US non-Public dates\n\n if self.subdiv not in {\"AS\", \"GU\", \"MP\", \"PR\", \"UM\", \"VI\"}:\n # Groundhog Day\n # First observed on Feb 2 in 1886 in Continental US + Hawaii.\n\n if self._year >= 1886:\n self._add_holiday_feb_2(\"Groundhog Day\")\n\n # Election Day\n # May be duplicates for certain states which has this as their actual public holiday.\n # The current US Presidential Election date pattern was codified in 1848 nationwide.\n\n if self._year >= 1848 and self._year % 4 == 0:\n self._add_holiday_1_day_past_1st_mon_of_nov(\"Election Day\")\n\n\nclass US(UnitedStates):\n pass\n\n\nclass USA(UnitedStates):\n pass\n", "path": "holidays/countries/united_states.py" } ]
[ { "content": "# holidays\n# --------\n# A fast, efficient Python library for generating country, province and state\n# specific sets of holidays on the fly. It aims to make determining whether a\n# specific date is a holiday as fast and flexible as possible.\n#\n# Authors: Vacanza Team and individual contributors (see AUTHORS file)\n# dr-prodigy <[email protected]> (c) 2017-2023\n# ryanss <[email protected]> (c) 2014-2017\n# Website: https://github.com/vacanza/python-holidays\n# License: MIT (see LICENSE file)\n\nfrom typing import Tuple, Union\n\nfrom holidays.calendars.gregorian import DEC\nfrom holidays.constants import PUBLIC, UNOFFICIAL\nfrom holidays.groups import ChristianHolidays, InternationalHolidays\nfrom holidays.observed_holiday_base import (\n ObservedHolidayBase,\n MON_TO_NEXT_TUE,\n FRI_TO_PREV_THU,\n SAT_TO_PREV_FRI,\n SUN_TO_NEXT_MON,\n SAT_SUN_TO_PREV_FRI,\n SAT_SUN_TO_NEXT_MON,\n)\n\n\nclass UnitedStates(ObservedHolidayBase, ChristianHolidays, InternationalHolidays):\n \"\"\"\n https://en.wikipedia.org/wiki/Public_holidays_in_the_United_States\n\n For Northern Mariana Islands (subdivision MP):\n - https://governor.gov.mp/archived-news/executive-actions-archive/memorandum-2022-legal-holidays/ # noqa: E501\n - https://webcache.googleusercontent.com/search?q=cache:C17_7FBgPtQJ:https://governor.gov.mp/archived-news/executive-actions-archive/memorandum-2022-legal-holidays/&hl=en&gl=sg&strip=1&vwsrc=0 # noqa: E501\n\n Columbus Day / Indigenous Peoples' Day history:\n - https://www.pewresearch.org/short-reads/2023/10/05/working-on-columbus-day-or-indigenous-peoples-day-it-depends-on-where-your-job-is/ # noqa: E501\n - https://www.officeholidays.com/holidays/usa/columbus-day-state-guide\n - https://en.wikipedia.org/wiki/Indigenous_Peoples%27_Day_(United_States)\n - https://www.sos.ri.gov/divisions/civics-and-education/reference-desk/ri-state-holidays\n - https://web.archive.org/web/20080831103521/http://www.dpa.ca.gov/personnel-policies/holidays.htm # noqa: E501\n\n \"\"\"\n\n country = \"US\"\n supported_categories = (PUBLIC, UNOFFICIAL)\n observed_label = \"%s (observed)\"\n subdivisions: Union[Tuple[()], Tuple[str, ...]] = (\n \"AK\", # Alaska.\n \"AL\", # Alabama.\n \"AR\", # Arkansas.\n \"AS\", # American Samoa.\n \"AZ\", # Arizona.\n \"CA\", # California.\n \"CO\", # Colorado.\n \"CT\", # Connecticut.\n \"DC\", # District of Columbia.\n \"DE\", # Delaware.\n \"FL\", # Florida.\n \"GA\", # Georgia.\n \"GU\", # Guam.\n \"HI\", # Hawaii.\n \"IA\", # Iowa.\n \"ID\", # Idaho.\n \"IL\", # Illinois.\n \"IN\", # Indiana.\n \"KS\", # Kansas.\n \"KY\", # Kentucky.\n \"LA\", # Louisiana.\n \"MA\", # Massachusetts.\n \"MD\", # Maryland.\n \"ME\", # Maine.\n \"MI\", # Michigan.\n \"MN\", # Minnesota.\n \"MO\", # Missouri.\n \"MP\", # Northern Mariana Islands.\n \"MS\", # Mississippi.\n \"MT\", # Montana.\n \"NC\", # North Carolina.\n \"ND\", # North Dakota.\n \"NE\", # Nebraska.\n \"NH\", # New Hampshire.\n \"NJ\", # New Jersey.\n \"NM\", # New Mexico.\n \"NV\", # Nevada.\n \"NY\", # New York.\n \"OH\", # Ohio.\n \"OK\", # Oklahoma.\n \"OR\", # Oregon.\n \"PA\", # Pennsylvania.\n \"PR\", # Puerto Rico.\n \"RI\", # Rhode Island.\n \"SC\", # South Carolina.\n \"SD\", # South Dakota.\n \"TN\", # Tennessee.\n \"TX\", # Texas.\n \"UM\", # United States Minor Outlying Islands.\n \"UT\", # Utah.\n \"VA\", # Virginia.\n \"VI\", # Virgin Islands, U.S..\n \"VT\", # Vermont.\n \"WA\", # Washington.\n \"WI\", # Wisconsin.\n \"WV\", # West Virginia.\n \"WY\", # Wyoming.\n )\n\n 
_deprecated_subdivisions = (\n \"FM\",\n \"MH\",\n \"PW\",\n )\n\n def __init__(self, *args, **kwargs):\n ChristianHolidays.__init__(self)\n InternationalHolidays.__init__(self)\n kwargs.setdefault(\"observed_rule\", SAT_TO_PREV_FRI + SUN_TO_NEXT_MON)\n super().__init__(*args, **kwargs)\n\n def _populate_public_holidays(self):\n # New Year's Day\n if self._year >= 1871:\n name = \"New Year's Day\"\n self._add_observed(self._add_new_years_day(name))\n self._add_observed(self._next_year_new_years_day, name=name)\n\n # Memorial Day\n if self._year >= 1888:\n name = \"Memorial Day\"\n if self._year >= 1971:\n self._add_holiday_last_mon_of_may(name)\n else:\n self._add_holiday_may_30(name)\n\n # Juneteenth Day\n if self._year >= 2021:\n self._add_observed(self._add_holiday_jun_19(\"Juneteenth National Independence Day\"))\n\n # Independence Day\n if self._year >= 1871:\n self._add_observed(self._add_holiday_jul_4(\"Independence Day\"))\n\n # Labor Day\n if self._year >= 1894:\n self._add_holiday_1st_mon_of_sep(\"Labor Day\")\n\n # Veterans Day\n if self._year >= 1938:\n name = \"Veterans Day\" if self._year >= 1954 else \"Armistice Day\"\n if 1971 <= self._year <= 1977:\n self._add_holiday_4th_mon_of_oct(name)\n else:\n self._add_observed(self._add_remembrance_day(name))\n\n # Thanksgiving\n if self._year >= 1871:\n self._add_holiday_4th_thu_of_nov(\"Thanksgiving\")\n\n # Christmas Day\n if self._year >= 1871:\n self._add_observed(self._add_christmas_day(\"Christmas Day\"))\n\n def _add_christmas_eve_holiday(self):\n # Christmas Eve\n # If on Friday, observed on Thursday\n # If on Saturday or Sunday, observed on Friday\n name = \"Christmas Eve\"\n self._add_observed(\n self._add_christmas_eve(name), name=name, rule=FRI_TO_PREV_THU + SAT_SUN_TO_PREV_FRI\n )\n\n def _populate_subdiv_holidays(self):\n if PUBLIC not in self.categories:\n return None\n\n # Martin Luther King Jr. Day\n if self._year >= 1986 and self.subdiv not in {\"AL\", \"AR\", \"AZ\", \"GA\", \"ID\", \"MS\", \"NH\"}:\n self._add_holiday_3rd_mon_of_jan(\"Martin Luther King Jr. Day\")\n\n # Washington's Birthday\n if self._year >= 1879 and self.subdiv not in {\n \"AL\",\n \"AR\",\n \"DE\",\n \"FL\",\n \"GA\",\n \"NM\",\n \"PR\",\n \"VI\",\n }:\n name = \"Washington's Birthday\"\n if self._year >= 1971:\n self._add_holiday_3rd_mon_of_feb(name)\n else:\n self._add_holiday_feb_22(name)\n\n # Columbus Day\n if self._year >= 1937 and (\n self.subdiv is None\n or self.subdiv\n in {\n \"AS\",\n \"AZ\",\n \"CT\",\n \"GA\",\n \"ID\",\n \"IL\",\n \"IN\",\n \"MA\",\n \"MD\",\n \"MO\",\n \"MT\",\n \"NJ\",\n \"NY\",\n \"OH\",\n \"PA\",\n \"UT\",\n \"WV\",\n }\n ):\n name = \"Columbus Day\"\n if self._year >= 1971:\n self._add_holiday_2nd_mon_of_oct(name)\n else:\n self._add_columbus_day(name)\n\n super()._populate_subdiv_holidays()\n\n def _populate_subdiv_ak_public_holidays(self):\n # Seward's Day\n if self._year >= 1918:\n name = \"Seward's Day\"\n if self._year >= 1955:\n self._add_holiday_last_mon_of_mar(name)\n else:\n self._add_holiday_mar_30(name)\n\n # Indigenous Peoples' Day\n if self._year >= 1971:\n self._add_holiday_2nd_mon_of_oct(\n \"Indigenous Peoples' Day\" if self._year >= 2015 else \"Columbus Day\"\n )\n\n # Alaska Day\n if self._year >= 1867:\n self._add_observed(self._add_holiday_oct_18(\"Alaska Day\"))\n\n def _populate_subdiv_al_public_holidays(self):\n # Martin Luther King Jr. Day\n if self._year >= 1986:\n self._add_holiday_3rd_mon_of_jan(\"Martin Luther King, Jr & Robert E. 
Lee's Birthday\")\n\n # Washington's Birthday\n name = \"George Washington & Thomas Jefferson's Birthday\"\n if self._year >= 1971:\n self._add_holiday_3rd_mon_of_feb(name)\n else:\n self._add_holiday_feb_22(name)\n\n # Confederate Memorial Day\n if self._year >= 1866:\n self._add_holiday_4th_mon_of_apr(\"Confederate Memorial Day\")\n\n # Jefferson Davis Birthday\n if self._year >= 1890:\n self._add_holiday_1st_mon_of_jun(\"Jefferson Davis Birthday\")\n\n # Columbus Day / American Indian Heritage Day / Fraternal Day\n if self._year >= 1971:\n self._add_holiday_2nd_mon_of_oct(\n \"Columbus Day / American Indian Heritage Day / Fraternal Day\"\n if self._year >= 2000\n else \"Columbus Day / Fraternal Day\"\n )\n\n def _populate_subdiv_ar_public_holidays(self):\n # Martin Luther King Jr. Day\n if self._year >= 1986:\n name = (\n \"Martin Luther King Jr. Day\"\n if self._year >= 2018\n else \"Dr. Martin Luther King Jr. and Robert E. Lee's Birthdays\"\n )\n self._add_holiday_3rd_mon_of_jan(name)\n\n # Washington's Birthday\n name = \"George Washington's Birthday and Daisy Gatson Bates Day\"\n if self._year >= 1971:\n self._add_holiday_3rd_mon_of_feb(name)\n else:\n self._add_holiday_feb_22(name)\n\n def _populate_subdiv_as_public_holidays(self):\n # Christmas Eve\n self._add_christmas_eve_holiday()\n\n def _populate_subdiv_az_public_holidays(self):\n # Martin Luther King Jr. Day\n if self._year >= 1986:\n self._add_holiday_3rd_mon_of_jan(\"Dr. Martin Luther King Jr. / Civil Rights Day\")\n\n def _populate_subdiv_ca_public_holidays(self):\n # Lincoln's Birthday\n if 1971 <= self._year <= 2009:\n self._add_observed(self._add_holiday_feb_12(\"Lincoln's Birthday\"))\n\n # Susan B. Anthony Day\n if self._year >= 2014:\n self._add_holiday_feb_15(\"Susan B. Anthony Day\")\n\n # Cesar Chavez Day\n if self._year >= 1995:\n self._add_observed(self._add_holiday_mar_31(\"Cesar Chavez Day\"), rule=SUN_TO_NEXT_MON)\n\n # Columbus Day\n if 1971 <= self._year <= 2008:\n self._add_holiday_2nd_mon_of_oct(\"Columbus Day\")\n\n # Day After Thanksgiving\n if self._year >= 1975:\n self._add_holiday_1_day_past_4th_thu_of_nov(\"Day After Thanksgiving\")\n\n def _populate_subdiv_co_public_holidays(self):\n # Cesar Chavez Day\n if self._year >= 2001:\n self._add_holiday_mar_31(\"Cesar Chavez Day\")\n\n def _populate_subdiv_ct_public_holidays(self):\n # Lincoln's Birthday\n if self._year >= 1971:\n self._add_observed(self._add_holiday_feb_12(\"Lincoln's Birthday\"))\n\n # Good Friday\n self._add_good_friday(\"Good Friday\")\n\n def _populate_subdiv_dc_public_holidays(self):\n # Inauguration Day\n if self._year >= 1789 and (self._year - 1789) % 4 == 0:\n name = \"Inauguration Day\"\n self._add_observed(\n self._add_holiday_jan_20(name)\n if self._year >= 1937\n else self._add_holiday_mar_4(name),\n rule=SUN_TO_NEXT_MON,\n )\n\n # Emancipation Day\n if self._year >= 2005:\n self._add_observed(self._add_holiday_apr_16(\"Emancipation Day\"))\n\n # Indigenous Peoples' Day\n if self._year >= 1971:\n self._add_holiday_2nd_mon_of_oct(\n \"Indigenous Peoples' Day\" if self._year >= 2019 else \"Columbus Day\"\n )\n\n def _populate_subdiv_de_public_holidays(self):\n # Good Friday\n self._add_good_friday(\"Good Friday\")\n\n # Election Day\n if self._year >= 2008 and self._year % 2 == 0:\n self._add_holiday_1_day_past_1st_mon_of_nov(\"Election Day\")\n\n # Day After Thanksgiving\n if self._year >= 1975:\n self._add_holiday_1_day_past_4th_thu_of_nov(\"Day After Thanksgiving\")\n\n def _populate_subdiv_fl_public_holidays(self):\n # 
Susan B. Anthony Day\n if self._year >= 2011:\n self._add_holiday_feb_15(\"Susan B. Anthony Day\")\n\n # Friday After Thanksgiving\n if self._year >= 1975:\n self._add_holiday_1_day_past_4th_thu_of_nov(\"Friday After Thanksgiving\")\n\n def _populate_subdiv_ga_public_holidays(self):\n # Martin Luther King Jr. Day\n if self._year >= 1986:\n self._add_holiday_3rd_mon_of_jan(\n \"Martin Luther King Jr. Day\" if self._year >= 2012 else \"Robert E. Lee's Birthday\"\n )\n\n # Confederate Memorial Day\n if self._year >= 1866:\n name = \"State Holiday\" if self._year >= 2016 else \"Confederate Memorial Day\"\n if self._year == 2020:\n self._add_holiday_apr_10(name)\n else:\n self._add_holiday_4th_mon_of_apr(name)\n\n # Robert E. Lee's Birthday\n if self._year >= 1986:\n self._add_holiday_1_day_past_4th_thu_of_nov(\n \"State Holiday\" if self._year >= 2016 else \"Robert E. Lee's Birthday\"\n )\n\n # Washington's Birthday\n name = \"Washington's Birthday\"\n if self._is_wednesday(DEC, 24):\n self._add_holiday_dec_26(name)\n else:\n self._add_holiday_dec_24(name)\n\n def _populate_subdiv_gu_public_holidays(self):\n # Guam Discovery Day\n if self._year >= 1970:\n self._add_holiday_1st_mon_of_mar(\"Guam Discovery Day\")\n\n # Good Friday\n self._add_good_friday(\"Good Friday\")\n\n # Liberation Day (Guam)\n if self._year >= 1945:\n self._add_holiday_jul_21(\"Liberation Day (Guam)\")\n\n # All Souls' Day\n self._add_all_souls_day(\"All Souls' Day\")\n\n # Lady of Camarin Day\n self._add_immaculate_conception_day(\"Lady of Camarin Day\")\n\n def _populate_subdiv_hi_public_holidays(self):\n # Prince Jonah Kuhio Kalanianaole Day\n if self._year >= 1949:\n self._add_observed(self._add_holiday_mar_26(\"Prince Jonah Kuhio Kalanianaole Day\"))\n\n # Kamehameha Day\n if self._year >= 1872:\n jun_11 = self._add_holiday_jun_11(\"Kamehameha Day\")\n if self._year >= 2011:\n self._add_observed(jun_11)\n\n # Statehood Day\n if self._year >= 1959:\n self._add_holiday_3rd_fri_of_aug(\"Statehood Day\")\n\n # Election Day\n if self._year >= 2008 and self._year % 2 == 0:\n self._add_holiday_1_day_past_1st_mon_of_nov(\"Election Day\")\n\n def _populate_subdiv_ia_public_holidays(self):\n # Lincoln's Birthday\n if self._year >= 1971:\n self._add_observed(self._add_holiday_feb_12(\"Lincoln's Birthday\"))\n\n def _populate_subdiv_id_public_holidays(self):\n # Martin Luther King Jr. Day\n if self._year >= 1986:\n self._add_holiday_3rd_mon_of_jan(\n \"Martin Luther King Jr. / Idaho Human Rights Day\"\n if self._year >= 2006\n else \"Martin Luther King Jr. 
Day\",\n )\n\n def _populate_subdiv_il_public_holidays(self):\n # Lincoln's Birthday\n if self._year >= 1971:\n self._add_observed(self._add_holiday_feb_12(\"Lincoln's Birthday\"))\n\n # Casimir Pulaski Day\n if self._year >= 1978:\n self._add_holiday_1st_mon_of_mar(\"Casimir Pulaski Day\")\n\n # Election Day\n if self._year >= 2008 and self._year % 2 == 0:\n self._add_holiday_1_day_past_1st_mon_of_nov(\"Election Day\")\n\n def _populate_subdiv_in_public_holidays(self):\n # Good Friday\n self._add_good_friday(\"Good Friday\")\n\n # Primary Election Day\n if self._year >= 2015 or (self._year >= 2006 and self._year % 2 == 0):\n self._add_holiday_1_day_past_1st_mon_of_may(\"Primary Election Day\")\n\n # Election Day\n if self._year >= 2015 or (self._year >= 2008 and self._year % 2 == 0):\n self._add_holiday_1_day_past_1st_mon_of_nov(\"Election Day\")\n\n # Lincoln's Birthday\n if self._year >= 2010:\n self._add_holiday_1_day_past_4th_thu_of_nov(\"Lincoln's Birthday\")\n\n def _populate_subdiv_ks_public_holidays(self):\n # Christmas Eve\n if self._year >= 2013:\n self._add_christmas_eve_holiday()\n\n def _populate_subdiv_ky_public_holidays(self):\n # Good Friday\n self._add_good_friday(\"Good Friday\")\n\n # New Year's Eve\n if self._year >= 2013:\n self._add_observed(self._add_new_years_eve(\"New Year's Eve\"))\n\n def _populate_subdiv_la_public_holidays(self):\n # Inauguration Day\n if self._year >= 1789 and (self._year - 1789) % 4 == 0:\n name = \"Inauguration Day\"\n self._add_observed(\n self._add_holiday_jan_20(name)\n if self._year >= 1937\n else self._add_holiday_mar_4(name),\n rule=SUN_TO_NEXT_MON,\n )\n\n # Mardi Gras\n if self._year >= 1857:\n self._add_carnival_tuesday(\"Mardi Gras\")\n\n # Good Friday\n self._add_good_friday(\"Good Friday\")\n\n # Election Day\n if self._year >= 2008 and self._year % 2 == 0:\n self._add_holiday_1_day_past_1st_mon_of_nov(\"Election Day\")\n\n def _populate_subdiv_ma_public_holidays(self):\n # Evacuation Day\n if self._year >= 1901:\n self._add_observed(\n self._add_holiday_mar_17(\"Evacuation Day\"), rule=SAT_SUN_TO_NEXT_MON\n )\n\n # Patriots' Day\n if self._year >= 1894:\n name = \"Patriots' Day\"\n if self._year >= 1969:\n self._add_holiday_3rd_mon_of_apr(name)\n else:\n self._add_holiday_apr_19(name)\n\n def _populate_subdiv_md_public_holidays(self):\n if self._year >= 1789 and (self._year - 1789) % 4 == 0:\n # Inauguration Day\n name = \"Inauguration Day\"\n self._add_observed(\n self._add_holiday_jan_20(name)\n if self._year >= 1937\n else self._add_holiday_mar_4(name),\n rule=SUN_TO_NEXT_MON,\n )\n\n # American Indian Heritage Day\n if self._year >= 2008:\n self._add_holiday_1_day_past_4th_thu_of_nov(\"American Indian Heritage Day\")\n\n def _populate_subdiv_me_public_holidays(self):\n # Patriots' Day\n if self._year >= 1894:\n name = \"Patriots' Day\"\n if self._year >= 1969:\n self._add_holiday_3rd_mon_of_apr(\"Patriots' Day\")\n else:\n self._add_holiday_apr_19(name)\n\n # Indigenous Peoples' Day\n if self._year >= 1971:\n self._add_holiday_2nd_mon_of_oct(\n \"Indigenous Peoples' Day\" if self._year >= 2019 else \"Columbus Day\"\n )\n\n def _populate_subdiv_mi_public_holidays(self):\n if self._year >= 2013:\n # Christmas Eve\n self._add_christmas_eve_holiday()\n\n # New Year's Eve\n self._add_observed(self._add_new_years_eve(\"New Year's Eve\"))\n\n def _populate_subdiv_mn_public_holidays(self):\n pass\n\n def _populate_subdiv_mo_public_holidays(self):\n # Truman Day\n if self._year >= 1949:\n 
self._add_observed(self._add_holiday_may_8(\"Truman Day\"))\n\n def _populate_subdiv_mp_public_holidays(self):\n # Commonwealth Covenant Day in Northern Mariana Islands\n self._add_observed(self._add_holiday_mar_24(\"Commonwealth Covenant Day\"))\n\n # Good Friday\n self._add_good_friday(\"Good Friday\")\n\n # Commonwealth Cultural Day in Northern Mariana Islands\n self._add_holiday_2nd_mon_of_oct(\"Commonwealth Cultural Day\")\n\n # Election Day\n if self._year >= 2008 and self._year % 2 == 0:\n self._add_holiday_1_day_past_1st_mon_of_nov(\"Election Day\")\n\n # Citizenship Day in Northern Mariana Islands\n self._add_observed(self._add_holiday_nov_4(\"Citizenship Day\"))\n\n # Constitution Day in Northern Mariana Islands\n self._add_observed(self._add_holiday_dec_8(\"Constitution Day\"))\n\n def _populate_subdiv_ms_public_holidays(self):\n # Martin Luther King Jr. Day\n if self._year >= 1986:\n self._add_holiday_3rd_mon_of_jan(\n \"Dr. Martin Luther King Jr. and Robert E. Lee's Birthdays\",\n )\n\n # Confederate Memorial Day\n if self._year >= 1866:\n self._add_holiday_last_mon_of_apr(\"Confederate Memorial Day\")\n\n def _populate_subdiv_mt_public_holidays(self):\n # Election Day\n if self._year >= 2008 and self._year % 2 == 0:\n self._add_holiday_1_day_past_1st_mon_of_nov(\"Election Day\")\n\n def _populate_subdiv_nc_public_holidays(self):\n # Good Friday\n self._add_good_friday(\"Good Friday\")\n\n # Day After Thanksgiving\n if self._year >= 1975:\n self._add_holiday_1_day_past_4th_thu_of_nov(\"Day After Thanksgiving\")\n\n # Christmas Eve\n if self._year >= 2013:\n self._add_christmas_eve_holiday()\n\n # Day After Christmas\n if self._year >= 2013:\n # If on Saturday or Sunday, observed on Monday\n # If on Monday, observed on Tuesday\n name = \"Day After Christmas\"\n self._add_observed(\n self._add_christmas_day_two(name),\n name=name,\n rule=MON_TO_NEXT_TUE + SAT_SUN_TO_NEXT_MON,\n )\n\n def _populate_subdiv_nd_public_holidays(self):\n pass\n\n def _populate_subdiv_ne_public_holidays(self):\n # Arbor Day\n if self._year >= 1875:\n name = \"Arbor Day\"\n if self._year >= 1989:\n self._add_holiday_last_fri_of_apr(name)\n else:\n self._add_holiday_apr_22(name)\n\n # Indigenous Peoples' Day\n if self._year >= 1971:\n self._add_holiday_2nd_mon_of_oct(\n \"Indigenous Peoples' Day\" if self._year >= 2020 else \"Columbus Day\"\n )\n\n def _populate_subdiv_nh_public_holidays(self):\n # Martin Luther King Jr. Day\n if self._year >= 1986:\n self._add_holiday_3rd_mon_of_jan(\"Dr. Martin Luther King Jr. 
/ Civil Rights Day\")\n\n # Election Day\n if self._year >= 2008 and self._year % 2 == 0:\n self._add_holiday_1_day_past_1st_mon_of_nov(\"Election Day\")\n\n # Day After Thanksgiving\n if self._year >= 1975:\n self._add_holiday_1_day_past_4th_thu_of_nov(\"Day After Thanksgiving\")\n\n def _populate_subdiv_nj_public_holidays(self):\n # Lincoln's Birthday\n if self._year >= 1971:\n self._add_observed(self._add_holiday_feb_12(\"Lincoln's Birthday\"))\n\n # Good Friday\n self._add_good_friday(\"Good Friday\")\n\n # Election Day\n if self._year >= 2008 and self._year % 2 == 0:\n self._add_holiday_1_day_past_1st_mon_of_nov(\"Election Day\")\n\n def _populate_subdiv_nm_public_holidays(self):\n # Indigenous Peoples' Day\n if self._year >= 1971:\n self._add_holiday_2nd_mon_of_oct(\n \"Indigenous Peoples' Day\" if self._year >= 2019 else \"Columbus Day\"\n )\n\n # Presidents' Day\n self._add_holiday_1_day_past_4th_thu_of_nov(\"Presidents' Day\")\n\n def _populate_subdiv_nv_public_holidays(self):\n # Nevada Day\n if self._year >= 1933:\n name = \"Nevada Day\"\n self._add_observed(\n self._add_holiday_last_fri_of_oct(name)\n if self._year >= 2000\n else self._add_holiday_oct_31(name)\n )\n\n # Family Day\n self._add_holiday_1_day_past_4th_thu_of_nov(\"Family Day\")\n\n def _populate_subdiv_ny_public_holidays(self):\n # Lincoln's Birthday\n if self._year >= 1971:\n self._add_observed(self._add_holiday_feb_12(\"Lincoln's Birthday\"))\n\n # Susan B. Anthony Day\n if self._year >= 2004:\n self._add_holiday_feb_15(\"Susan B. Anthony Day\")\n\n # Election Day\n if self._year >= 2015 or (self._year >= 2008 and self._year % 2 == 0):\n self._add_holiday_1_day_past_1st_mon_of_nov(\"Election Day\")\n\n def _populate_subdiv_oh_public_holidays(self):\n pass\n\n def _populate_subdiv_ok_public_holidays(self):\n # Day After Thanksgiving\n if self._year >= 1975:\n self._add_holiday_1_day_past_4th_thu_of_nov(\"Day After Thanksgiving\")\n\n def _populate_subdiv_or_public_holidays(self):\n pass\n\n def _populate_subdiv_pa_public_holidays(self):\n # Day After Thanksgiving\n self._add_holiday_1_day_past_4th_thu_of_nov(\"Day After Thanksgiving\")\n\n def _populate_subdiv_pr_public_holidays(self):\n # Epiphany\n self._add_epiphany_day(\"Epiphany\")\n\n # Washington's Birthday\n self._add_holiday_3rd_mon_of_feb(\"Presidents' Day\")\n\n # Emancipation Day\n self._add_observed(self._add_holiday_mar_22(\"Emancipation Day\"), rule=SUN_TO_NEXT_MON)\n\n # Good Friday\n self._add_good_friday(\"Good Friday\")\n\n # Constitution Day\n self._add_observed(self._add_holiday_jul_25(\"Constitution Day\"), rule=SUN_TO_NEXT_MON)\n\n # Discovery Day\n self._add_observed(self._add_holiday_nov_19(\"Discovery Day\"), rule=SUN_TO_NEXT_MON)\n\n def _populate_subdiv_ri_public_holidays(self):\n # Victory Day\n if self._year >= 1948:\n self._add_holiday_2nd_mon_of_aug(\"Victory Day\")\n\n # Indigenous Peoples' Day\n if self._year >= 1971:\n self._add_holiday_2nd_mon_of_oct(\n \"Indigenous Peoples' Day / Columbus Day\" if self._year >= 2022 else \"Columbus Day\"\n )\n\n def _populate_subdiv_sc_public_holidays(self):\n # Confederate Memorial Day\n if self._year >= 1866:\n self._add_holiday_4th_mon_of_apr(\"Confederate Memorial Day\")\n\n def _populate_subdiv_sd_public_holidays(self):\n # Native Americans' Day / Columbus Day\n if self._year >= 1937:\n name = \"Native Americans' Day\" if self._year >= 1990 else \"Columbus Day\"\n if self._year >= 1970:\n self._add_holiday_2nd_mon_of_oct(name)\n else:\n self._add_columbus_day(name)\n\n def 
_populate_subdiv_tn_public_holidays(self):\n # Good Friday\n self._add_good_friday(\"Good Friday\")\n\n def _populate_subdiv_tx_public_holidays(self):\n # Confederate Memorial Day\n if self._year >= 1931:\n self._add_holiday_jan_19(\"Confederate Memorial Day\")\n\n # Texas Independence Day\n if self._year >= 1874:\n self._add_holiday_mar_2(\"Texas Independence Day\")\n\n # Cesar Chavez Day\n if self._year >= 2000:\n self._add_holiday_mar_31(\"Cesar Chavez Day\")\n\n # Good Friday\n self._add_good_friday(\"Good Friday\")\n\n # San Jacinto Day\n if self._year >= 1875:\n self._add_holiday_apr_21(\"San Jacinto Day\")\n\n # Emancipation Day In Texas\n if self._year >= 1980:\n self._add_holiday_jun_19(\"Emancipation Day In Texas\")\n\n # Lyndon Baines Johnson Day\n if self._year >= 1973:\n self._add_holiday_aug_27(\"Lyndon Baines Johnson Day\")\n\n # Friday After Thanksgiving\n if self._year >= 1975:\n self._add_holiday_1_day_past_4th_thu_of_nov(\"Friday After Thanksgiving\")\n\n # Christmas Eve\n if self._year >= 1981:\n self._add_christmas_eve_holiday()\n\n # Day After Christmas\n if self._year >= 1981:\n self._add_christmas_day_two(\"Day After Christmas\")\n\n def _populate_subdiv_um_public_holidays(self):\n pass\n\n def _populate_subdiv_ut_public_holidays(self):\n # Pioneer Day\n if self._year >= 1849:\n self._add_observed(self._add_holiday_jul_24(\"Pioneer Day\"))\n\n def _populate_subdiv_va_public_holidays(self):\n # Lee Jackson Day\n if 1889 <= self._year <= 2020:\n name = \"Lee Jackson Day\"\n if self._year >= 2000:\n self._add_holiday_3_days_prior_3rd_mon_of_jan(name)\n elif self._year >= 1983:\n self._add_holiday_3rd_mon_of_jan(name)\n else:\n self._add_holiday_jan_19(name)\n\n # Inauguration Day\n if self._year >= 1789 and (self._year - 1789) % 4 == 0:\n name = \"Inauguration Day\"\n self._add_observed(\n self._add_holiday_jan_20(name)\n if self._year >= 1937\n else self._add_holiday_mar_4(name),\n rule=SUN_TO_NEXT_MON,\n )\n # Indigenous Peoples' Day\n if self._year >= 1971:\n self._add_holiday_2nd_mon_of_oct(\n \"Indigenous Peoples' Day\" if self._year >= 2020 else \"Columbus Day\"\n )\n\n def _populate_subdiv_vi_public_holidays(self):\n # Three Kings Day\n self._add_epiphany_day(\"Three Kings Day\")\n\n # Washington's Birthday\n name = \"Presidents' Day\"\n if self._year >= 1971:\n self._add_holiday_3rd_mon_of_feb(name)\n else:\n self._add_holiday_feb_22(name)\n\n # Transfer Day\n self._add_holiday_mar_31(\"Transfer Day\")\n\n # Holy Thursday\n self._add_holy_thursday(\"Holy Thursday\")\n\n # Good Friday\n self._add_good_friday(\"Good Friday\")\n\n # Easter Monday\n self._add_easter_monday(\"Easter Monday\")\n\n # Emancipation Day in US Virgin Islands\n self._add_holiday_jul_3(\"Emancipation Day\")\n\n # Columbus Day\n if self._year >= 1937:\n name = \"Columbus Day and Puerto Rico Friendship Day\"\n if self._year >= 1970:\n self._add_holiday_2nd_mon_of_oct(name)\n else:\n self._add_columbus_day(name)\n\n # Liberty Day\n self._add_holiday_nov_1(\"Liberty Day\")\n\n # Christmas Second Day\n self._add_christmas_day_two(\"Christmas Second Day\")\n\n def _populate_subdiv_vt_public_holidays(self):\n # Town Meeting Day\n if self._year >= 1800:\n self._add_holiday_1st_tue_of_mar(\"Town Meeting Day\")\n\n # Bennington Battle Day\n if self._year >= 1778:\n self._add_observed(self._add_holiday_aug_16(\"Bennington Battle Day\"))\n\n def _populate_subdiv_wa_public_holidays(self):\n pass\n\n def _populate_subdiv_wi_public_holidays(self):\n # Susan B. 
Anthony Day\n if self._year >= 1976:\n self._add_holiday_feb_15(\"Susan B. Anthony Day\")\n\n if self._year >= 2012:\n # Christmas Eve\n self._add_christmas_eve_holiday()\n\n # New Year's Eve\n self._add_observed(self._add_new_years_eve(\"New Year's Eve\"))\n\n def _populate_subdiv_wv_public_holidays(self):\n # West Virginia Day\n if self._year >= 1927:\n self._add_observed(self._add_holiday_jun_20(\"West Virginia Day\"))\n\n # Election Day\n if self._year >= 2008 and self._year % 2 == 0:\n self._add_holiday_1_day_past_1st_mon_of_nov(\"Election Day\")\n\n # Day After Thanksgiving\n if self._year >= 1975:\n self._add_holiday_1_day_past_4th_thu_of_nov(\"Day After Thanksgiving\")\n\n def _populate_subdiv_wy_public_holidays(self):\n pass\n\n def _populate_unofficial_holidays(self):\n # Very common celebrated cultural days, but no official observance.\n # Due to its nature, no in-lieus are observed.\n\n # Valentine's Day\n # While the modern iteration of Valentine's Day has started in the UK in 1797,\n # it wasn't until 1847 in the US that this started to be observed here.\n\n if self._year >= 1847:\n self._add_holiday_feb_14(\"Valentine's Day\")\n\n # St. Patrick's Day\n # Started in Boston in 1737 for the US.\n\n self._add_holiday_mar_17(\"St. Patrick's Day\")\n\n # Halloween\n # Halloween began in the US sometime around the 19th century.\n\n self._add_holiday_oct_31(\"Halloween\")\n\n # Continental US non-Public dates\n\n if self.subdiv not in {\"AS\", \"GU\", \"MP\", \"PR\", \"UM\", \"VI\"}:\n # Groundhog Day\n # First observed on Feb 2 in 1886 in Continental US + Hawaii.\n\n if self._year >= 1886:\n self._add_holiday_feb_2(\"Groundhog Day\")\n\n # Election Day\n # May be duplicates for certain states which has this as their actual public holiday.\n # The current US Presidential Election date pattern was codified in 1848 nationwide.\n\n if self._year >= 1848 and self._year % 4 == 0:\n self._add_holiday_1_day_past_1st_mon_of_nov(\"Election Day\")\n\n\nclass US(UnitedStates):\n pass\n\n\nclass USA(UnitedStates):\n pass\n", "path": "holidays/countries/united_states.py" } ]
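The `observed_rule` default in the file above is `SAT_TO_PREV_FRI + SUN_TO_NEXT_MON`. As an aside, here is a minimal stdlib sketch of the calendar rule those constant names describe (an illustration only, not the holidays-package implementation): a holiday falling on Saturday is observed the preceding Friday, one falling on Sunday the following Monday.

```py
# Illustration only (plain stdlib), not the holidays-package implementation:
# the calendar rule that the SAT_TO_PREV_FRI + SUN_TO_NEXT_MON names describe.
from datetime import date, timedelta

def observed(dt: date) -> date:
    if dt.weekday() == 5:                 # Saturday -> observed the Friday before
        return dt - timedelta(days=1)
    if dt.weekday() == 6:                 # Sunday -> observed the Monday after
        return dt + timedelta(days=1)
    return dt                             # weekday holidays are unchanged

print(observed(date(2020, 7, 4)))         # 2020-07-04 is a Saturday -> 2020-07-03
print(observed(date(2021, 7, 4)))         # 2021-07-04 is a Sunday   -> 2021-07-05
```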
diff --git a/holidays/countries/united_states.py b/holidays/countries/united_states.py index f38ca5fa5..4089308d9 100644 --- a/holidays/countries/united_states.py +++ b/holidays/countries/united_states.py @@ -609,7 +609,7 @@ def _populate_subdiv_ms_public_holidays(self): # Confederate Memorial Day if self._year >= 1866: - self._add_holiday_4th_mon_of_apr("Confederate Memorial Day") + self._add_holiday_last_mon_of_apr("Confederate Memorial Day") def _populate_subdiv_mt_public_holidays(self): # Election Day diff --git a/snapshots/countries/US_MS.json b/snapshots/countries/US_MS.json index 8752f6455..bfcf64923 100644 --- a/snapshots/countries/US_MS.json +++ b/snapshots/countries/US_MS.json @@ -19,7 +19,7 @@ "1951-02-14": "Valentine's Day", "1951-02-22": "Washington's Birthday", "1951-03-17": "St. Patrick's Day", - "1951-04-23": "Confederate Memorial Day", + "1951-04-30": "Confederate Memorial Day", "1951-05-30": "Memorial Day", "1951-07-04": "Independence Day", "1951-09-03": "Labor Day", @@ -92,7 +92,7 @@ "1956-02-14": "Valentine's Day", "1956-02-22": "Washington's Birthday", "1956-03-17": "St. Patrick's Day", - "1956-04-23": "Confederate Memorial Day", + "1956-04-30": "Confederate Memorial Day", "1956-05-30": "Memorial Day", "1956-07-04": "Independence Day", "1956-09-03": "Labor Day", @@ -107,7 +107,7 @@ "1957-02-14": "Valentine's Day", "1957-02-22": "Washington's Birthday", "1957-03-17": "St. Patrick's Day", - "1957-04-22": "Confederate Memorial Day", + "1957-04-29": "Confederate Memorial Day", "1957-05-30": "Memorial Day", "1957-07-04": "Independence Day", "1957-09-02": "Labor Day", @@ -177,7 +177,7 @@ "1962-02-14": "Valentine's Day", "1962-02-22": "Washington's Birthday", "1962-03-17": "St. Patrick's Day", - "1962-04-23": "Confederate Memorial Day", + "1962-04-30": "Confederate Memorial Day", "1962-05-30": "Memorial Day", "1962-07-04": "Independence Day", "1962-09-03": "Labor Day", @@ -191,7 +191,7 @@ "1963-02-14": "Valentine's Day", "1963-02-22": "Washington's Birthday", "1963-03-17": "St. Patrick's Day", - "1963-04-22": "Confederate Memorial Day", + "1963-04-29": "Confederate Memorial Day", "1963-05-30": "Memorial Day", "1963-07-04": "Independence Day", "1963-09-02": "Labor Day", @@ -264,7 +264,7 @@ "1968-02-14": "Valentine's Day", "1968-02-22": "Washington's Birthday", "1968-03-17": "St. Patrick's Day", - "1968-04-22": "Confederate Memorial Day", + "1968-04-29": "Confederate Memorial Day", "1968-05-30": "Memorial Day", "1968-07-04": "Independence Day", "1968-09-02": "Labor Day", @@ -335,7 +335,7 @@ "1973-02-14": "Valentine's Day", "1973-02-19": "Washington's Birthday", "1973-03-17": "St. Patrick's Day", - "1973-04-23": "Confederate Memorial Day", + "1973-04-30": "Confederate Memorial Day", "1973-05-28": "Memorial Day", "1973-07-04": "Independence Day", "1973-09-03": "Labor Day", @@ -348,7 +348,7 @@ "1974-02-14": "Valentine's Day", "1974-02-18": "Washington's Birthday", "1974-03-17": "St. Patrick's Day", - "1974-04-22": "Confederate Memorial Day", + "1974-04-29": "Confederate Memorial Day", "1974-05-27": "Memorial Day", "1974-07-04": "Independence Day", "1974-09-02": "Labor Day", @@ -420,7 +420,7 @@ "1979-02-14": "Valentine's Day", "1979-02-19": "Washington's Birthday", "1979-03-17": "St. 
Patrick's Day", - "1979-04-23": "Confederate Memorial Day", + "1979-04-30": "Confederate Memorial Day", "1979-05-28": "Memorial Day", "1979-07-04": "Independence Day", "1979-09-03": "Labor Day", @@ -493,7 +493,7 @@ "1984-02-14": "Valentine's Day", "1984-02-20": "Washington's Birthday", "1984-03-17": "St. Patrick's Day", - "1984-04-23": "Confederate Memorial Day", + "1984-04-30": "Confederate Memorial Day", "1984-05-28": "Memorial Day", "1984-07-04": "Independence Day", "1984-09-03": "Labor Day", @@ -508,7 +508,7 @@ "1985-02-14": "Valentine's Day", "1985-02-18": "Washington's Birthday", "1985-03-17": "St. Patrick's Day", - "1985-04-22": "Confederate Memorial Day", + "1985-04-29": "Confederate Memorial Day", "1985-05-27": "Memorial Day", "1985-07-04": "Independence Day", "1985-09-02": "Labor Day", @@ -583,7 +583,7 @@ "1990-02-14": "Valentine's Day", "1990-02-19": "Washington's Birthday", "1990-03-17": "St. Patrick's Day", - "1990-04-23": "Confederate Memorial Day", + "1990-04-30": "Confederate Memorial Day", "1990-05-28": "Memorial Day", "1990-07-04": "Independence Day", "1990-09-03": "Labor Day", @@ -598,7 +598,7 @@ "1991-02-14": "Valentine's Day", "1991-02-18": "Washington's Birthday", "1991-03-17": "St. Patrick's Day", - "1991-04-22": "Confederate Memorial Day", + "1991-04-29": "Confederate Memorial Day", "1991-05-27": "Memorial Day", "1991-07-04": "Independence Day", "1991-09-02": "Labor Day", @@ -676,7 +676,7 @@ "1996-02-14": "Valentine's Day", "1996-02-19": "Washington's Birthday", "1996-03-17": "St. Patrick's Day", - "1996-04-22": "Confederate Memorial Day", + "1996-04-29": "Confederate Memorial Day", "1996-05-27": "Memorial Day", "1996-07-04": "Independence Day", "1996-09-02": "Labor Day", @@ -753,7 +753,7 @@ "2001-02-14": "Valentine's Day", "2001-02-19": "Washington's Birthday", "2001-03-17": "St. Patrick's Day", - "2001-04-23": "Confederate Memorial Day", + "2001-04-30": "Confederate Memorial Day", "2001-05-28": "Memorial Day", "2001-07-04": "Independence Day", "2001-09-03": "Labor Day", @@ -768,7 +768,7 @@ "2002-02-14": "Valentine's Day", "2002-02-18": "Washington's Birthday", "2002-03-17": "St. Patrick's Day", - "2002-04-22": "Confederate Memorial Day", + "2002-04-29": "Confederate Memorial Day", "2002-05-27": "Memorial Day", "2002-07-04": "Independence Day", "2002-09-02": "Labor Day", @@ -845,7 +845,7 @@ "2007-02-14": "Valentine's Day", "2007-02-19": "Washington's Birthday", "2007-03-17": "St. Patrick's Day", - "2007-04-23": "Confederate Memorial Day", + "2007-04-30": "Confederate Memorial Day", "2007-05-28": "Memorial Day", "2007-07-04": "Independence Day", "2007-09-03": "Labor Day", @@ -923,7 +923,7 @@ "2012-02-14": "Valentine's Day", "2012-02-20": "Washington's Birthday", "2012-03-17": "St. Patrick's Day", - "2012-04-23": "Confederate Memorial Day", + "2012-04-30": "Confederate Memorial Day", "2012-05-28": "Memorial Day", "2012-07-04": "Independence Day", "2012-09-03": "Labor Day", @@ -939,7 +939,7 @@ "2013-02-14": "Valentine's Day", "2013-02-18": "Washington's Birthday", "2013-03-17": "St. Patrick's Day", - "2013-04-22": "Confederate Memorial Day", + "2013-04-29": "Confederate Memorial Day", "2013-05-27": "Memorial Day", "2013-07-04": "Independence Day", "2013-09-02": "Labor Day", @@ -1014,7 +1014,7 @@ "2018-02-14": "Valentine's Day", "2018-02-19": "Washington's Birthday", "2018-03-17": "St. 
Patrick's Day", - "2018-04-23": "Confederate Memorial Day", + "2018-04-30": "Confederate Memorial Day", "2018-05-28": "Memorial Day", "2018-07-04": "Independence Day", "2018-09-03": "Labor Day", @@ -1029,7 +1029,7 @@ "2019-02-14": "Valentine's Day", "2019-02-18": "Washington's Birthday", "2019-03-17": "St. Patrick's Day", - "2019-04-22": "Confederate Memorial Day", + "2019-04-29": "Confederate Memorial Day", "2019-05-27": "Memorial Day", "2019-07-04": "Independence Day", "2019-09-02": "Labor Day", @@ -1112,7 +1112,7 @@ "2024-02-14": "Valentine's Day", "2024-02-19": "Washington's Birthday", "2024-03-17": "St. Patrick's Day", - "2024-04-22": "Confederate Memorial Day", + "2024-04-29": "Confederate Memorial Day", "2024-05-27": "Memorial Day", "2024-06-19": "Juneteenth National Independence Day", "2024-07-04": "Independence Day", @@ -1195,7 +1195,7 @@ "2029-02-14": "Valentine's Day", "2029-02-19": "Washington's Birthday", "2029-03-17": "St. Patrick's Day", - "2029-04-23": "Confederate Memorial Day", + "2029-04-30": "Confederate Memorial Day", "2029-05-28": "Memorial Day", "2029-06-19": "Juneteenth National Independence Day", "2029-07-04": "Independence Day", @@ -1211,7 +1211,7 @@ "2030-02-14": "Valentine's Day", "2030-02-18": "Washington's Birthday", "2030-03-17": "St. Patrick's Day", - "2030-04-22": "Confederate Memorial Day", + "2030-04-29": "Confederate Memorial Day", "2030-05-27": "Memorial Day", "2030-06-19": "Juneteenth National Independence Day", "2030-07-04": "Independence Day", @@ -1295,7 +1295,7 @@ "2035-02-14": "Valentine's Day", "2035-02-19": "Washington's Birthday", "2035-03-17": "St. Patrick's Day", - "2035-04-23": "Confederate Memorial Day", + "2035-04-30": "Confederate Memorial Day", "2035-05-28": "Memorial Day", "2035-06-19": "Juneteenth National Independence Day", "2035-07-04": "Independence Day", @@ -1380,7 +1380,7 @@ "2040-02-14": "Valentine's Day", "2040-02-20": "Washington's Birthday", "2040-03-17": "St. Patrick's Day", - "2040-04-23": "Confederate Memorial Day", + "2040-04-30": "Confederate Memorial Day", "2040-05-28": "Memorial Day", "2040-06-19": "Juneteenth National Independence Day", "2040-07-04": "Independence Day", @@ -1397,7 +1397,7 @@ "2041-02-14": "Valentine's Day", "2041-02-18": "Washington's Birthday", "2041-03-17": "St. Patrick's Day", - "2041-04-22": "Confederate Memorial Day", + "2041-04-29": "Confederate Memorial Day", "2041-05-27": "Memorial Day", "2041-06-19": "Juneteenth National Independence Day", "2041-07-04": "Independence Day", @@ -1478,7 +1478,7 @@ "2046-02-14": "Valentine's Day", "2046-02-19": "Washington's Birthday", "2046-03-17": "St. Patrick's Day", - "2046-04-23": "Confederate Memorial Day", + "2046-04-30": "Confederate Memorial Day", "2046-05-28": "Memorial Day", "2046-06-19": "Juneteenth National Independence Day", "2046-07-04": "Independence Day", @@ -1494,7 +1494,7 @@ "2047-02-14": "Valentine's Day", "2047-02-18": "Washington's Birthday", "2047-03-17": "St. 
Patrick's Day", - "2047-04-22": "Confederate Memorial Day", + "2047-04-29": "Confederate Memorial Day", "2047-05-27": "Memorial Day", "2047-06-19": "Juneteenth National Independence Day", "2047-07-04": "Independence Day", diff --git a/tests/countries/test_united_states.py b/tests/countries/test_united_states.py index b404b0595..a62eccae4 100644 --- a/tests/countries/test_united_states.py +++ b/tests/countries/test_united_states.py @@ -1119,11 +1119,33 @@ def test_confederate_memorial_day(self): "2022-04-25", "2023-04-24", ) - for subdiv in ("AL", "MS", "SC"): + for subdiv in ("AL", "SC"): self.assertHolidayName(name, self.state_hols[subdiv], range(1866, 2050)) self.assertNoHolidayName(name, self.state_hols[subdiv], range(1865, 1866)) self.assertHolidayName(name, self.state_hols[subdiv], dt) + self.assertHolidayName( + name, + self.state_hols["MS"], + "2010-04-26", + "2011-04-25", + "2012-04-30", + "2013-04-29", + "2014-04-28", + "2015-04-27", + "2016-04-25", + "2017-04-24", + "2018-04-30", + "2019-04-29", + "2020-04-27", + "2021-04-26", + "2022-04-25", + "2023-04-24", + "2024-04-29", + ) + self.assertHolidayName(name, self.state_hols["MS"], range(1866, 2050)) + self.assertNoHolidayName(name, self.state_hols["MS"], range(1865, 1866)) + self.assertHolidayName( name, self.state_hols["TX"], (f"{year}-01-19" for year in range(1931, 2050)) )
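The diff above changes Mississippi's Confederate Memorial Day from the 4th Monday of April to the last Monday of April. The two rules coincide only when April has exactly four Mondays, which is why the US_MS snapshot entries move (e.g., 2024-04-22 becomes 2024-04-29) in five-Monday years and stay unchanged otherwise. A minimal stdlib sketch of that distinction (not using the holidays API):

```py
# Stdlib sketch of the rule change in the diff: the 4th Monday and the last
# Monday of April differ exactly in years where April has five Mondays.
import calendar
from datetime import date

def april_mondays(year):
    # monthcalendar() returns weeks as [Mon..Sun]; 0 marks days outside April.
    return [w[calendar.MONDAY] for w in calendar.monthcalendar(year, 4)
            if w[calendar.MONDAY] != 0]

for year in (2023, 2024):
    mondays = april_mondays(year)
    print(year, "4th:", date(year, 4, mondays[3]), "last:", date(year, 4, mondays[-1]))

# 2023 4th: 2023-04-24 last: 2023-04-24   (four Mondays -> snapshot unchanged)
# 2024 4th: 2024-04-22 last: 2024-04-29   (five Mondays -> 2024-04-22 becomes 2024-04-29)
```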
PaddlePaddle__PaddleNLP-6607
[Question]: GPT-3 pretraining fails during the eval stage

### GPT-3 pretraining fails during the eval stage

Environment:
(1) 8x A100 GPUs
(2) python -m pip install paddlepaddle-gpu==0.0.0.post112 -f https://www.paddlepaddle.org.cn/whl/linux/gpu/develop.html

Error:
Pretraining GPT-13B with 8-way tensor_parallel fails during the eval stage:
![image](https://github.com/PaddlePaddle/PaddleNLP/assets/41565156/13ba9d65-37f3-4456-9571-f79b54db4939)
[ { "content": "# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport collections\nimport math\nfrom functools import partial\n\nimport numpy as np\nimport paddle\nimport paddle.incubate as incubate\nimport paddle.nn as nn\nimport paddle.nn.functional as F\nimport paddle.tensor as tensor\nfrom configuration import (\n GPT_PRETRAINED_INIT_CONFIGURATION,\n GPT_PRETRAINED_RESOURCE_FILES_MAP,\n GPTConfig,\n)\nfrom paddle.distributed import fleet\nfrom paddle.distributed.fleet.meta_parallel import get_rng_state_tracker\nfrom paddle.distributed.fleet.utils import recompute\nfrom paddle.fluid import layers\nfrom paddle.nn.layer.transformer import _convert_param_attr_to_list\n\nfrom paddlenlp.transformers import PretrainedModel, register_base_model\nfrom paddlenlp.transformers.model_outputs import CausalLMOutputWithCrossAttentions\n\ntry:\n from paddle.nn.functional.flash_attention import flash_attention\nexcept:\n flash_attention = None\ntry:\n from paddle.incubate.nn.layer.fused_dropout_add import FusedDropoutAdd\nexcept:\n FusedDropoutAdd = None\n\n\ndef get_triangle_upper_mask(x, mask):\n if mask is not None:\n return mask\n if paddle.is_compiled_with_xpu():\n # xpu does not support set constant to -np.inf\n mask = paddle.full_like(x, -1e4)\n else:\n mask = paddle.full_like(x, -np.inf)\n mask.stop_gradient = True\n mask = paddle.triu(mask, diagonal=1)\n mask.stop_gradient = True\n return mask\n\n\ndef parallel_matmul(x, y, tensor_parallel_output=True):\n is_fleet_init = True\n tensor_parallel_degree = 1\n try:\n hcg = fleet.get_hybrid_communicate_group()\n model_parallel_group = hcg.get_model_parallel_group()\n tensor_parallel_degree = hcg.get_model_parallel_world_size()\n except:\n is_fleet_init = False\n\n if is_fleet_init and tensor_parallel_degree > 1 and y.is_distributed:\n # if not running under distributed.launch, it will raise AttributeError: 'Fleet' object has no attribute '_hcg'\n input_parallel = paddle.distributed.collective._c_identity(x, group=model_parallel_group)\n logits = paddle.matmul(input_parallel, y, transpose_y=True)\n\n if tensor_parallel_output:\n return logits\n\n return paddle.distributed.collective._c_concat(logits, group=model_parallel_group)\n\n else:\n logits = paddle.matmul(x, y, transpose_y=True)\n return logits\n\n\nclass MultiHeadAttention(nn.Layer):\n \"\"\"\n Attention mapps queries and a set of key-value pairs to outputs, and\n Multi-Head Attention performs multiple parallel attention to jointly attending\n to information from different representation subspaces.\n\n \"\"\"\n\n Cache = collections.namedtuple(\"Cache\", [\"k\", \"v\"])\n StaticCache = collections.namedtuple(\"StaticCache\", [\"k\", \"v\"])\n\n def __init__(self, config,):\n super(MultiHeadAttention, self).__init__()\n\n self.config = config\n\n # Recompute defaults to False and is controlled by Trainer\n self.enable_recompute = False\n\n self.use_flash_attention = config.use_flash_attention if flash_attention else None\n\n 
self.head_dim = config.hidden_size // config.num_attention_heads\n assert self.head_dim * config.num_attention_heads == config.hidden_size, \"hidden_size must be divisible by num_attention_heads\"\n\n self.num_attention_heads = config.num_attention_heads # default, without tensor parallel\n if config.tensor_parallel_degree > 1:\n assert config.num_attention_heads % config.tensor_parallel_degree == 0\n self.num_attention_heads = config.num_attention_heads // config.tensor_parallel_degree\n\n if config.fuse_attention_qkv:\n self.qkv_proj = fleet.meta_parallel.ColumnParallelLinear(\n config.hidden_size,\n 3 * config.hidden_size,\n has_bias=True,\n gather_output=False,\n fuse_matmul_bias=config.fused_linear,\n )\n else:\n self.q_proj = fleet.meta_parallel.ColumnParallelLinear(\n config.hidden_size,\n config.hidden_size,\n has_bias=True,\n gather_output=False,\n fuse_matmul_bias=config.fused_linear,\n )\n\n self.k_proj = fleet.meta_parallel.ColumnParallelLinear(\n config.hidden_size,\n config.hidden_size,\n has_bias=True,\n gather_output=False,\n fuse_matmul_bias=config.fused_linear,\n )\n\n self.v_proj = fleet.meta_parallel.ColumnParallelLinear(\n config.hidden_size,\n config.hidden_size,\n has_bias=True,\n gather_output=False,\n fuse_matmul_bias=config.fused_linear,\n )\n\n self.out_proj = fleet.meta_parallel.RowParallelLinear(\n config.hidden_size,\n config.hidden_size,\n has_bias=True,\n input_is_parallel=True,\n fuse_matmul_bias=config.fused_linear,\n )\n else:\n if self.config.fuse_attention_qkv:\n self.qkv_proj = nn.Linear(config.hidden_size, 3 * config.hidden_size, bias_attr=True)\n else:\n self.q_proj = nn.Linear(config.hidden_size, config.hidden_size, bias_attr=True)\n self.k_proj = nn.Linear(config.hidden_size, config.hidden_size, bias_attr=True)\n self.v_proj = nn.Linear(config.hidden_size, config.hidden_size, bias_attr=True)\n\n self.out_proj = nn.Linear(config.hidden_size, config.hidden_size, bias_attr=True)\n\n def _fuse_prepare_qkv(self, query, use_cache=False, cache=None):\n mix_layer = self.qkv_proj(query)\n mix_layer = paddle.reshape_(mix_layer, [0, 0, -1, 3 * self.head_dim])\n q, k, v = paddle.split(mix_layer, num_or_sections=3, axis=-1)\n\n assert not isinstance(cache, self.StaticCache), \"cache currently does not support the StaticCache type\"\n\n if isinstance(cache, self.Cache):\n # for decoder self-attention in inference\n k = tensor.concat([cache.k, k], axis=1)\n v = tensor.concat([cache.v, v], axis=1)\n if use_cache is True:\n cache = self.Cache(k, v)\n\n return (q, k, v, cache) if use_cache else (q, k, v, None)\n\n def _prepare_qkv(self, query, key, value, use_cache=False, cache=None):\n r\"\"\"\n Prapares linear projected queries, keys and values for usage of subsequnt\n multiple parallel attention. 
If `cache` is not None, using cached results\n to reduce redundant calculations.\n\n \"\"\"\n q = self.q_proj(query)\n q = tensor.reshape(x=q, shape=[0, 0, -1, self.head_dim])\n\n if isinstance(cache, self.StaticCache):\n # for encoder-decoder attention in inference and has cached\n k, v = cache.k, cache.v\n else:\n k, v = self.compute_kv(key, value)\n\n if isinstance(cache, self.Cache):\n # for decoder self-attention in inference\n k = tensor.concat([cache.k, k], axis=1)\n v = tensor.concat([cache.v, v], axis=1)\n if use_cache is True:\n cache = self.Cache(k, v)\n\n return (q, k, v, cache) if use_cache else (q, k, v, None)\n\n def compute_kv(self, key, value):\n r\"\"\"\n Applies linear projection on input keys and values, then splits heads\n (reshape and transpose) to get keys and values from different representation\n subspaces. The results are used as key-values pairs for subsequent multiple\n parallel attention.\n\n It is part of calculations in multi-head attention, and is provided as\n a method to pre-compute and prefetch these results, thus we can use them\n to construct cache for inference.\n\n \"\"\"\n k = self.k_proj(key)\n v = self.v_proj(value)\n k = tensor.reshape(x=k, shape=[0, 0, -1, self.head_dim])\n v = tensor.reshape(x=v, shape=[0, 0, -1, self.head_dim])\n return k, v\n\n def gen_cache(self, key, value=None, type=Cache):\n \"\"\"\n Generates cache for `forward` usage in inference accroding to arguments.\n The generated cache is an instance of `MultiHeadAttention.Cache` or an\n instance of `MultiHeadAttention.StaticCache`.\n \"\"\"\n if type == MultiHeadAttention.StaticCache: # static_kv\n k, v = self.compute_kv(key, value)\n return self.StaticCache(k, v)\n elif value is None: # incremental_state\n k = layers.fill_constant_batch_size_like(\n input=key, shape=[-1, self.num_attention_heads, 0, self.head_dim], dtype=key.dtype, value=0\n )\n v = layers.fill_constant_batch_size_like(\n input=key, shape=[-1, self.num_attention_heads, 0, self.head_dim], dtype=key.dtype, value=0\n )\n return self.Cache(k, v)\n else:\n # incremental_state with initial value, mainly for usage like UniLM\n return self.Cache(key, value)\n\n def _flash_attention(self, q, k, v, attn_mask=None, output_attentions=False):\n out, weights = flash_attention(\n q, k, v, self.config.hidden_dropout_prob, causal=True, return_softmax=output_attentions, training=self.training\n )\n out = tensor.reshape(x=out, shape=[0, 0, out.shape[2] * out.shape[3]])\n return (out, weights) if output_attentions else out\n\n def core_attn(self, q, k, v, attn_mask=None, output_attentions=False):\n perm = [0, 2, 1, 3]\n q = tensor.transpose(x=q, perm=perm)\n k = tensor.transpose(x=k, perm=perm)\n v = tensor.transpose(x=v, perm=perm)\n\n # scale dot product attention\n\n scale_qk_coeff = self.config.scale_qk_coeff * self.head_dim**0.5\n product = paddle.matmul(x=q.scale(1.0 / scale_qk_coeff), y=k, transpose_y=True)\n\n if self.config.scale_qk_coeff != 1.0:\n product = product.scale(self.config.scale_qk_coeff)\n\n # softmax_mask_fuse_upper_triangle is not supported sif paddle is not compiled with cuda/rocm\n if not paddle.is_compiled_with_cuda():\n attn_mask = get_triangle_upper_mask(product, attn_mask)\n\n if attn_mask is not None:\n product = product + attn_mask\n weights = F.softmax(product)\n else:\n weights = incubate.softmax_mask_fuse_upper_triangle(product)\n\n if self.config.hidden_dropout_prob:\n if self.training:\n with get_rng_state_tracker().rng_state(\"local_seed\"):\n weights = F.dropout(weights, 
self.config.hidden_dropout_prob, training=self.training, mode=\"upscale_in_train\")\n else:\n weights = F.dropout(weights, self.config.hidden_dropout_prob, training=self.training, mode=\"upscale_in_train\")\n\n out = paddle.matmul(weights, v)\n\n # combine heads\n out = tensor.transpose(out, perm=[0, 2, 1, 3])\n out = tensor.reshape(x=out, shape=[0, 0, -1])\n\n return (out, weights) if output_attentions else out\n\n def forward(self, query, key, value, attn_mask=None, use_cache=False, cache=None, output_attentions=False):\n r\"\"\"\n Applies multi-head attention to map queries and a set of key-value pairs\n to outputs.\n \"\"\"\n key = query if key is None else key\n value = query if value is None else value\n # compute q ,k ,v\n if self.config.fuse_attention_qkv:\n q, k, v, cache = self._fuse_prepare_qkv(query, use_cache, cache)\n else:\n q, k, v, cache = self._prepare_qkv(query, key, value, use_cache, cache)\n\n if self.use_flash_attention and attn_mask is None:\n attn_func = self._flash_attention\n else:\n attn_func = self.core_attn\n has_gradient = (not q.stop_gradient) or (not k.stop_gradient) or (not v.stop_gradient)\n if self.enable_recompute and self.config.recompute_granularity == \"core_attn\" and has_gradient:\n out = recompute(attn_func, q, k, v, attn_mask, output_attentions, use_reentrant=False)\n else:\n out = attn_func(q, k, v, attn_mask=attn_mask, output_attentions=output_attentions)\n\n if output_attentions:\n out, weights = out\n\n # project to output\n out = self.out_proj(out)\n\n outs = [out]\n if output_attentions:\n outs.append(weights)\n if use_cache:\n outs.append(cache)\n return out if len(outs) == 1 else tuple(outs)\n\n\nclass TransformerDecoder(nn.Layer):\n \"\"\"\n TransformerDecoder is a stack of N decoder layers.\n \"\"\"\n\n def __init__(\n self,\n config,\n decoder_layers,\n ):\n super(TransformerDecoder, self).__init__()\n\n self.config = config\n self.layers = decoder_layers\n self.norm = nn.LayerNorm(config.hidden_size, epsilon=1e-5)\n\n # Recompute defaults to False and is controlled by Trainer\n self.enable_recompute = False\n\n def forward(self, tgt, tgt_mask=None, memory=None, memory_mask=None, use_cache=False, cache=None, output_attentions=False):\n r\"\"\"\n Applies a stack of N Transformer decoder layers on inputs. 
If `norm` is\n provided, also applies layer normalization on the output of last decoder\n layer.\n \"\"\"\n output = tgt\n new_caches = []\n all_self_attentions = [] if output_attentions else None\n\n for i, mod in enumerate(self.layers):\n if cache is None:\n if use_cache:\n output, new_cache = mod(output, tgt_mask=tgt_mask, memory=memory, use_cache=use_cache, cache=cache, output_attentions=output_attentions)\n new_caches.append(new_cache)\n else:\n has_gradient = not output.stop_gradient\n if self.enable_recompute and self.config.recompute_granularity == \"full\" and has_gradient:\n output = recompute(mod, output, tgt_mask, memory, use_cache, cache, output_attentions, use_reentrant=False)\n else:\n output = mod(output, tgt_mask, memory, use_cache, cache, output_attentions)\n\n else:\n output, new_cache = mod(output, tgt_mask=tgt_mask, memory=memory, use_cache=use_cache, cache=cache[i], output_attentions=output_attentions)\n new_caches.append(new_cache)\n\n if output_attentions:\n output, weights = output\n all_self_attentions.append(weights)\n\n if self.norm is not None:\n output = self.norm(output)\n\n outputs = [output]\n if output_attentions:\n outputs.append(all_self_attentions)\n if use_cache:\n outputs.append(new_caches)\n return output if len(outputs) == 1 else tuple(outputs)\n\n def gen_cache(self, memory, do_zip=False):\n r\"\"\"\n Generates cache for `forward` usage. The generated cache is a list, and\n each element in it is a tuple( :code:`(incremental_cache, static_cache)` )\n produced by `TransformerDecoderLayer.gen_cache`. See `TransformerDecoderLayer.gen_cache`\n for more details. If `do_zip` is True, apply `zip` on these tuples to get\n a list with two elements.\n \"\"\"\n cache = [layer.gen_cache(memory) for layer in self.layers]\n if do_zip:\n cache = list(zip(*cache))\n return cache\n\n\nclass TransformerDecoderLayer(nn.Layer):\n \"\"\"\n The transformer decoder layer.\n\n It contains multiheadattention and some linear layers.\n \"\"\"\n\n def __init__(self, config: GPTConfig):\n\n super(TransformerDecoderLayer, self).__init__()\n \n self.config = config\n \n # Recompute defaults to False and is controlled by Trainer\n self.enable_recompute = False\n \n if not FusedDropoutAdd:\n config.use_fused_dropout_add = False\n\n self.self_attn = MultiHeadAttention(config=config)\n\n if config.tensor_parallel_degree > 1:\n self.linear1 = fleet.meta_parallel.ColumnParallelLinear(\n config.hidden_size,\n config.intermediate_size,\n gather_output=False,\n has_bias=True,\n fuse_matmul_bias=self.config.fused_linear,\n )\n self.linear2 = fleet.meta_parallel.RowParallelLinear(\n config.intermediate_size,\n config.hidden_size,\n input_is_parallel=True,\n has_bias=True,\n fuse_matmul_bias=self.config.fused_linear,\n )\n else:\n self.linear1 = nn.Linear(config.hidden_size, config.intermediate_size, bias_attr=True)\n self.linear2 = nn.Linear(config.intermediate_size, config.hidden_size, bias_attr=True)\n\n self.norm1 = nn.LayerNorm(config.hidden_size, epsilon=1e-5)\n self.norm2 = nn.LayerNorm(config.hidden_size, epsilon=1e-5)\n \n if not config.use_fused_dropout_add:\n self.dropout1 = nn.Dropout(config.hidden_dropout_prob, mode=\"upscale_in_train\")\n self.dropout2 = nn.Dropout(config.hidden_dropout_prob, mode=\"upscale_in_train\")\n else:\n self.fused_dropout_add1 = FusedDropoutAdd(config.hidden_dropout_prob, mode=\"upscale_in_train\")\n self.fused_dropout_add2 = FusedDropoutAdd(config.hidden_dropout_prob, mode=\"upscale_in_train\")\n\n self.activation = getattr(F, 
config.hidden_activation)\n\n def forward(self, tgt, tgt_mask=None, memory=None, use_cache=False, cache=None, output_attentions=False):\n residual = tgt\n\n if self.config.normalize_before:\n tgt = self.norm1(tgt)\n\n if use_cache is False:\n has_gradient = not tgt.stop_gradient\n if self.enable_recompute and self.config.recompute_granularity == \"full_attn\" and has_gradient:\n tgt = recompute(self.self_attn, tgt, None, None, tgt_mask, use_cache, cache, output_attentions, use_reentrant=False)\n else:\n tgt = self.self_attn(tgt, tgt, tgt, tgt_mask, use_cache, cache, output_attentions)\n else:\n tgt, incremental_cache = self.self_attn(tgt, tgt, tgt, tgt_mask, use_cache, cache, output_attentions)\n\n if output_attentions:\n tgt, weights = tgt\n\n current_seed = \"global_seed\"\n if self.training:\n with get_rng_state_tracker().rng_state(current_seed):\n if not self.config.use_fused_dropout_add:\n tgt = residual + self.dropout1(tgt)\n else:\n tgt = self.fused_dropout_add1(tgt, residual)\n else:\n if not self.config.use_fused_dropout_add:\n tgt = residual + self.dropout1(tgt)\n else:\n tgt = self.fused_dropout_add1(tgt, residual)\n\n if not self.config.normalize_before:\n tgt = self.norm1(tgt)\n\n residual = tgt\n if self.config.normalize_before:\n tgt = self.norm2(tgt)\n\n if self.training:\n with get_rng_state_tracker().rng_state(current_seed):\n if not self.config.use_fused_dropout_add:\n tgt = residual + self.linear2(F.gelu(self.linear1(tgt), approximate=True))\n else:\n tgt = self.fused_dropout_add2(self.linear2(F.gelu(self.linear1(tgt), approximate=True)), residual)\n else:\n if not self.config.use_fused_dropout_add:\n tgt = residual + self.linear2(F.gelu(self.linear1(tgt), approximate=True))\n else:\n tgt = self.fused_dropout_add2(self.linear2(F.gelu(self.linear1(tgt), approximate=True)), residual)\n\n if not self.config.normalize_before:\n tgt = self.norm2(tgt)\n\n if output_attentions:\n tgt = (tgt, weights)\n return tgt if use_cache is False else (tgt, incremental_cache)\n\n def gen_cache(self, memory):\n incremental_cache = self.self_attn.gen_cache(memory, type=self.self_attn.Cache)\n return incremental_cache\n\n\nclass GPTEmbeddings(nn.Layer):\n \"\"\"\n Include embeddings from word, position and token_type embeddings\n \"\"\"\n\n def __init__(self, config,):\n super(GPTEmbeddings, self).__init__()\n\n self.config = config\n\n if config.tensor_parallel_degree > 1:\n self.word_embeddings = fleet.meta_parallel.VocabParallelEmbedding(\n config.vocab_size,\n config.hidden_size,\n )\n else:\n self.word_embeddings = nn.Embedding(\n config.vocab_size,\n config.hidden_size,\n )\n\n self.position_embeddings = nn.Embedding(\n config.max_position_embeddings,\n config.hidden_size,\n )\n\n self.dropout = nn.Dropout(config.hidden_dropout_prob)\n\n def forward(self, input_ids, position_ids=None):\n if position_ids is None:\n ones = paddle.ones_like(input_ids, dtype=\"int64\")\n seq_length = paddle.cumsum(ones, axis=-1)\n position_ids = seq_length - ones\n\n input_embedings = self.word_embeddings(input_ids)\n position_embeddings = self.position_embeddings(position_ids)\n embeddings = input_embedings + position_embeddings\n embeddings = self.dropout(embeddings)\n\n return embeddings\n\n\nclass GPTPretrainedModel(PretrainedModel):\n \"\"\"\n An abstract class for pretrained GPT models. It provides GPT related\n `model_config_file`, `resource_files_names`, `pretrained_resource_files_map`,\n `pretrained_init_configuration`, `base_model_prefix` for downloading and\n loading pretrained models. 
See `PretrainedModel` for more details.\n \"\"\"\n\n model_config_file = \"model_config.json\"\n resource_files_names = {\"model_state\": \"model_state.pdparams\"}\n base_model_prefix = \"gpt\"\n config_class = GPTConfig\n pretrained_init_configuration = GPT_PRETRAINED_INIT_CONFIGURATION\n pretrained_resource_files_map = GPT_PRETRAINED_RESOURCE_FILES_MAP\n\n @classmethod\n def _get_tensor_parallel_mappings(cls, config, is_split=True):\n\n from paddlenlp.transformers.conversion_utils import split_or_merge_func\n\n fn = split_or_merge_func(\n is_split=is_split,\n tensor_parallel_degree=config.tensor_parallel_degree,\n tensor_parallel_rank=config.tensor_parallel_rank,\n num_attention_heads=config.num_attention_heads,\n )\n\n def get_tensor_parallel_split_mappings(num_layers):\n final_actions = {}\n base_actions = {\n # Column Linear\n \"layers.0.linear1.weight\": partial(fn, is_column=True),\n \"layers.0.linear1.bias\": partial(fn, is_column=True),\n # Row Linear\n \"word_embeddings.weight\": partial(fn, is_column=False),\n \"layers.0.self_attn.out_proj.weight\": partial(fn, is_column=False),\n \"layers.0.linear2.weight\": partial(fn, is_column=False),\n }\n\n if config.fuse_attention_qkv:\n base_actions[\"layers.0.self_attn.qkv_proj.weight\"] = partial(fn, is_column=True)\n base_actions[\"layers.0.self_attn.qkv_proj.bias\"] = partial(fn, is_column=True)\n else:\n base_actions[\"layers.0.self_attn.q_proj.weight\"] = partial(fn, is_column=True)\n base_actions[\"layers.0.self_attn.k_proj.weight\"] = partial(fn, is_column=True)\n base_actions[\"layers.0.self_attn.v_proj.weight\"] = partial(fn, is_column=True)\n base_actions[\"layers.0.self_attn.q_proj.bias\"] = partial(fn, is_column=True)\n base_actions[\"layers.0.self_attn.k_proj.bias\"] = partial(fn, is_column=True)\n base_actions[\"layers.0.self_attn.v_proj.bias\"] = partial(fn, is_column=True)\n\n for key, action in base_actions.items():\n if \"layers.0.\" in key:\n for i in range(num_layers):\n final_actions[key.replace(\"layers.0.\", f\"layers.{i}.\")] = action\n final_actions[key] = action\n\n return final_actions\n\n mappings = get_tensor_parallel_split_mappings(config.num_hidden_layers)\n\n return mappings\n\n def _init_weights(self, layer):\n \"\"\"Initialization hook\"\"\"\n if isinstance(\n layer,\n (\n nn.Linear,\n nn.Embedding,\n fleet.meta_parallel.VocabParallelEmbedding,\n fleet.meta_parallel.ColumnParallelLinear,\n fleet.meta_parallel.RowParallelLinear,\n ),\n ):\n # In the dygraph mode, use the `set_value` to reset the parameter directly,\n # and reset the `state_dict` to update parameter in static mode.\n if isinstance(layer.weight, paddle.Tensor):\n layer.weight.set_value(\n paddle.tensor.normal(\n mean=0.0,\n std=self.config.initializer_range,\n shape=layer.weight.shape,\n )\n )\n # Layer.apply is DFS https://github.com/PaddlePaddle/Paddle/blob/a6f5021fcc58b21f4414bae6bf4731ef6971582c/python/paddle/nn/layer/layers.py#L527-L530\n # sublayer is init first\n # scale RowParallelLinear weight\n with paddle.no_grad():\n if isinstance(layer, TransformerDecoderLayer):\n factor = 1 / math.sqrt(2 * self.config.num_hidden_layers)\n layer.linear2.weight.scale_(factor)\n if isinstance(layer, MultiHeadAttention):\n factor = 1 / math.sqrt(2 * self.config.num_hidden_layers)\n layer.out_proj.weight.scale_(factor)\n\n\n@register_base_model\nclass GPTModel(GPTPretrainedModel):\n \"\"\"\n The base model of gpt.\n \"\"\"\n\n def __init__(self, config: GPTConfig):\n super(GPTModel, self).__init__(config)\n\n self.config = config\n\n 
self.embeddings = GPTEmbeddings(config)\n\n decoder_layers = nn.LayerList()\n for i in range(config.num_hidden_layers):\n decoder_layers.append(TransformerDecoderLayer(config))\n\n self.decoder = TransformerDecoder(\n config,\n decoder_layers,\n )\n\n def forward(self, input_ids, position_ids=None, attention_mask=None, use_cache=False, cache=None, output_attentions=False):\n if position_ids is None:\n past_length = 0\n if cache is not None:\n past_length = paddle.shape(attention_mask)[-1] - 1\n position_ids = paddle.arange(past_length, paddle.shape(input_ids)[-1] + past_length, dtype=\"int64\")\n position_ids = position_ids.unsqueeze(0)\n input_shape = paddle.shape(input_ids)\n position_ids = paddle.expand(position_ids, input_shape)\n embedding_output = self.embeddings(input_ids=input_ids, position_ids=position_ids)\n\n if not self.config.fused_softmax_with_triangular or not paddle.is_compiled_with_cuda():\n # TODO, use registered buffer\n causal_mask = paddle.tensor.tril(\n paddle.ones((paddle.shape(input_ids)[-1], paddle.shape(input_ids)[-1]), dtype=\"int64\"),\n )\n if attention_mask is not None:\n if attention_mask.dtype != paddle.int64:\n attention_mask = paddle.cast(attention_mask, dtype=paddle.int64)\n if len(attention_mask.shape) == 2:\n attention_mask = attention_mask[:, None, None, :]\n attention_mask = (1.0 - (attention_mask & causal_mask)) * -1e4\n else:\n attention_mask = (1.0 - causal_mask) * -1e4\n\n encoder_outputs = self.decoder(\n embedding_output,\n memory=None,\n tgt_mask=None\n if (self.config.fused_softmax_with_triangular and self.training)\n else attention_mask, # use softmax_mask_fuse_upper_triangle\n use_cache=use_cache,\n cache=cache,\n output_attentions=output_attentions,\n )\n return encoder_outputs\n\n\nclass GPTPretrainingCriterion(paddle.nn.Layer):\n \"\"\"\n Criterion for GPT.\n\n It calculates the final loss.\n \"\"\"\n\n def __init__(self, config):\n super(GPTPretrainingCriterion, self).__init__()\n self.config = config\n if config.tensor_parallel_degree > 1 and config.tensor_parallel_output:\n self.loss_func = fleet.meta_parallel.ParallelCrossEntropy(ignore_index=config.ignore_index)\n else:\n self.loss_func = paddle.nn.CrossEntropyLoss(reduction=\"none\", ignore_index=config.ignore_index)\n\n def forward(self, prediction_scores, masked_lm_labels, loss_mask=None):\n\n if self.config.lm_shift_labels:\n # Shift so that tokens < n predict n\n prediction_scores = prediction_scores[..., :-1, :]\n masked_lm_labels = masked_lm_labels[..., 1:]\n\n with paddle.amp.auto_cast(False):\n masked_lm_loss = self.loss_func(prediction_scores.astype(\"float32\"), masked_lm_labels.unsqueeze(2))\n masked_lm_loss = masked_lm_loss[masked_lm_loss > 0].astype(\"float32\")\n loss = paddle.mean(masked_lm_loss)\n\n return loss\n\n\nclass GPTForCausalLM(GPTPretrainedModel):\n \"\"\"\n The GPT Model with a `language modeling` head on top.\n Args:\n gpt (:class:`GPTModel`):\n An instance of :class:`GPTModel`.\n \"\"\"\n\n def __init__(self, config: GPTConfig):\n super(GPTForCausalLM, self).__init__(config)\n self.config = config\n self.gpt = GPTModel(config)\n self.criterion = GPTPretrainingCriterion(config)\n\n def forward(\n self,\n input_ids=None,\n position_ids=None,\n attention_mask=None,\n inputs_embeds=None,\n use_cache=False,\n cache=None,\n labels=None,\n output_attentions=None,\n return_dict=False,\n ):\n r\"\"\"\n Args:\n input_ids (Tensor, optional):\n See :class:`GPTModel`.\n position_ids (Tensor, optional):\n See :class:`GPTModel`.\n attention_mask (Tensor, optional):\n 
See :class:`GPTModel`.\n inputs_embeds (Tensor, optional):\n See :class:`GPTModel`.\n use_cache (bool, optional):\n See :class:`GPTModel`.\n cache (Tensor, optional):\n See :class:`GPTModel`.\n labels (paddle.Tensor, optional):\n A Tensor of shape `(batch_size, sequence_length)`.\n Labels for language modeling. Note that the labels are shifted inside the model, i.e. you can set\n `labels = input_ids` Indices are selected in `[-100, 0, ..., vocab_size]` All labels set to `-100`\n are ignored (masked), the loss is only computed for labels in `[0, ..., vocab_size]`\n Defaults to None.\n output_attentions (bool, optional):\n See :class:`GPTModel`.\n output_hidden_states (bool, optional):\n See :class:`GPTModel`.\n return_dict (bool, optional):\n See :class:`GPTModel`.\n Returns:\n An instance of :class:`~paddlenlp.transformers.model_outputs.BaseModelOutputWithPastAndCrossAttentions` if\n `return_dict=True`. Otherwise it returns a tuple of tensors corresponding\n to ordered and not None (depending on the input arguments) fields of\n :class:`~paddlenlp.transformers.model_outputs.BaseModelOutputWithPastAndCrossAttentions`.\n Especialy, when `return_dict=use_cache=output_attentions=output_hidden_states=False`,\n returns a tensor `logits` which is the output of the gpt model.\n \"\"\"\n input_type = type(input_ids) if input_ids is not None else type(inputs_embeds)\n output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions\n outputs = self.gpt(\n input_ids,\n position_ids=position_ids,\n attention_mask=attention_mask,\n use_cache=use_cache,\n cache=cache,\n output_attentions=output_attentions,\n # output_hidden_states=output_hidden_states,\n # return_dict=return_dict,\n )\n if isinstance(outputs, input_type):\n hidden_states = outputs\n else:\n hidden_states = outputs[0]\n\n tensor_parallel_output = (\n self.config.tensor_parallel_output and labels is not None and self.config.tensor_parallel_degree > 1\n )\n logits = parallel_matmul(hidden_states, self.gpt.embeddings.word_embeddings.weight, tensor_parallel_output)\n\n loss = None\n if labels is not None:\n loss = self.criterion(logits, labels)\n return loss\n\n # outputs = [output, all_hidden_states, new_caches, all_self_attentions]\n if not return_dict:\n if isinstance(outputs, input_type):\n return (loss, logits) if loss is not None else logits\n\n outputs = (logits,) + outputs[1:]\n return ((loss,) + outputs) if loss is not None else outputs\n\n return CausalLMOutputWithCrossAttentions(\n loss=loss,\n logits=logits,\n past_key_values=outputs.past_key_values,\n hidden_states=outputs.hidden_states,\n attentions=outputs.attentions,\n cross_attentions=outputs.cross_attentions,\n )\n\n def prepare_inputs_for_generation(self, input_ids, use_cache=False, cache=None, **kwargs):\n # only last token for inputs_ids if cache is defined in kwargs\n position_ids = kwargs.get(\"position_ids\", None)\n attention_mask = kwargs.get(\"attention_mask\", None)\n if attention_mask is not None and attention_mask.ndim == 4:\n attention_mask = attention_mask[:, -1:, -1:, :]\n if cache is not None:\n input_ids = input_ids[:, -1].unsqueeze(-1)\n if position_ids is not None:\n position_ids = position_ids[:, -1].unsqueeze(-1)\n return {\n \"input_ids\": input_ids,\n \"position_ids\": position_ids,\n \"attention_mask\": attention_mask,\n \"use_cache\": use_cache,\n \"cache\": cache,\n }\n\n @staticmethod\n def prepare_attention_mask_for_generation(input_ids, pad_token_id, eos_token_id):\n is_pad_token_in_inputs_ids = 
(pad_token_id is not None) and paddle.any(\n input_ids == pad_token_id\n ).numpy().item()\n is_pad_token_not_equal_to_eos_token_id = (eos_token_id is None) or (\n (eos_token_id is not None) and (pad_token_id != eos_token_id)\n )\n if is_pad_token_in_inputs_ids and is_pad_token_not_equal_to_eos_token_id:\n attention_mask = (input_ids != pad_token_id).astype(\"int64\")\n else:\n attention_mask = paddle.ones_like(input_ids, dtype=\"int64\")\n return paddle.unsqueeze(attention_mask, axis=[1, 2])\n", "path": "llm/gpt-3/modeling.py" } ]
[ { "content": "# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport collections\nimport math\nfrom functools import partial\n\nimport numpy as np\nimport paddle\nimport paddle.incubate as incubate\nimport paddle.nn as nn\nimport paddle.nn.functional as F\nimport paddle.tensor as tensor\nfrom configuration import (\n GPT_PRETRAINED_INIT_CONFIGURATION,\n GPT_PRETRAINED_RESOURCE_FILES_MAP,\n GPTConfig,\n)\nfrom paddle.distributed import fleet\nfrom paddle.distributed.fleet.meta_parallel import get_rng_state_tracker\nfrom paddle.distributed.fleet.utils import recompute\nfrom paddle.fluid import layers\nfrom paddle.nn.layer.transformer import _convert_param_attr_to_list\n\nfrom paddlenlp.transformers import PretrainedModel, register_base_model\nfrom paddlenlp.transformers.model_outputs import CausalLMOutputWithCrossAttentions\n\ntry:\n from paddle.nn.functional.flash_attention import flash_attention\nexcept:\n flash_attention = None\ntry:\n from paddle.incubate.nn.layer.fused_dropout_add import FusedDropoutAdd\nexcept:\n FusedDropoutAdd = None\n\n\ndef get_triangle_upper_mask(x, mask):\n if mask is not None:\n return mask\n if paddle.is_compiled_with_xpu():\n # xpu does not support set constant to -np.inf\n mask = paddle.full_like(x, -1e4)\n else:\n mask = paddle.full_like(x, -np.inf)\n mask.stop_gradient = True\n mask = paddle.triu(mask, diagonal=1)\n mask.stop_gradient = True\n return mask\n\n\ndef parallel_matmul(x, y, tensor_parallel_output=True):\n is_fleet_init = True\n tensor_parallel_degree = 1\n try:\n hcg = fleet.get_hybrid_communicate_group()\n model_parallel_group = hcg.get_model_parallel_group()\n tensor_parallel_degree = hcg.get_model_parallel_world_size()\n except:\n is_fleet_init = False\n\n if is_fleet_init and tensor_parallel_degree > 1 and y.is_distributed:\n # if not running under distributed.launch, it will raise AttributeError: 'Fleet' object has no attribute '_hcg'\n input_parallel = paddle.distributed.collective._c_identity(x, group=model_parallel_group)\n logits = paddle.matmul(input_parallel, y, transpose_y=True)\n\n if tensor_parallel_output:\n return logits\n\n return paddle.distributed.collective._c_concat(logits, group=model_parallel_group)\n\n else:\n logits = paddle.matmul(x, y, transpose_y=True)\n return logits\n\n\nclass MultiHeadAttention(nn.Layer):\n \"\"\"\n Attention mapps queries and a set of key-value pairs to outputs, and\n Multi-Head Attention performs multiple parallel attention to jointly attending\n to information from different representation subspaces.\n\n \"\"\"\n\n Cache = collections.namedtuple(\"Cache\", [\"k\", \"v\"])\n StaticCache = collections.namedtuple(\"StaticCache\", [\"k\", \"v\"])\n\n def __init__(self, config,):\n super(MultiHeadAttention, self).__init__()\n\n self.config = config\n\n # Recompute defaults to False and is controlled by Trainer\n self.enable_recompute = False\n\n self.use_flash_attention = config.use_flash_attention if flash_attention else None\n\n 
self.head_dim = config.hidden_size // config.num_attention_heads\n assert self.head_dim * config.num_attention_heads == config.hidden_size, \"hidden_size must be divisible by num_attention_heads\"\n\n self.num_attention_heads = config.num_attention_heads # default, without tensor parallel\n if config.tensor_parallel_degree > 1:\n assert config.num_attention_heads % config.tensor_parallel_degree == 0\n self.num_attention_heads = config.num_attention_heads // config.tensor_parallel_degree\n\n if config.fuse_attention_qkv:\n self.qkv_proj = fleet.meta_parallel.ColumnParallelLinear(\n config.hidden_size,\n 3 * config.hidden_size,\n has_bias=True,\n gather_output=False,\n fuse_matmul_bias=config.fused_linear,\n )\n else:\n self.q_proj = fleet.meta_parallel.ColumnParallelLinear(\n config.hidden_size,\n config.hidden_size,\n has_bias=True,\n gather_output=False,\n fuse_matmul_bias=config.fused_linear,\n )\n\n self.k_proj = fleet.meta_parallel.ColumnParallelLinear(\n config.hidden_size,\n config.hidden_size,\n has_bias=True,\n gather_output=False,\n fuse_matmul_bias=config.fused_linear,\n )\n\n self.v_proj = fleet.meta_parallel.ColumnParallelLinear(\n config.hidden_size,\n config.hidden_size,\n has_bias=True,\n gather_output=False,\n fuse_matmul_bias=config.fused_linear,\n )\n\n self.out_proj = fleet.meta_parallel.RowParallelLinear(\n config.hidden_size,\n config.hidden_size,\n has_bias=True,\n input_is_parallel=True,\n fuse_matmul_bias=config.fused_linear,\n )\n else:\n if self.config.fuse_attention_qkv:\n self.qkv_proj = nn.Linear(config.hidden_size, 3 * config.hidden_size, bias_attr=True)\n else:\n self.q_proj = nn.Linear(config.hidden_size, config.hidden_size, bias_attr=True)\n self.k_proj = nn.Linear(config.hidden_size, config.hidden_size, bias_attr=True)\n self.v_proj = nn.Linear(config.hidden_size, config.hidden_size, bias_attr=True)\n\n self.out_proj = nn.Linear(config.hidden_size, config.hidden_size, bias_attr=True)\n\n def _fuse_prepare_qkv(self, query, use_cache=False, cache=None):\n mix_layer = self.qkv_proj(query)\n mix_layer = paddle.reshape_(mix_layer, [0, 0, -1, 3 * self.head_dim])\n q, k, v = paddle.split(mix_layer, num_or_sections=3, axis=-1)\n\n assert not isinstance(cache, self.StaticCache), \"cache currently does not support the StaticCache type\"\n\n if isinstance(cache, self.Cache):\n # for decoder self-attention in inference\n k = tensor.concat([cache.k, k], axis=1)\n v = tensor.concat([cache.v, v], axis=1)\n if use_cache is True:\n cache = self.Cache(k, v)\n\n return (q, k, v, cache) if use_cache else (q, k, v, None)\n\n def _prepare_qkv(self, query, key, value, use_cache=False, cache=None):\n r\"\"\"\n Prapares linear projected queries, keys and values for usage of subsequnt\n multiple parallel attention. 
If `cache` is not None, using cached results\n to reduce redundant calculations.\n\n \"\"\"\n q = self.q_proj(query)\n q = tensor.reshape(x=q, shape=[0, 0, -1, self.head_dim])\n\n if isinstance(cache, self.StaticCache):\n # for encoder-decoder attention in inference and has cached\n k, v = cache.k, cache.v\n else:\n k, v = self.compute_kv(key, value)\n\n if isinstance(cache, self.Cache):\n # for decoder self-attention in inference\n k = tensor.concat([cache.k, k], axis=1)\n v = tensor.concat([cache.v, v], axis=1)\n if use_cache is True:\n cache = self.Cache(k, v)\n\n return (q, k, v, cache) if use_cache else (q, k, v, None)\n\n def compute_kv(self, key, value):\n r\"\"\"\n Applies linear projection on input keys and values, then splits heads\n (reshape and transpose) to get keys and values from different representation\n subspaces. The results are used as key-values pairs for subsequent multiple\n parallel attention.\n\n It is part of calculations in multi-head attention, and is provided as\n a method to pre-compute and prefetch these results, thus we can use them\n to construct cache for inference.\n\n \"\"\"\n k = self.k_proj(key)\n v = self.v_proj(value)\n k = tensor.reshape(x=k, shape=[0, 0, -1, self.head_dim])\n v = tensor.reshape(x=v, shape=[0, 0, -1, self.head_dim])\n return k, v\n\n def gen_cache(self, key, value=None, type=Cache):\n \"\"\"\n Generates cache for `forward` usage in inference accroding to arguments.\n The generated cache is an instance of `MultiHeadAttention.Cache` or an\n instance of `MultiHeadAttention.StaticCache`.\n \"\"\"\n if type == MultiHeadAttention.StaticCache: # static_kv\n k, v = self.compute_kv(key, value)\n return self.StaticCache(k, v)\n elif value is None: # incremental_state\n k = layers.fill_constant_batch_size_like(\n input=key, shape=[-1, self.num_attention_heads, 0, self.head_dim], dtype=key.dtype, value=0\n )\n v = layers.fill_constant_batch_size_like(\n input=key, shape=[-1, self.num_attention_heads, 0, self.head_dim], dtype=key.dtype, value=0\n )\n return self.Cache(k, v)\n else:\n # incremental_state with initial value, mainly for usage like UniLM\n return self.Cache(key, value)\n\n def _flash_attention(self, q, k, v, attn_mask=None, output_attentions=False):\n out, weights = flash_attention(\n q, k, v, self.config.hidden_dropout_prob, causal=True, return_softmax=output_attentions, training=self.training\n )\n out = tensor.reshape(x=out, shape=[0, 0, out.shape[2] * out.shape[3]])\n return (out, weights) if output_attentions else out\n\n def core_attn(self, q, k, v, attn_mask=None, output_attentions=False):\n perm = [0, 2, 1, 3]\n q = tensor.transpose(x=q, perm=perm)\n k = tensor.transpose(x=k, perm=perm)\n v = tensor.transpose(x=v, perm=perm)\n\n # scale dot product attention\n\n scale_qk_coeff = self.config.scale_qk_coeff * self.head_dim**0.5\n product = paddle.matmul(x=q.scale(1.0 / scale_qk_coeff), y=k, transpose_y=True)\n\n if self.config.scale_qk_coeff != 1.0:\n product = product.scale(self.config.scale_qk_coeff)\n\n # softmax_mask_fuse_upper_triangle is not supported sif paddle is not compiled with cuda/rocm\n if not paddle.is_compiled_with_cuda():\n attn_mask = get_triangle_upper_mask(product, attn_mask)\n\n if attn_mask is not None:\n product = product + attn_mask\n weights = F.softmax(product)\n else:\n weights = incubate.softmax_mask_fuse_upper_triangle(product)\n\n if self.config.hidden_dropout_prob:\n if self.training:\n with get_rng_state_tracker().rng_state(\"local_seed\"):\n weights = F.dropout(weights, 
self.config.hidden_dropout_prob, training=self.training, mode=\"upscale_in_train\")\n else:\n weights = F.dropout(weights, self.config.hidden_dropout_prob, training=self.training, mode=\"upscale_in_train\")\n\n out = paddle.matmul(weights, v)\n\n # combine heads\n out = tensor.transpose(out, perm=[0, 2, 1, 3])\n out = tensor.reshape(x=out, shape=[0, 0, -1])\n\n return (out, weights) if output_attentions else out\n\n def forward(self, query, key, value, attn_mask=None, use_cache=False, cache=None, output_attentions=False):\n r\"\"\"\n Applies multi-head attention to map queries and a set of key-value pairs\n to outputs.\n \"\"\"\n key = query if key is None else key\n value = query if value is None else value\n # compute q ,k ,v\n if self.config.fuse_attention_qkv:\n q, k, v, cache = self._fuse_prepare_qkv(query, use_cache, cache)\n else:\n q, k, v, cache = self._prepare_qkv(query, key, value, use_cache, cache)\n\n if self.use_flash_attention and attn_mask is None:\n attn_func = self._flash_attention\n else:\n attn_func = self.core_attn\n has_gradient = (not q.stop_gradient) or (not k.stop_gradient) or (not v.stop_gradient)\n if self.enable_recompute and self.config.recompute_granularity == \"core_attn\" and has_gradient:\n out = recompute(attn_func, q, k, v, attn_mask, output_attentions, use_reentrant=False)\n else:\n out = attn_func(q, k, v, attn_mask=attn_mask, output_attentions=output_attentions)\n\n if output_attentions:\n out, weights = out\n\n # project to output\n out = self.out_proj(out)\n\n outs = [out]\n if output_attentions:\n outs.append(weights)\n if use_cache:\n outs.append(cache)\n return out if len(outs) == 1 else tuple(outs)\n\n\nclass TransformerDecoder(nn.Layer):\n \"\"\"\n TransformerDecoder is a stack of N decoder layers.\n \"\"\"\n\n def __init__(\n self,\n config,\n decoder_layers,\n ):\n super(TransformerDecoder, self).__init__()\n\n self.config = config\n self.layers = decoder_layers\n self.norm = nn.LayerNorm(config.hidden_size, epsilon=1e-5)\n\n # Recompute defaults to False and is controlled by Trainer\n self.enable_recompute = False\n\n def forward(self, tgt, tgt_mask=None, memory=None, memory_mask=None, use_cache=False, cache=None, output_attentions=False):\n r\"\"\"\n Applies a stack of N Transformer decoder layers on inputs. 
If `norm` is\n provided, also applies layer normalization on the output of last decoder\n layer.\n \"\"\"\n output = tgt\n new_caches = []\n all_self_attentions = [] if output_attentions else None\n\n for i, mod in enumerate(self.layers):\n if cache is None:\n if use_cache:\n output, new_cache = mod(output, tgt_mask=tgt_mask, memory=memory, use_cache=use_cache, cache=cache, output_attentions=output_attentions)\n new_caches.append(new_cache)\n else:\n has_gradient = not output.stop_gradient\n if self.enable_recompute and self.config.recompute_granularity == \"full\" and has_gradient:\n output = recompute(mod, output, tgt_mask, memory, use_cache, cache, output_attentions, use_reentrant=False)\n else:\n output = mod(output, tgt_mask, memory, use_cache, cache, output_attentions)\n\n else:\n output, new_cache = mod(output, tgt_mask=tgt_mask, memory=memory, use_cache=use_cache, cache=cache[i], output_attentions=output_attentions)\n new_caches.append(new_cache)\n\n if output_attentions:\n output, weights = output\n all_self_attentions.append(weights)\n\n if self.norm is not None:\n output = self.norm(output)\n\n outputs = [output]\n if output_attentions:\n outputs.append(all_self_attentions)\n if use_cache:\n outputs.append(new_caches)\n return output if len(outputs) == 1 else tuple(outputs)\n\n def gen_cache(self, memory, do_zip=False):\n r\"\"\"\n Generates cache for `forward` usage. The generated cache is a list, and\n each element in it is a tuple( :code:`(incremental_cache, static_cache)` )\n produced by `TransformerDecoderLayer.gen_cache`. See `TransformerDecoderLayer.gen_cache`\n for more details. If `do_zip` is True, apply `zip` on these tuples to get\n a list with two elements.\n \"\"\"\n cache = [layer.gen_cache(memory) for layer in self.layers]\n if do_zip:\n cache = list(zip(*cache))\n return cache\n\n\nclass TransformerDecoderLayer(nn.Layer):\n \"\"\"\n The transformer decoder layer.\n\n It contains multiheadattention and some linear layers.\n \"\"\"\n\n def __init__(self, config: GPTConfig):\n\n super(TransformerDecoderLayer, self).__init__()\n \n self.config = config\n \n # Recompute defaults to False and is controlled by Trainer\n self.enable_recompute = False\n \n if not FusedDropoutAdd:\n config.use_fused_dropout_add = False\n\n self.self_attn = MultiHeadAttention(config=config)\n\n if config.tensor_parallel_degree > 1:\n self.linear1 = fleet.meta_parallel.ColumnParallelLinear(\n config.hidden_size,\n config.intermediate_size,\n gather_output=False,\n has_bias=True,\n fuse_matmul_bias=self.config.fused_linear,\n )\n self.linear2 = fleet.meta_parallel.RowParallelLinear(\n config.intermediate_size,\n config.hidden_size,\n input_is_parallel=True,\n has_bias=True,\n fuse_matmul_bias=self.config.fused_linear,\n )\n else:\n self.linear1 = nn.Linear(config.hidden_size, config.intermediate_size, bias_attr=True)\n self.linear2 = nn.Linear(config.intermediate_size, config.hidden_size, bias_attr=True)\n\n self.norm1 = nn.LayerNorm(config.hidden_size, epsilon=1e-5)\n self.norm2 = nn.LayerNorm(config.hidden_size, epsilon=1e-5)\n \n if not config.use_fused_dropout_add:\n self.dropout1 = nn.Dropout(config.hidden_dropout_prob, mode=\"upscale_in_train\")\n self.dropout2 = nn.Dropout(config.hidden_dropout_prob, mode=\"upscale_in_train\")\n else:\n self.fused_dropout_add1 = FusedDropoutAdd(config.hidden_dropout_prob, mode=\"upscale_in_train\")\n self.fused_dropout_add2 = FusedDropoutAdd(config.hidden_dropout_prob, mode=\"upscale_in_train\")\n\n self.activation = getattr(F, 
config.hidden_activation)\n\n def forward(self, tgt, tgt_mask=None, memory=None, use_cache=False, cache=None, output_attentions=False):\n residual = tgt\n\n if self.config.normalize_before:\n tgt = self.norm1(tgt)\n\n if use_cache is False:\n has_gradient = not tgt.stop_gradient\n if self.enable_recompute and self.config.recompute_granularity == \"full_attn\" and has_gradient:\n tgt = recompute(self.self_attn, tgt, None, None, tgt_mask, use_cache, cache, output_attentions, use_reentrant=False)\n else:\n tgt = self.self_attn(tgt, tgt, tgt, tgt_mask, use_cache, cache, output_attentions)\n else:\n tgt, incremental_cache = self.self_attn(tgt, tgt, tgt, tgt_mask, use_cache, cache, output_attentions)\n\n if output_attentions:\n tgt, weights = tgt\n\n current_seed = \"global_seed\"\n if self.training:\n with get_rng_state_tracker().rng_state(current_seed):\n if not self.config.use_fused_dropout_add:\n tgt = residual + self.dropout1(tgt)\n else:\n tgt = self.fused_dropout_add1(tgt, residual)\n else:\n if not self.config.use_fused_dropout_add:\n tgt = residual + self.dropout1(tgt)\n else:\n tgt = self.fused_dropout_add1(tgt, residual)\n\n if not self.config.normalize_before:\n tgt = self.norm1(tgt)\n\n residual = tgt\n if self.config.normalize_before:\n tgt = self.norm2(tgt)\n\n if self.training:\n with get_rng_state_tracker().rng_state(current_seed):\n if not self.config.use_fused_dropout_add:\n tgt = residual + self.linear2(F.gelu(self.linear1(tgt), approximate=True))\n else:\n tgt = self.fused_dropout_add2(self.linear2(F.gelu(self.linear1(tgt), approximate=True)), residual)\n else:\n if not self.config.use_fused_dropout_add:\n tgt = residual + self.linear2(F.gelu(self.linear1(tgt), approximate=True))\n else:\n tgt = self.fused_dropout_add2(self.linear2(F.gelu(self.linear1(tgt), approximate=True)), residual)\n\n if not self.config.normalize_before:\n tgt = self.norm2(tgt)\n\n if output_attentions:\n tgt = (tgt, weights)\n return tgt if use_cache is False else (tgt, incremental_cache)\n\n def gen_cache(self, memory):\n incremental_cache = self.self_attn.gen_cache(memory, type=self.self_attn.Cache)\n return incremental_cache\n\n\nclass GPTEmbeddings(nn.Layer):\n \"\"\"\n Include embeddings from word, position and token_type embeddings\n \"\"\"\n\n def __init__(self, config,):\n super(GPTEmbeddings, self).__init__()\n\n self.config = config\n\n if config.tensor_parallel_degree > 1:\n self.word_embeddings = fleet.meta_parallel.VocabParallelEmbedding(\n config.vocab_size,\n config.hidden_size,\n )\n else:\n self.word_embeddings = nn.Embedding(\n config.vocab_size,\n config.hidden_size,\n )\n\n self.position_embeddings = nn.Embedding(\n config.max_position_embeddings,\n config.hidden_size,\n )\n\n self.dropout = nn.Dropout(config.hidden_dropout_prob)\n\n def forward(self, input_ids, position_ids=None):\n if position_ids is None:\n ones = paddle.ones_like(input_ids, dtype=\"int64\")\n seq_length = paddle.cumsum(ones, axis=-1)\n position_ids = seq_length - ones\n\n input_embedings = self.word_embeddings(input_ids)\n position_embeddings = self.position_embeddings(position_ids)\n embeddings = input_embedings + position_embeddings\n embeddings = self.dropout(embeddings)\n\n return embeddings\n\n\nclass GPTPretrainedModel(PretrainedModel):\n \"\"\"\n An abstract class for pretrained GPT models. It provides GPT related\n `model_config_file`, `resource_files_names`, `pretrained_resource_files_map`,\n `pretrained_init_configuration`, `base_model_prefix` for downloading and\n loading pretrained models. 
See `PretrainedModel` for more details.\n \"\"\"\n\n model_config_file = \"model_config.json\"\n resource_files_names = {\"model_state\": \"model_state.pdparams\"}\n base_model_prefix = \"gpt\"\n config_class = GPTConfig\n pretrained_init_configuration = GPT_PRETRAINED_INIT_CONFIGURATION\n pretrained_resource_files_map = GPT_PRETRAINED_RESOURCE_FILES_MAP\n\n @classmethod\n def _get_tensor_parallel_mappings(cls, config, is_split=True):\n\n from paddlenlp.transformers.conversion_utils import split_or_merge_func\n\n fn = split_or_merge_func(\n is_split=is_split,\n tensor_parallel_degree=config.tensor_parallel_degree,\n tensor_parallel_rank=config.tensor_parallel_rank,\n num_attention_heads=config.num_attention_heads,\n )\n\n def get_tensor_parallel_split_mappings(num_layers):\n final_actions = {}\n base_actions = {\n # Column Linear\n \"layers.0.linear1.weight\": partial(fn, is_column=True),\n \"layers.0.linear1.bias\": partial(fn, is_column=True),\n # Row Linear\n \"word_embeddings.weight\": partial(fn, is_column=False),\n \"layers.0.self_attn.out_proj.weight\": partial(fn, is_column=False),\n \"layers.0.linear2.weight\": partial(fn, is_column=False),\n }\n\n if config.fuse_attention_qkv:\n base_actions[\"layers.0.self_attn.qkv_proj.weight\"] = partial(fn, is_column=True)\n base_actions[\"layers.0.self_attn.qkv_proj.bias\"] = partial(fn, is_column=True)\n else:\n base_actions[\"layers.0.self_attn.q_proj.weight\"] = partial(fn, is_column=True)\n base_actions[\"layers.0.self_attn.k_proj.weight\"] = partial(fn, is_column=True)\n base_actions[\"layers.0.self_attn.v_proj.weight\"] = partial(fn, is_column=True)\n base_actions[\"layers.0.self_attn.q_proj.bias\"] = partial(fn, is_column=True)\n base_actions[\"layers.0.self_attn.k_proj.bias\"] = partial(fn, is_column=True)\n base_actions[\"layers.0.self_attn.v_proj.bias\"] = partial(fn, is_column=True)\n\n for key, action in base_actions.items():\n if \"layers.0.\" in key:\n for i in range(num_layers):\n final_actions[key.replace(\"layers.0.\", f\"layers.{i}.\")] = action\n final_actions[key] = action\n\n return final_actions\n\n mappings = get_tensor_parallel_split_mappings(config.num_hidden_layers)\n\n return mappings\n\n def _init_weights(self, layer):\n \"\"\"Initialization hook\"\"\"\n if isinstance(\n layer,\n (\n nn.Linear,\n nn.Embedding,\n fleet.meta_parallel.VocabParallelEmbedding,\n fleet.meta_parallel.ColumnParallelLinear,\n fleet.meta_parallel.RowParallelLinear,\n ),\n ):\n # In the dygraph mode, use the `set_value` to reset the parameter directly,\n # and reset the `state_dict` to update parameter in static mode.\n if isinstance(layer.weight, paddle.Tensor):\n layer.weight.set_value(\n paddle.tensor.normal(\n mean=0.0,\n std=self.config.initializer_range,\n shape=layer.weight.shape,\n )\n )\n # Layer.apply is DFS https://github.com/PaddlePaddle/Paddle/blob/a6f5021fcc58b21f4414bae6bf4731ef6971582c/python/paddle/nn/layer/layers.py#L527-L530\n # sublayer is init first\n # scale RowParallelLinear weight\n with paddle.no_grad():\n if isinstance(layer, TransformerDecoderLayer):\n factor = 1 / math.sqrt(2 * self.config.num_hidden_layers)\n layer.linear2.weight.scale_(factor)\n if isinstance(layer, MultiHeadAttention):\n factor = 1 / math.sqrt(2 * self.config.num_hidden_layers)\n layer.out_proj.weight.scale_(factor)\n\n\n@register_base_model\nclass GPTModel(GPTPretrainedModel):\n \"\"\"\n The base model of gpt.\n \"\"\"\n\n def __init__(self, config: GPTConfig):\n super(GPTModel, self).__init__(config)\n\n self.config = config\n\n 
self.embeddings = GPTEmbeddings(config)\n\n decoder_layers = nn.LayerList()\n for i in range(config.num_hidden_layers):\n decoder_layers.append(TransformerDecoderLayer(config))\n\n self.decoder = TransformerDecoder(\n config,\n decoder_layers,\n )\n\n def forward(self, input_ids, position_ids=None, attention_mask=None, use_cache=False, cache=None, output_attentions=False):\n if position_ids is None:\n past_length = 0\n if cache is not None:\n past_length = paddle.shape(attention_mask)[-1] - 1\n position_ids = paddle.arange(past_length, paddle.shape(input_ids)[-1] + past_length, dtype=\"int64\")\n position_ids = position_ids.unsqueeze(0)\n input_shape = paddle.shape(input_ids)\n position_ids = paddle.expand(position_ids, input_shape)\n embedding_output = self.embeddings(input_ids=input_ids, position_ids=position_ids)\n\n if not self.config.fused_softmax_with_triangular or not paddle.is_compiled_with_cuda():\n # TODO, use registered buffer\n causal_mask = paddle.tensor.tril(\n paddle.ones((paddle.shape(input_ids)[-1], paddle.shape(input_ids)[-1]), dtype=\"int64\"),\n )\n if attention_mask is not None:\n if attention_mask.dtype != paddle.int64:\n attention_mask = paddle.cast(attention_mask, dtype=paddle.int64)\n if len(attention_mask.shape) == 2:\n attention_mask = attention_mask[:, None, None, :]\n attention_mask = (1.0 - (attention_mask & causal_mask)) * -1e4\n else:\n attention_mask = (1.0 - causal_mask) * -1e4\n\n encoder_outputs = self.decoder(\n embedding_output,\n memory=None,\n tgt_mask=None\n if (self.config.fused_softmax_with_triangular and self.training)\n else attention_mask, # use softmax_mask_fuse_upper_triangle\n use_cache=use_cache,\n cache=cache,\n output_attentions=output_attentions,\n )\n return encoder_outputs\n\n\nclass GPTPretrainingCriterion(paddle.nn.Layer):\n \"\"\"\n Criterion for GPT.\n\n It calculates the final loss.\n \"\"\"\n\n def __init__(self, config):\n super(GPTPretrainingCriterion, self).__init__()\n self.config = config\n if config.tensor_parallel_degree > 1 and config.tensor_parallel_output:\n self.loss_func = fleet.meta_parallel.ParallelCrossEntropy(ignore_index=config.ignore_index)\n else:\n self.loss_func = paddle.nn.CrossEntropyLoss(reduction=\"none\", ignore_index=config.ignore_index)\n\n def forward(self, prediction_scores, masked_lm_labels, loss_mask=None):\n\n if self.config.lm_shift_labels:\n # Shift so that tokens < n predict n\n prediction_scores = prediction_scores[..., :-1, :]\n masked_lm_labels = masked_lm_labels[..., 1:]\n\n with paddle.amp.auto_cast(False):\n masked_lm_loss = self.loss_func(prediction_scores.astype(\"float32\"), masked_lm_labels.unsqueeze(2))\n masked_lm_loss = masked_lm_loss[masked_lm_loss > 0].astype(\"float32\")\n loss = paddle.mean(masked_lm_loss)\n\n return loss\n\n\nclass GPTForCausalLM(GPTPretrainedModel):\n \"\"\"\n The GPT Model with a `language modeling` head on top.\n Args:\n gpt (:class:`GPTModel`):\n An instance of :class:`GPTModel`.\n \"\"\"\n\n def __init__(self, config: GPTConfig):\n super(GPTForCausalLM, self).__init__(config)\n self.config = config\n self.gpt = GPTModel(config)\n self.criterion = GPTPretrainingCriterion(config)\n\n def forward(\n self,\n input_ids=None,\n position_ids=None,\n attention_mask=None,\n inputs_embeds=None,\n use_cache=False,\n cache=None,\n labels=None,\n output_attentions=None,\n return_dict=False,\n ):\n r\"\"\"\n Args:\n input_ids (Tensor, optional):\n See :class:`GPTModel`.\n position_ids (Tensor, optional):\n See :class:`GPTModel`.\n attention_mask (Tensor, optional):\n 
See :class:`GPTModel`.\n inputs_embeds (Tensor, optional):\n See :class:`GPTModel`.\n use_cache (bool, optional):\n See :class:`GPTModel`.\n cache (Tensor, optional):\n See :class:`GPTModel`.\n labels (paddle.Tensor, optional):\n A Tensor of shape `(batch_size, sequence_length)`.\n Labels for language modeling. Note that the labels are shifted inside the model, i.e. you can set\n `labels = input_ids` Indices are selected in `[-100, 0, ..., vocab_size]` All labels set to `-100`\n are ignored (masked), the loss is only computed for labels in `[0, ..., vocab_size]`\n Defaults to None.\n output_attentions (bool, optional):\n See :class:`GPTModel`.\n output_hidden_states (bool, optional):\n See :class:`GPTModel`.\n return_dict (bool, optional):\n See :class:`GPTModel`.\n Returns:\n An instance of :class:`~paddlenlp.transformers.model_outputs.BaseModelOutputWithPastAndCrossAttentions` if\n `return_dict=True`. Otherwise it returns a tuple of tensors corresponding\n to ordered and not None (depending on the input arguments) fields of\n :class:`~paddlenlp.transformers.model_outputs.BaseModelOutputWithPastAndCrossAttentions`.\n Especialy, when `return_dict=use_cache=output_attentions=output_hidden_states=False`,\n returns a tensor `logits` which is the output of the gpt model.\n \"\"\"\n input_type = type(input_ids) if input_ids is not None else type(inputs_embeds)\n output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions\n outputs = self.gpt(\n input_ids,\n position_ids=position_ids,\n attention_mask=attention_mask,\n use_cache=use_cache,\n cache=cache,\n output_attentions=output_attentions,\n # output_hidden_states=output_hidden_states,\n # return_dict=return_dict,\n )\n if isinstance(outputs, input_type):\n hidden_states = outputs\n else:\n hidden_states = outputs[0]\n\n tensor_parallel_output = (\n self.config.tensor_parallel_output and labels is not None and self.config.tensor_parallel_degree > 1\n )\n logits = parallel_matmul(hidden_states, self.gpt.embeddings.word_embeddings.weight, tensor_parallel_output)\n\n loss = None\n if labels is not None:\n loss = self.criterion(logits, labels)\n\n # outputs = [output, all_hidden_states, new_caches, all_self_attentions]\n if not return_dict:\n if isinstance(outputs, input_type):\n return (loss, logits) if loss is not None else logits\n\n outputs = (logits,) + outputs[1:]\n return ((loss,) + outputs) if loss is not None else outputs\n\n return CausalLMOutputWithCrossAttentions(\n loss=loss,\n logits=logits,\n past_key_values=outputs.past_key_values,\n hidden_states=outputs.hidden_states,\n attentions=outputs.attentions,\n cross_attentions=outputs.cross_attentions,\n )\n\n def prepare_inputs_for_generation(self, input_ids, use_cache=False, cache=None, **kwargs):\n # only last token for inputs_ids if cache is defined in kwargs\n position_ids = kwargs.get(\"position_ids\", None)\n attention_mask = kwargs.get(\"attention_mask\", None)\n if attention_mask is not None and attention_mask.ndim == 4:\n attention_mask = attention_mask[:, -1:, -1:, :]\n if cache is not None:\n input_ids = input_ids[:, -1].unsqueeze(-1)\n if position_ids is not None:\n position_ids = position_ids[:, -1].unsqueeze(-1)\n return {\n \"input_ids\": input_ids,\n \"position_ids\": position_ids,\n \"attention_mask\": attention_mask,\n \"use_cache\": use_cache,\n \"cache\": cache,\n }\n\n @staticmethod\n def prepare_attention_mask_for_generation(input_ids, pad_token_id, eos_token_id):\n is_pad_token_in_inputs_ids = (pad_token_id is not 
None) and paddle.any(\n input_ids == pad_token_id\n ).numpy().item()\n is_pad_token_not_equal_to_eos_token_id = (eos_token_id is None) or (\n (eos_token_id is not None) and (pad_token_id != eos_token_id)\n )\n if is_pad_token_in_inputs_ids and is_pad_token_not_equal_to_eos_token_id:\n attention_mask = (input_ids != pad_token_id).astype(\"int64\")\n else:\n attention_mask = paddle.ones_like(input_ids, dtype=\"int64\")\n return paddle.unsqueeze(attention_mask, axis=[1, 2])\n", "path": "llm/gpt-3/modeling.py" } ]
diff --git a/llm/gpt-3/modeling.py b/llm/gpt-3/modeling.py index 968b48f33c75..3c45721c01d3 100644 --- a/llm/gpt-3/modeling.py +++ b/llm/gpt-3/modeling.py @@ -827,7 +827,6 @@ def forward( loss = None if labels is not None: loss = self.criterion(logits, labels) - return loss # outputs = [output, all_hidden_states, new_caches, all_self_attentions] if not return_dict:
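With the early `return loss` removed in the diff above, `GPTForCausalLM.forward` falls through to the normal output handling, so a caller that passes `labels` gets the logits alongside the loss instead of the loss alone. A minimal caller-side sketch (hypothetical variable names, not part of the repository):

```py
# Hypothetical illustration of the changed return value (return_dict=False).
# Before the patch: forward() returned only the scalar loss when labels was given.
# After the patch: outputs[0] is the loss and outputs[1] is the logits tensor.
outputs = model(input_ids=input_ids, labels=input_ids, return_dict=False)
loss, logits = outputs[0], outputs[1]
loss.backward()
```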
aio-libs-abandoned__aioredis-py-535
Add a BUSYGROUP reply error The XGROUP CREATE command can return a BUSYGROUP error when a group already exists: https://redis.io/commands/xgroup I think the `ReplyError` subclass for matching it would look like this: ```py class BusyGroupError(ReplyError): MATCH_REPLY = "BUSYGROUP Consumer Group name already exists" ```
[ { "content": "__all__ = [\n 'RedisError',\n 'ProtocolError',\n 'ReplyError',\n 'MaxClientsError',\n 'AuthError',\n 'PipelineError',\n 'MultiExecError',\n 'WatchVariableError',\n 'ChannelClosedError',\n 'ConnectionClosedError',\n 'ConnectionForcedCloseError',\n 'PoolClosedError',\n 'MasterNotFoundError',\n 'SlaveNotFoundError',\n 'ReadOnlyError',\n ]\n\n\nclass RedisError(Exception):\n \"\"\"Base exception class for aioredis exceptions.\"\"\"\n\n\nclass ProtocolError(RedisError):\n \"\"\"Raised when protocol error occurs.\"\"\"\n\n\nclass ReplyError(RedisError):\n \"\"\"Raised for redis error replies (-ERR).\"\"\"\n\n MATCH_REPLY = None\n\n def __new__(cls, msg, *args):\n for klass in cls.__subclasses__():\n if msg and klass.MATCH_REPLY and msg.startswith(klass.MATCH_REPLY):\n return klass(msg, *args)\n return super().__new__(cls, msg, *args)\n\n\nclass MaxClientsError(ReplyError):\n \"\"\"Raised for redis server when the maximum number of client has been\n reached.\"\"\"\n\n MATCH_REPLY = \"ERR max number of clients reached\"\n\n\nclass AuthError(ReplyError):\n \"\"\"Raised when authentication errors occurs.\"\"\"\n\n MATCH_REPLY = (\"NOAUTH \", \"ERR invalid password\")\n\n\nclass PipelineError(RedisError):\n \"\"\"Raised if command within pipeline raised error.\"\"\"\n\n def __init__(self, errors):\n super().__init__('{} errors:'.format(self.__class__.__name__), errors)\n\n\nclass MultiExecError(PipelineError):\n \"\"\"Raised if command within MULTI/EXEC block caused error.\"\"\"\n\n\nclass WatchVariableError(MultiExecError):\n \"\"\"Raised if watched variable changed (EXEC returns None).\"\"\"\n\n\nclass ChannelClosedError(RedisError):\n \"\"\"Raised when Pub/Sub channel is unsubscribed and messages queue is empty.\n \"\"\"\n\n\nclass ReadOnlyError(RedisError):\n \"\"\"Raised from slave when read-only mode is enabled\"\"\"\n\n\nclass MasterNotFoundError(RedisError):\n \"\"\"Raised for sentinel master not found error.\"\"\"\n\n\nclass SlaveNotFoundError(RedisError):\n \"\"\"Raised for sentinel slave not found error.\"\"\"\n\n\nclass MasterReplyError(RedisError):\n \"\"\"Raised by sentinel client for master error replies.\"\"\"\n\n\nclass SlaveReplyError(RedisError):\n \"\"\"Raised by sentinel client for slave error replies.\"\"\"\n\n\nclass ConnectionClosedError(RedisError):\n \"\"\"Raised if connection to server was closed.\"\"\"\n\n\nclass ConnectionForcedCloseError(ConnectionClosedError):\n \"\"\"Raised if connection was closed with .close() method.\"\"\"\n\n\nclass PoolClosedError(RedisError):\n \"\"\"Raised if pool is closed.\"\"\"\n", "path": "aioredis/errors.py" } ]
[ { "content": "__all__ = [\n 'RedisError',\n 'ProtocolError',\n 'ReplyError',\n 'MaxClientsError',\n 'AuthError',\n 'PipelineError',\n 'MultiExecError',\n 'WatchVariableError',\n 'ChannelClosedError',\n 'ConnectionClosedError',\n 'ConnectionForcedCloseError',\n 'PoolClosedError',\n 'MasterNotFoundError',\n 'SlaveNotFoundError',\n 'ReadOnlyError',\n ]\n\n\nclass RedisError(Exception):\n \"\"\"Base exception class for aioredis exceptions.\"\"\"\n\n\nclass ProtocolError(RedisError):\n \"\"\"Raised when protocol error occurs.\"\"\"\n\n\nclass ReplyError(RedisError):\n \"\"\"Raised for redis error replies (-ERR).\"\"\"\n\n MATCH_REPLY = None\n\n def __new__(cls, msg, *args):\n for klass in cls.__subclasses__():\n if msg and klass.MATCH_REPLY and msg.startswith(klass.MATCH_REPLY):\n return klass(msg, *args)\n return super().__new__(cls, msg, *args)\n\n\nclass MaxClientsError(ReplyError):\n \"\"\"Raised for redis server when the maximum number of client has been\n reached.\"\"\"\n\n MATCH_REPLY = \"ERR max number of clients reached\"\n\n\nclass AuthError(ReplyError):\n \"\"\"Raised when authentication errors occurs.\"\"\"\n\n MATCH_REPLY = (\"NOAUTH \", \"ERR invalid password\")\n\n\nclass BusyGroupError(ReplyError):\n \"\"\"Raised if Consumer Group name already exists.\"\"\"\n\n MATCH_REPLY = \"BUSYGROUP Consumer Group name already exists\"\n\n\nclass PipelineError(RedisError):\n \"\"\"Raised if command within pipeline raised error.\"\"\"\n\n def __init__(self, errors):\n super().__init__('{} errors:'.format(self.__class__.__name__), errors)\n\n\nclass MultiExecError(PipelineError):\n \"\"\"Raised if command within MULTI/EXEC block caused error.\"\"\"\n\n\nclass WatchVariableError(MultiExecError):\n \"\"\"Raised if watched variable changed (EXEC returns None).\"\"\"\n\n\nclass ChannelClosedError(RedisError):\n \"\"\"Raised when Pub/Sub channel is unsubscribed and messages queue is empty.\n \"\"\"\n\n\nclass ReadOnlyError(RedisError):\n \"\"\"Raised from slave when read-only mode is enabled\"\"\"\n\n\nclass MasterNotFoundError(RedisError):\n \"\"\"Raised for sentinel master not found error.\"\"\"\n\n\nclass SlaveNotFoundError(RedisError):\n \"\"\"Raised for sentinel slave not found error.\"\"\"\n\n\nclass MasterReplyError(RedisError):\n \"\"\"Raised by sentinel client for master error replies.\"\"\"\n\n\nclass SlaveReplyError(RedisError):\n \"\"\"Raised by sentinel client for slave error replies.\"\"\"\n\n\nclass ConnectionClosedError(RedisError):\n \"\"\"Raised if connection to server was closed.\"\"\"\n\n\nclass ConnectionForcedCloseError(ConnectionClosedError):\n \"\"\"Raised if connection was closed with .close() method.\"\"\"\n\n\nclass PoolClosedError(RedisError):\n \"\"\"Raised if pool is closed.\"\"\"\n", "path": "aioredis/errors.py" } ]
diff --git a/aioredis/errors.py b/aioredis/errors.py index b73e2e424..504c6ce06 100644 --- a/aioredis/errors.py +++ b/aioredis/errors.py @@ -50,6 +50,12 @@ class AuthError(ReplyError): MATCH_REPLY = ("NOAUTH ", "ERR invalid password") +class BusyGroupError(ReplyError): + """Raised if Consumer Group name already exists.""" + + MATCH_REPLY = "BUSYGROUP Consumer Group name already exists" + + class PipelineError(RedisError): """Raised if command within pipeline raised error.""" diff --git a/tests/stream_commands_test.py b/tests/stream_commands_test.py index 6a7adbd0f..d29178be3 100644 --- a/tests/stream_commands_test.py +++ b/tests/stream_commands_test.py @@ -4,7 +4,7 @@ from collections import OrderedDict from unittest import mock -from aioredis import ReplyError +from aioredis.errors import BusyGroupError from _testutils import redis_version pytestmark = redis_version( @@ -314,7 +314,7 @@ async def test_xgroup_create_mkstream(redis, server_bin): async def test_xgroup_create_already_exists(redis, server_bin): await redis.xadd('test_stream', {'a': 1}) await redis.xgroup_create('test_stream', 'test_group') - with pytest.raises(ReplyError): + with pytest.raises(BusyGroupError): await redis.xgroup_create('test_stream', 'test_group')
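The diff above adds `BusyGroupError` to the error hierarchy and switches the existing test to expect it. A minimal usage sketch (hypothetical stream and group names, aioredis 1.x-style connection API, assumes a running Redis 5.0+ server):

```py
import asyncio

import aioredis
from aioredis.errors import BusyGroupError


async def ensure_group():
    redis = await aioredis.create_redis_pool("redis://localhost")
    try:
        # Creating a group that already exists now raises BusyGroupError,
        # a ReplyError subclass, so it can be handled precisely.
        await redis.xgroup_create("test_stream", "test_group", mkstream=True)
    except BusyGroupError:
        pass  # group already exists; treat as success
    finally:
        redis.close()
        await redis.wait_closed()


asyncio.run(ensure_group())
```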
ansible__ansible-18194
serial with % groups task per host ##### ISSUE TYPE <!--- Pick one below and delete the rest: --> - Bug Report ##### COMPONENT NAME serial ##### ANSIBLE VERSION <!--- Paste verbatim output from “ansible --version” between quotes below --> ``` devel ansible 2.2.0.0 (detached HEAD eafb4043c9) last updated 2016/10/25 13:47:30 (GMT +200) ``` ##### CONFIGURATION n/a ##### OS / ENVIRONMENT n/a ##### SUMMARY When using serial with `%`, there is a suspicious grouping per host for every task, while I don't see such a grouping when using serial with a number: ##### STEPS TO REPRODUCE See gist https://gist.github.com/resmo/c650dc1846c14cdccbc41d509c92f4c0 <!--- You can also paste gist.github.com links for larger files --> ##### EXPECTED RESULTS <!--- What did you expect to happen when running the steps above? --> ``` TASK [command] ***************************************************************** changed: [three] => (item=1) changed: [one] => (item=1) changed: [two] => (item=1) changed: [three] => (item=1) changed: [one] => (item=1) changed: [two] => (item=1) changed: [three] => (item=1) changed: [one] => (item=1) changed: [two] => (item=1) changed: [three] => (item=1) changed: [one] => (item=1) changed: [two] => (item=1) ``` ##### ACTUAL RESULTS <!--- What actually happened? If possible run with extra verbosity (-vvvv) --> <!--- Paste verbatim command output between quotes below --> ``` TASK [command] ***************************************************************** changed: [two] => (item=1) changed: [two] => (item=1) changed: [two] => (item=1) changed: [two] => (item=1) changed: [one] => (item=1) changed: [one] => (item=1) changed: [one] => (item=1) changed: [one] => (item=1) changed: [three] => (item=1) changed: [three] => (item=1) changed: [three] => (item=1) changed: [three] => (item=1) ```
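The before/after snapshots that follow differ in how the host count used for the percentage conversion is obtained: the fixed version passes `ignore_restrictions=True` to `get_hosts`, so the count reflects all play hosts rather than any currently restricted subset. For reference, an illustrative re-implementation of the conversion helper (the real one is `ansible.utils.helpers.pct_to_int`; details may differ):

```py
def pct_to_int(value, num_items, min_value=1):
    # Convert "30%"-style serial values to a host count, otherwise cast to int.
    if isinstance(value, str) and value.endswith("%"):
        value_pct = int(value.replace("%", ""))
        return int((value_pct / 100.0) * num_items) or min_value
    return int(value)

# Examples with the three-host inventory from the report:
#   pct_to_int("100%", 3) == 3
#   pct_to_int("30%", 3)  == 1   # int(0.9) is 0, so min_value kicks in
```

The worker count is then taken as `min(forks, max_serial, num_hosts)`, so undercounting hosts at this point shrinks the fork pool, which matches the per-host grouping seen in the report.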
[ { "content": "# (c) 2012-2014, Michael DeHaan <[email protected]>\n#\n# This file is part of Ansible\n#\n# Ansible is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# Ansible is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with Ansible. If not, see <http://www.gnu.org/licenses/>.\n\n# Make coding more python3-ish\nfrom __future__ import (absolute_import, division, print_function)\n__metaclass__ = type\n\nimport multiprocessing\nimport os\nimport tempfile\n\nfrom ansible import constants as C\nfrom ansible.compat.six import string_types\nfrom ansible.errors import AnsibleError\nfrom ansible.executor.play_iterator import PlayIterator\nfrom ansible.executor.stats import AggregateStats\nfrom ansible.module_utils._text import to_text\nfrom ansible.playbook.block import Block\nfrom ansible.playbook.play_context import PlayContext\nfrom ansible.plugins import callback_loader, strategy_loader, module_loader\nfrom ansible.plugins.callback import CallbackBase\nfrom ansible.template import Templar\nfrom ansible.utils.helpers import pct_to_int\nfrom ansible.vars.hostvars import HostVars\n\ntry:\n from __main__ import display\nexcept ImportError:\n from ansible.utils.display import Display\n display = Display()\n\n__all__ = ['TaskQueueManager']\n\n\nclass TaskQueueManager:\n\n '''\n This class handles the multiprocessing requirements of Ansible by\n creating a pool of worker forks, a result handler fork, and a\n manager object with shared datastructures/queues for coordinating\n work between all processes.\n\n The queue manager is responsible for loading the play strategy plugin,\n which dispatches the Play's tasks to hosts.\n '''\n\n RUN_OK = 0\n RUN_ERROR = 1\n RUN_FAILED_HOSTS = 2\n RUN_UNREACHABLE_HOSTS = 4\n RUN_FAILED_BREAK_PLAY = 8\n RUN_UNKNOWN_ERROR = 255\n\n def __init__(self, inventory, variable_manager, loader, options, passwords, stdout_callback=None, run_additional_callbacks=True, run_tree=False):\n\n self._inventory = inventory\n self._variable_manager = variable_manager\n self._loader = loader\n self._options = options\n self._stats = AggregateStats()\n self.passwords = passwords\n self._stdout_callback = stdout_callback\n self._run_additional_callbacks = run_additional_callbacks\n self._run_tree = run_tree\n\n self._callbacks_loaded = False\n self._callback_plugins = []\n self._start_at_done = False\n\n # make sure the module path (if specified) is parsed and\n # added to the module_loader object\n if options.module_path is not None:\n for path in options.module_path.split(os.pathsep):\n module_loader.add_directory(path)\n\n # a special flag to help us exit cleanly\n self._terminated = False\n\n # this dictionary is used to keep track of notified handlers\n self._notified_handlers = dict()\n self._listening_handlers = dict()\n\n # dictionaries to keep track of failed/unreachable hosts\n self._failed_hosts = dict()\n self._unreachable_hosts = dict()\n\n self._final_q = multiprocessing.Queue()\n\n # A temporary file (opened pre-fork) used by connection\n # plugins for inter-process locking.\n self._connection_lockfile = 
tempfile.TemporaryFile()\n\n def _initialize_processes(self, num):\n self._workers = []\n\n for i in range(num):\n rslt_q = multiprocessing.Queue()\n self._workers.append([None, rslt_q])\n\n def _initialize_notified_handlers(self, play):\n '''\n Clears and initializes the shared notified handlers dict with entries\n for each handler in the play, which is an empty array that will contain\n inventory hostnames for those hosts triggering the handler.\n '''\n\n # Zero the dictionary first by removing any entries there.\n # Proxied dicts don't support iteritems, so we have to use keys()\n self._notified_handlers.clear()\n self._listening_handlers.clear()\n\n def _process_block(b):\n temp_list = []\n for t in b.block:\n if isinstance(t, Block):\n temp_list.extend(_process_block(t))\n else:\n temp_list.append(t)\n return temp_list\n\n handler_list = []\n for handler_block in play.handlers:\n handler_list.extend(_process_block(handler_block))\n\n # then initialize it with the given handler list\n for handler in handler_list:\n if handler not in self._notified_handlers:\n self._notified_handlers[handler] = []\n if handler.listen:\n listeners = handler.listen\n if not isinstance(listeners, list):\n listeners = [ listeners ]\n for listener in listeners:\n if listener not in self._listening_handlers:\n self._listening_handlers[listener] = []\n self._listening_handlers[listener].append(handler.get_name())\n\n def load_callbacks(self):\n '''\n Loads all available callbacks, with the exception of those which\n utilize the CALLBACK_TYPE option. When CALLBACK_TYPE is set to 'stdout',\n only one such callback plugin will be loaded.\n '''\n\n if self._callbacks_loaded:\n return\n\n stdout_callback_loaded = False\n if self._stdout_callback is None:\n self._stdout_callback = C.DEFAULT_STDOUT_CALLBACK\n\n if isinstance(self._stdout_callback, CallbackBase):\n stdout_callback_loaded = True\n elif isinstance(self._stdout_callback, string_types):\n if self._stdout_callback not in callback_loader:\n raise AnsibleError(\"Invalid callback for stdout specified: %s\" % self._stdout_callback)\n else:\n self._stdout_callback = callback_loader.get(self._stdout_callback)\n stdout_callback_loaded = True\n else:\n raise AnsibleError(\"callback must be an instance of CallbackBase or the name of a callback plugin\")\n\n for callback_plugin in callback_loader.all(class_only=True):\n if hasattr(callback_plugin, 'CALLBACK_VERSION') and callback_plugin.CALLBACK_VERSION >= 2.0:\n # we only allow one callback of type 'stdout' to be loaded, so check\n # the name of the current plugin and type to see if we need to skip\n # loading this callback plugin\n callback_type = getattr(callback_plugin, 'CALLBACK_TYPE', None)\n callback_needs_whitelist = getattr(callback_plugin, 'CALLBACK_NEEDS_WHITELIST', False)\n (callback_name, _) = os.path.splitext(os.path.basename(callback_plugin._original_path))\n if callback_type == 'stdout':\n if callback_name != self._stdout_callback or stdout_callback_loaded:\n continue\n stdout_callback_loaded = True\n elif callback_name == 'tree' and self._run_tree:\n pass\n elif not self._run_additional_callbacks or (callback_needs_whitelist and (\n C.DEFAULT_CALLBACK_WHITELIST is None or callback_name not in C.DEFAULT_CALLBACK_WHITELIST)):\n continue\n\n self._callback_plugins.append(callback_plugin())\n\n self._callbacks_loaded = True\n\n def run(self, play):\n '''\n Iterates over the roles/tasks in a play, using the given (or default)\n strategy for queueing tasks. 
The default is the linear strategy, which\n operates like classic Ansible by keeping all hosts in lock-step with\n a given task (meaning no hosts move on to the next task until all hosts\n are done with the current task).\n '''\n\n if not self._callbacks_loaded:\n self.load_callbacks()\n\n all_vars = self._variable_manager.get_vars(loader=self._loader, play=play)\n templar = Templar(loader=self._loader, variables=all_vars)\n\n new_play = play.copy()\n new_play.post_validate(templar)\n new_play.handlers = new_play.compile_roles_handlers() + new_play.handlers\n\n self.hostvars = HostVars(\n inventory=self._inventory,\n variable_manager=self._variable_manager,\n loader=self._loader,\n )\n\n # Fork # of forks, # of hosts or serial, whichever is lowest\n num_hosts = len(self._inventory.get_hosts(new_play.hosts))\n\n max_serial = 0\n if new_play.serial:\n # the play has not been post_validated here, so we may need\n # to convert the scalar value to a list at this point\n serial_items = new_play.serial\n if not isinstance(serial_items, list):\n serial_items = [serial_items]\n max_serial = max([pct_to_int(x, num_hosts) for x in serial_items])\n\n contenders = [self._options.forks, max_serial, num_hosts]\n contenders = [v for v in contenders if v is not None and v > 0]\n self._initialize_processes(min(contenders))\n\n play_context = PlayContext(new_play, self._options, self.passwords, self._connection_lockfile.fileno())\n for callback_plugin in self._callback_plugins:\n if hasattr(callback_plugin, 'set_play_context'):\n callback_plugin.set_play_context(play_context)\n\n self.send_callback('v2_playbook_on_play_start', new_play)\n\n # initialize the shared dictionary containing the notified handlers\n self._initialize_notified_handlers(new_play)\n\n # load the specified strategy (or the default linear one)\n strategy = strategy_loader.get(new_play.strategy, self)\n if strategy is None:\n raise AnsibleError(\"Invalid play strategy specified: %s\" % new_play.strategy, obj=play._ds)\n\n # build the iterator\n iterator = PlayIterator(\n inventory=self._inventory,\n play=new_play,\n play_context=play_context,\n variable_manager=self._variable_manager,\n all_vars=all_vars,\n start_at_done = self._start_at_done,\n )\n\n # Because the TQM may survive multiple play runs, we start by marking\n # any hosts as failed in the iterator here which may have been marked\n # as failed in previous runs. 
Then we clear the internal list of failed\n # hosts so we know what failed this round.\n for host_name in self._failed_hosts.keys():\n host = self._inventory.get_host(host_name)\n iterator.mark_host_failed(host)\n\n self.clear_failed_hosts()\n\n # during initialization, the PlayContext will clear the start_at_task\n # field to signal that a matching task was found, so check that here\n # and remember it so we don't try to skip tasks on future plays\n if getattr(self._options, 'start_at_task', None) is not None and play_context.start_at_task is None:\n self._start_at_done = True\n\n # and run the play using the strategy and cleanup on way out\n play_return = strategy.run(iterator, play_context)\n\n # now re-save the hosts that failed from the iterator to our internal list\n for host_name in iterator.get_failed_hosts():\n self._failed_hosts[host_name] = True\n\n strategy.cleanup()\n self._cleanup_processes()\n return play_return\n\n def cleanup(self):\n display.debug(\"RUNNING CLEANUP\")\n self.terminate()\n self._final_q.close()\n self._cleanup_processes()\n\n def _cleanup_processes(self):\n if hasattr(self, '_workers'):\n for (worker_prc, rslt_q) in self._workers:\n rslt_q.close()\n if worker_prc and worker_prc.is_alive():\n try:\n worker_prc.terminate()\n except AttributeError:\n pass\n\n def clear_failed_hosts(self):\n self._failed_hosts = dict()\n\n def get_inventory(self):\n return self._inventory\n\n def get_variable_manager(self):\n return self._variable_manager\n\n def get_loader(self):\n return self._loader\n\n def get_workers(self):\n return self._workers[:]\n\n def terminate(self):\n self._terminated = True\n\n def has_dead_workers(self):\n\n # [<WorkerProcess(WorkerProcess-2, stopped[SIGKILL])>,\n # <WorkerProcess(WorkerProcess-2, stopped[SIGTERM])>\n\n defunct = False\n for idx,x in enumerate(self._workers):\n if hasattr(x[0], 'exitcode'):\n if x[0].exitcode in [-9, -15]:\n defunct = True\n return defunct\n\n def send_callback(self, method_name, *args, **kwargs):\n for callback_plugin in [self._stdout_callback] + self._callback_plugins:\n # a plugin that set self.disabled to True will not be called\n # see osx_say.py example for such a plugin\n if getattr(callback_plugin, 'disabled', False):\n continue\n\n # try to find v2 method, fallback to v1 method, ignore callback if no method found\n methods = []\n for possible in [method_name, 'v2_on_any']:\n gotit = getattr(callback_plugin, possible, None)\n if gotit is None:\n gotit = getattr(callback_plugin, possible.replace('v2_',''), None)\n if gotit is not None:\n methods.append(gotit)\n\n for method in methods:\n try:\n # temporary hack, required due to a change in the callback API, so\n # we don't break backwards compatibility with callbacks which were\n # designed to use the original API\n # FIXME: target for removal and revert to the original code here after a year (2017-01-14)\n if method_name == 'v2_playbook_on_start':\n import inspect\n (f_args, f_varargs, f_keywords, f_defaults) = inspect.getargspec(method)\n if 'playbook' in f_args:\n method(*args, **kwargs)\n else:\n method()\n else:\n method(*args, **kwargs)\n except Exception as e:\n # TODO: add config toggle to make this fatal or not?\n display.warning(u\"Failure using method (%s) in callback plugin (%s): %s\" % (to_text(method_name), to_text(callback_plugin), to_text(e)))\n from traceback import format_tb\n from sys import exc_info\n display.debug('Callback Exception: \\n' + ' '.join(format_tb(exc_info()[2])))\n", "path": "lib/ansible/executor/task_queue_manager.py" } 
]
[ { "content": "# (c) 2012-2014, Michael DeHaan <[email protected]>\n#\n# This file is part of Ansible\n#\n# Ansible is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# Ansible is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with Ansible. If not, see <http://www.gnu.org/licenses/>.\n\n# Make coding more python3-ish\nfrom __future__ import (absolute_import, division, print_function)\n__metaclass__ = type\n\nimport multiprocessing\nimport os\nimport tempfile\n\nfrom ansible import constants as C\nfrom ansible.compat.six import string_types\nfrom ansible.errors import AnsibleError\nfrom ansible.executor.play_iterator import PlayIterator\nfrom ansible.executor.stats import AggregateStats\nfrom ansible.module_utils._text import to_text\nfrom ansible.playbook.block import Block\nfrom ansible.playbook.play_context import PlayContext\nfrom ansible.plugins import callback_loader, strategy_loader, module_loader\nfrom ansible.plugins.callback import CallbackBase\nfrom ansible.template import Templar\nfrom ansible.utils.helpers import pct_to_int\nfrom ansible.vars.hostvars import HostVars\n\ntry:\n from __main__ import display\nexcept ImportError:\n from ansible.utils.display import Display\n display = Display()\n\n__all__ = ['TaskQueueManager']\n\n\nclass TaskQueueManager:\n\n '''\n This class handles the multiprocessing requirements of Ansible by\n creating a pool of worker forks, a result handler fork, and a\n manager object with shared datastructures/queues for coordinating\n work between all processes.\n\n The queue manager is responsible for loading the play strategy plugin,\n which dispatches the Play's tasks to hosts.\n '''\n\n RUN_OK = 0\n RUN_ERROR = 1\n RUN_FAILED_HOSTS = 2\n RUN_UNREACHABLE_HOSTS = 4\n RUN_FAILED_BREAK_PLAY = 8\n RUN_UNKNOWN_ERROR = 255\n\n def __init__(self, inventory, variable_manager, loader, options, passwords, stdout_callback=None, run_additional_callbacks=True, run_tree=False):\n\n self._inventory = inventory\n self._variable_manager = variable_manager\n self._loader = loader\n self._options = options\n self._stats = AggregateStats()\n self.passwords = passwords\n self._stdout_callback = stdout_callback\n self._run_additional_callbacks = run_additional_callbacks\n self._run_tree = run_tree\n\n self._callbacks_loaded = False\n self._callback_plugins = []\n self._start_at_done = False\n\n # make sure the module path (if specified) is parsed and\n # added to the module_loader object\n if options.module_path is not None:\n for path in options.module_path.split(os.pathsep):\n module_loader.add_directory(path)\n\n # a special flag to help us exit cleanly\n self._terminated = False\n\n # this dictionary is used to keep track of notified handlers\n self._notified_handlers = dict()\n self._listening_handlers = dict()\n\n # dictionaries to keep track of failed/unreachable hosts\n self._failed_hosts = dict()\n self._unreachable_hosts = dict()\n\n self._final_q = multiprocessing.Queue()\n\n # A temporary file (opened pre-fork) used by connection\n # plugins for inter-process locking.\n self._connection_lockfile = 
tempfile.TemporaryFile()\n\n def _initialize_processes(self, num):\n self._workers = []\n\n for i in range(num):\n rslt_q = multiprocessing.Queue()\n self._workers.append([None, rslt_q])\n\n def _initialize_notified_handlers(self, play):\n '''\n Clears and initializes the shared notified handlers dict with entries\n for each handler in the play, which is an empty array that will contain\n inventory hostnames for those hosts triggering the handler.\n '''\n\n # Zero the dictionary first by removing any entries there.\n # Proxied dicts don't support iteritems, so we have to use keys()\n self._notified_handlers.clear()\n self._listening_handlers.clear()\n\n def _process_block(b):\n temp_list = []\n for t in b.block:\n if isinstance(t, Block):\n temp_list.extend(_process_block(t))\n else:\n temp_list.append(t)\n return temp_list\n\n handler_list = []\n for handler_block in play.handlers:\n handler_list.extend(_process_block(handler_block))\n\n # then initialize it with the given handler list\n for handler in handler_list:\n if handler not in self._notified_handlers:\n self._notified_handlers[handler] = []\n if handler.listen:\n listeners = handler.listen\n if not isinstance(listeners, list):\n listeners = [ listeners ]\n for listener in listeners:\n if listener not in self._listening_handlers:\n self._listening_handlers[listener] = []\n self._listening_handlers[listener].append(handler.get_name())\n\n def load_callbacks(self):\n '''\n Loads all available callbacks, with the exception of those which\n utilize the CALLBACK_TYPE option. When CALLBACK_TYPE is set to 'stdout',\n only one such callback plugin will be loaded.\n '''\n\n if self._callbacks_loaded:\n return\n\n stdout_callback_loaded = False\n if self._stdout_callback is None:\n self._stdout_callback = C.DEFAULT_STDOUT_CALLBACK\n\n if isinstance(self._stdout_callback, CallbackBase):\n stdout_callback_loaded = True\n elif isinstance(self._stdout_callback, string_types):\n if self._stdout_callback not in callback_loader:\n raise AnsibleError(\"Invalid callback for stdout specified: %s\" % self._stdout_callback)\n else:\n self._stdout_callback = callback_loader.get(self._stdout_callback)\n stdout_callback_loaded = True\n else:\n raise AnsibleError(\"callback must be an instance of CallbackBase or the name of a callback plugin\")\n\n for callback_plugin in callback_loader.all(class_only=True):\n if hasattr(callback_plugin, 'CALLBACK_VERSION') and callback_plugin.CALLBACK_VERSION >= 2.0:\n # we only allow one callback of type 'stdout' to be loaded, so check\n # the name of the current plugin and type to see if we need to skip\n # loading this callback plugin\n callback_type = getattr(callback_plugin, 'CALLBACK_TYPE', None)\n callback_needs_whitelist = getattr(callback_plugin, 'CALLBACK_NEEDS_WHITELIST', False)\n (callback_name, _) = os.path.splitext(os.path.basename(callback_plugin._original_path))\n if callback_type == 'stdout':\n if callback_name != self._stdout_callback or stdout_callback_loaded:\n continue\n stdout_callback_loaded = True\n elif callback_name == 'tree' and self._run_tree:\n pass\n elif not self._run_additional_callbacks or (callback_needs_whitelist and (\n C.DEFAULT_CALLBACK_WHITELIST is None or callback_name not in C.DEFAULT_CALLBACK_WHITELIST)):\n continue\n\n self._callback_plugins.append(callback_plugin())\n\n self._callbacks_loaded = True\n\n def run(self, play):\n '''\n Iterates over the roles/tasks in a play, using the given (or default)\n strategy for queueing tasks. 
The default is the linear strategy, which\n operates like classic Ansible by keeping all hosts in lock-step with\n a given task (meaning no hosts move on to the next task until all hosts\n are done with the current task).\n '''\n\n if not self._callbacks_loaded:\n self.load_callbacks()\n\n all_vars = self._variable_manager.get_vars(loader=self._loader, play=play)\n templar = Templar(loader=self._loader, variables=all_vars)\n\n new_play = play.copy()\n new_play.post_validate(templar)\n new_play.handlers = new_play.compile_roles_handlers() + new_play.handlers\n\n self.hostvars = HostVars(\n inventory=self._inventory,\n variable_manager=self._variable_manager,\n loader=self._loader,\n )\n\n # Fork # of forks, # of hosts or serial, whichever is lowest\n num_hosts = len(self._inventory.get_hosts(new_play.hosts, ignore_restrictions=True))\n\n max_serial = 0\n if new_play.serial:\n # the play has not been post_validated here, so we may need\n # to convert the scalar value to a list at this point\n serial_items = new_play.serial\n if not isinstance(serial_items, list):\n serial_items = [serial_items]\n max_serial = max([pct_to_int(x, num_hosts) for x in serial_items])\n\n contenders = [self._options.forks, max_serial, num_hosts]\n contenders = [v for v in contenders if v is not None and v > 0]\n self._initialize_processes(min(contenders))\n\n play_context = PlayContext(new_play, self._options, self.passwords, self._connection_lockfile.fileno())\n for callback_plugin in self._callback_plugins:\n if hasattr(callback_plugin, 'set_play_context'):\n callback_plugin.set_play_context(play_context)\n\n self.send_callback('v2_playbook_on_play_start', new_play)\n\n # initialize the shared dictionary containing the notified handlers\n self._initialize_notified_handlers(new_play)\n\n # load the specified strategy (or the default linear one)\n strategy = strategy_loader.get(new_play.strategy, self)\n if strategy is None:\n raise AnsibleError(\"Invalid play strategy specified: %s\" % new_play.strategy, obj=play._ds)\n\n # build the iterator\n iterator = PlayIterator(\n inventory=self._inventory,\n play=new_play,\n play_context=play_context,\n variable_manager=self._variable_manager,\n all_vars=all_vars,\n start_at_done = self._start_at_done,\n )\n\n # Because the TQM may survive multiple play runs, we start by marking\n # any hosts as failed in the iterator here which may have been marked\n # as failed in previous runs. 
Then we clear the internal list of failed\n # hosts so we know what failed this round.\n for host_name in self._failed_hosts.keys():\n host = self._inventory.get_host(host_name)\n iterator.mark_host_failed(host)\n\n self.clear_failed_hosts()\n\n # during initialization, the PlayContext will clear the start_at_task\n # field to signal that a matching task was found, so check that here\n # and remember it so we don't try to skip tasks on future plays\n if getattr(self._options, 'start_at_task', None) is not None and play_context.start_at_task is None:\n self._start_at_done = True\n\n # and run the play using the strategy and cleanup on way out\n play_return = strategy.run(iterator, play_context)\n\n # now re-save the hosts that failed from the iterator to our internal list\n for host_name in iterator.get_failed_hosts():\n self._failed_hosts[host_name] = True\n\n strategy.cleanup()\n self._cleanup_processes()\n return play_return\n\n def cleanup(self):\n display.debug(\"RUNNING CLEANUP\")\n self.terminate()\n self._final_q.close()\n self._cleanup_processes()\n\n def _cleanup_processes(self):\n if hasattr(self, '_workers'):\n for (worker_prc, rslt_q) in self._workers:\n rslt_q.close()\n if worker_prc and worker_prc.is_alive():\n try:\n worker_prc.terminate()\n except AttributeError:\n pass\n\n def clear_failed_hosts(self):\n self._failed_hosts = dict()\n\n def get_inventory(self):\n return self._inventory\n\n def get_variable_manager(self):\n return self._variable_manager\n\n def get_loader(self):\n return self._loader\n\n def get_workers(self):\n return self._workers[:]\n\n def terminate(self):\n self._terminated = True\n\n def has_dead_workers(self):\n\n # [<WorkerProcess(WorkerProcess-2, stopped[SIGKILL])>,\n # <WorkerProcess(WorkerProcess-2, stopped[SIGTERM])>\n\n defunct = False\n for idx,x in enumerate(self._workers):\n if hasattr(x[0], 'exitcode'):\n if x[0].exitcode in [-9, -15]:\n defunct = True\n return defunct\n\n def send_callback(self, method_name, *args, **kwargs):\n for callback_plugin in [self._stdout_callback] + self._callback_plugins:\n # a plugin that set self.disabled to True will not be called\n # see osx_say.py example for such a plugin\n if getattr(callback_plugin, 'disabled', False):\n continue\n\n # try to find v2 method, fallback to v1 method, ignore callback if no method found\n methods = []\n for possible in [method_name, 'v2_on_any']:\n gotit = getattr(callback_plugin, possible, None)\n if gotit is None:\n gotit = getattr(callback_plugin, possible.replace('v2_',''), None)\n if gotit is not None:\n methods.append(gotit)\n\n for method in methods:\n try:\n # temporary hack, required due to a change in the callback API, so\n # we don't break backwards compatibility with callbacks which were\n # designed to use the original API\n # FIXME: target for removal and revert to the original code here after a year (2017-01-14)\n if method_name == 'v2_playbook_on_start':\n import inspect\n (f_args, f_varargs, f_keywords, f_defaults) = inspect.getargspec(method)\n if 'playbook' in f_args:\n method(*args, **kwargs)\n else:\n method()\n else:\n method(*args, **kwargs)\n except Exception as e:\n # TODO: add config toggle to make this fatal or not?\n display.warning(u\"Failure using method (%s) in callback plugin (%s): %s\" % (to_text(method_name), to_text(callback_plugin), to_text(e)))\n from traceback import format_tb\n from sys import exc_info\n display.debug('Callback Exception: \\n' + ' '.join(format_tb(exc_info()[2])))\n", "path": "lib/ansible/executor/task_queue_manager.py" } 
]
diff --git a/lib/ansible/executor/task_queue_manager.py b/lib/ansible/executor/task_queue_manager.py
index 2e6948f1e0c869..08b7dd0f6716ee 100644
--- a/lib/ansible/executor/task_queue_manager.py
+++ b/lib/ansible/executor/task_queue_manager.py
@@ -222,7 +222,7 @@ def run(self, play):
         )
 
         # Fork # of forks, # of hosts or serial, whichever is lowest
-        num_hosts = len(self._inventory.get_hosts(new_play.hosts))
+        num_hosts = len(self._inventory.get_hosts(new_play.hosts, ignore_restrictions=True))
 
         max_serial = 0
         if new_play.serial:
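The hunk above widens how `num_hosts` is computed: by passing `ignore_restrictions=True`, the worker-count calculation sees every host the play targets rather than only the currently restricted subset. Below is a minimal, self-contained sketch of that calculation, not the Ansible source; `pct_to_int` is a simplified stand-in and the input values are made up.

```python
# Sketch of how the TaskQueueManager picks its worker count: the minimum of the
# configured forks, the largest "serial" batch, and the number of targeted hosts.

def pct_to_int(value, num_items):
    # Simplified stand-in for ansible.utils.helpers.pct_to_int.
    if isinstance(value, str) and value.endswith("%"):
        return int((float(value[:-1]) / 100.0) * num_items) or 1
    return int(value)

def choose_worker_count(forks, serial_items, num_hosts):
    max_serial = max(pct_to_int(x, num_hosts) for x in serial_items) if serial_items else 0
    contenders = [v for v in (forks, max_serial, num_hosts) if v is not None and v > 0]
    return min(contenders)

# With 20 targeted hosts, "serial: 30%" and forks=5, only 5 workers are started.
print(choose_worker_count(forks=5, serial_items=["30%"], num_hosts=20))  # -> 5
```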
freedomofpress__securedrop-6408
Test securedrop-admin with Tails 5.0 ## Description https://tails.boum.org/news/test_5.0-beta1/ Tails 5.0 is based on Debian Bullseye, which means it's using a newer Python version (3.9) among plenty of other things. It's probably worth walking through a full SD install + backup/restore to make sure it works as expected.
[ { "content": "# -*- mode: python; coding: utf-8 -*-\n#\n# Copyright (C) 2013-2018 Freedom of the Press Foundation & al\n# Copyright (C) 2018 Loic Dachary <[email protected]>\n#\n# This program is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with this program. If not, see <http://www.gnu.org/licenses/>.\n#\n\nimport argparse\nimport logging\nimport os\nimport shutil\nimport subprocess\nimport sys\nfrom typing import Iterator\n\nfrom typing import List\n\nsdlog = logging.getLogger(__name__)\n\nDIR = os.path.dirname(os.path.realpath(__file__))\nVENV_DIR = os.path.join(DIR, \".venv3\")\n\n\ndef setup_logger(verbose: bool = False) -> None:\n \"\"\" Configure logging handler \"\"\"\n # Set default level on parent\n sdlog.setLevel(logging.DEBUG)\n level = logging.DEBUG if verbose else logging.INFO\n\n stdout = logging.StreamHandler(sys.stdout)\n stdout.setFormatter(logging.Formatter('%(levelname)s: %(message)s'))\n stdout.setLevel(level)\n sdlog.addHandler(stdout)\n\n\ndef run_command(command: List[str]) -> Iterator[bytes]:\n \"\"\"\n Wrapper function to display stdout for running command,\n similar to how shelling out in a Bash script displays rolling output.\n\n Yields a list of the stdout from the `command`, and raises a\n CalledProcessError if `command` returns non-zero.\n \"\"\"\n popen = subprocess.Popen(command,\n stdout=subprocess.PIPE,\n stderr=subprocess.STDOUT)\n if popen.stdout is None:\n raise EnvironmentError(\"Could not run command: None stdout\")\n for stdout_line in iter(popen.stdout.readline, b\"\"):\n yield stdout_line\n popen.stdout.close()\n return_code = popen.wait()\n if return_code:\n raise subprocess.CalledProcessError(return_code, command)\n\n\ndef is_tails() -> bool:\n try:\n id = subprocess.check_output('lsb_release --id --short',\n shell=True).decode('utf-8').strip()\n except subprocess.CalledProcessError:\n return False\n\n # dirty hack to unreliably detect Tails 4.0~beta2\n if id == 'Debian':\n if os.uname()[1] == 'amnesia':\n id = 'Tails'\n\n return id == 'Tails'\n\n\ndef clean_up_tails3_venv(virtualenv_dir: str = VENV_DIR) -> None:\n \"\"\"\n Tails 3.x, based on debian stretch uses libpython3.5, whereas Tails 4.x is\n based on Debian Buster and uses libpython3.7. This means that the Tails 3.x\n virtualenv will not work under Tails 4.x, and will need to be destroyed and\n rebuilt. We can detect if the version of libpython is 3.5 in the\n admin/.venv3/ folder, and delete it if that's the case. This will ensure a\n smooth upgrade from Tails 3.x to Tails 4.x.\n \"\"\"\n if is_tails():\n try:\n dist = subprocess.check_output('lsb_release --codename --short',\n shell=True).strip()\n except subprocess.CalledProcessError:\n return None\n\n # tails4 is based on buster\n if dist == b'buster':\n python_lib_path = os.path.join(virtualenv_dir, \"lib/python3.5\")\n if os.path.exists(os.path.join(python_lib_path)):\n sdlog.info(\n \"Tails 3 Python 3 virtualenv detected. 
\"\n \"Removing it.\"\n )\n shutil.rmtree(virtualenv_dir)\n sdlog.info(\"Tails 3 Python 3 virtualenv deleted.\")\n\n\ndef checkenv(args: argparse.Namespace) -> None:\n clean_up_tails3_venv(VENV_DIR)\n if not os.path.exists(os.path.join(VENV_DIR, \"bin/activate\")):\n sdlog.error('Please run \"securedrop-admin setup\".')\n sys.exit(1)\n\n\ndef maybe_torify() -> List[str]:\n if is_tails():\n return ['torify']\n else:\n return []\n\n\ndef install_apt_dependencies(args: argparse.Namespace) -> None:\n \"\"\"\n Install apt dependencies in Tails. In order to install Ansible in\n a virtualenv, first there are a number of Python prerequisites.\n \"\"\"\n sdlog.info(\"Installing SecureDrop Admin dependencies\")\n sdlog.info((\"You'll be prompted for the temporary Tails admin password,\"\n \" which was set on Tails login screen\"))\n\n apt_command = ['sudo', 'su', '-c',\n \"apt-get update && \\\n apt-get -q -o=Dpkg::Use-Pty=0 install -y \\\n python3-virtualenv \\\n python3-yaml \\\n python3-pip \\\n ccontrol \\\n virtualenv \\\n libffi-dev \\\n libssl-dev \\\n libpython3-dev\",\n ]\n\n try:\n # Print command results in real-time, to keep Admin apprised\n # of progress during long-running command.\n for output_line in run_command(apt_command):\n print(output_line.decode('utf-8').rstrip())\n except subprocess.CalledProcessError:\n # Tails supports apt persistence, which was used by SecureDrop\n # under Tails 2.x. If updates are being applied, don't try to pile\n # on with more apt requests.\n sdlog.error((\"Failed to install apt dependencies. Check network\"\n \" connection and try again.\"))\n raise\n\n\ndef envsetup(args: argparse.Namespace, virtualenv_dir: str = VENV_DIR) -> None:\n \"\"\"Installs Admin tooling required for managing SecureDrop. Specifically:\n\n * updates apt-cache\n * installs apt packages for Python virtualenv\n * creates virtualenv\n * installs pip packages inside virtualenv\n\n The virtualenv is created within the Persistence volume in Tails, so that\n Ansible is available to the Admin on subsequent boots without requiring\n installation of packages again.\n \"\"\"\n # clean up Tails 3.x venv when migrating to Tails 4.x\n clean_up_tails3_venv(virtualenv_dir)\n\n # virtualenv doesnt exist? Install dependencies and create\n if not os.path.exists(virtualenv_dir):\n\n install_apt_dependencies(args)\n\n # Technically you can create a virtualenv from within python\n # but pip can only be run over Tor on Tails, and debugging that\n # along with instaling a third-party dependency is not worth\n # the effort here.\n sdlog.info(\"Setting up virtualenv\")\n try:\n sdlog.debug(subprocess.check_output(\n maybe_torify() + ['virtualenv',\n '--python=python3',\n virtualenv_dir\n ],\n stderr=subprocess.STDOUT))\n except subprocess.CalledProcessError as e:\n sdlog.debug(e.output)\n sdlog.error((\"Unable to create virtualenv. 
Check network settings\"\n \" and try again.\"))\n sdlog.debug(\"Cleaning up virtualenv\")\n if os.path.exists(virtualenv_dir):\n shutil.rmtree(virtualenv_dir)\n raise\n else:\n sdlog.info(\"Virtualenv already exists, not creating\")\n\n if args.t:\n install_pip_dependencies(\n args,\n requirements_file='requirements-testinfra.txt',\n desc=\"dependencies with verification support\"\n )\n else:\n install_pip_dependencies(args)\n\n if os.path.exists(os.path.join(DIR, 'setup.py')):\n install_pip_self(args)\n\n sdlog.info(\"Finished installing SecureDrop dependencies\")\n\n\ndef install_pip_self(args: argparse.Namespace) -> None:\n pip_install_cmd = [\n os.path.join(VENV_DIR, 'bin', 'pip3'),\n 'install', '-e', DIR\n ]\n try:\n subprocess.check_output(maybe_torify() + pip_install_cmd,\n stderr=subprocess.STDOUT)\n except subprocess.CalledProcessError as e:\n sdlog.debug(e.output)\n sdlog.error(\"Unable to install self, run with -v for more information\")\n raise\n\n\ndef install_pip_dependencies(\n args: argparse.Namespace,\n requirements_file: str = \"requirements.txt\",\n desc: str = \"Python dependencies\",\n) -> None:\n \"\"\"\n Install Python dependencies via pip into virtualenv.\n \"\"\"\n pip_install_cmd = [\n os.path.join(VENV_DIR, 'bin', 'pip3'),\n 'install',\n '--no-deps',\n '-r', os.path.join(DIR, requirements_file),\n '--require-hashes',\n '-U', '--upgrade-strategy', 'only-if-needed',\n ]\n\n sdlog.info(\"Checking {} for securedrop-admin\".format(desc))\n try:\n pip_output = subprocess.check_output(maybe_torify() + pip_install_cmd,\n stderr=subprocess.STDOUT)\n except subprocess.CalledProcessError as e:\n sdlog.debug(e.output)\n sdlog.error((\"Failed to install {}. Check network\"\n \" connection and try again.\".format(desc)))\n raise\n\n sdlog.debug(pip_output)\n if \"Successfully installed\" in str(pip_output):\n sdlog.info(\"{} for securedrop-admin upgraded\".format(desc))\n else:\n sdlog.info(\"{} for securedrop-admin are up-to-date\".format(desc))\n\n\ndef parse_argv(argv: List[str]) -> argparse.Namespace:\n parser = argparse.ArgumentParser()\n parser.add_argument('-v', action='store_true', default=False,\n help=\"Increase verbosity on output\")\n parser.add_argument('-t', action='store_true', default=False,\n help=\"Install additional test dependencies\")\n parser.set_defaults(func=envsetup)\n\n subparsers = parser.add_subparsers()\n\n envsetup_parser = subparsers.add_parser(\n 'envsetup',\n help='Set up the admin virtualenv.'\n )\n envsetup_parser.set_defaults(func=envsetup)\n\n checkenv_parser = subparsers.add_parser(\n 'checkenv',\n help='Check that the admin virtualenv is properly set up.'\n )\n checkenv_parser.set_defaults(func=checkenv)\n\n return parser.parse_args(argv)\n\n\nif __name__ == \"__main__\":\n args = parse_argv(sys.argv[1:])\n setup_logger(args.v)\n\n try:\n args.func(args)\n except Exception:\n sys.exit(1)\n else:\n sys.exit(0)\n", "path": "admin/bootstrap.py" } ]
[ { "content": "# -*- mode: python; coding: utf-8 -*-\n#\n# Copyright (C) 2013-2018 Freedom of the Press Foundation & al\n# Copyright (C) 2018 Loic Dachary <[email protected]>\n#\n# This program is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with this program. If not, see <http://www.gnu.org/licenses/>.\n#\n\nimport argparse\nimport logging\nimport os\nimport shutil\nimport subprocess\nimport sys\nfrom typing import Iterator\n\nfrom typing import List\n\nsdlog = logging.getLogger(__name__)\n\nDIR = os.path.dirname(os.path.realpath(__file__))\nVENV_DIR = os.path.join(DIR, \".venv3\")\n\n\ndef setup_logger(verbose: bool = False) -> None:\n \"\"\" Configure logging handler \"\"\"\n # Set default level on parent\n sdlog.setLevel(logging.DEBUG)\n level = logging.DEBUG if verbose else logging.INFO\n\n stdout = logging.StreamHandler(sys.stdout)\n stdout.setFormatter(logging.Formatter('%(levelname)s: %(message)s'))\n stdout.setLevel(level)\n sdlog.addHandler(stdout)\n\n\ndef run_command(command: List[str]) -> Iterator[bytes]:\n \"\"\"\n Wrapper function to display stdout for running command,\n similar to how shelling out in a Bash script displays rolling output.\n\n Yields a list of the stdout from the `command`, and raises a\n CalledProcessError if `command` returns non-zero.\n \"\"\"\n popen = subprocess.Popen(command,\n stdout=subprocess.PIPE,\n stderr=subprocess.STDOUT)\n if popen.stdout is None:\n raise EnvironmentError(\"Could not run command: None stdout\")\n for stdout_line in iter(popen.stdout.readline, b\"\"):\n yield stdout_line\n popen.stdout.close()\n return_code = popen.wait()\n if return_code:\n raise subprocess.CalledProcessError(return_code, command)\n\n\ndef is_tails() -> bool:\n try:\n id = subprocess.check_output('lsb_release --id --short',\n shell=True).decode('utf-8').strip()\n except subprocess.CalledProcessError:\n return False\n\n # dirty hack to unreliably detect Tails 4.0~beta2\n if id == 'Debian':\n if os.uname()[1] == 'amnesia':\n id = 'Tails'\n\n return id == 'Tails'\n\n\ndef clean_up_tails3_venv(virtualenv_dir: str = VENV_DIR) -> None:\n \"\"\"\n Tails 3.x, based on debian stretch uses libpython3.5, whereas Tails 4.x is\n based on Debian Buster and uses libpython3.7. This means that the Tails 3.x\n virtualenv will not work under Tails 4.x, and will need to be destroyed and\n rebuilt. We can detect if the version of libpython is 3.5 in the\n admin/.venv3/ folder, and delete it if that's the case. This will ensure a\n smooth upgrade from Tails 3.x to Tails 4.x.\n \"\"\"\n if is_tails():\n try:\n dist = subprocess.check_output('lsb_release --codename --short',\n shell=True).strip()\n except subprocess.CalledProcessError:\n return None\n\n # tails4 is based on buster\n if dist == b'buster':\n python_lib_path = os.path.join(virtualenv_dir, \"lib/python3.5\")\n if os.path.exists(os.path.join(python_lib_path)):\n sdlog.info(\n \"Tails 3 Python 3 virtualenv detected. 
\"\n \"Removing it.\"\n )\n shutil.rmtree(virtualenv_dir)\n sdlog.info(\"Tails 3 Python 3 virtualenv deleted.\")\n\n\ndef checkenv(args: argparse.Namespace) -> None:\n clean_up_tails3_venv(VENV_DIR)\n if not os.path.exists(os.path.join(VENV_DIR, \"bin/activate\")):\n sdlog.error('Please run \"securedrop-admin setup\".')\n sys.exit(1)\n\n\ndef maybe_torify() -> List[str]:\n if is_tails():\n return ['torify']\n else:\n return []\n\n\ndef install_apt_dependencies(args: argparse.Namespace) -> None:\n \"\"\"\n Install apt dependencies in Tails. In order to install Ansible in\n a virtualenv, first there are a number of Python prerequisites.\n \"\"\"\n sdlog.info(\"Installing SecureDrop Admin dependencies\")\n sdlog.info((\"You'll be prompted for the temporary Tails admin password,\"\n \" which was set on Tails login screen\"))\n\n apt_command = ['sudo', 'su', '-c',\n \"apt-get update && \\\n apt-get -q -o=Dpkg::Use-Pty=0 install -y \\\n python3-virtualenv \\\n python3-yaml \\\n python3-pip \\\n virtualenv \\\n libffi-dev \\\n libssl-dev \\\n libpython3-dev\",\n ]\n\n try:\n # Print command results in real-time, to keep Admin apprised\n # of progress during long-running command.\n for output_line in run_command(apt_command):\n print(output_line.decode('utf-8').rstrip())\n except subprocess.CalledProcessError:\n # Tails supports apt persistence, which was used by SecureDrop\n # under Tails 2.x. If updates are being applied, don't try to pile\n # on with more apt requests.\n sdlog.error((\"Failed to install apt dependencies. Check network\"\n \" connection and try again.\"))\n raise\n\n\ndef envsetup(args: argparse.Namespace, virtualenv_dir: str = VENV_DIR) -> None:\n \"\"\"Installs Admin tooling required for managing SecureDrop. Specifically:\n\n * updates apt-cache\n * installs apt packages for Python virtualenv\n * creates virtualenv\n * installs pip packages inside virtualenv\n\n The virtualenv is created within the Persistence volume in Tails, so that\n Ansible is available to the Admin on subsequent boots without requiring\n installation of packages again.\n \"\"\"\n # clean up Tails 3.x venv when migrating to Tails 4.x\n clean_up_tails3_venv(virtualenv_dir)\n\n # virtualenv doesnt exist? Install dependencies and create\n if not os.path.exists(virtualenv_dir):\n\n install_apt_dependencies(args)\n\n # Technically you can create a virtualenv from within python\n # but pip can only be run over Tor on Tails, and debugging that\n # along with instaling a third-party dependency is not worth\n # the effort here.\n sdlog.info(\"Setting up virtualenv\")\n try:\n sdlog.debug(subprocess.check_output(\n maybe_torify() + ['virtualenv',\n '--python=python3',\n virtualenv_dir\n ],\n stderr=subprocess.STDOUT))\n except subprocess.CalledProcessError as e:\n sdlog.debug(e.output)\n sdlog.error((\"Unable to create virtualenv. 
Check network settings\"\n \" and try again.\"))\n sdlog.debug(\"Cleaning up virtualenv\")\n if os.path.exists(virtualenv_dir):\n shutil.rmtree(virtualenv_dir)\n raise\n else:\n sdlog.info(\"Virtualenv already exists, not creating\")\n\n if args.t:\n install_pip_dependencies(\n args,\n requirements_file='requirements-testinfra.txt',\n desc=\"dependencies with verification support\"\n )\n else:\n install_pip_dependencies(args)\n\n if os.path.exists(os.path.join(DIR, 'setup.py')):\n install_pip_self(args)\n\n sdlog.info(\"Finished installing SecureDrop dependencies\")\n\n\ndef install_pip_self(args: argparse.Namespace) -> None:\n pip_install_cmd = [\n os.path.join(VENV_DIR, 'bin', 'pip3'),\n 'install', '-e', DIR\n ]\n try:\n subprocess.check_output(maybe_torify() + pip_install_cmd,\n stderr=subprocess.STDOUT)\n except subprocess.CalledProcessError as e:\n sdlog.debug(e.output)\n sdlog.error(\"Unable to install self, run with -v for more information\")\n raise\n\n\ndef install_pip_dependencies(\n args: argparse.Namespace,\n requirements_file: str = \"requirements.txt\",\n desc: str = \"Python dependencies\",\n) -> None:\n \"\"\"\n Install Python dependencies via pip into virtualenv.\n \"\"\"\n pip_install_cmd = [\n os.path.join(VENV_DIR, 'bin', 'pip3'),\n 'install',\n '--no-deps',\n '-r', os.path.join(DIR, requirements_file),\n '--require-hashes',\n '-U', '--upgrade-strategy', 'only-if-needed',\n ]\n\n sdlog.info(\"Checking {} for securedrop-admin\".format(desc))\n try:\n pip_output = subprocess.check_output(maybe_torify() + pip_install_cmd,\n stderr=subprocess.STDOUT)\n except subprocess.CalledProcessError as e:\n sdlog.debug(e.output)\n sdlog.error((\"Failed to install {}. Check network\"\n \" connection and try again.\".format(desc)))\n raise\n\n sdlog.debug(pip_output)\n if \"Successfully installed\" in str(pip_output):\n sdlog.info(\"{} for securedrop-admin upgraded\".format(desc))\n else:\n sdlog.info(\"{} for securedrop-admin are up-to-date\".format(desc))\n\n\ndef parse_argv(argv: List[str]) -> argparse.Namespace:\n parser = argparse.ArgumentParser()\n parser.add_argument('-v', action='store_true', default=False,\n help=\"Increase verbosity on output\")\n parser.add_argument('-t', action='store_true', default=False,\n help=\"Install additional test dependencies\")\n parser.set_defaults(func=envsetup)\n\n subparsers = parser.add_subparsers()\n\n envsetup_parser = subparsers.add_parser(\n 'envsetup',\n help='Set up the admin virtualenv.'\n )\n envsetup_parser.set_defaults(func=envsetup)\n\n checkenv_parser = subparsers.add_parser(\n 'checkenv',\n help='Check that the admin virtualenv is properly set up.'\n )\n checkenv_parser.set_defaults(func=checkenv)\n\n return parser.parse_args(argv)\n\n\nif __name__ == \"__main__\":\n args = parse_argv(sys.argv[1:])\n setup_logger(args.v)\n\n try:\n args.func(args)\n except Exception:\n sys.exit(1)\n else:\n sys.exit(0)\n", "path": "admin/bootstrap.py" } ]
diff --git a/admin/bootstrap.py b/admin/bootstrap.py
index 1506f9a943..598cf25fa1 100755
--- a/admin/bootstrap.py
+++ b/admin/bootstrap.py
@@ -138,7 +138,6 @@ def install_apt_dependencies(args: argparse.Namespace) -> None:
                    python3-virtualenv \
                    python3-yaml \
                    python3-pip \
-                   ccontrol \
                    virtualenv \
                    libffi-dev \
                    libssl-dev \
diff --git a/install_files/ansible-base/roles/tails-config/tasks/configure_network_hook.yml b/install_files/ansible-base/roles/tails-config/tasks/configure_network_hook.yml
index 870e5133cf..85031bcf66 100644
--- a/install_files/ansible-base/roles/tails-config/tasks/configure_network_hook.yml
+++ b/install_files/ansible-base/roles/tails-config/tasks/configure_network_hook.yml
@@ -22,4 +22,4 @@
 - name: Run SecureDrop network hook
   # Writes files to /etc, so elevated privileges are required.
   become: yes
-  command: python "{{ tails_config_securedrop_dotfiles }}/securedrop_init.py"
+  command: python3 "{{ tails_config_securedrop_dotfiles }}/securedrop_init.py"
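This patch drops the `ccontrol` apt package (presumably unavailable on the Bullseye-based Tails 5.0) and invokes the network hook with `python3` explicitly. Related to the Tails 5.0 concern above, the existing `clean_up_tails3_venv` helper only detects a stale venv by hard-coding `lib/python3.5`; a version-agnostic check is one way the stale-venv problem could be generalised for future base upgrades. The following is a hypothetical illustration, not code from `bootstrap.py`:

```python
# Hypothetical helper: decide whether the persisted admin venv was built for the
# Python interpreter currently running (e.g. 3.7 on Tails 4.x vs 3.9 on Tails 5.x).
import os
import sys

def venv_matches_interpreter(virtualenv_dir: str) -> bool:
    expected = "python{}.{}".format(*sys.version_info[:2])  # e.g. "python3.9"
    lib_dir = os.path.join(virtualenv_dir, "lib")
    return os.path.isdir(lib_dir) and expected in os.listdir(lib_dir)

# Usage sketch: rebuild the venv when it targets an older libpython.
# if not venv_matches_interpreter(VENV_DIR):
#     shutil.rmtree(VENV_DIR, ignore_errors=True)
```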
napari__napari-1241
Some keyboard combinations crash napari ## 🐛 Bug On Linux, I can kill napari by pressing and releasing the Super key. I get the following error: ```pytb WARNING: Traceback (most recent call last): File "/home/jni/miniconda3/envs/all/lib/python3.8/site-packages/vispy/app/backends/_qt.py", line 505, in keyReleaseEvent self._keyEvent(self._vispy_canvas.events.key_release, ev) File "/home/jni/miniconda3/envs/all/lib/python3.8/site-packages/vispy/app/backends/_qt.py", line 551, in _keyEvent func(native=ev, key=key, text=text_type(ev.text()), modifiers=mod) File "/home/jni/miniconda3/envs/all/lib/python3.8/site-packages/vispy/util/event.py", line 455, in __call__ self._invoke_callback(cb, event) File "/home/jni/miniconda3/envs/all/lib/python3.8/site-packages/vispy/util/event.py", line 473, in _invoke_callback _handle_exception(self.ignore_callback_errors, File "/home/jni/miniconda3/envs/all/lib/python3.8/site-packages/vispy/util/event.py", line 471, in _invoke_callback cb(event) File "/home/jni/miniconda3/envs/all/lib/python3.8/site-packages/napari/_qt/qt_viewer.py", line 574, in on_key_release combo = components_to_key_combo(event.key.name, event.modifiers) AttributeError: 'NoneType' object has no attribute 'name' ``` This might be specific to i3.
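The traceback comes down to `event.key` being `None` when only a modifier (here the Super key) is released, so `event.key.name` raises `AttributeError`. Below is a minimal excerpt-style sketch of the guard that avoids the crash; the patched `on_key_release` later in this record adds the same check.

```python
def on_key_release(self, event):
    """Key-release handler: ignore events that carry no key, e.g. a lone Super/Meta release."""
    if event.key is None:
        return
    combo = components_to_key_combo(event.key.name, event.modifiers)
    self.viewer.release_key(combo)
```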
[ { "content": "from pathlib import Path\n\nfrom qtpy.QtCore import QCoreApplication, Qt, QSize\nfrom qtpy.QtWidgets import (\n QWidget,\n QVBoxLayout,\n QFileDialog,\n QSplitter,\n QMessageBox,\n)\nfrom qtpy.QtGui import QCursor, QGuiApplication\nfrom qtpy.QtCore import QThreadPool\nfrom ..utils.io import imsave\nfrom vispy.scene import SceneCanvas, PanZoomCamera, ArcballCamera\nfrom vispy.visuals.transforms import ChainTransform\n\nfrom .qt_dims import QtDims\nfrom .qt_layerlist import QtLayerList\nfrom ..resources import get_stylesheet\nfrom ..utils.theme import template\nfrom ..utils.interactions import (\n ReadOnlyWrapper,\n mouse_press_callbacks,\n mouse_move_callbacks,\n mouse_release_callbacks,\n)\nfrom ..utils.key_bindings import components_to_key_combo\n\nfrom .utils import QImg2array, square_pixmap\nfrom .qt_controls import QtControls\nfrom .qt_viewer_buttons import QtLayerButtons, QtViewerButtons\nfrom .qt_viewer_dock_widget import QtViewerDockWidget\nfrom .qt_about_key_bindings import QtAboutKeyBindings\nfrom .._vispy import create_vispy_visual\n\n\nclass QtViewer(QSplitter):\n \"\"\"Qt view for the napari Viewer model.\n\n Parameters\n ----------\n viewer : napari.components.ViewerModel\n Napari viewer containing the rendered scene, layers, and controls.\n\n Attributes\n ----------\n canvas : vispy.scene.SceneCanvas\n Canvas for rendering the current view.\n console : QtConsole\n iPython console terminal integrated into the napari GUI.\n controls : QtControls\n Qt view for GUI controls.\n dims : napari.qt_dims.QtDims\n Dimension sliders; Qt View for Dims model.\n dockConsole : QtViewerDockWidget\n QWidget wrapped in a QDockWidget with forwarded viewer events.\n aboutKeybindings : QtAboutKeybindings\n Key bindings for the 'About' Qt dialog.\n dockLayerControls : QtViewerDockWidget\n QWidget wrapped in a QDockWidget with forwarded viewer events.\n dockLayerList : QtViewerDockWidget\n QWidget wrapped in a QDockWidget with forwarded viewer events.\n layerButtons : QtLayerButtons\n Button controls for napari layers.\n layers : QtLayerList\n Qt view for LayerList controls.\n layer_to_visual : dict\n Dictionary mapping napari layers with their corresponding vispy_layers.\n pool : qtpy.QtCore.QThreadPool\n Pool of worker threads.\n view : vispy scene widget\n View displayed by vispy canvas. 
Adds a vispy ViewBox as a child widget.\n viewer : napari.components.ViewerModel\n Napari viewer containing the rendered scene, layers, and controls.\n viewerButtons : QtViewerButtons\n Button controls for the napari viewer.\n \"\"\"\n\n raw_stylesheet = get_stylesheet()\n\n def __init__(self, viewer):\n super().__init__()\n self.setAttribute(Qt.WA_DeleteOnClose)\n self.pool = QThreadPool()\n\n QCoreApplication.setAttribute(\n Qt.AA_UseStyleSheetPropagationInWidgetStyles, True\n )\n\n self.viewer = viewer\n self.dims = QtDims(self.viewer.dims)\n self.controls = QtControls(self.viewer)\n self.layers = QtLayerList(self.viewer.layers)\n self.layerButtons = QtLayerButtons(self.viewer)\n self.viewerButtons = QtViewerButtons(self.viewer)\n self._console = None\n\n layerList = QWidget()\n layerList.setObjectName('layerList')\n layerListLayout = QVBoxLayout()\n layerListLayout.addWidget(self.layerButtons)\n layerListLayout.addWidget(self.layers)\n layerListLayout.addWidget(self.viewerButtons)\n layerListLayout.setContentsMargins(8, 4, 8, 6)\n layerList.setLayout(layerListLayout)\n self.dockLayerList = QtViewerDockWidget(\n self,\n layerList,\n name='layer list',\n area='left',\n allowed_areas=['left', 'right'],\n )\n self.dockLayerControls = QtViewerDockWidget(\n self,\n self.controls,\n name='layer controls',\n area='left',\n allowed_areas=['left', 'right'],\n )\n self.dockConsole = QtViewerDockWidget(\n self,\n QWidget(),\n name='console',\n area='bottom',\n allowed_areas=['top', 'bottom'],\n shortcut='Ctrl+Shift+C',\n )\n self.dockConsole.setVisible(False)\n # because the console is loaded lazily in the @getter, this line just\n # gets (or creates) the console when the dock console is made visible.\n self.dockConsole.visibilityChanged.connect(\n lambda visible: self.console if visible else None\n )\n self.dockLayerControls.visibilityChanged.connect(self._constrain_width)\n self.dockLayerList.setMaximumWidth(258)\n self.dockLayerList.setMinimumWidth(258)\n\n # This dictionary holds the corresponding vispy visual for each layer\n self.layer_to_visual = {}\n self.viewerButtons.consoleButton.clicked.connect(\n self.toggle_console_visibility\n )\n\n self.canvas = SceneCanvas(keys=None, vsync=True, parent=self)\n self.canvas.events.ignore_callback_errors = False\n self.canvas.events.draw.connect(self.dims.enable_play)\n self.canvas.native.setMinimumSize(QSize(200, 200))\n self.canvas.context.set_depth_func('lequal')\n\n self.canvas.connect(self.on_mouse_move)\n self.canvas.connect(self.on_mouse_press)\n self.canvas.connect(self.on_mouse_release)\n self.canvas.connect(self.on_key_press)\n self.canvas.connect(self.on_key_release)\n\n self.view = self.canvas.central_widget.add_view()\n self._update_camera()\n\n main_widget = QWidget()\n main_layout = QVBoxLayout()\n main_layout.setContentsMargins(10, 22, 10, 2)\n main_layout.addWidget(self.canvas.native)\n main_layout.addWidget(self.dims)\n main_layout.setSpacing(10)\n main_widget.setLayout(main_layout)\n\n self.setOrientation(Qt.Vertical)\n self.addWidget(main_widget)\n\n self._last_visited_dir = str(Path.home())\n\n self._cursors = {\n 'cross': Qt.CrossCursor,\n 'forbidden': Qt.ForbiddenCursor,\n 'pointing': Qt.PointingHandCursor,\n 'standard': QCursor(),\n }\n\n self._update_palette()\n\n self.viewer.events.interactive.connect(self._on_interactive)\n self.viewer.events.cursor.connect(self._on_cursor)\n self.viewer.events.reset_view.connect(self._on_reset_view)\n self.viewer.events.palette.connect(self._update_palette)\n 
self.viewer.layers.events.reordered.connect(self._reorder_layers)\n self.viewer.layers.events.added.connect(self._add_layer)\n self.viewer.layers.events.removed.connect(self._remove_layer)\n self.viewer.dims.events.camera.connect(\n lambda event: self._update_camera()\n )\n # stop any animations whenever the layers change\n self.viewer.events.layers_change.connect(lambda x: self.dims.stop())\n\n self.setAcceptDrops(True)\n\n @property\n def console(self):\n \"\"\"QtConsole: iPython console terminal integrated into the napari GUI.\n \"\"\"\n if self._console is None:\n from .qt_console import QtConsole\n\n self.console = QtConsole({'viewer': self.viewer})\n return self._console\n\n @console.setter\n def console(self, console):\n self._console = console\n self.dockConsole.widget = console\n self._update_palette()\n\n def _constrain_width(self, event):\n \"\"\"Allow the layer controls to be wider, only if floated.\n\n Parameters\n ----------\n event : qtpy.QtCore.QEvent\n Event from the Qt context.\n \"\"\"\n if self.dockLayerControls.isFloating():\n self.controls.setMaximumWidth(700)\n else:\n self.controls.setMaximumWidth(220)\n\n def _add_layer(self, event):\n \"\"\"When a layer is added, set its parent and order.\n\n Parameters\n ----------\n event : qtpy.QtCore.QEvent\n Event from the Qt context.\n \"\"\"\n layers = event.source\n layer = event.item\n vispy_layer = create_vispy_visual(layer)\n vispy_layer.node.parent = self.view.scene\n vispy_layer.order = len(layers)\n self.canvas.connect(vispy_layer.on_draw)\n self.layer_to_visual[layer] = vispy_layer\n\n def _remove_layer(self, event):\n \"\"\"When a layer is removed, remove its parent.\n\n Parameters\n ----------\n event : qtpy.QtCore.QEvent\n Event from the Qt context.\n \"\"\"\n layer = event.item\n vispy_layer = self.layer_to_visual[layer]\n self.canvas.events.draw.disconnect(vispy_layer.on_draw)\n vispy_layer.node.transforms = ChainTransform()\n vispy_layer.node.parent = None\n del vispy_layer\n\n def _reorder_layers(self, event):\n \"\"\"When the list is reordered, propagate changes to draw order.\n\n Parameters\n ----------\n event : qtpy.QtCore.QEvent\n Event from the Qt context.\n \"\"\"\n for i, layer in enumerate(self.viewer.layers):\n vispy_layer = self.layer_to_visual[layer]\n vispy_layer.order = i\n self.canvas._draw_order.clear()\n self.canvas.update()\n\n def _update_camera(self):\n \"\"\"Update the viewer camera.\"\"\"\n if self.viewer.dims.ndisplay == 3:\n # Set a 3D camera\n if not isinstance(self.view.camera, ArcballCamera):\n self.view.camera = ArcballCamera(name=\"ArcballCamera\", fov=0)\n # flip y-axis to have correct alignment\n # self.view.camera.flip = (0, 1, 0)\n\n self.view.camera.viewbox_key_event = viewbox_key_event\n self.viewer.reset_view()\n else:\n # Set 2D camera\n if not isinstance(self.view.camera, PanZoomCamera):\n self.view.camera = PanZoomCamera(\n aspect=1, name=\"PanZoomCamera\"\n )\n # flip y-axis to have correct alignment\n self.view.camera.flip = (0, 1, 0)\n\n self.view.camera.viewbox_key_event = viewbox_key_event\n self.viewer.reset_view()\n\n def _save_layers_dialog(self, selected=False):\n \"\"\"Save layers (all or selected) to disk, using ``LayerList.save()``.\n\n Parameters\n ----------\n selected : bool\n If True, only layers that are selected in the viewer will be saved.\n By default, all layers are saved.\n \"\"\"\n msg = ''\n if not len(self.viewer.layers):\n msg = \"There are no layers in the viewer to save\"\n elif selected and not len(self.viewer.layers.selected):\n msg = (\n 
'Please select one or more layers to save,'\n '\\nor use \"Save all layers...\"'\n )\n if msg:\n QMessageBox.warning(self, \"Nothing to save\", msg, QMessageBox.Ok)\n return\n\n filename, _ = QFileDialog.getSaveFileName(\n parent=self,\n caption=f'Save {\"selected\" if selected else \"all\"} layers',\n directory=self._last_visited_dir, # home dir by default\n )\n if filename:\n self.viewer.layers.save(filename, selected=selected)\n\n def screenshot(self, path=None):\n \"\"\"Take currently displayed screen and convert to an image array.\n\n Parmeters\n ---------\n path : str\n Filename for saving screenshot image.\n\n Returns\n -------\n image : array\n Numpy array of type ubyte and shape (h, w, 4). Index [0, 0] is the\n upper-left corner of the rendered region.\n \"\"\"\n img = self.canvas.native.grabFramebuffer()\n if path is not None:\n imsave(path, QImg2array(img)) # scikit-image imsave method\n return QImg2array(img)\n\n def _screenshot_dialog(self):\n \"\"\"Save screenshot of current display, default .png\"\"\"\n filename, _ = QFileDialog.getSaveFileName(\n parent=self,\n caption='Save screenshot',\n directory=self._last_visited_dir, # home dir by default\n filter=\"Image files (*.png *.bmp *.gif *.tif *.tiff)\", # first one used by default\n # jpg and jpeg not included as they don't support an alpha channel\n )\n if (filename != '') and (filename is not None):\n # double check that an appropriate extension has been added as the\n # filter option does not always add an extension on linux and windows\n # see https://bugreports.qt.io/browse/QTBUG-27186\n image_extensions = ('.bmp', '.gif', '.png', '.tif', '.tiff')\n if not filename.endswith(image_extensions):\n filename = filename + '.png'\n self.screenshot(path=filename)\n\n def _open_files_dialog(self):\n \"\"\"Add files from the menubar.\"\"\"\n filenames, _ = QFileDialog.getOpenFileNames(\n parent=self,\n caption='Select file(s)...',\n directory=self._last_visited_dir, # home dir by default\n )\n if (filenames != []) and (filenames is not None):\n self.viewer.open(filenames)\n\n def _open_files_dialog_as_stack_dialog(self):\n \"\"\"Add files as a stack, from the menubar.\"\"\"\n filenames, _ = QFileDialog.getOpenFileNames(\n parent=self,\n caption='Select files...',\n directory=self._last_visited_dir, # home dir by default\n )\n if (filenames != []) and (filenames is not None):\n self.viewer.open(filenames, stack=True)\n\n def _open_folder_dialog(self):\n \"\"\"Add a folder of files from the menubar.\"\"\"\n folder = QFileDialog.getExistingDirectory(\n parent=self,\n caption='Select folder...',\n directory=self._last_visited_dir, # home dir by default\n )\n if folder not in {'', None}:\n self.viewer.open([folder])\n\n def _on_interactive(self, event):\n \"\"\"Link interactive attributes of view and viewer.\n\n Parameters\n ----------\n event : qtpy.QtCore.QEvent\n Event from the Qt context.\n \"\"\"\n self.view.interactive = self.viewer.interactive\n\n def _on_cursor(self, event):\n \"\"\"Set the appearance of the mouse cursor.\n\n Parameters\n ----------\n event : qtpy.QtCore.QEvent\n Event from the Qt context.\n \"\"\"\n cursor = self.viewer.cursor\n if cursor == 'square':\n size = self.viewer.cursor_size\n # make sure the square fits within the current canvas\n if size < 8 or size > (\n min(*self.viewer.window.qt_viewer.canvas.size) - 4\n ):\n q_cursor = self._cursors['cross']\n else:\n q_cursor = QCursor(square_pixmap(size))\n else:\n q_cursor = self._cursors[cursor]\n self.canvas.native.setCursor(q_cursor)\n\n def 
_on_reset_view(self, event):\n \"\"\"Reset view of the rendered scene.\n\n Parameters\n ----------\n event : qtpy.QtCore.QEvent\n Event from the Qt context.\n \"\"\"\n if isinstance(self.view.camera, ArcballCamera):\n quat = self.view.camera._quaternion.create_from_axis_angle(\n *event.quaternion\n )\n self.view.camera._quaternion = quat\n self.view.camera.center = event.center\n self.view.camera.scale_factor = event.scale_factor\n else:\n # Assumes default camera has the same properties as PanZoomCamera\n self.view.camera.rect = event.rect\n\n def _update_palette(self, event=None):\n \"\"\"Update the napari GUI theme.\"\"\"\n # template and apply the primary stylesheet\n themed_stylesheet = template(\n self.raw_stylesheet, **self.viewer.palette\n )\n if self._console is not None:\n self.console._update_palette(\n self.viewer.palette, themed_stylesheet\n )\n self.setStyleSheet(themed_stylesheet)\n self.canvas.bgcolor = self.viewer.palette['canvas']\n\n def toggle_console_visibility(self, event=None):\n \"\"\"Toggle console visible and not visible.\n\n Imports the console the first time it is requested.\n \"\"\"\n # force instantiation of console if not already instantiated\n _ = self.console\n\n viz = not self.dockConsole.isVisible()\n # modulate visibility at the dock widget level as console is docakable\n self.dockConsole.setVisible(viz)\n if self.dockConsole.isFloating():\n self.dockConsole.setFloating(True)\n\n self.viewerButtons.consoleButton.setProperty(\n 'expanded', self.dockConsole.isVisible()\n )\n self.viewerButtons.consoleButton.style().unpolish(\n self.viewerButtons.consoleButton\n )\n self.viewerButtons.consoleButton.style().polish(\n self.viewerButtons.consoleButton\n )\n\n def show_key_bindings_dialog(self, event=None):\n dialog = QtAboutKeyBindings(self.viewer, parent=self)\n dialog.show()\n\n def on_mouse_press(self, event):\n \"\"\"Called whenever mouse pressed in canvas.\n\n Parameters\n ----------\n event : qtpy.QtCore.QEvent\n Event from the Qt context.\n \"\"\"\n if event.pos is None:\n return\n\n event = ReadOnlyWrapper(event)\n mouse_press_callbacks(self.viewer, event)\n\n layer = self.viewer.active_layer\n if layer is not None:\n # update cursor position in visual and layer\n visual = self.layer_to_visual[layer]\n visual._position = list(event.pos)\n layer.position = visual._transform_position(visual._position)\n mouse_press_callbacks(layer, event)\n\n def on_mouse_move(self, event):\n \"\"\"Called whenever mouse moves over canvas.\n\n Parameters\n ----------\n event : qtpy.QtCore.QEvent\n Event from the Qt context.\n \"\"\"\n if event.pos is None:\n return\n\n mouse_move_callbacks(self.viewer, event)\n\n layer = self.viewer.active_layer\n if layer is not None:\n # update cursor position in visual and layer\n visual = self.layer_to_visual[layer]\n visual._position = list(event.pos)\n layer.position = visual._transform_position(visual._position)\n mouse_move_callbacks(layer, event)\n\n def on_mouse_release(self, event):\n \"\"\"Called whenever mouse released in canvas.\n\n Parameters\n ----------\n event : qtpy.QtCore.QEvent\n Event from the Qt context.\n \"\"\"\n if event.pos is None:\n return\n\n mouse_release_callbacks(self.viewer, event)\n\n layer = self.viewer.active_layer\n if layer is not None:\n # update cursor position in visual and layer\n visual = self.layer_to_visual[layer]\n visual._position = list(event.pos)\n layer.position = visual._transform_position(visual._position)\n mouse_release_callbacks(layer, event)\n\n def on_key_press(self, event):\n 
\"\"\"Called whenever key pressed in canvas.\n\n Parameters\n ----------\n event : qtpy.QtCore.QEvent\n Event from the Qt context.\n \"\"\"\n if (\n event.native is not None\n and event.native.isAutoRepeat()\n and event.key.name not in ['Up', 'Down', 'Left', 'Right']\n ) or event.key is None:\n # pass if no key is present or if key is held down, unless the\n # key being held down is one of the navigation keys\n # this helps for scrolling, etc.\n return\n\n combo = components_to_key_combo(event.key.name, event.modifiers)\n self.viewer.press_key(combo)\n\n def on_key_release(self, event):\n \"\"\"Called whenever key released in canvas.\n\n Parameters\n ----------\n event : qtpy.QtCore.QEvent\n Event from the Qt context.\n \"\"\"\n combo = components_to_key_combo(event.key.name, event.modifiers)\n self.viewer.release_key(combo)\n\n def keyPressEvent(self, event):\n \"\"\"Called whenever a key is pressed.\n\n Parameters\n ----------\n event : qtpy.QtCore.QEvent\n Event from the Qt context.\n \"\"\"\n self.canvas._backend._keyEvent(self.canvas.events.key_press, event)\n event.accept()\n\n def keyReleaseEvent(self, event):\n \"\"\"Called whenever a key is released.\n\n Parameters\n ----------\n event : qtpy.QtCore.QEvent\n Event from the Qt context.\n \"\"\"\n self.canvas._backend._keyEvent(self.canvas.events.key_release, event)\n event.accept()\n\n def dragEnterEvent(self, event):\n \"\"\"Ignore event if not dragging & dropping a file or URL to open.\n\n Using event.ignore() here allows the event to pass through the\n parent widget to its child widget, otherwise the parent widget\n would catch the event and not pass it on to the child widget.\n\n Parameters\n ----------\n event : qtpy.QtCore.QEvent\n Event from the Qt context.\n \"\"\"\n if event.mimeData().hasUrls():\n event.accept()\n else:\n event.ignore()\n\n def dropEvent(self, event):\n \"\"\"Add local files and web URLS with drag and drop.\n\n Parameters\n ----------\n event : qtpy.QtCore.QEvent\n Event from the Qt context.\n \"\"\"\n shift_down = QGuiApplication.keyboardModifiers() & Qt.ShiftModifier\n filenames = []\n for url in event.mimeData().urls():\n if url.isLocalFile():\n filenames.append(url.toLocalFile())\n else:\n filenames.append(url.toString())\n self.viewer.open(filenames, stack=bool(shift_down))\n\n def closeEvent(self, event):\n \"\"\"Clear pool of worker threads and close.\n\n Parameters\n ----------\n event : qtpy.QtCore.QEvent\n Event from the Qt context.\n \"\"\"\n # if the viewer.QtDims object is playing an axis, we need to terminate\n # the AnimationThread before close, otherwise it will cauyse a segFault\n # or Abort trap. (calling stop() when no animation is occuring is also\n # not a problem)\n self.dims.stop()\n self.canvas.native.deleteLater()\n if self._console is not None:\n self.console.close()\n self.dockConsole.deleteLater()\n if not self.pool.waitForDone(10000):\n raise TimeoutError(\"Timed out waiting for QtViewer.pool to finish\")\n event.accept()\n\n\ndef viewbox_key_event(event):\n \"\"\"ViewBox key event handler.\n\n Parameters\n ----------\n event : qtpy.QtCore.QEvent\n Event from the Qt context.\n \"\"\"\n return\n", "path": "napari/_qt/qt_viewer.py" } ]
[ { "content": "from pathlib import Path\n\nfrom qtpy.QtCore import QCoreApplication, Qt, QSize\nfrom qtpy.QtWidgets import (\n QWidget,\n QVBoxLayout,\n QFileDialog,\n QSplitter,\n QMessageBox,\n)\nfrom qtpy.QtGui import QCursor, QGuiApplication\nfrom qtpy.QtCore import QThreadPool\nfrom ..utils.io import imsave\nfrom vispy.scene import SceneCanvas, PanZoomCamera, ArcballCamera\nfrom vispy.visuals.transforms import ChainTransform\n\nfrom .qt_dims import QtDims\nfrom .qt_layerlist import QtLayerList\nfrom ..resources import get_stylesheet\nfrom ..utils.theme import template\nfrom ..utils.interactions import (\n ReadOnlyWrapper,\n mouse_press_callbacks,\n mouse_move_callbacks,\n mouse_release_callbacks,\n)\nfrom ..utils.key_bindings import components_to_key_combo\n\nfrom .utils import QImg2array, square_pixmap\nfrom .qt_controls import QtControls\nfrom .qt_viewer_buttons import QtLayerButtons, QtViewerButtons\nfrom .qt_viewer_dock_widget import QtViewerDockWidget\nfrom .qt_about_key_bindings import QtAboutKeyBindings\nfrom .._vispy import create_vispy_visual\n\n\nclass QtViewer(QSplitter):\n \"\"\"Qt view for the napari Viewer model.\n\n Parameters\n ----------\n viewer : napari.components.ViewerModel\n Napari viewer containing the rendered scene, layers, and controls.\n\n Attributes\n ----------\n canvas : vispy.scene.SceneCanvas\n Canvas for rendering the current view.\n console : QtConsole\n iPython console terminal integrated into the napari GUI.\n controls : QtControls\n Qt view for GUI controls.\n dims : napari.qt_dims.QtDims\n Dimension sliders; Qt View for Dims model.\n dockConsole : QtViewerDockWidget\n QWidget wrapped in a QDockWidget with forwarded viewer events.\n aboutKeybindings : QtAboutKeybindings\n Key bindings for the 'About' Qt dialog.\n dockLayerControls : QtViewerDockWidget\n QWidget wrapped in a QDockWidget with forwarded viewer events.\n dockLayerList : QtViewerDockWidget\n QWidget wrapped in a QDockWidget with forwarded viewer events.\n layerButtons : QtLayerButtons\n Button controls for napari layers.\n layers : QtLayerList\n Qt view for LayerList controls.\n layer_to_visual : dict\n Dictionary mapping napari layers with their corresponding vispy_layers.\n pool : qtpy.QtCore.QThreadPool\n Pool of worker threads.\n view : vispy scene widget\n View displayed by vispy canvas. 
Adds a vispy ViewBox as a child widget.\n viewer : napari.components.ViewerModel\n Napari viewer containing the rendered scene, layers, and controls.\n viewerButtons : QtViewerButtons\n Button controls for the napari viewer.\n \"\"\"\n\n raw_stylesheet = get_stylesheet()\n\n def __init__(self, viewer):\n super().__init__()\n self.setAttribute(Qt.WA_DeleteOnClose)\n self.pool = QThreadPool()\n\n QCoreApplication.setAttribute(\n Qt.AA_UseStyleSheetPropagationInWidgetStyles, True\n )\n\n self.viewer = viewer\n self.dims = QtDims(self.viewer.dims)\n self.controls = QtControls(self.viewer)\n self.layers = QtLayerList(self.viewer.layers)\n self.layerButtons = QtLayerButtons(self.viewer)\n self.viewerButtons = QtViewerButtons(self.viewer)\n self._console = None\n\n layerList = QWidget()\n layerList.setObjectName('layerList')\n layerListLayout = QVBoxLayout()\n layerListLayout.addWidget(self.layerButtons)\n layerListLayout.addWidget(self.layers)\n layerListLayout.addWidget(self.viewerButtons)\n layerListLayout.setContentsMargins(8, 4, 8, 6)\n layerList.setLayout(layerListLayout)\n self.dockLayerList = QtViewerDockWidget(\n self,\n layerList,\n name='layer list',\n area='left',\n allowed_areas=['left', 'right'],\n )\n self.dockLayerControls = QtViewerDockWidget(\n self,\n self.controls,\n name='layer controls',\n area='left',\n allowed_areas=['left', 'right'],\n )\n self.dockConsole = QtViewerDockWidget(\n self,\n QWidget(),\n name='console',\n area='bottom',\n allowed_areas=['top', 'bottom'],\n shortcut='Ctrl+Shift+C',\n )\n self.dockConsole.setVisible(False)\n # because the console is loaded lazily in the @getter, this line just\n # gets (or creates) the console when the dock console is made visible.\n self.dockConsole.visibilityChanged.connect(\n lambda visible: self.console if visible else None\n )\n self.dockLayerControls.visibilityChanged.connect(self._constrain_width)\n self.dockLayerList.setMaximumWidth(258)\n self.dockLayerList.setMinimumWidth(258)\n\n # This dictionary holds the corresponding vispy visual for each layer\n self.layer_to_visual = {}\n self.viewerButtons.consoleButton.clicked.connect(\n self.toggle_console_visibility\n )\n\n self.canvas = SceneCanvas(keys=None, vsync=True, parent=self)\n self.canvas.events.ignore_callback_errors = False\n self.canvas.events.draw.connect(self.dims.enable_play)\n self.canvas.native.setMinimumSize(QSize(200, 200))\n self.canvas.context.set_depth_func('lequal')\n\n self.canvas.connect(self.on_mouse_move)\n self.canvas.connect(self.on_mouse_press)\n self.canvas.connect(self.on_mouse_release)\n self.canvas.connect(self.on_key_press)\n self.canvas.connect(self.on_key_release)\n\n self.view = self.canvas.central_widget.add_view()\n self._update_camera()\n\n main_widget = QWidget()\n main_layout = QVBoxLayout()\n main_layout.setContentsMargins(10, 22, 10, 2)\n main_layout.addWidget(self.canvas.native)\n main_layout.addWidget(self.dims)\n main_layout.setSpacing(10)\n main_widget.setLayout(main_layout)\n\n self.setOrientation(Qt.Vertical)\n self.addWidget(main_widget)\n\n self._last_visited_dir = str(Path.home())\n\n self._cursors = {\n 'cross': Qt.CrossCursor,\n 'forbidden': Qt.ForbiddenCursor,\n 'pointing': Qt.PointingHandCursor,\n 'standard': QCursor(),\n }\n\n self._update_palette()\n\n self.viewer.events.interactive.connect(self._on_interactive)\n self.viewer.events.cursor.connect(self._on_cursor)\n self.viewer.events.reset_view.connect(self._on_reset_view)\n self.viewer.events.palette.connect(self._update_palette)\n 
self.viewer.layers.events.reordered.connect(self._reorder_layers)\n self.viewer.layers.events.added.connect(self._add_layer)\n self.viewer.layers.events.removed.connect(self._remove_layer)\n self.viewer.dims.events.camera.connect(\n lambda event: self._update_camera()\n )\n # stop any animations whenever the layers change\n self.viewer.events.layers_change.connect(lambda x: self.dims.stop())\n\n self.setAcceptDrops(True)\n\n @property\n def console(self):\n \"\"\"QtConsole: iPython console terminal integrated into the napari GUI.\n \"\"\"\n if self._console is None:\n from .qt_console import QtConsole\n\n self.console = QtConsole({'viewer': self.viewer})\n return self._console\n\n @console.setter\n def console(self, console):\n self._console = console\n self.dockConsole.widget = console\n self._update_palette()\n\n def _constrain_width(self, event):\n \"\"\"Allow the layer controls to be wider, only if floated.\n\n Parameters\n ----------\n event : qtpy.QtCore.QEvent\n Event from the Qt context.\n \"\"\"\n if self.dockLayerControls.isFloating():\n self.controls.setMaximumWidth(700)\n else:\n self.controls.setMaximumWidth(220)\n\n def _add_layer(self, event):\n \"\"\"When a layer is added, set its parent and order.\n\n Parameters\n ----------\n event : qtpy.QtCore.QEvent\n Event from the Qt context.\n \"\"\"\n layers = event.source\n layer = event.item\n vispy_layer = create_vispy_visual(layer)\n vispy_layer.node.parent = self.view.scene\n vispy_layer.order = len(layers)\n self.canvas.connect(vispy_layer.on_draw)\n self.layer_to_visual[layer] = vispy_layer\n\n def _remove_layer(self, event):\n \"\"\"When a layer is removed, remove its parent.\n\n Parameters\n ----------\n event : qtpy.QtCore.QEvent\n Event from the Qt context.\n \"\"\"\n layer = event.item\n vispy_layer = self.layer_to_visual[layer]\n self.canvas.events.draw.disconnect(vispy_layer.on_draw)\n vispy_layer.node.transforms = ChainTransform()\n vispy_layer.node.parent = None\n del vispy_layer\n\n def _reorder_layers(self, event):\n \"\"\"When the list is reordered, propagate changes to draw order.\n\n Parameters\n ----------\n event : qtpy.QtCore.QEvent\n Event from the Qt context.\n \"\"\"\n for i, layer in enumerate(self.viewer.layers):\n vispy_layer = self.layer_to_visual[layer]\n vispy_layer.order = i\n self.canvas._draw_order.clear()\n self.canvas.update()\n\n def _update_camera(self):\n \"\"\"Update the viewer camera.\"\"\"\n if self.viewer.dims.ndisplay == 3:\n # Set a 3D camera\n if not isinstance(self.view.camera, ArcballCamera):\n self.view.camera = ArcballCamera(name=\"ArcballCamera\", fov=0)\n # flip y-axis to have correct alignment\n # self.view.camera.flip = (0, 1, 0)\n\n self.view.camera.viewbox_key_event = viewbox_key_event\n self.viewer.reset_view()\n else:\n # Set 2D camera\n if not isinstance(self.view.camera, PanZoomCamera):\n self.view.camera = PanZoomCamera(\n aspect=1, name=\"PanZoomCamera\"\n )\n # flip y-axis to have correct alignment\n self.view.camera.flip = (0, 1, 0)\n\n self.view.camera.viewbox_key_event = viewbox_key_event\n self.viewer.reset_view()\n\n def _save_layers_dialog(self, selected=False):\n \"\"\"Save layers (all or selected) to disk, using ``LayerList.save()``.\n\n Parameters\n ----------\n selected : bool\n If True, only layers that are selected in the viewer will be saved.\n By default, all layers are saved.\n \"\"\"\n msg = ''\n if not len(self.viewer.layers):\n msg = \"There are no layers in the viewer to save\"\n elif selected and not len(self.viewer.layers.selected):\n msg = (\n 
'Please select one or more layers to save,'\n '\\nor use \"Save all layers...\"'\n )\n if msg:\n QMessageBox.warning(self, \"Nothing to save\", msg, QMessageBox.Ok)\n return\n\n filename, _ = QFileDialog.getSaveFileName(\n parent=self,\n caption=f'Save {\"selected\" if selected else \"all\"} layers',\n directory=self._last_visited_dir, # home dir by default\n )\n if filename:\n self.viewer.layers.save(filename, selected=selected)\n\n def screenshot(self, path=None):\n \"\"\"Take currently displayed screen and convert to an image array.\n\n Parmeters\n ---------\n path : str\n Filename for saving screenshot image.\n\n Returns\n -------\n image : array\n Numpy array of type ubyte and shape (h, w, 4). Index [0, 0] is the\n upper-left corner of the rendered region.\n \"\"\"\n img = self.canvas.native.grabFramebuffer()\n if path is not None:\n imsave(path, QImg2array(img)) # scikit-image imsave method\n return QImg2array(img)\n\n def _screenshot_dialog(self):\n \"\"\"Save screenshot of current display, default .png\"\"\"\n filename, _ = QFileDialog.getSaveFileName(\n parent=self,\n caption='Save screenshot',\n directory=self._last_visited_dir, # home dir by default\n filter=\"Image files (*.png *.bmp *.gif *.tif *.tiff)\", # first one used by default\n # jpg and jpeg not included as they don't support an alpha channel\n )\n if (filename != '') and (filename is not None):\n # double check that an appropriate extension has been added as the\n # filter option does not always add an extension on linux and windows\n # see https://bugreports.qt.io/browse/QTBUG-27186\n image_extensions = ('.bmp', '.gif', '.png', '.tif', '.tiff')\n if not filename.endswith(image_extensions):\n filename = filename + '.png'\n self.screenshot(path=filename)\n\n def _open_files_dialog(self):\n \"\"\"Add files from the menubar.\"\"\"\n filenames, _ = QFileDialog.getOpenFileNames(\n parent=self,\n caption='Select file(s)...',\n directory=self._last_visited_dir, # home dir by default\n )\n if (filenames != []) and (filenames is not None):\n self.viewer.open(filenames)\n\n def _open_files_dialog_as_stack_dialog(self):\n \"\"\"Add files as a stack, from the menubar.\"\"\"\n filenames, _ = QFileDialog.getOpenFileNames(\n parent=self,\n caption='Select files...',\n directory=self._last_visited_dir, # home dir by default\n )\n if (filenames != []) and (filenames is not None):\n self.viewer.open(filenames, stack=True)\n\n def _open_folder_dialog(self):\n \"\"\"Add a folder of files from the menubar.\"\"\"\n folder = QFileDialog.getExistingDirectory(\n parent=self,\n caption='Select folder...',\n directory=self._last_visited_dir, # home dir by default\n )\n if folder not in {'', None}:\n self.viewer.open([folder])\n\n def _on_interactive(self, event):\n \"\"\"Link interactive attributes of view and viewer.\n\n Parameters\n ----------\n event : qtpy.QtCore.QEvent\n Event from the Qt context.\n \"\"\"\n self.view.interactive = self.viewer.interactive\n\n def _on_cursor(self, event):\n \"\"\"Set the appearance of the mouse cursor.\n\n Parameters\n ----------\n event : qtpy.QtCore.QEvent\n Event from the Qt context.\n \"\"\"\n cursor = self.viewer.cursor\n if cursor == 'square':\n size = self.viewer.cursor_size\n # make sure the square fits within the current canvas\n if size < 8 or size > (\n min(*self.viewer.window.qt_viewer.canvas.size) - 4\n ):\n q_cursor = self._cursors['cross']\n else:\n q_cursor = QCursor(square_pixmap(size))\n else:\n q_cursor = self._cursors[cursor]\n self.canvas.native.setCursor(q_cursor)\n\n def 
_on_reset_view(self, event):\n \"\"\"Reset view of the rendered scene.\n\n Parameters\n ----------\n event : qtpy.QtCore.QEvent\n Event from the Qt context.\n \"\"\"\n if isinstance(self.view.camera, ArcballCamera):\n quat = self.view.camera._quaternion.create_from_axis_angle(\n *event.quaternion\n )\n self.view.camera._quaternion = quat\n self.view.camera.center = event.center\n self.view.camera.scale_factor = event.scale_factor\n else:\n # Assumes default camera has the same properties as PanZoomCamera\n self.view.camera.rect = event.rect\n\n def _update_palette(self, event=None):\n \"\"\"Update the napari GUI theme.\"\"\"\n # template and apply the primary stylesheet\n themed_stylesheet = template(\n self.raw_stylesheet, **self.viewer.palette\n )\n if self._console is not None:\n self.console._update_palette(\n self.viewer.palette, themed_stylesheet\n )\n self.setStyleSheet(themed_stylesheet)\n self.canvas.bgcolor = self.viewer.palette['canvas']\n\n def toggle_console_visibility(self, event=None):\n \"\"\"Toggle console visible and not visible.\n\n Imports the console the first time it is requested.\n \"\"\"\n # force instantiation of console if not already instantiated\n _ = self.console\n\n viz = not self.dockConsole.isVisible()\n # modulate visibility at the dock widget level as console is docakable\n self.dockConsole.setVisible(viz)\n if self.dockConsole.isFloating():\n self.dockConsole.setFloating(True)\n\n self.viewerButtons.consoleButton.setProperty(\n 'expanded', self.dockConsole.isVisible()\n )\n self.viewerButtons.consoleButton.style().unpolish(\n self.viewerButtons.consoleButton\n )\n self.viewerButtons.consoleButton.style().polish(\n self.viewerButtons.consoleButton\n )\n\n def show_key_bindings_dialog(self, event=None):\n dialog = QtAboutKeyBindings(self.viewer, parent=self)\n dialog.show()\n\n def on_mouse_press(self, event):\n \"\"\"Called whenever mouse pressed in canvas.\n\n Parameters\n ----------\n event : qtpy.QtCore.QEvent\n Event from the Qt context.\n \"\"\"\n if event.pos is None:\n return\n\n event = ReadOnlyWrapper(event)\n mouse_press_callbacks(self.viewer, event)\n\n layer = self.viewer.active_layer\n if layer is not None:\n # update cursor position in visual and layer\n visual = self.layer_to_visual[layer]\n visual._position = list(event.pos)\n layer.position = visual._transform_position(visual._position)\n mouse_press_callbacks(layer, event)\n\n def on_mouse_move(self, event):\n \"\"\"Called whenever mouse moves over canvas.\n\n Parameters\n ----------\n event : qtpy.QtCore.QEvent\n Event from the Qt context.\n \"\"\"\n if event.pos is None:\n return\n\n mouse_move_callbacks(self.viewer, event)\n\n layer = self.viewer.active_layer\n if layer is not None:\n # update cursor position in visual and layer\n visual = self.layer_to_visual[layer]\n visual._position = list(event.pos)\n layer.position = visual._transform_position(visual._position)\n mouse_move_callbacks(layer, event)\n\n def on_mouse_release(self, event):\n \"\"\"Called whenever mouse released in canvas.\n\n Parameters\n ----------\n event : qtpy.QtCore.QEvent\n Event from the Qt context.\n \"\"\"\n if event.pos is None:\n return\n\n mouse_release_callbacks(self.viewer, event)\n\n layer = self.viewer.active_layer\n if layer is not None:\n # update cursor position in visual and layer\n visual = self.layer_to_visual[layer]\n visual._position = list(event.pos)\n layer.position = visual._transform_position(visual._position)\n mouse_release_callbacks(layer, event)\n\n def on_key_press(self, event):\n 
\"\"\"Called whenever key pressed in canvas.\n\n Parameters\n ----------\n event : qtpy.QtCore.QEvent\n Event from the Qt context.\n \"\"\"\n if (\n event.native is not None\n and event.native.isAutoRepeat()\n and event.key.name not in ['Up', 'Down', 'Left', 'Right']\n ) or event.key is None:\n # pass if no key is present or if key is held down, unless the\n # key being held down is one of the navigation keys\n # this helps for scrolling, etc.\n return\n\n combo = components_to_key_combo(event.key.name, event.modifiers)\n self.viewer.press_key(combo)\n\n def on_key_release(self, event):\n \"\"\"Called whenever key released in canvas.\n\n Parameters\n ----------\n event : qtpy.QtCore.QEvent\n Event from the Qt context.\n \"\"\"\n if event.key is None:\n return\n combo = components_to_key_combo(event.key.name, event.modifiers)\n self.viewer.release_key(combo)\n\n def keyPressEvent(self, event):\n \"\"\"Called whenever a key is pressed.\n\n Parameters\n ----------\n event : qtpy.QtCore.QEvent\n Event from the Qt context.\n \"\"\"\n self.canvas._backend._keyEvent(self.canvas.events.key_press, event)\n event.accept()\n\n def keyReleaseEvent(self, event):\n \"\"\"Called whenever a key is released.\n\n Parameters\n ----------\n event : qtpy.QtCore.QEvent\n Event from the Qt context.\n \"\"\"\n self.canvas._backend._keyEvent(self.canvas.events.key_release, event)\n event.accept()\n\n def dragEnterEvent(self, event):\n \"\"\"Ignore event if not dragging & dropping a file or URL to open.\n\n Using event.ignore() here allows the event to pass through the\n parent widget to its child widget, otherwise the parent widget\n would catch the event and not pass it on to the child widget.\n\n Parameters\n ----------\n event : qtpy.QtCore.QEvent\n Event from the Qt context.\n \"\"\"\n if event.mimeData().hasUrls():\n event.accept()\n else:\n event.ignore()\n\n def dropEvent(self, event):\n \"\"\"Add local files and web URLS with drag and drop.\n\n Parameters\n ----------\n event : qtpy.QtCore.QEvent\n Event from the Qt context.\n \"\"\"\n shift_down = QGuiApplication.keyboardModifiers() & Qt.ShiftModifier\n filenames = []\n for url in event.mimeData().urls():\n if url.isLocalFile():\n filenames.append(url.toLocalFile())\n else:\n filenames.append(url.toString())\n self.viewer.open(filenames, stack=bool(shift_down))\n\n def closeEvent(self, event):\n \"\"\"Clear pool of worker threads and close.\n\n Parameters\n ----------\n event : qtpy.QtCore.QEvent\n Event from the Qt context.\n \"\"\"\n # if the viewer.QtDims object is playing an axis, we need to terminate\n # the AnimationThread before close, otherwise it will cauyse a segFault\n # or Abort trap. (calling stop() when no animation is occuring is also\n # not a problem)\n self.dims.stop()\n self.canvas.native.deleteLater()\n if self._console is not None:\n self.console.close()\n self.dockConsole.deleteLater()\n if not self.pool.waitForDone(10000):\n raise TimeoutError(\"Timed out waiting for QtViewer.pool to finish\")\n event.accept()\n\n\ndef viewbox_key_event(event):\n \"\"\"ViewBox key event handler.\n\n Parameters\n ----------\n event : qtpy.QtCore.QEvent\n Event from the Qt context.\n \"\"\"\n return\n", "path": "napari/_qt/qt_viewer.py" } ]
diff --git a/napari/_qt/qt_viewer.py b/napari/_qt/qt_viewer.py index 66931fd3d07..ad4aad9ccb9 100644 --- a/napari/_qt/qt_viewer.py +++ b/napari/_qt/qt_viewer.py @@ -571,6 +571,8 @@ def on_key_release(self, event): event : qtpy.QtCore.QEvent Event from the Qt context. """ + if event.key is None: + return combo = components_to_key_combo(event.key.name, event.modifiers) self.viewer.release_key(combo)
avocado-framework__avocado-4869
TAP parser: warnings are being marked as errors

.avocado.hint:
```
[kinds]
tap = ./scripts/*/*.t

[tap]
uri = $testpath
kwargs = PERL5LIB=./lib,LIBVIRT_TCK_CONFIG=./conf/default.yml,LIBVIRT_TCK_AUTOCLEAN=1
```

test.t:
```bash
#!/bin/bash
echo "1..4
warning: foo
ok 1 - started persistent domain object
ok 2 - dynamic domain label type is svirt_tcg_t
ok 3 - dynamic image label type is svirt_image_t
ok 4 - Domain MCS c40,c302 == Image MCS c40,c302"
```
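For illustration only, here is a minimal sketch of how this surfaces at the parser level; it assumes an avocado checkout where `avocado.core.tapparser` (the module included below) is importable:

```python
from io import StringIO

from avocado.core.tapparser import TapParser

# The same TAP stream that test.t above emits.
tap_output = """\
1..4
warning: foo
ok 1 - started persistent domain object
ok 2 - dynamic domain label type is svirt_tcg_t
ok 3 - dynamic image label type is svirt_image_t
ok 4 - Domain MCS c40,c302 == Image MCS c40,c302
"""

# TapParser.parse() is a generator of Plan/Test/Error/... events.
for event in TapParser(StringIO(tap_output)).parse():
    print(event)
```

With the parser as it currently stands, the `warning: foo` line matches none of the recognized TAP patterns and yields `Error(message='unexpected input at line 2')` alongside the plan and the four passing tests, which is what the runner then reports as an error.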
[ { "content": "import enum\nimport re\nfrom collections import namedtuple\n\n\[email protected]\nclass TestResult(enum.Enum):\n PASS = 'PASS'\n SKIP = 'SKIP'\n FAIL = 'FAIL'\n XFAIL = 'XFAIL'\n XPASS = 'XPASS'\n\n\n# TapParser is based on Meson's TAP parser, which were licensed under the\n# MIT (X11) license and were contributed to both Meson and Avocado by the\n# same author (Paolo).\n\nclass TapParser:\n Plan = namedtuple('Plan', ['count', 'late', 'skipped', 'explanation'])\n Bailout = namedtuple('Bailout', ['message'])\n Test = namedtuple('Test', ['number', 'name', 'result', 'explanation'])\n Error = namedtuple('Error', ['message'])\n Version = namedtuple('Version', ['version'])\n\n _MAIN = 1\n _AFTER_TEST = 2\n _YAML = 3\n\n _RE_BAILOUT = re.compile(r'Bail out!\\s*(.*)')\n _RE_DIRECTIVE = re.compile(r'(?:\\s*\\#\\s*([Ss][Kk][Ii][Pp]\\S*|[Tt][Oo][Dd][Oo])\\b\\s*(.*))?')\n _RE_PLAN = re.compile(r'1\\.\\.([0-9]+)' + _RE_DIRECTIVE.pattern)\n _RE_TEST = re.compile(r'((?:not )?ok)\\s*(?:([0-9]+)\\s*)?([^#]*)' + _RE_DIRECTIVE.pattern)\n _RE_VERSION = re.compile(r'TAP version ([0-9]+)')\n _RE_YAML_START = re.compile(r'(\\s+)---.*')\n _RE_YAML_END = re.compile(r'\\s+\\.\\.\\.\\s*')\n\n def __init__(self, tap_io):\n self.tap_io = tap_io\n\n def parse_test(self, ok, num, name, directive, explanation):\n name = name.strip()\n explanation = explanation.strip() if explanation else None\n if directive is not None:\n directive = directive.upper()\n if directive == 'SKIP':\n if ok:\n yield self.Test(num, name, TestResult.SKIP, explanation)\n return\n elif directive == 'TODO':\n result = TestResult.XPASS if ok else TestResult.XFAIL\n yield self.Test(num, name, result, explanation)\n return\n else:\n yield self.Error('invalid directive \"%s\"' % (directive,))\n\n result = TestResult.PASS if ok else TestResult.FAIL\n yield self.Test(num, name, result, explanation)\n\n def parse(self):\n found_late_test = False\n bailed_out = False\n plan = None\n lineno = 0\n num_tests = 0\n yaml_lineno = 0\n yaml_indent = ''\n state = self._MAIN\n version = 12\n while True:\n lineno += 1\n try:\n line = next(self.tap_io).rstrip()\n except StopIteration:\n break\n\n # YAML blocks are only accepted after a test\n if state == self._AFTER_TEST:\n if version >= 13:\n m = self._RE_YAML_START.match(line)\n if m:\n state = self._YAML\n yaml_lineno = lineno\n yaml_indent = m.group(1)\n continue\n state = self._MAIN\n\n elif state == self._YAML:\n if self._RE_YAML_END.match(line):\n state = self._MAIN\n continue\n if line.startswith(yaml_indent):\n continue\n yield self.Error('YAML block not terminated (started on line %d)' % (yaml_lineno,))\n state = self._MAIN\n\n assert state == self._MAIN\n if line.startswith('#'):\n continue\n\n m = self._RE_TEST.match(line)\n if m:\n if plan and plan.late and not found_late_test:\n yield self.Error('unexpected test after late plan')\n found_late_test = True\n num_tests += 1\n num = num_tests if m.group(2) is None else int(m.group(2))\n if num != num_tests:\n yield self.Error('out of order test numbers')\n yield from self.parse_test(m.group(1) == 'ok', num,\n m.group(3), m.group(4), m.group(5))\n state = self._AFTER_TEST\n continue\n\n m = self._RE_PLAN.match(line)\n if m:\n if plan:\n yield self.Error('more than one plan found')\n else:\n count = int(m.group(1))\n skipped = (count == 0)\n if m.group(2):\n if m.group(2).upper().startswith('SKIP'):\n if count > 0:\n yield self.Error('invalid SKIP directive for plan')\n skipped = True\n else:\n yield self.Error('invalid directive for plan')\n 
plan = self.Plan(count=count, late=(num_tests > 0),\n skipped=skipped, explanation=m.group(3))\n yield plan\n continue\n\n m = self._RE_BAILOUT.match(line)\n if m:\n yield self.Bailout(m.group(1))\n bailed_out = True\n continue\n\n m = self._RE_VERSION.match(line)\n if m:\n # The TAP version is only accepted as the first line\n if lineno != 1:\n yield self.Error('version number must be on the first line')\n continue\n version = int(m.group(1))\n if version < 13:\n yield self.Error('version number should be at least 13')\n else:\n yield self.Version(version=version)\n continue\n\n if line == '':\n continue\n\n yield self.Error('unexpected input at line %d' % (lineno,))\n\n if state == self._YAML:\n yield self.Error('YAML block not terminated (started on line %d)' % (yaml_lineno,))\n\n if not bailed_out and plan and num_tests != plan.count:\n if num_tests < plan.count:\n yield self.Error('Too few tests run (expected %d, got %d)'\n % (plan.count, num_tests))\n else:\n yield self.Error('Too many tests run (expected %d, got %d)'\n % (plan.count, num_tests))\n", "path": "avocado/core/tapparser.py" } ]
[ { "content": "import enum\nimport re\nfrom collections import namedtuple\n\n\[email protected]\nclass TestResult(enum.Enum):\n PASS = 'PASS'\n SKIP = 'SKIP'\n FAIL = 'FAIL'\n XFAIL = 'XFAIL'\n XPASS = 'XPASS'\n\n\n# TapParser is based on Meson's TAP parser, which were licensed under the\n# MIT (X11) license and were contributed to both Meson and Avocado by the\n# same author (Paolo).\n\nclass TapParser:\n Plan = namedtuple('Plan', ['count', 'late', 'skipped', 'explanation'])\n Bailout = namedtuple('Bailout', ['message'])\n Test = namedtuple('Test', ['number', 'name', 'result', 'explanation'])\n Error = namedtuple('Error', ['message'])\n Version = namedtuple('Version', ['version'])\n\n _MAIN = 1\n _AFTER_TEST = 2\n _YAML = 3\n\n _RE_BAILOUT = re.compile(r'Bail out!\\s*(.*)')\n _RE_DIRECTIVE = re.compile(r'(?:\\s*\\#\\s*([Ss][Kk][Ii][Pp]\\S*|[Tt][Oo][Dd][Oo])\\b\\s*(.*))?')\n _RE_PLAN = re.compile(r'1\\.\\.([0-9]+)' + _RE_DIRECTIVE.pattern)\n _RE_TEST = re.compile(r'((?:not )?ok)\\s*(?:([0-9]+)\\s*)?([^#]*)' + _RE_DIRECTIVE.pattern)\n _RE_VERSION = re.compile(r'TAP version ([0-9]+)')\n _RE_YAML_START = re.compile(r'(\\s+)---.*')\n _RE_YAML_END = re.compile(r'\\s+\\.\\.\\.\\s*')\n\n def __init__(self, tap_io):\n self.tap_io = tap_io\n\n def parse_test(self, ok, num, name, directive, explanation):\n name = name.strip()\n explanation = explanation.strip() if explanation else None\n if directive is not None:\n directive = directive.upper()\n if directive == 'SKIP':\n if ok:\n yield self.Test(num, name, TestResult.SKIP, explanation)\n return\n elif directive == 'TODO':\n result = TestResult.XPASS if ok else TestResult.XFAIL\n yield self.Test(num, name, result, explanation)\n return\n else:\n yield self.Error('invalid directive \"%s\"' % (directive,))\n\n result = TestResult.PASS if ok else TestResult.FAIL\n yield self.Test(num, name, result, explanation)\n\n def parse(self):\n found_late_test = False\n bailed_out = False\n plan = None\n lineno = 0\n num_tests = 0\n yaml_lineno = 0\n yaml_indent = ''\n state = self._MAIN\n version = 12\n while True:\n lineno += 1\n try:\n line = next(self.tap_io).rstrip()\n except StopIteration:\n break\n\n # YAML blocks are only accepted after a test\n if state == self._AFTER_TEST:\n if version >= 13:\n m = self._RE_YAML_START.match(line)\n if m:\n state = self._YAML\n yaml_lineno = lineno\n yaml_indent = m.group(1)\n continue\n state = self._MAIN\n\n elif state == self._YAML:\n if self._RE_YAML_END.match(line):\n state = self._MAIN\n continue\n if line.startswith(yaml_indent):\n continue\n yield self.Error('YAML block not terminated (started on line %d)' % (yaml_lineno,))\n state = self._MAIN\n\n assert state == self._MAIN\n if line.startswith('#'):\n continue\n\n m = self._RE_TEST.match(line)\n if m:\n if plan and plan.late and not found_late_test:\n yield self.Error('unexpected test after late plan')\n found_late_test = True\n num_tests += 1\n num = num_tests if m.group(2) is None else int(m.group(2))\n if num != num_tests:\n yield self.Error('out of order test numbers')\n yield from self.parse_test(m.group(1) == 'ok', num,\n m.group(3), m.group(4), m.group(5))\n state = self._AFTER_TEST\n continue\n\n m = self._RE_PLAN.match(line)\n if m:\n if plan:\n yield self.Error('more than one plan found')\n else:\n count = int(m.group(1))\n skipped = (count == 0)\n if m.group(2):\n if m.group(2).upper().startswith('SKIP'):\n if count > 0:\n yield self.Error('invalid SKIP directive for plan')\n skipped = True\n else:\n yield self.Error('invalid directive for plan')\n 
plan = self.Plan(count=count, late=(num_tests > 0),\n skipped=skipped, explanation=m.group(3))\n yield plan\n continue\n\n m = self._RE_BAILOUT.match(line)\n if m:\n yield self.Bailout(m.group(1))\n bailed_out = True\n continue\n\n m = self._RE_VERSION.match(line)\n if m:\n # The TAP version is only accepted as the first line\n if lineno != 1:\n yield self.Error('version number must be on the first line')\n continue\n version = int(m.group(1))\n if version < 13:\n yield self.Error('version number should be at least 13')\n else:\n yield self.Version(version=version)\n continue\n\n if line == '':\n continue\n\n if state == self._YAML:\n yield self.Error('YAML block not terminated (started on line %d)' % (yaml_lineno,))\n\n if not bailed_out and plan and num_tests != plan.count:\n if num_tests < plan.count:\n yield self.Error('Too few tests run (expected %d, got %d)'\n % (plan.count, num_tests))\n else:\n yield self.Error('Too many tests run (expected %d, got %d)'\n % (plan.count, num_tests))\n", "path": "avocado/core/tapparser.py" } ]
diff --git a/avocado/core/tapparser.py b/avocado/core/tapparser.py index 7572492d5f..2389148307 100644 --- a/avocado/core/tapparser.py +++ b/avocado/core/tapparser.py @@ -153,8 +153,6 @@ def parse(self): if line == '': continue - yield self.Error('unexpected input at line %d' % (lineno,)) - if state == self._YAML: yield self.Error('YAML block not terminated (started on line %d)' % (yaml_lineno,)) diff --git a/selftests/unit/test_tap.py b/selftests/unit/test_tap.py index f703dd7fb4..40913c4440 100644 --- a/selftests/unit/test_tap.py +++ b/selftests/unit/test_tap.py @@ -259,7 +259,6 @@ def test_empty_line(self): def test_unexpected(self): events = self.parse_tap('1..1\ninvalid\nok 1') self.assert_plan(events, count=1, late=False) - self.assert_error(events) self.assert_test(events, number=1, name='', result=TestResult.PASS) self.assert_last(events)
Pyomo__pyomo-351
GAMS stream output (tee) not working after logfile option addition

```python
from pyomo.environ import *
m = ConcreteModel()
m.x = Var()
m.c = Constraint(expr=m.x >= 2)
m.o = Objective(expr=m.x)
SolverFactory('gams').solve(m, tee=True)
```

The snippet above fails to stream GAMS output after the merge of #302. Expected behavior is restored by checking out the preceding commit, bcb316cca7f539ebe0f25f7321b5a28ccc964d51.
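For context (this is not the fix), the shell interface in the source below translates the `tee` and `logfile` keywords into GAMS `lo`/`lf` command-line options. The following sketch merely restates that mapping as a standalone function so the logic touched by the logfile addition is easier to follow:

```python
def gams_output_args(tee=False, logfile=None):
    """Mirror of the lo=/lf= selection done in GAMSShell.solve() below."""
    args = []
    if tee and not logfile:
        args.append("lo=3")   # explicitly log to stdout
    elif not tee and not logfile:
        args.append("lo=0")   # suppress the log
    elif not tee and logfile:
        args.append("lo=2")   # log to file only
    else:                     # tee and logfile
        args.append("lo=4")   # log to stdout and to the file
    if logfile:
        args.append("lf=" + str(logfile))
    return args

# e.g. gams_output_args(tee=True) -> ['lo=3']
```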
[ { "content": "# ___________________________________________________________________________\n#\n# Pyomo: Python Optimization Modeling Objects\n# Copyright 2017 National Technology and Engineering Solutions of Sandia, LLC\n# Under the terms of Contract DE-NA0003525 with National Technology and\n# Engineering Solutions of Sandia, LLC, the U.S. Government retains certain\n# rights in this software.\n# This software is distributed under the 3-clause BSD License.\n# ___________________________________________________________________________\n\nfrom six import StringIO, iteritems, itervalues\nfrom tempfile import mkdtemp\nimport os, sys, math, logging, shutil, time\n\nfrom pyomo.core.base import (Constraint, Suffix, Var, value,\n Expression, Objective)\nfrom pyomo.opt import ProblemFormat, SolverFactory\n\nimport pyomo.util.plugin\nfrom pyomo.opt.base import IOptSolver\nimport pyutilib.services\n\nfrom pyomo.opt.base.solvers import _extract_version\nimport pyutilib.subprocess\nfrom pyutilib.misc import Options\n\nfrom pyomo.core.kernel.component_block import IBlockStorage\n\nimport pyomo.core.base.suffix\nimport pyomo.core.kernel.component_suffix\n\nfrom pyomo.opt.results import (SolverResults, SolverStatus, Solution,\n SolutionStatus, TerminationCondition, ProblemSense)\n\n\nlogger = logging.getLogger('pyomo.solvers')\n\npyutilib.services.register_executable(name=\"gams\")\n\nclass GAMSSolver(pyomo.util.plugin.Plugin):\n \"\"\"\n A generic interface to GAMS solvers\n\n Pass solver_io keyword arg to SolverFactory to choose solver mode:\n solver_io='direct' or 'python' to use GAMS Python API\n Requires installation, visit https://www.gams.com for help.\n solver_io='shell' or 'gms' to use command line to call gams\n Requires the gams executable be on your system PATH.\n \"\"\"\n pyomo.util.plugin.implements(IOptSolver)\n pyomo.util.plugin.alias('gams', doc='The GAMS modeling language')\n\n def __new__(cls, *args, **kwds):\n try:\n mode = kwds['solver_io']\n if mode is None:\n mode = 'shell'\n del kwds['solver_io']\n except KeyError:\n mode = 'shell'\n\n if mode == 'direct' or mode == 'python':\n return SolverFactory('_gams_direct', **kwds)\n if mode == 'shell' or mode == 'gms':\n return SolverFactory('_gams_shell', **kwds)\n else:\n logger.error('Unknown IO type: %s' % mode)\n return\n\n\nclass GAMSDirect(pyomo.util.plugin.Plugin):\n \"\"\"A generic interface to GAMS solvers\"\"\"\n pyomo.util.plugin.implements(IOptSolver)\n pyomo.util.plugin.alias('_gams_direct', doc='The GAMS modeling language')\n\n def __init__(self, **kwds):\n self._version = None\n self._default_variable_value = None\n\n self._capabilities = Options()\n self._capabilities.linear = True\n self._capabilities.quadratic_objective = True\n self._capabilities.quadratic_constraint = True\n self._capabilities.integer = True\n self._capabilities.sos1 = False\n self._capabilities.sos2 = False\n\n self.options = Options() # ignored\n\n pyomo.util.plugin.Plugin.__init__(self, **kwds)\n\n def available(self, exception_flag=True):\n \"\"\"True if the solver is available\"\"\"\n try:\n from gams import GamsWorkspace, DebugLevel\n return True\n except ImportError as e:\n if exception_flag is False:\n return False\n else:\n raise ImportError(\"Import of gams failed - GAMS direct \"\n \"solver functionality is not available.\\n\"\n \"GAMS message: %s\" % e)\n\n def _get_version(self):\n \"\"\"\n Returns a tuple describing the solver executable version.\n \"\"\"\n if not self.available(exception_flag=False):\n return _extract_version('')\n 
from gams import GamsWorkspace\n ws = GamsWorkspace()\n version = tuple(int(i) for i in ws._version.split('.'))\n while(len(version) < 4):\n version += (0,)\n version = version[:4]\n return version\n\n def version(self):\n \"\"\"\n Returns a 4-tuple describing the solver executable version.\n \"\"\"\n if self._version is None:\n self._version = self._get_version()\n return self._version\n\n def warm_start_capable(self):\n return True\n\n def default_variable_value(self):\n return self._default_variable_value\n\n def solve(self, *args, **kwds):\n \"\"\"\n Uses GAMS Python API. Visit https://www.gams.com for installation help.\n\n tee=False:\n Output GAMS log to stdout.\n logfile=None:\n Optionally a logfile can be written.\n load_solutions=True:\n Optionally skip loading solution into model, in which case\n the results object will contain the solution data.\n keepfiles=False:\n Keep temporary files. Equivalent of DebugLevel.KeepFiles.\n Summary of temp files can be found in _gams_py_gjo0.pf\n tmpdir=None:\n Specify directory path for storing temporary files.\n A directory will be created if one of this name doesn't exist.\n None (default) uses the system default temporary path.\n report_timing=False:\n Print timing reports for presolve, solver, postsolve, etc.\n io_options:\n Updated with additional keywords passed to solve()\n warmstart=False:\n Warmstart by initializing model's variables to their values.\n symbolic_solver_labels=False:\n Use full Pyomo component names rather than\n shortened symbols (slower, but useful for debugging).\n labeler=None:\n Custom labeler option. Incompatible with symbolic_solver_labels.\n solver=None:\n If None, GAMS will use default solver for model type.\n mtype=None:\n Model type. If None, will chose from lp, nlp, mip, and minlp.\n add_options=None:\n List of additional lines to write directly\n into model file before the solve statement.\n For model attributes, <model name> is GAMS_MODEL.\n skip_trivial_constraints=False:\n Skip writing constraints whose body section is fixed\n file_determinism=1:\n How much effort do we want to put into ensuring the\n GAMS file is written deterministically for a Pyomo model:\n 0 : None\n 1 : sort keys of indexed components (default)\n 2 : sort keys AND sort names (over declaration order)\n put_results=None:\n Filename for optionally writing solution values and\n marginals to (put_results).dat, and solver statuses\n to (put_results + 'stat').dat.\n \"\"\"\n\n # Make sure available() doesn't crash\n self.available()\n\n from gams import GamsWorkspace, DebugLevel\n from gams.workspace import GamsExceptionExecution\n\n if len(args) != 1:\n raise ValueError('Exactly one model must be passed '\n 'to solve method of GAMSSolver.')\n model = args[0]\n\n load_solutions = kwds.pop(\"load_solutions\", True)\n tee = kwds.pop(\"tee\", False)\n logfile = kwds.pop(\"logfile\", None)\n keepfiles = kwds.pop(\"keepfiles\", False)\n tmpdir = kwds.pop(\"tmpdir\", None)\n report_timing = kwds.pop(\"report_timing\", False)\n io_options = kwds.pop(\"io_options\", {})\n\n if len(kwds):\n # Pass remaining keywords to writer, which will handle\n # any unrecognized arguments\n io_options.update(kwds)\n\n initial_time = time.time()\n\n ####################################################################\n # Presolve\n ####################################################################\n\n # Create StringIO stream to pass to gams_writer, on which the\n # model file will be written. 
The writer also passes this StringIO\n # back, but output_file is defined in advance for clarity.\n output_file = StringIO()\n if isinstance(model, IBlockStorage):\n # Kernel blocks have slightly different write method\n smap_id = model.write(filename=output_file,\n format=ProblemFormat.gams,\n _called_by_solver=True,\n **io_options)\n symbolMap = getattr(model, \"._symbol_maps\")[smap_id]\n else:\n (_, smap_id) = model.write(filename=output_file,\n format=ProblemFormat.gams,\n io_options=io_options)\n symbolMap = model.solutions.symbol_map[smap_id]\n\n presolve_completion_time = time.time()\n if report_timing:\n print(\" %6.2f seconds required for presolve\" %\n (presolve_completion_time - initial_time))\n\n ####################################################################\n # Apply solver\n ####################################################################\n\n # IMPORTANT - only delete the whole tmpdir if the solver was the one\n # that made the directory. Otherwise, just delete the files the solver\n # made, if not keepfiles. That way the user can select a directory\n # they already have, like the current directory, without having to\n # worry about the rest of the contents of that directory being deleted.\n newdir = True\n if tmpdir is not None and os.path.exists(tmpdir):\n newdir = False\n\n ws = GamsWorkspace(debug=DebugLevel.KeepFiles if keepfiles\n else DebugLevel.Off,\n working_directory=tmpdir)\n\n t1 = ws.add_job_from_string(output_file.getvalue())\n\n try:\n with OutputStream(tee=tee, logfile=logfile) as output_stream:\n t1.run(output=output_stream)\n except GamsExceptionExecution as e:\n try:\n if e.rc == 3:\n # Execution Error\n check_expr_evaluation(model, symbolMap, 'direct')\n finally:\n # Always name working directory or delete files,\n # regardless of any errors.\n if keepfiles:\n print(\"\\nGAMS WORKING DIRECTORY: %s\\n\" %\n ws.working_directory)\n elif tmpdir is not None:\n # Garbage collect all references to t1.out_db\n # So that .gdx file can be deleted\n t1 = rec = rec_lo = rec_hi = None\n file_removal_gams_direct(tmpdir, newdir)\n raise\n except:\n # Catch other errors and remove files first\n if keepfiles:\n print(\"\\nGAMS WORKING DIRECTORY: %s\\n\" % ws.working_directory)\n elif tmpdir is not None:\n # Garbage collect all references to t1.out_db\n # So that .gdx file can be deleted\n t1 = rec = rec_lo = rec_hi = None\n file_removal_gams_direct(tmpdir, newdir)\n raise\n\n solve_completion_time = time.time()\n if report_timing:\n print(\" %6.2f seconds required for solver\" %\n (solve_completion_time - presolve_completion_time))\n\n ####################################################################\n # Postsolve\n ####################################################################\n\n # import suffixes must be on the top-level model\n if isinstance(model, IBlockStorage):\n model_suffixes = list(name for (name,comp) \\\n in pyomo.core.kernel.component_suffix.\\\n import_suffix_generator(model,\n active=True,\n descend_into=False,\n return_key=True))\n else:\n model_suffixes = list(name for (name,comp) \\\n in pyomo.core.base.suffix.\\\n active_import_suffix_generator(model))\n extract_dual = ('dual' in model_suffixes)\n extract_rc = ('rc' in model_suffixes)\n\n results = SolverResults()\n results.problem.name = t1.name\n results.problem.lower_bound = t1.out_db[\"OBJEST\"].find_record().value\n results.problem.upper_bound = t1.out_db[\"OBJEST\"].find_record().value\n results.problem.number_of_variables = \\\n t1.out_db[\"NUMVAR\"].find_record().value\n 
results.problem.number_of_constraints = \\\n t1.out_db[\"NUMEQU\"].find_record().value\n results.problem.number_of_nonzeros = \\\n t1.out_db[\"NUMNZ\"].find_record().value\n results.problem.number_of_binary_variables = None\n # Includes binary vars:\n results.problem.number_of_integer_variables = \\\n t1.out_db[\"NUMDVAR\"].find_record().value\n results.problem.number_of_continuous_variables = \\\n t1.out_db[\"NUMVAR\"].find_record().value \\\n - t1.out_db[\"NUMDVAR\"].find_record().value\n results.problem.number_of_objectives = 1 # required by GAMS writer\n obj = list(model.component_data_objects(Objective, active=True))\n assert len(obj) == 1, 'Only one objective is allowed.'\n obj = obj[0]\n objctvval = t1.out_db[\"OBJVAL\"].find_record().value\n if obj.is_minimizing():\n results.problem.sense = ProblemSense.minimize\n results.problem.upper_bound = objctvval\n else:\n results.problem.sense = ProblemSense.maximize\n results.problem.lower_bound = objctvval\n\n results.solver.name = \"GAMS \" + str(self.version())\n\n # Init termination condition to None to give preference to this first\n # block of code, only set certain TC's below if it's still None\n results.solver.termination_condition = None\n results.solver.message = None\n\n solvestat = t1.out_db[\"SOLVESTAT\"].find_record().value\n if solvestat == 1:\n results.solver.status = SolverStatus.ok\n elif solvestat == 2:\n results.solver.status = SolverStatus.ok\n results.solver.termination_condition = TerminationCondition.maxIterations\n elif solvestat == 3:\n results.solver.status = SolverStatus.ok\n results.solver.termination_condition = TerminationCondition.maxTimeLimit\n elif solvestat == 5:\n results.solver.status = SolverStatus.ok\n results.solver.termination_condition = TerminationCondition.maxEvaluations\n elif solvestat == 7:\n results.solver.status = SolverStatus.aborted\n results.solver.termination_condition = TerminationCondition.licensingProblems\n elif solvestat == 8:\n results.solver.status = SolverStatus.aborted\n results.solver.termination_condition = TerminationCondition.userInterrupt\n elif solvestat == 10:\n results.solver.status = SolverStatus.error\n results.solver.termination_condition = TerminationCondition.solverFailure\n elif solvestat == 11:\n results.solver.status = SolverStatus.error\n results.solver.termination_condition = TerminationCondition.internalSolverError\n elif solvestat == 4:\n results.solver.status = SolverStatus.warning\n results.solver.message = \"Solver quit with a problem (see LST file)\"\n elif solvestat in (9, 12, 13):\n results.solver.status = SolverStatus.error\n elif solvestat == 6:\n results.solver.status = SolverStatus.unknown\n\n results.solver.return_code = 0\n # Not sure if this value is actually user time\n # \"the elapsed time it took to execute a solve statement in total\"\n results.solver.user_time = t1.out_db[\"ETSOLVE\"].find_record().value\n results.solver.system_time = None\n results.solver.wallclock_time = None\n results.solver.termination_message = None\n\n soln = Solution()\n\n modelstat = t1.out_db[\"MODELSTAT\"].find_record().value\n if modelstat == 1:\n results.solver.termination_condition = TerminationCondition.optimal\n soln.status = SolutionStatus.optimal\n elif modelstat == 2:\n results.solver.termination_condition = TerminationCondition.locallyOptimal\n soln.status = SolutionStatus.locallyOptimal\n elif modelstat in [3, 18]:\n results.solver.termination_condition = TerminationCondition.unbounded\n soln.status = SolutionStatus.unbounded\n elif modelstat in [4, 5, 6, 
10, 19]:\n results.solver.termination_condition = TerminationCondition.infeasible\n soln.status = SolutionStatus.infeasible\n elif modelstat == 7:\n results.solver.termination_condition = TerminationCondition.feasible\n soln.status = SolutionStatus.feasible\n elif modelstat == 8:\n # 'Integer solution model found'\n results.solver.termination_condition = TerminationCondition.optimal\n soln.status = SolutionStatus.optimal\n elif modelstat == 9:\n results.solver.termination_condition = TerminationCondition.intermediateNonInteger\n soln.status = SolutionStatus.other\n elif modelstat == 11:\n # Should be handled above, if modelstat and solvestat both\n # indicate a licensing problem\n if results.solver.termination_condition is None:\n results.solver.termination_condition = TerminationCondition.licensingProblems\n soln.status = SolutionStatus.error\n elif modelstat in [12, 13]:\n if results.solver.termination_condition is None:\n results.solver.termination_condition = TerminationCondition.error\n soln.status = SolutionStatus.error\n elif modelstat == 14:\n if results.solver.termination_condition is None:\n results.solver.termination_condition = TerminationCondition.noSolution\n soln.status = SolutionStatus.unknown\n elif modelstat in [15, 16, 17]:\n # Having to do with CNS models,\n # not sure what to make of status descriptions\n results.solver.termination_condition = TerminationCondition.optimal\n soln.status = SolutionStatus.unsure\n else:\n # This is just a backup catch, all cases are handled above\n soln.status = SolutionStatus.error\n\n soln.gap = abs(results.problem.upper_bound \\\n - results.problem.lower_bound)\n\n for sym, ref in iteritems(symbolMap.bySymbol):\n obj = ref()\n if isinstance(model, IBlockStorage):\n # Kernel variables have no 'parent_component'\n if obj.ctype is Objective:\n soln.objective[sym] = {'Value': objctvval}\n if obj.ctype is not Var:\n continue\n else:\n if obj.parent_component().type() is Objective:\n soln.objective[sym] = {'Value': objctvval}\n if obj.parent_component().type() is not Var:\n continue\n rec = t1.out_db[sym].find_record()\n # obj.value = rec.level\n soln.variable[sym] = {\"Value\": rec.level}\n if extract_rc and not math.isnan(rec.marginal):\n # Do not set marginals to nan\n # model.rc[obj] = rec.marginal\n soln.variable[sym]['rc'] = rec.marginal\n\n if extract_dual:\n for c in model.component_data_objects(Constraint, active=True):\n if c.body.is_fixed():\n continue\n sym = symbolMap.getSymbol(c)\n if c.equality:\n rec = t1.out_db[sym].find_record()\n if not math.isnan(rec.marginal):\n # model.dual[c] = rec.marginal\n soln.constraint[sym] = {'dual': rec.marginal}\n else:\n # Solver didn't provide marginals,\n # nothing else to do here\n break\n else:\n # Inequality, assume if 2-sided that only\n # one side's marginal is nonzero\n # Negate marginal for _lo equations\n marg = 0\n if c.lower is not None:\n rec_lo = t1.out_db[sym + '_lo'].find_record()\n marg -= rec_lo.marginal\n if c.upper is not None:\n rec_hi = t1.out_db[sym + '_hi'].find_record()\n marg += rec_hi.marginal\n if not math.isnan(marg):\n # model.dual[c] = marg\n soln.constraint[sym] = {'dual': marg}\n else:\n # Solver didn't provide marginals,\n # nothing else to do here\n break\n\n results.solution.insert(soln)\n\n if keepfiles:\n print(\"\\nGAMS WORKING DIRECTORY: %s\\n\" % ws.working_directory)\n elif tmpdir is not None:\n # Garbage collect all references to t1.out_db\n # So that .gdx file can be deleted\n t1 = rec = rec_lo = rec_hi = None\n file_removal_gams_direct(tmpdir, 
newdir)\n\n ####################################################################\n # Finish with results\n ####################################################################\n\n results._smap_id = smap_id\n results._smap = None\n if isinstance(model, IBlockStorage):\n if len(results.solution) == 1:\n results.solution(0).symbol_map = \\\n getattr(model, \"._symbol_maps\")[results._smap_id]\n results.solution(0).default_variable_value = \\\n self._default_variable_value\n if load_solutions:\n model.load_solution(results.solution(0))\n results.solution.clear()\n else:\n assert len(results.solution) == 0\n # see the hack in the write method\n # we don't want this to stick around on the model\n # after the solve\n assert len(getattr(model, \"._symbol_maps\")) == 1\n delattr(model, \"._symbol_maps\")\n del results._smap_id\n else:\n if load_solutions:\n model.solutions.load_from(results)\n results._smap_id = None\n results.solution.clear()\n else:\n results._smap = model.solutions.symbol_map[smap_id]\n model.solutions.delete_symbol_map(smap_id)\n\n postsolve_completion_time = time.time()\n if report_timing:\n print(\" %6.2f seconds required for postsolve\" %\n (postsolve_completion_time - solve_completion_time))\n print(\" %6.2f seconds required total\" %\n (postsolve_completion_time - initial_time))\n\n return results\n\n\nclass GAMSShell(pyomo.util.plugin.Plugin):\n \"\"\"A generic interface to GAMS solvers\"\"\"\n pyomo.util.plugin.implements(IOptSolver)\n pyomo.util.plugin.alias('_gams_shell', doc='The GAMS modeling language')\n\n def __init__(self, **kwds):\n self._version = None\n self._default_variable_value = None\n\n self._capabilities = Options()\n self._capabilities.linear = True\n self._capabilities.quadratic_objective = True\n self._capabilities.quadratic_constraint = True\n self._capabilities.integer = True\n self._capabilities.sos1 = False\n self._capabilities.sos2 = False\n\n self.options = Options() # ignored\n\n pyomo.util.plugin.Plugin.__init__(self, **kwds)\n\n def available(self, exception_flag=True):\n \"\"\"True if the solver is available\"\"\"\n exe = pyutilib.services.registered_executable(\"gams\")\n if exception_flag is False:\n return exe is not None\n else:\n if exe is not None:\n return True\n else:\n raise NameError(\n \"No 'gams' command found on system PATH - GAMS shell \"\n \"solver functionality is not available.\")\n\n def _default_executable(self):\n executable = pyutilib.services.registered_executable(\"gams\")\n if executable is None:\n logger.warning(\"Could not locate the 'gams' executable, \"\n \"which is required for solver gams\")\n self.enable = False\n return None\n return executable.get_path()\n\n def executable(self):\n \"\"\"\n Returns the executable used by this solver.\n \"\"\"\n return self._default_executable()\n\n def _get_version(self):\n \"\"\"\n Returns a tuple describing the solver executable version.\n \"\"\"\n solver_exec = self.executable()\n\n if solver_exec is None:\n return _extract_version('')\n else:\n results = pyutilib.subprocess.run([solver_exec])\n return _extract_version(results[1])\n\n def version(self):\n \"\"\"\n Returns a 4-tuple describing the solver executable version.\n \"\"\"\n if self._version is None:\n self._version = self._get_version()\n return self._version\n\n def warm_start_capable(self):\n return True\n\n def default_variable_value(self):\n return self._default_variable_value\n\n def solve(self, *args, **kwds):\n \"\"\"\n Uses command line to call GAMS.\n\n tee=False:\n Output GAMS log to stdout.\n 
logfile=None:\n Optionally a logfile can be written.\n load_solutions=True:\n Optionally skip loading solution into model, in which case\n the results object will contain the solution data.\n keepfiles=False:\n Keep temporary files.\n tmpdir=None:\n Specify directory path for storing temporary files.\n A directory will be created if one of this name doesn't exist.\n None (default) uses the system default temporary path.\n report_timing=False:\n Print timing reports for presolve, solver, postsolve, etc.\n io_options:\n Updated with additional keywords passed to solve()\n warmstart=False:\n Warmstart by initializing model's variables to their values.\n symbolic_solver_labels=False:\n Use full Pyomo component names rather than\n shortened symbols (slower, but useful for debugging).\n labeler=None:\n Custom labeler. Incompatible with symbolic_solver_labels.\n solver=None:\n If None, GAMS will use default solver for model type.\n mtype=None:\n Model type. If None, will chose from lp, nlp, mip, and minlp.\n add_options=None:\n List of additional lines to write directly\n into model file before the solve statement.\n For model attributes, <model name> is GAMS_MODEL.\n skip_trivial_constraints=False:\n Skip writing constraints whose body section is fixed\n file_determinism=1:\n How much effort do we want to put into ensuring the\n GAMS file is written deterministically for a Pyomo model:\n 0 : None\n 1 : sort keys of indexed components (default)\n 2 : sort keys AND sort names (over declaration order)\n put_results='results':\n Not available for modification on GAMSShell solver.\n \"\"\"\n\n # Make sure available() doesn't crash\n self.available()\n\n if len(args) != 1:\n raise ValueError('Exactly one model must be passed '\n 'to solve method of GAMSSolver.')\n model = args[0]\n\n load_solutions = kwds.pop(\"load_solutions\", True)\n tee = kwds.pop(\"tee\", False)\n logfile = kwds.pop(\"logfile\", None)\n keepfiles = kwds.pop(\"keepfiles\", False)\n tmpdir = kwds.pop(\"tmpdir\", None)\n report_timing = kwds.pop(\"report_timing\", False)\n io_options = kwds.pop(\"io_options\", {})\n\n if len(kwds):\n # Pass remaining keywords to writer, which will handle\n # any unrecognized arguments\n io_options.update(kwds)\n\n initial_time = time.time()\n\n ####################################################################\n # Presolve\n ####################################################################\n\n # IMPORTANT - only delete the whole tmpdir if the solver was the one\n # that made the directory. Otherwise, just delete the files the solver\n # made, if not keepfiles. 
That way the user can select a directory\n # they already have, like the current directory, without having to\n # worry about the rest of the contents of that directory being deleted.\n newdir = False\n if tmpdir is None:\n tmpdir = mkdtemp()\n newdir = True\n elif not os.path.exists(tmpdir):\n # makedirs creates all necessary intermediate directories in order\n # to create the path to tmpdir, if they don't already exist.\n # However, if keepfiles is False, we only delete the final folder,\n # leaving the rest of the intermediate ones.\n os.makedirs(tmpdir)\n newdir = True\n\n output_filename = os.path.join(tmpdir, 'model.gms')\n lst_filename = os.path.join(tmpdir, 'output.lst')\n statresults_filename = os.path.join(tmpdir, 'resultsstat.dat')\n\n io_options['put_results'] = os.path.join(tmpdir, 'results')\n results_filename = os.path.join(tmpdir, 'results.dat')\n\n if isinstance(model, IBlockStorage):\n # Kernel blocks have slightly different write method\n smap_id = model.write(filename=output_filename,\n format=ProblemFormat.gams,\n _called_by_solver=True,\n **io_options)\n symbolMap = getattr(model, \"._symbol_maps\")[smap_id]\n else:\n (_, smap_id) = model.write(filename=output_filename,\n format=ProblemFormat.gams,\n io_options=io_options)\n symbolMap = model.solutions.symbol_map[smap_id]\n\n presolve_completion_time = time.time()\n if report_timing:\n print(\" %6.2f seconds required for presolve\" %\n (presolve_completion_time - initial_time))\n\n ####################################################################\n # Apply solver\n ####################################################################\n\n exe = self.executable()\n command = [exe, output_filename, 'o=' + lst_filename]\n if tee and not logfile:\n # default behaviour of gams is to print to console, for\n # compatability with windows and *nix we want to explicitly log to\n # stdout (see https://www.gams.com/latest/docs/UG_GamsCall.html)\n command.append(\"lo=3\")\n elif not tee and not logfile:\n command.append(\"lo=0\")\n elif not tee and logfile:\n command.append(\"lo=2\")\n elif tee and logfile:\n command.append(\"lo=4\")\n if logfile:\n command.append(\"lf=\" + str(logfile))\n\n try:\n rc, _ = pyutilib.subprocess.run(command)\n\n if keepfiles:\n print(\"\\nGAMS WORKING DIRECTORY: %s\\n\" % tmpdir)\n\n if rc == 1 or rc == 127:\n raise RuntimeError(\"Command 'gams' was not recognized\")\n elif rc != 0:\n if rc == 3:\n # Execution Error\n # Run check_expr_evaluation, which errors if necessary\n check_expr_evaluation(model, symbolMap, 'shell')\n # If nothing was raised, or for all other cases, raise this\n raise RuntimeError(\"GAMS encountered an error during solve. 
\"\n \"Check listing file for details.\")\n\n with open(results_filename, 'r') as results_file:\n results_text = results_file.read()\n with open(statresults_filename, 'r') as statresults_file:\n statresults_text = statresults_file.read()\n finally:\n if not keepfiles:\n if newdir:\n shutil.rmtree(tmpdir)\n else:\n os.remove(output_filename)\n os.remove(lst_filename)\n os.remove(results_filename)\n os.remove(statresults_filename)\n\n solve_completion_time = time.time()\n if report_timing:\n print(\" %6.2f seconds required for solver\" %\n (solve_completion_time - presolve_completion_time))\n\n ####################################################################\n # Postsolve\n ####################################################################\n\n # import suffixes must be on the top-level model\n if isinstance(model, IBlockStorage):\n model_suffixes = list(name for (name,comp) \\\n in pyomo.core.kernel.component_suffix.\\\n import_suffix_generator(model,\n active=True,\n descend_into=False,\n return_key=True))\n else:\n model_suffixes = list(name for (name,comp) \\\n in pyomo.core.base.suffix.\\\n active_import_suffix_generator(model))\n extract_dual = ('dual' in model_suffixes)\n extract_rc = ('rc' in model_suffixes)\n\n stat_vars = dict()\n # Skip first line of explanatory text\n for line in statresults_text.splitlines()[1:]:\n items = line.split()\n try:\n stat_vars[items[0]] = float(items[1])\n except ValueError:\n # GAMS printed NA, just make it nan\n stat_vars[items[0]] = float('nan')\n\n results = SolverResults()\n results.problem.name = output_filename\n results.problem.lower_bound = stat_vars[\"OBJEST\"]\n results.problem.upper_bound = stat_vars[\"OBJEST\"]\n results.problem.number_of_variables = stat_vars[\"NUMVAR\"]\n results.problem.number_of_constraints = stat_vars[\"NUMEQU\"]\n results.problem.number_of_nonzeros = stat_vars[\"NUMNZ\"]\n results.problem.number_of_binary_variables = None\n # Includes binary vars:\n results.problem.number_of_integer_variables = stat_vars[\"NUMDVAR\"]\n results.problem.number_of_continuous_variables = stat_vars[\"NUMVAR\"] \\\n - stat_vars[\"NUMDVAR\"]\n results.problem.number_of_objectives = 1 # required by GAMS writer\n obj = list(model.component_data_objects(Objective, active=True))\n assert len(obj) == 1, 'Only one objective is allowed.'\n obj = obj[0]\n objctvval = stat_vars[\"OBJVAL\"]\n if obj.is_minimizing():\n results.problem.sense = ProblemSense.minimize\n results.problem.upper_bound = objctvval\n else:\n results.problem.sense = ProblemSense.maximize\n results.problem.lower_bound = objctvval\n\n results.solver.name = \"GAMS \" + str(self.version())\n\n # Init termination condition to None to give preference to this first\n # block of code, only set certain TC's below if it's still None\n results.solver.termination_condition = None\n results.solver.message = None\n\n solvestat = stat_vars[\"SOLVESTAT\"]\n if solvestat == 1:\n results.solver.status = SolverStatus.ok\n elif solvestat == 2:\n results.solver.status = SolverStatus.ok\n results.solver.termination_condition = TerminationCondition.maxIterations\n elif solvestat == 3:\n results.solver.status = SolverStatus.ok\n results.solver.termination_condition = TerminationCondition.maxTimeLimit\n elif solvestat == 5:\n results.solver.status = SolverStatus.ok\n results.solver.termination_condition = TerminationCondition.maxEvaluations\n elif solvestat == 7:\n results.solver.status = SolverStatus.aborted\n results.solver.termination_condition = TerminationCondition.licensingProblems\n elif 
solvestat == 8:\n results.solver.status = SolverStatus.aborted\n results.solver.termination_condition = TerminationCondition.userInterrupt\n elif solvestat == 10:\n results.solver.status = SolverStatus.error\n results.solver.termination_condition = TerminationCondition.solverFailure\n elif solvestat == 11:\n results.solver.status = SolverStatus.error\n results.solver.termination_condition = TerminationCondition.internalSolverError\n elif solvestat == 4:\n results.solver.status = SolverStatus.warning\n results.solver.message = \"Solver quit with a problem (see LST file)\"\n elif solvestat in (9, 12, 13):\n results.solver.status = SolverStatus.error\n elif solvestat == 6:\n results.solver.status = SolverStatus.unknown\n\n results.solver.return_code = rc # 0\n # Not sure if this value is actually user time\n # \"the elapsed time it took to execute a solve statement in total\"\n results.solver.user_time = stat_vars[\"ETSOLVE\"]\n results.solver.system_time = None\n results.solver.wallclock_time = None\n results.solver.termination_message = None\n\n soln = Solution()\n\n modelstat = stat_vars[\"MODELSTAT\"]\n if modelstat == 1:\n results.solver.termination_condition = TerminationCondition.optimal\n soln.status = SolutionStatus.optimal\n elif modelstat == 2:\n results.solver.termination_condition = TerminationCondition.locallyOptimal\n soln.status = SolutionStatus.locallyOptimal\n elif modelstat in [3, 18]:\n results.solver.termination_condition = TerminationCondition.unbounded\n soln.status = SolutionStatus.unbounded\n elif modelstat in [4, 5, 6, 10, 19]:\n results.solver.termination_condition = TerminationCondition.infeasible\n soln.status = SolutionStatus.infeasible\n elif modelstat == 7:\n results.solver.termination_condition = TerminationCondition.feasible\n soln.status = SolutionStatus.feasible\n elif modelstat == 8:\n # 'Integer solution model found'\n results.solver.termination_condition = TerminationCondition.optimal\n soln.status = SolutionStatus.optimal\n elif modelstat == 9:\n results.solver.termination_condition = TerminationCondition.intermediateNonInteger\n soln.status = SolutionStatus.other\n elif modelstat == 11:\n # Should be handled above, if modelstat and solvestat both\n # indicate a licensing problem\n if results.solver.termination_condition is None:\n results.solver.termination_condition = TerminationCondition.licensingProblems\n soln.status = SolutionStatus.error\n elif modelstat in [12, 13]:\n if results.solver.termination_condition is None:\n results.solver.termination_condition = TerminationCondition.error\n soln.status = SolutionStatus.error\n elif modelstat == 14:\n if results.solver.termination_condition is None:\n results.solver.termination_condition = TerminationCondition.noSolution\n soln.status = SolutionStatus.unknown\n elif modelstat in [15, 16, 17]:\n # Having to do with CNS models,\n # not sure what to make of status descriptions\n results.solver.termination_condition = TerminationCondition.optimal\n soln.status = SolutionStatus.unsure\n else:\n # This is just a backup catch, all cases are handled above\n soln.status = SolutionStatus.error\n\n soln.gap = abs(results.problem.upper_bound \\\n - results.problem.lower_bound)\n\n model_soln = dict()\n # Skip first line of explanatory text\n for line in results_text.splitlines()[1:]:\n items = line.split()\n model_soln[items[0]] = (items[1], items[2])\n\n has_rc_info = True\n for sym, ref in iteritems(symbolMap.bySymbol):\n obj = ref()\n if isinstance(model, IBlockStorage):\n # Kernel variables have no 
'parent_component'\n if obj.ctype is Objective:\n soln.objective[sym] = {'Value': objctvval}\n if obj.ctype is not Var:\n continue\n else:\n if obj.parent_component().type() is Objective:\n soln.objective[sym] = {'Value': objctvval}\n if obj.parent_component().type() is not Var:\n continue\n rec = model_soln[sym]\n # obj.value = float(rec[0])\n soln.variable[sym] = {\"Value\": float(rec[0])}\n if extract_rc and has_rc_info:\n try:\n # model.rc[obj] = float(rec[1])\n soln.variable[sym]['rc'] = float(rec[1])\n except ValueError:\n # Solver didn't provide marginals\n has_rc_info = False\n\n if extract_dual:\n for c in model.component_data_objects(Constraint, active=True):\n if c.body.is_fixed():\n continue\n sym = symbolMap.getSymbol(c)\n if c.equality:\n rec = model_soln[sym]\n try:\n # model.dual[c] = float(rec[1])\n soln.constraint[sym] = {'dual': float(rec[1])}\n except ValueError:\n # Solver didn't provide marginals\n # nothing else to do here\n break\n else:\n # Inequality, assume if 2-sided that only\n # one side's marginal is nonzero\n # Negate marginal for _lo equations\n marg = 0\n if c.lower is not None:\n rec_lo = model_soln[sym + '_lo']\n try:\n marg -= float(rec_lo[1])\n except ValueError:\n # Solver didn't provide marginals\n marg = float('nan')\n if c.upper is not None:\n rec_hi = model_soln[sym + '_hi']\n try:\n marg += float(rec_hi[1])\n except ValueError:\n # Solver didn't provide marginals\n marg = float('nan')\n if not math.isnan(marg):\n # model.dual[c] = marg\n soln.constraint[sym] = {'dual': marg}\n else:\n # Solver didn't provide marginals\n # nothing else to do here\n break\n\n results.solution.insert(soln)\n\n ####################################################################\n # Finish with results\n ####################################################################\n\n results._smap_id = smap_id\n results._smap = None\n if isinstance(model, IBlockStorage):\n if len(results.solution) == 1:\n results.solution(0).symbol_map = \\\n getattr(model, \"._symbol_maps\")[results._smap_id]\n results.solution(0).default_variable_value = \\\n self._default_variable_value\n if load_solutions:\n model.load_solution(results.solution(0))\n results.solution.clear()\n else:\n assert len(results.solution) == 0\n # see the hack in the write method\n # we don't want this to stick around on the model\n # after the solve\n assert len(getattr(model, \"._symbol_maps\")) == 1\n delattr(model, \"._symbol_maps\")\n del results._smap_id\n else:\n if load_solutions:\n model.solutions.load_from(results)\n results._smap_id = None\n results.solution.clear()\n else:\n results._smap = model.solutions.symbol_map[smap_id]\n model.solutions.delete_symbol_map(smap_id)\n\n postsolve_completion_time = time.time()\n if report_timing:\n print(\" %6.2f seconds required for postsolve\" %\n (postsolve_completion_time - solve_completion_time))\n print(\" %6.2f seconds required total\" %\n (postsolve_completion_time - initial_time))\n\n return results\n\n\nclass OutputStream:\n \"\"\"Output stream object for simultaneously writing to multiple streams.\n\n tee=False:\n If set writing to this stream will write to stdout.\n logfile=None:\n Optionally a logfile can be written.\n\n \"\"\"\n\n def __init__(self, tee=False, logfile=None):\n \"\"\"Initialize output stream object.\"\"\"\n if tee:\n self.tee = sys.stdout\n else:\n self.tee = None\n self.logfile = logfile\n self.logfile_buffer = None\n\n def __enter__(self):\n \"\"\"Enter context of output stream and open logfile if given.\"\"\"\n if self.logfile is 
not None:\n self.logfile_buffer = open(self.logfile, 'a')\n return self\n\n def __exit__(self, *args, **kwargs):\n \"\"\"Enter context of output stream and close logfile if necessary.\"\"\"\n if self.logfile_buffer is not None:\n self.logfile_buffer.close()\n self.logfile_buffer = None\n\n def write(self, message):\n \"\"\"Write messages to all streams.\"\"\"\n if self.tee is not None:\n self.tee.write(message)\n if self.logfile_buffer is not None:\n self.logfile_buffer.write(message)\n\n def flush(self):\n \"\"\"Needed for python3 compatibility.\"\"\"\n if self.tee is not None:\n self.tee.flush()\n if self.logfile_buffer is not None:\n self.logfile_buffer.flush()\n\n\ndef check_expr_evaluation(model, symbolMap, solver_io):\n try:\n # Temporarily initialize uninitialized variables in order to call\n # value() on each expression to check domain violations\n uninit_vars = list()\n for var in model.component_data_objects(Var, active=True):\n if var.value is None:\n uninit_vars.append(var)\n var.value = 0\n\n # Constraints\n for con in model.component_data_objects(Constraint, active=True):\n if con.body.is_fixed():\n continue\n check_expr(con.body, con.name, solver_io)\n\n # Objective\n obj = list(model.component_data_objects(Objective, active=True))\n assert len(obj) == 1, \"GAMS writer can only take 1 active objective\"\n obj = obj[0]\n check_expr(obj.expr, obj.name, solver_io)\n finally:\n # Return uninitialized variables to None\n for var in uninit_vars:\n var.value = None\n\ndef check_expr(expr, name, solver_io):\n # Check if GAMS will encounter domain violations in presolver\n # operations at current values, which are None (0) by default\n # Used to handle log and log10 violations, for example\n try:\n value(expr)\n except ValueError:\n logger.warning(\"While evaluating model.%s's expression, GAMS solver \"\n \"encountered an error.\\nGAMS requires that all \"\n \"equations and expressions evaluate at initial values.\\n\"\n \"Ensure variable values do not violate any domains, \"\n \"and use the warmstart=True keyword to solve().\" % name)\n if solver_io == 'shell':\n # For shell, there is no previous exception to worry about\n # overwriting, so raise the ValueError.\n # But for direct, the GamsExceptionExecution will be raised.\n raise\n\ndef file_removal_gams_direct(tmpdir, newdir):\n if newdir:\n shutil.rmtree(tmpdir)\n else:\n os.remove(os.path.join(tmpdir, '_gams_py_gjo0.gms'))\n os.remove(os.path.join(tmpdir, '_gams_py_gjo0.lst'))\n os.remove(os.path.join(tmpdir, '_gams_py_gdb0.gdx'))\n # .pf file is not made when DebugLevel is Off\n", "path": "pyomo/solvers/plugins/solvers/GAMS.py" } ]
[ { "content": "# ___________________________________________________________________________\n#\n# Pyomo: Python Optimization Modeling Objects\n# Copyright 2017 National Technology and Engineering Solutions of Sandia, LLC\n# Under the terms of Contract DE-NA0003525 with National Technology and\n# Engineering Solutions of Sandia, LLC, the U.S. Government retains certain\n# rights in this software.\n# This software is distributed under the 3-clause BSD License.\n# ___________________________________________________________________________\n\nfrom six import StringIO, iteritems, itervalues\nfrom tempfile import mkdtemp\nimport os, sys, math, logging, shutil, time\n\nfrom pyomo.core.base import (Constraint, Suffix, Var, value,\n Expression, Objective)\nfrom pyomo.opt import ProblemFormat, SolverFactory\n\nimport pyomo.util.plugin\nfrom pyomo.opt.base import IOptSolver\nimport pyutilib.services\n\nfrom pyomo.opt.base.solvers import _extract_version\nimport pyutilib.subprocess\nfrom pyutilib.misc import Options\n\nfrom pyomo.core.kernel.component_block import IBlockStorage\n\nimport pyomo.core.base.suffix\nimport pyomo.core.kernel.component_suffix\n\nfrom pyomo.opt.results import (SolverResults, SolverStatus, Solution,\n SolutionStatus, TerminationCondition, ProblemSense)\n\n\nlogger = logging.getLogger('pyomo.solvers')\n\npyutilib.services.register_executable(name=\"gams\")\n\nclass GAMSSolver(pyomo.util.plugin.Plugin):\n \"\"\"\n A generic interface to GAMS solvers\n\n Pass solver_io keyword arg to SolverFactory to choose solver mode:\n solver_io='direct' or 'python' to use GAMS Python API\n Requires installation, visit https://www.gams.com for help.\n solver_io='shell' or 'gms' to use command line to call gams\n Requires the gams executable be on your system PATH.\n \"\"\"\n pyomo.util.plugin.implements(IOptSolver)\n pyomo.util.plugin.alias('gams', doc='The GAMS modeling language')\n\n def __new__(cls, *args, **kwds):\n try:\n mode = kwds['solver_io']\n if mode is None:\n mode = 'shell'\n del kwds['solver_io']\n except KeyError:\n mode = 'shell'\n\n if mode == 'direct' or mode == 'python':\n return SolverFactory('_gams_direct', **kwds)\n if mode == 'shell' or mode == 'gms':\n return SolverFactory('_gams_shell', **kwds)\n else:\n logger.error('Unknown IO type: %s' % mode)\n return\n\n\nclass GAMSDirect(pyomo.util.plugin.Plugin):\n \"\"\"A generic interface to GAMS solvers\"\"\"\n pyomo.util.plugin.implements(IOptSolver)\n pyomo.util.plugin.alias('_gams_direct', doc='The GAMS modeling language')\n\n def __init__(self, **kwds):\n self._version = None\n self._default_variable_value = None\n\n self._capabilities = Options()\n self._capabilities.linear = True\n self._capabilities.quadratic_objective = True\n self._capabilities.quadratic_constraint = True\n self._capabilities.integer = True\n self._capabilities.sos1 = False\n self._capabilities.sos2 = False\n\n self.options = Options() # ignored\n\n pyomo.util.plugin.Plugin.__init__(self, **kwds)\n\n def available(self, exception_flag=True):\n \"\"\"True if the solver is available\"\"\"\n try:\n from gams import GamsWorkspace, DebugLevel\n return True\n except ImportError as e:\n if exception_flag is False:\n return False\n else:\n raise ImportError(\"Import of gams failed - GAMS direct \"\n \"solver functionality is not available.\\n\"\n \"GAMS message: %s\" % e)\n\n def _get_version(self):\n \"\"\"\n Returns a tuple describing the solver executable version.\n \"\"\"\n if not self.available(exception_flag=False):\n return _extract_version('')\n 
from gams import GamsWorkspace\n ws = GamsWorkspace()\n version = tuple(int(i) for i in ws._version.split('.'))\n while(len(version) < 4):\n version += (0,)\n version = version[:4]\n return version\n\n def version(self):\n \"\"\"\n Returns a 4-tuple describing the solver executable version.\n \"\"\"\n if self._version is None:\n self._version = self._get_version()\n return self._version\n\n def warm_start_capable(self):\n return True\n\n def default_variable_value(self):\n return self._default_variable_value\n\n def solve(self, *args, **kwds):\n \"\"\"\n Uses GAMS Python API. Visit https://www.gams.com for installation help.\n\n tee=False:\n Output GAMS log to stdout.\n logfile=None:\n Optionally a logfile can be written.\n load_solutions=True:\n Optionally skip loading solution into model, in which case\n the results object will contain the solution data.\n keepfiles=False:\n Keep temporary files. Equivalent of DebugLevel.KeepFiles.\n Summary of temp files can be found in _gams_py_gjo0.pf\n tmpdir=None:\n Specify directory path for storing temporary files.\n A directory will be created if one of this name doesn't exist.\n None (default) uses the system default temporary path.\n report_timing=False:\n Print timing reports for presolve, solver, postsolve, etc.\n io_options:\n Updated with additional keywords passed to solve()\n warmstart=False:\n Warmstart by initializing model's variables to their values.\n symbolic_solver_labels=False:\n Use full Pyomo component names rather than\n shortened symbols (slower, but useful for debugging).\n labeler=None:\n Custom labeler option. Incompatible with symbolic_solver_labels.\n solver=None:\n If None, GAMS will use default solver for model type.\n mtype=None:\n Model type. If None, will chose from lp, nlp, mip, and minlp.\n add_options=None:\n List of additional lines to write directly\n into model file before the solve statement.\n For model attributes, <model name> is GAMS_MODEL.\n skip_trivial_constraints=False:\n Skip writing constraints whose body section is fixed\n file_determinism=1:\n How much effort do we want to put into ensuring the\n GAMS file is written deterministically for a Pyomo model:\n 0 : None\n 1 : sort keys of indexed components (default)\n 2 : sort keys AND sort names (over declaration order)\n put_results=None:\n Filename for optionally writing solution values and\n marginals to (put_results).dat, and solver statuses\n to (put_results + 'stat').dat.\n \"\"\"\n\n # Make sure available() doesn't crash\n self.available()\n\n from gams import GamsWorkspace, DebugLevel\n from gams.workspace import GamsExceptionExecution\n\n if len(args) != 1:\n raise ValueError('Exactly one model must be passed '\n 'to solve method of GAMSSolver.')\n model = args[0]\n\n load_solutions = kwds.pop(\"load_solutions\", True)\n tee = kwds.pop(\"tee\", False)\n logfile = kwds.pop(\"logfile\", None)\n keepfiles = kwds.pop(\"keepfiles\", False)\n tmpdir = kwds.pop(\"tmpdir\", None)\n report_timing = kwds.pop(\"report_timing\", False)\n io_options = kwds.pop(\"io_options\", {})\n\n if len(kwds):\n # Pass remaining keywords to writer, which will handle\n # any unrecognized arguments\n io_options.update(kwds)\n\n initial_time = time.time()\n\n ####################################################################\n # Presolve\n ####################################################################\n\n # Create StringIO stream to pass to gams_writer, on which the\n # model file will be written. 
The writer also passes this StringIO\n # back, but output_file is defined in advance for clarity.\n output_file = StringIO()\n if isinstance(model, IBlockStorage):\n # Kernel blocks have slightly different write method\n smap_id = model.write(filename=output_file,\n format=ProblemFormat.gams,\n _called_by_solver=True,\n **io_options)\n symbolMap = getattr(model, \"._symbol_maps\")[smap_id]\n else:\n (_, smap_id) = model.write(filename=output_file,\n format=ProblemFormat.gams,\n io_options=io_options)\n symbolMap = model.solutions.symbol_map[smap_id]\n\n presolve_completion_time = time.time()\n if report_timing:\n print(\" %6.2f seconds required for presolve\" %\n (presolve_completion_time - initial_time))\n\n ####################################################################\n # Apply solver\n ####################################################################\n\n # IMPORTANT - only delete the whole tmpdir if the solver was the one\n # that made the directory. Otherwise, just delete the files the solver\n # made, if not keepfiles. That way the user can select a directory\n # they already have, like the current directory, without having to\n # worry about the rest of the contents of that directory being deleted.\n newdir = True\n if tmpdir is not None and os.path.exists(tmpdir):\n newdir = False\n\n ws = GamsWorkspace(debug=DebugLevel.KeepFiles if keepfiles\n else DebugLevel.Off,\n working_directory=tmpdir)\n\n t1 = ws.add_job_from_string(output_file.getvalue())\n\n try:\n with OutputStream(tee=tee, logfile=logfile) as output_stream:\n t1.run(output=output_stream)\n except GamsExceptionExecution as e:\n try:\n if e.rc == 3:\n # Execution Error\n check_expr_evaluation(model, symbolMap, 'direct')\n finally:\n # Always name working directory or delete files,\n # regardless of any errors.\n if keepfiles:\n print(\"\\nGAMS WORKING DIRECTORY: %s\\n\" %\n ws.working_directory)\n elif tmpdir is not None:\n # Garbage collect all references to t1.out_db\n # So that .gdx file can be deleted\n t1 = rec = rec_lo = rec_hi = None\n file_removal_gams_direct(tmpdir, newdir)\n raise\n except:\n # Catch other errors and remove files first\n if keepfiles:\n print(\"\\nGAMS WORKING DIRECTORY: %s\\n\" % ws.working_directory)\n elif tmpdir is not None:\n # Garbage collect all references to t1.out_db\n # So that .gdx file can be deleted\n t1 = rec = rec_lo = rec_hi = None\n file_removal_gams_direct(tmpdir, newdir)\n raise\n\n solve_completion_time = time.time()\n if report_timing:\n print(\" %6.2f seconds required for solver\" %\n (solve_completion_time - presolve_completion_time))\n\n ####################################################################\n # Postsolve\n ####################################################################\n\n # import suffixes must be on the top-level model\n if isinstance(model, IBlockStorage):\n model_suffixes = list(name for (name,comp) \\\n in pyomo.core.kernel.component_suffix.\\\n import_suffix_generator(model,\n active=True,\n descend_into=False,\n return_key=True))\n else:\n model_suffixes = list(name for (name,comp) \\\n in pyomo.core.base.suffix.\\\n active_import_suffix_generator(model))\n extract_dual = ('dual' in model_suffixes)\n extract_rc = ('rc' in model_suffixes)\n\n results = SolverResults()\n results.problem.name = t1.name\n results.problem.lower_bound = t1.out_db[\"OBJEST\"].find_record().value\n results.problem.upper_bound = t1.out_db[\"OBJEST\"].find_record().value\n results.problem.number_of_variables = \\\n t1.out_db[\"NUMVAR\"].find_record().value\n 
results.problem.number_of_constraints = \\\n t1.out_db[\"NUMEQU\"].find_record().value\n results.problem.number_of_nonzeros = \\\n t1.out_db[\"NUMNZ\"].find_record().value\n results.problem.number_of_binary_variables = None\n # Includes binary vars:\n results.problem.number_of_integer_variables = \\\n t1.out_db[\"NUMDVAR\"].find_record().value\n results.problem.number_of_continuous_variables = \\\n t1.out_db[\"NUMVAR\"].find_record().value \\\n - t1.out_db[\"NUMDVAR\"].find_record().value\n results.problem.number_of_objectives = 1 # required by GAMS writer\n obj = list(model.component_data_objects(Objective, active=True))\n assert len(obj) == 1, 'Only one objective is allowed.'\n obj = obj[0]\n objctvval = t1.out_db[\"OBJVAL\"].find_record().value\n if obj.is_minimizing():\n results.problem.sense = ProblemSense.minimize\n results.problem.upper_bound = objctvval\n else:\n results.problem.sense = ProblemSense.maximize\n results.problem.lower_bound = objctvval\n\n results.solver.name = \"GAMS \" + str(self.version())\n\n # Init termination condition to None to give preference to this first\n # block of code, only set certain TC's below if it's still None\n results.solver.termination_condition = None\n results.solver.message = None\n\n solvestat = t1.out_db[\"SOLVESTAT\"].find_record().value\n if solvestat == 1:\n results.solver.status = SolverStatus.ok\n elif solvestat == 2:\n results.solver.status = SolverStatus.ok\n results.solver.termination_condition = TerminationCondition.maxIterations\n elif solvestat == 3:\n results.solver.status = SolverStatus.ok\n results.solver.termination_condition = TerminationCondition.maxTimeLimit\n elif solvestat == 5:\n results.solver.status = SolverStatus.ok\n results.solver.termination_condition = TerminationCondition.maxEvaluations\n elif solvestat == 7:\n results.solver.status = SolverStatus.aborted\n results.solver.termination_condition = TerminationCondition.licensingProblems\n elif solvestat == 8:\n results.solver.status = SolverStatus.aborted\n results.solver.termination_condition = TerminationCondition.userInterrupt\n elif solvestat == 10:\n results.solver.status = SolverStatus.error\n results.solver.termination_condition = TerminationCondition.solverFailure\n elif solvestat == 11:\n results.solver.status = SolverStatus.error\n results.solver.termination_condition = TerminationCondition.internalSolverError\n elif solvestat == 4:\n results.solver.status = SolverStatus.warning\n results.solver.message = \"Solver quit with a problem (see LST file)\"\n elif solvestat in (9, 12, 13):\n results.solver.status = SolverStatus.error\n elif solvestat == 6:\n results.solver.status = SolverStatus.unknown\n\n results.solver.return_code = 0\n # Not sure if this value is actually user time\n # \"the elapsed time it took to execute a solve statement in total\"\n results.solver.user_time = t1.out_db[\"ETSOLVE\"].find_record().value\n results.solver.system_time = None\n results.solver.wallclock_time = None\n results.solver.termination_message = None\n\n soln = Solution()\n\n modelstat = t1.out_db[\"MODELSTAT\"].find_record().value\n if modelstat == 1:\n results.solver.termination_condition = TerminationCondition.optimal\n soln.status = SolutionStatus.optimal\n elif modelstat == 2:\n results.solver.termination_condition = TerminationCondition.locallyOptimal\n soln.status = SolutionStatus.locallyOptimal\n elif modelstat in [3, 18]:\n results.solver.termination_condition = TerminationCondition.unbounded\n soln.status = SolutionStatus.unbounded\n elif modelstat in [4, 5, 6, 
10, 19]:\n results.solver.termination_condition = TerminationCondition.infeasible\n soln.status = SolutionStatus.infeasible\n elif modelstat == 7:\n results.solver.termination_condition = TerminationCondition.feasible\n soln.status = SolutionStatus.feasible\n elif modelstat == 8:\n # 'Integer solution model found'\n results.solver.termination_condition = TerminationCondition.optimal\n soln.status = SolutionStatus.optimal\n elif modelstat == 9:\n results.solver.termination_condition = TerminationCondition.intermediateNonInteger\n soln.status = SolutionStatus.other\n elif modelstat == 11:\n # Should be handled above, if modelstat and solvestat both\n # indicate a licensing problem\n if results.solver.termination_condition is None:\n results.solver.termination_condition = TerminationCondition.licensingProblems\n soln.status = SolutionStatus.error\n elif modelstat in [12, 13]:\n if results.solver.termination_condition is None:\n results.solver.termination_condition = TerminationCondition.error\n soln.status = SolutionStatus.error\n elif modelstat == 14:\n if results.solver.termination_condition is None:\n results.solver.termination_condition = TerminationCondition.noSolution\n soln.status = SolutionStatus.unknown\n elif modelstat in [15, 16, 17]:\n # Having to do with CNS models,\n # not sure what to make of status descriptions\n results.solver.termination_condition = TerminationCondition.optimal\n soln.status = SolutionStatus.unsure\n else:\n # This is just a backup catch, all cases are handled above\n soln.status = SolutionStatus.error\n\n soln.gap = abs(results.problem.upper_bound \\\n - results.problem.lower_bound)\n\n for sym, ref in iteritems(symbolMap.bySymbol):\n obj = ref()\n if isinstance(model, IBlockStorage):\n # Kernel variables have no 'parent_component'\n if obj.ctype is Objective:\n soln.objective[sym] = {'Value': objctvval}\n if obj.ctype is not Var:\n continue\n else:\n if obj.parent_component().type() is Objective:\n soln.objective[sym] = {'Value': objctvval}\n if obj.parent_component().type() is not Var:\n continue\n rec = t1.out_db[sym].find_record()\n # obj.value = rec.level\n soln.variable[sym] = {\"Value\": rec.level}\n if extract_rc and not math.isnan(rec.marginal):\n # Do not set marginals to nan\n # model.rc[obj] = rec.marginal\n soln.variable[sym]['rc'] = rec.marginal\n\n if extract_dual:\n for c in model.component_data_objects(Constraint, active=True):\n if c.body.is_fixed():\n continue\n sym = symbolMap.getSymbol(c)\n if c.equality:\n rec = t1.out_db[sym].find_record()\n if not math.isnan(rec.marginal):\n # model.dual[c] = rec.marginal\n soln.constraint[sym] = {'dual': rec.marginal}\n else:\n # Solver didn't provide marginals,\n # nothing else to do here\n break\n else:\n # Inequality, assume if 2-sided that only\n # one side's marginal is nonzero\n # Negate marginal for _lo equations\n marg = 0\n if c.lower is not None:\n rec_lo = t1.out_db[sym + '_lo'].find_record()\n marg -= rec_lo.marginal\n if c.upper is not None:\n rec_hi = t1.out_db[sym + '_hi'].find_record()\n marg += rec_hi.marginal\n if not math.isnan(marg):\n # model.dual[c] = marg\n soln.constraint[sym] = {'dual': marg}\n else:\n # Solver didn't provide marginals,\n # nothing else to do here\n break\n\n results.solution.insert(soln)\n\n if keepfiles:\n print(\"\\nGAMS WORKING DIRECTORY: %s\\n\" % ws.working_directory)\n elif tmpdir is not None:\n # Garbage collect all references to t1.out_db\n # So that .gdx file can be deleted\n t1 = rec = rec_lo = rec_hi = None\n file_removal_gams_direct(tmpdir, 
newdir)\n\n ####################################################################\n # Finish with results\n ####################################################################\n\n results._smap_id = smap_id\n results._smap = None\n if isinstance(model, IBlockStorage):\n if len(results.solution) == 1:\n results.solution(0).symbol_map = \\\n getattr(model, \"._symbol_maps\")[results._smap_id]\n results.solution(0).default_variable_value = \\\n self._default_variable_value\n if load_solutions:\n model.load_solution(results.solution(0))\n results.solution.clear()\n else:\n assert len(results.solution) == 0\n # see the hack in the write method\n # we don't want this to stick around on the model\n # after the solve\n assert len(getattr(model, \"._symbol_maps\")) == 1\n delattr(model, \"._symbol_maps\")\n del results._smap_id\n else:\n if load_solutions:\n model.solutions.load_from(results)\n results._smap_id = None\n results.solution.clear()\n else:\n results._smap = model.solutions.symbol_map[smap_id]\n model.solutions.delete_symbol_map(smap_id)\n\n postsolve_completion_time = time.time()\n if report_timing:\n print(\" %6.2f seconds required for postsolve\" %\n (postsolve_completion_time - solve_completion_time))\n print(\" %6.2f seconds required total\" %\n (postsolve_completion_time - initial_time))\n\n return results\n\n\nclass GAMSShell(pyomo.util.plugin.Plugin):\n \"\"\"A generic interface to GAMS solvers\"\"\"\n pyomo.util.plugin.implements(IOptSolver)\n pyomo.util.plugin.alias('_gams_shell', doc='The GAMS modeling language')\n\n def __init__(self, **kwds):\n self._version = None\n self._default_variable_value = None\n\n self._capabilities = Options()\n self._capabilities.linear = True\n self._capabilities.quadratic_objective = True\n self._capabilities.quadratic_constraint = True\n self._capabilities.integer = True\n self._capabilities.sos1 = False\n self._capabilities.sos2 = False\n\n self.options = Options() # ignored\n\n pyomo.util.plugin.Plugin.__init__(self, **kwds)\n\n def available(self, exception_flag=True):\n \"\"\"True if the solver is available\"\"\"\n exe = pyutilib.services.registered_executable(\"gams\")\n if exception_flag is False:\n return exe is not None\n else:\n if exe is not None:\n return True\n else:\n raise NameError(\n \"No 'gams' command found on system PATH - GAMS shell \"\n \"solver functionality is not available.\")\n\n def _default_executable(self):\n executable = pyutilib.services.registered_executable(\"gams\")\n if executable is None:\n logger.warning(\"Could not locate the 'gams' executable, \"\n \"which is required for solver gams\")\n self.enable = False\n return None\n return executable.get_path()\n\n def executable(self):\n \"\"\"\n Returns the executable used by this solver.\n \"\"\"\n return self._default_executable()\n\n def _get_version(self):\n \"\"\"\n Returns a tuple describing the solver executable version.\n \"\"\"\n solver_exec = self.executable()\n\n if solver_exec is None:\n return _extract_version('')\n else:\n results = pyutilib.subprocess.run([solver_exec])\n return _extract_version(results[1])\n\n def version(self):\n \"\"\"\n Returns a 4-tuple describing the solver executable version.\n \"\"\"\n if self._version is None:\n self._version = self._get_version()\n return self._version\n\n def warm_start_capable(self):\n return True\n\n def default_variable_value(self):\n return self._default_variable_value\n\n def solve(self, *args, **kwds):\n \"\"\"\n Uses command line to call GAMS.\n\n tee=False:\n Output GAMS log to stdout.\n 
logfile=None:\n Optionally a logfile can be written.\n load_solutions=True:\n Optionally skip loading solution into model, in which case\n the results object will contain the solution data.\n keepfiles=False:\n Keep temporary files.\n tmpdir=None:\n Specify directory path for storing temporary files.\n A directory will be created if one of this name doesn't exist.\n None (default) uses the system default temporary path.\n report_timing=False:\n Print timing reports for presolve, solver, postsolve, etc.\n io_options:\n Updated with additional keywords passed to solve()\n warmstart=False:\n Warmstart by initializing model's variables to their values.\n symbolic_solver_labels=False:\n Use full Pyomo component names rather than\n shortened symbols (slower, but useful for debugging).\n labeler=None:\n Custom labeler. Incompatible with symbolic_solver_labels.\n solver=None:\n If None, GAMS will use default solver for model type.\n mtype=None:\n Model type. If None, will chose from lp, nlp, mip, and minlp.\n add_options=None:\n List of additional lines to write directly\n into model file before the solve statement.\n For model attributes, <model name> is GAMS_MODEL.\n skip_trivial_constraints=False:\n Skip writing constraints whose body section is fixed\n file_determinism=1:\n How much effort do we want to put into ensuring the\n GAMS file is written deterministically for a Pyomo model:\n 0 : None\n 1 : sort keys of indexed components (default)\n 2 : sort keys AND sort names (over declaration order)\n put_results='results':\n Not available for modification on GAMSShell solver.\n \"\"\"\n\n # Make sure available() doesn't crash\n self.available()\n\n if len(args) != 1:\n raise ValueError('Exactly one model must be passed '\n 'to solve method of GAMSSolver.')\n model = args[0]\n\n load_solutions = kwds.pop(\"load_solutions\", True)\n tee = kwds.pop(\"tee\", False)\n logfile = kwds.pop(\"logfile\", None)\n keepfiles = kwds.pop(\"keepfiles\", False)\n tmpdir = kwds.pop(\"tmpdir\", None)\n report_timing = kwds.pop(\"report_timing\", False)\n io_options = kwds.pop(\"io_options\", {})\n\n if len(kwds):\n # Pass remaining keywords to writer, which will handle\n # any unrecognized arguments\n io_options.update(kwds)\n\n initial_time = time.time()\n\n ####################################################################\n # Presolve\n ####################################################################\n\n # IMPORTANT - only delete the whole tmpdir if the solver was the one\n # that made the directory. Otherwise, just delete the files the solver\n # made, if not keepfiles. 
That way the user can select a directory\n # they already have, like the current directory, without having to\n # worry about the rest of the contents of that directory being deleted.\n newdir = False\n if tmpdir is None:\n tmpdir = mkdtemp()\n newdir = True\n elif not os.path.exists(tmpdir):\n # makedirs creates all necessary intermediate directories in order\n # to create the path to tmpdir, if they don't already exist.\n # However, if keepfiles is False, we only delete the final folder,\n # leaving the rest of the intermediate ones.\n os.makedirs(tmpdir)\n newdir = True\n\n output_filename = os.path.join(tmpdir, 'model.gms')\n lst_filename = os.path.join(tmpdir, 'output.lst')\n statresults_filename = os.path.join(tmpdir, 'resultsstat.dat')\n\n io_options['put_results'] = os.path.join(tmpdir, 'results')\n results_filename = os.path.join(tmpdir, 'results.dat')\n\n if isinstance(model, IBlockStorage):\n # Kernel blocks have slightly different write method\n smap_id = model.write(filename=output_filename,\n format=ProblemFormat.gams,\n _called_by_solver=True,\n **io_options)\n symbolMap = getattr(model, \"._symbol_maps\")[smap_id]\n else:\n (_, smap_id) = model.write(filename=output_filename,\n format=ProblemFormat.gams,\n io_options=io_options)\n symbolMap = model.solutions.symbol_map[smap_id]\n\n presolve_completion_time = time.time()\n if report_timing:\n print(\" %6.2f seconds required for presolve\" %\n (presolve_completion_time - initial_time))\n\n ####################################################################\n # Apply solver\n ####################################################################\n\n exe = self.executable()\n command = [exe, output_filename, 'o=' + lst_filename]\n if tee and not logfile:\n # default behaviour of gams is to print to console, for\n # compatability with windows and *nix we want to explicitly log to\n # stdout (see https://www.gams.com/latest/docs/UG_GamsCall.html)\n command.append(\"lo=3\")\n elif not tee and not logfile:\n command.append(\"lo=0\")\n elif not tee and logfile:\n command.append(\"lo=2\")\n elif tee and logfile:\n command.append(\"lo=4\")\n if logfile:\n command.append(\"lf=\" + str(logfile))\n\n try:\n rc, _ = pyutilib.subprocess.run(command, tee=tee)\n\n if keepfiles:\n print(\"\\nGAMS WORKING DIRECTORY: %s\\n\" % tmpdir)\n\n if rc == 1 or rc == 127:\n raise RuntimeError(\"Command 'gams' was not recognized\")\n elif rc != 0:\n if rc == 3:\n # Execution Error\n # Run check_expr_evaluation, which errors if necessary\n check_expr_evaluation(model, symbolMap, 'shell')\n # If nothing was raised, or for all other cases, raise this\n raise RuntimeError(\"GAMS encountered an error during solve. 
\"\n \"Check listing file for details.\")\n\n with open(results_filename, 'r') as results_file:\n results_text = results_file.read()\n with open(statresults_filename, 'r') as statresults_file:\n statresults_text = statresults_file.read()\n finally:\n if not keepfiles:\n if newdir:\n shutil.rmtree(tmpdir)\n else:\n os.remove(output_filename)\n os.remove(lst_filename)\n os.remove(results_filename)\n os.remove(statresults_filename)\n\n solve_completion_time = time.time()\n if report_timing:\n print(\" %6.2f seconds required for solver\" %\n (solve_completion_time - presolve_completion_time))\n\n ####################################################################\n # Postsolve\n ####################################################################\n\n # import suffixes must be on the top-level model\n if isinstance(model, IBlockStorage):\n model_suffixes = list(name for (name,comp) \\\n in pyomo.core.kernel.component_suffix.\\\n import_suffix_generator(model,\n active=True,\n descend_into=False,\n return_key=True))\n else:\n model_suffixes = list(name for (name,comp) \\\n in pyomo.core.base.suffix.\\\n active_import_suffix_generator(model))\n extract_dual = ('dual' in model_suffixes)\n extract_rc = ('rc' in model_suffixes)\n\n stat_vars = dict()\n # Skip first line of explanatory text\n for line in statresults_text.splitlines()[1:]:\n items = line.split()\n try:\n stat_vars[items[0]] = float(items[1])\n except ValueError:\n # GAMS printed NA, just make it nan\n stat_vars[items[0]] = float('nan')\n\n results = SolverResults()\n results.problem.name = output_filename\n results.problem.lower_bound = stat_vars[\"OBJEST\"]\n results.problem.upper_bound = stat_vars[\"OBJEST\"]\n results.problem.number_of_variables = stat_vars[\"NUMVAR\"]\n results.problem.number_of_constraints = stat_vars[\"NUMEQU\"]\n results.problem.number_of_nonzeros = stat_vars[\"NUMNZ\"]\n results.problem.number_of_binary_variables = None\n # Includes binary vars:\n results.problem.number_of_integer_variables = stat_vars[\"NUMDVAR\"]\n results.problem.number_of_continuous_variables = stat_vars[\"NUMVAR\"] \\\n - stat_vars[\"NUMDVAR\"]\n results.problem.number_of_objectives = 1 # required by GAMS writer\n obj = list(model.component_data_objects(Objective, active=True))\n assert len(obj) == 1, 'Only one objective is allowed.'\n obj = obj[0]\n objctvval = stat_vars[\"OBJVAL\"]\n if obj.is_minimizing():\n results.problem.sense = ProblemSense.minimize\n results.problem.upper_bound = objctvval\n else:\n results.problem.sense = ProblemSense.maximize\n results.problem.lower_bound = objctvval\n\n results.solver.name = \"GAMS \" + str(self.version())\n\n # Init termination condition to None to give preference to this first\n # block of code, only set certain TC's below if it's still None\n results.solver.termination_condition = None\n results.solver.message = None\n\n solvestat = stat_vars[\"SOLVESTAT\"]\n if solvestat == 1:\n results.solver.status = SolverStatus.ok\n elif solvestat == 2:\n results.solver.status = SolverStatus.ok\n results.solver.termination_condition = TerminationCondition.maxIterations\n elif solvestat == 3:\n results.solver.status = SolverStatus.ok\n results.solver.termination_condition = TerminationCondition.maxTimeLimit\n elif solvestat == 5:\n results.solver.status = SolverStatus.ok\n results.solver.termination_condition = TerminationCondition.maxEvaluations\n elif solvestat == 7:\n results.solver.status = SolverStatus.aborted\n results.solver.termination_condition = TerminationCondition.licensingProblems\n elif 
solvestat == 8:\n results.solver.status = SolverStatus.aborted\n results.solver.termination_condition = TerminationCondition.userInterrupt\n elif solvestat == 10:\n results.solver.status = SolverStatus.error\n results.solver.termination_condition = TerminationCondition.solverFailure\n elif solvestat == 11:\n results.solver.status = SolverStatus.error\n results.solver.termination_condition = TerminationCondition.internalSolverError\n elif solvestat == 4:\n results.solver.status = SolverStatus.warning\n results.solver.message = \"Solver quit with a problem (see LST file)\"\n elif solvestat in (9, 12, 13):\n results.solver.status = SolverStatus.error\n elif solvestat == 6:\n results.solver.status = SolverStatus.unknown\n\n results.solver.return_code = rc # 0\n # Not sure if this value is actually user time\n # \"the elapsed time it took to execute a solve statement in total\"\n results.solver.user_time = stat_vars[\"ETSOLVE\"]\n results.solver.system_time = None\n results.solver.wallclock_time = None\n results.solver.termination_message = None\n\n soln = Solution()\n\n modelstat = stat_vars[\"MODELSTAT\"]\n if modelstat == 1:\n results.solver.termination_condition = TerminationCondition.optimal\n soln.status = SolutionStatus.optimal\n elif modelstat == 2:\n results.solver.termination_condition = TerminationCondition.locallyOptimal\n soln.status = SolutionStatus.locallyOptimal\n elif modelstat in [3, 18]:\n results.solver.termination_condition = TerminationCondition.unbounded\n soln.status = SolutionStatus.unbounded\n elif modelstat in [4, 5, 6, 10, 19]:\n results.solver.termination_condition = TerminationCondition.infeasible\n soln.status = SolutionStatus.infeasible\n elif modelstat == 7:\n results.solver.termination_condition = TerminationCondition.feasible\n soln.status = SolutionStatus.feasible\n elif modelstat == 8:\n # 'Integer solution model found'\n results.solver.termination_condition = TerminationCondition.optimal\n soln.status = SolutionStatus.optimal\n elif modelstat == 9:\n results.solver.termination_condition = TerminationCondition.intermediateNonInteger\n soln.status = SolutionStatus.other\n elif modelstat == 11:\n # Should be handled above, if modelstat and solvestat both\n # indicate a licensing problem\n if results.solver.termination_condition is None:\n results.solver.termination_condition = TerminationCondition.licensingProblems\n soln.status = SolutionStatus.error\n elif modelstat in [12, 13]:\n if results.solver.termination_condition is None:\n results.solver.termination_condition = TerminationCondition.error\n soln.status = SolutionStatus.error\n elif modelstat == 14:\n if results.solver.termination_condition is None:\n results.solver.termination_condition = TerminationCondition.noSolution\n soln.status = SolutionStatus.unknown\n elif modelstat in [15, 16, 17]:\n # Having to do with CNS models,\n # not sure what to make of status descriptions\n results.solver.termination_condition = TerminationCondition.optimal\n soln.status = SolutionStatus.unsure\n else:\n # This is just a backup catch, all cases are handled above\n soln.status = SolutionStatus.error\n\n soln.gap = abs(results.problem.upper_bound \\\n - results.problem.lower_bound)\n\n model_soln = dict()\n # Skip first line of explanatory text\n for line in results_text.splitlines()[1:]:\n items = line.split()\n model_soln[items[0]] = (items[1], items[2])\n\n has_rc_info = True\n for sym, ref in iteritems(symbolMap.bySymbol):\n obj = ref()\n if isinstance(model, IBlockStorage):\n # Kernel variables have no 
'parent_component'\n if obj.ctype is Objective:\n soln.objective[sym] = {'Value': objctvval}\n if obj.ctype is not Var:\n continue\n else:\n if obj.parent_component().type() is Objective:\n soln.objective[sym] = {'Value': objctvval}\n if obj.parent_component().type() is not Var:\n continue\n rec = model_soln[sym]\n # obj.value = float(rec[0])\n soln.variable[sym] = {\"Value\": float(rec[0])}\n if extract_rc and has_rc_info:\n try:\n # model.rc[obj] = float(rec[1])\n soln.variable[sym]['rc'] = float(rec[1])\n except ValueError:\n # Solver didn't provide marginals\n has_rc_info = False\n\n if extract_dual:\n for c in model.component_data_objects(Constraint, active=True):\n if c.body.is_fixed():\n continue\n sym = symbolMap.getSymbol(c)\n if c.equality:\n rec = model_soln[sym]\n try:\n # model.dual[c] = float(rec[1])\n soln.constraint[sym] = {'dual': float(rec[1])}\n except ValueError:\n # Solver didn't provide marginals\n # nothing else to do here\n break\n else:\n # Inequality, assume if 2-sided that only\n # one side's marginal is nonzero\n # Negate marginal for _lo equations\n marg = 0\n if c.lower is not None:\n rec_lo = model_soln[sym + '_lo']\n try:\n marg -= float(rec_lo[1])\n except ValueError:\n # Solver didn't provide marginals\n marg = float('nan')\n if c.upper is not None:\n rec_hi = model_soln[sym + '_hi']\n try:\n marg += float(rec_hi[1])\n except ValueError:\n # Solver didn't provide marginals\n marg = float('nan')\n if not math.isnan(marg):\n # model.dual[c] = marg\n soln.constraint[sym] = {'dual': marg}\n else:\n # Solver didn't provide marginals\n # nothing else to do here\n break\n\n results.solution.insert(soln)\n\n ####################################################################\n # Finish with results\n ####################################################################\n\n results._smap_id = smap_id\n results._smap = None\n if isinstance(model, IBlockStorage):\n if len(results.solution) == 1:\n results.solution(0).symbol_map = \\\n getattr(model, \"._symbol_maps\")[results._smap_id]\n results.solution(0).default_variable_value = \\\n self._default_variable_value\n if load_solutions:\n model.load_solution(results.solution(0))\n results.solution.clear()\n else:\n assert len(results.solution) == 0\n # see the hack in the write method\n # we don't want this to stick around on the model\n # after the solve\n assert len(getattr(model, \"._symbol_maps\")) == 1\n delattr(model, \"._symbol_maps\")\n del results._smap_id\n else:\n if load_solutions:\n model.solutions.load_from(results)\n results._smap_id = None\n results.solution.clear()\n else:\n results._smap = model.solutions.symbol_map[smap_id]\n model.solutions.delete_symbol_map(smap_id)\n\n postsolve_completion_time = time.time()\n if report_timing:\n print(\" %6.2f seconds required for postsolve\" %\n (postsolve_completion_time - solve_completion_time))\n print(\" %6.2f seconds required total\" %\n (postsolve_completion_time - initial_time))\n\n return results\n\n\nclass OutputStream:\n \"\"\"Output stream object for simultaneously writing to multiple streams.\n\n tee=False:\n If set writing to this stream will write to stdout.\n logfile=None:\n Optionally a logfile can be written.\n\n \"\"\"\n\n def __init__(self, tee=False, logfile=None):\n \"\"\"Initialize output stream object.\"\"\"\n if tee:\n self.tee = sys.stdout\n else:\n self.tee = None\n self.logfile = logfile\n self.logfile_buffer = None\n\n def __enter__(self):\n \"\"\"Enter context of output stream and open logfile if given.\"\"\"\n if self.logfile is 
not None:\n self.logfile_buffer = open(self.logfile, 'a')\n return self\n\n def __exit__(self, *args, **kwargs):\n \"\"\"Enter context of output stream and close logfile if necessary.\"\"\"\n if self.logfile_buffer is not None:\n self.logfile_buffer.close()\n self.logfile_buffer = None\n\n def write(self, message):\n \"\"\"Write messages to all streams.\"\"\"\n if self.tee is not None:\n self.tee.write(message)\n if self.logfile_buffer is not None:\n self.logfile_buffer.write(message)\n\n def flush(self):\n \"\"\"Needed for python3 compatibility.\"\"\"\n if self.tee is not None:\n self.tee.flush()\n if self.logfile_buffer is not None:\n self.logfile_buffer.flush()\n\n\ndef check_expr_evaluation(model, symbolMap, solver_io):\n try:\n # Temporarily initialize uninitialized variables in order to call\n # value() on each expression to check domain violations\n uninit_vars = list()\n for var in model.component_data_objects(Var, active=True):\n if var.value is None:\n uninit_vars.append(var)\n var.value = 0\n\n # Constraints\n for con in model.component_data_objects(Constraint, active=True):\n if con.body.is_fixed():\n continue\n check_expr(con.body, con.name, solver_io)\n\n # Objective\n obj = list(model.component_data_objects(Objective, active=True))\n assert len(obj) == 1, \"GAMS writer can only take 1 active objective\"\n obj = obj[0]\n check_expr(obj.expr, obj.name, solver_io)\n finally:\n # Return uninitialized variables to None\n for var in uninit_vars:\n var.value = None\n\ndef check_expr(expr, name, solver_io):\n # Check if GAMS will encounter domain violations in presolver\n # operations at current values, which are None (0) by default\n # Used to handle log and log10 violations, for example\n try:\n value(expr)\n except ValueError:\n logger.warning(\"While evaluating model.%s's expression, GAMS solver \"\n \"encountered an error.\\nGAMS requires that all \"\n \"equations and expressions evaluate at initial values.\\n\"\n \"Ensure variable values do not violate any domains, \"\n \"and use the warmstart=True keyword to solve().\" % name)\n if solver_io == 'shell':\n # For shell, there is no previous exception to worry about\n # overwriting, so raise the ValueError.\n # But for direct, the GamsExceptionExecution will be raised.\n raise\n\ndef file_removal_gams_direct(tmpdir, newdir):\n if newdir:\n shutil.rmtree(tmpdir)\n else:\n os.remove(os.path.join(tmpdir, '_gams_py_gjo0.gms'))\n os.remove(os.path.join(tmpdir, '_gams_py_gjo0.lst'))\n os.remove(os.path.join(tmpdir, '_gams_py_gdb0.gdx'))\n # .pf file is not made when DebugLevel is Off\n", "path": "pyomo/solvers/plugins/solvers/GAMS.py" } ]
diff --git a/pyomo/solvers/plugins/solvers/GAMS.py b/pyomo/solvers/plugins/solvers/GAMS.py index 7e72710d2e0..3139d827934 100644 --- a/pyomo/solvers/plugins/solvers/GAMS.py +++ b/pyomo/solvers/plugins/solvers/GAMS.py @@ -753,7 +753,7 @@ def solve(self, *args, **kwds): command.append("lf=" + str(logfile)) try: - rc, _ = pyutilib.subprocess.run(command) + rc, _ = pyutilib.subprocess.run(command, tee=tee) if keepfiles: print("\nGAMS WORKING DIRECTORY: %s\n" % tmpdir) diff --git a/pyomo/solvers/tests/checks/test_GAMS.py b/pyomo/solvers/tests/checks/test_GAMS.py index 180658b5539..17661957c3e 100644 --- a/pyomo/solvers/tests/checks/test_GAMS.py +++ b/pyomo/solvers/tests/checks/test_GAMS.py @@ -11,7 +11,7 @@ import pyutilib.th as unittest import pyutilib.subprocess -from pyutilib.misc import setup_redirect, reset_redirect +from pyutilib.misc import capture_output from pyomo.environ import * from six import StringIO import contextlib, sys, os, shutil @@ -223,28 +223,28 @@ class GAMSLogfileGmsTests(GAMSLogfileTestBase): def test_no_tee(self): with SolverFactory("gams", solver_io="gms") as opt: - with redirected_subprocess_run() as output: + with capture_output() as output: opt.solve(self.m, tee=False) self._check_stdout(output.getvalue(), exists=False) self._check_logfile(exists=False) def test_tee(self): with SolverFactory("gams", solver_io="gms") as opt: - with redirected_subprocess_run() as output: + with capture_output() as output: opt.solve(self.m, tee=True) self._check_stdout(output.getvalue(), exists=True) self._check_logfile(exists=False) def test_logfile(self): with SolverFactory("gams", solver_io="gms") as opt: - with redirected_subprocess_run() as output: + with capture_output() as output: opt.solve(self.m, logfile=self.logfile) self._check_stdout(output.getvalue(), exists=False) self._check_logfile(exists=True) def test_tee_and_logfile(self): with SolverFactory("gams", solver_io="gms") as opt: - with redirected_subprocess_run() as output: + with capture_output() as output: opt.solve(self.m, logfile=self.logfile, tee=True) self._check_stdout(output.getvalue(), exists=True) self._check_logfile(exists=True) @@ -261,69 +261,33 @@ class GAMSLogfilePyTests(GAMSLogfileTestBase): def test_no_tee(self): with SolverFactory("gams", solver_io="python") as opt: - with redirected_stdout() as output: + with capture_output() as output: opt.solve(self.m, tee=False) self._check_stdout(output.getvalue(), exists=False) self._check_logfile(exists=False) def test_tee(self): with SolverFactory("gams", solver_io="python") as opt: - with redirected_stdout() as output: + with capture_output() as output: opt.solve(self.m, tee=True) self._check_stdout(output.getvalue(), exists=True) self._check_logfile(exists=False) def test_logfile(self): with SolverFactory("gams", solver_io="python") as opt: - with redirected_stdout() as output: + with capture_output() as output: opt.solve(self.m, logfile=self.logfile) self._check_stdout(output.getvalue(), exists=False) self._check_logfile(exists=True) def test_tee_and_logfile(self): with SolverFactory("gams", solver_io="python") as opt: - with redirected_stdout() as output: + with capture_output() as output: opt.solve(self.m, logfile=self.logfile, tee=True) self._check_stdout(output.getvalue(), exists=True) self._check_logfile(exists=True) [email protected] -def redirected_stdout(): - """Temporarily redirect stdout into a string buffer.""" - output = StringIO() - try: - setup_redirect(output) - yield output - finally: - reset_redirect() - - [email protected] -def 
redirected_subprocess_run(): - """Temporarily redirect subprocess calls stdout into a string buffer.""" - output = StringIO() - old_call = pyutilib.subprocess.run_command - - def run(*args, **kwargs): - returncode, out = old_call(*args, **kwargs) - output.write("\n".join( - [ - s for s in out.splitlines() - if not s.startswith("*** Could not write to console: /dev/tty") - ] - )) - output - return returncode, out - - try: - pyutilib.subprocess.run_command = run - pyutilib.subprocess.run = run - yield output - finally: - pyutilib.subprocess.run_command = old_call - pyutilib.subprocess.run = old_call - if __name__ == "__main__": unittest.main()
zenml-io__zenml-1301
[BUG]: unable to successfully deploy azure stack recipe ### Contact Details [Optional] _No response_ ### System Information ZenML version: 0.33.0 Install path: /Users/fschulz/dev/learn-zenml/.venv/lib/python3.10/site-packages/zenml Python version: 3.10.8 Platform information: {'os': 'mac', 'mac_version': '13.2'} Environment: native Integrations: ['azure', 'github', 'kaniko', 'mlflow', 'pillow', 'scipy', 'sklearn'] [requirements.txt](https://github.com/zenml-io/zenml/files/10683614/requirements.txt) ### What happened? `zenml stack recipe destroy azureml-minimal` doesn't recognize the "azureml-minimal" stack recipe name and throws the following error: `TypeError: destroy() got multiple values for argument 'stack_recipe_name'` Trying with quotes `zenml stack recipe destroy "azureml-minimal"` and variations of it, i.e. `cd ./path/to/recipe && zenml stack recipe destroy .`` lead to the same error. ### Reproduction steps 1. zenml stack recipe pull azureml-minimal 2. update values 3. zenml stack recipe deploy azureml-minimal 4. zenml stack recipe destroy azureml-minimal ### Relevant log output ```shell ❯ zenml stack recipe destroy azureml-minimal Using the default local database. Running with active workspace: 'default' (repository) ╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮ │ /Users/fschulz/dev/learn-zenml/.venv/bin/zenml:8 in <module> │ │ │ │ 5 from zenml.cli.cli import cli │ │ 6 if __name__ == '__main__': │ │ 7 │ sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0]) │ │ ❱ 8 │ sys.exit(cli()) │ │ 9 │ │ │ │ /Users/fschulz/dev/learn-zenml/.venv/lib/python3.10/site-packages/click/core.py:1130 in __call__ │ │ │ │ 1127 │ │ │ 1128 │ def __call__(self, *args: t.Any, **kwargs: t.Any) -> t.Any: │ │ 1129 │ │ """Alias for :meth:`main`.""" │ │ ❱ 1130 │ │ return self.main(*args, **kwargs) │ │ 1131 │ │ 1132 │ │ 1133 class Command(BaseCommand): │ │ │ │ /Users/fschulz/dev/learn-zenml/.venv/lib/python3.10/site-packages/click/core.py:1055 in main │ │ │ │ 1052 │ │ try: │ │ 1053 │ │ │ try: │ │ 1054 │ │ │ │ with self.make_context(prog_name, args, **extra) as ctx: │ │ ❱ 1055 │ │ │ │ │ rv = self.invoke(ctx) │ │ 1056 │ │ │ │ │ if not standalone_mode: │ │ 1057 │ │ │ │ │ │ return rv │ │ 1058 │ │ │ │ │ # it's not safe to `ctx.exit(rv)` here! │ │ │ │ /Users/fschulz/dev/learn-zenml/.venv/lib/python3.10/site-packages/click/core.py:1657 in invoke │ │ │ │ 1654 │ │ │ │ super().invoke(ctx) │ │ 1655 │ │ │ │ sub_ctx = cmd.make_context(cmd_name, args, parent=ctx) │ │ 1656 │ │ │ │ with sub_ctx: │ │ ❱ 1657 │ │ │ │ │ return _process_result(sub_ctx.command.invoke(sub_ctx)) │ │ 1658 │ │ │ │ 1659 │ │ # In chain mode we create the contexts step by step, but after the │ │ 1660 │ │ # base command has been invoked. Because at that point we do not │ │ │ │ /Users/fschulz/dev/learn-zenml/.venv/lib/python3.10/site-packages/click/core.py:1657 in invoke │ │ │ │ 1654 │ │ │ │ super().invoke(ctx) │ │ 1655 │ │ │ │ sub_ctx = cmd.make_context(cmd_name, args, parent=ctx) │ │ 1656 │ │ │ │ with sub_ctx: │ │ ❱ 1657 │ │ │ │ │ return _process_result(sub_ctx.command.invoke(sub_ctx)) │ │ 1658 │ │ │ │ 1659 │ │ # In chain mode we create the contexts step by step, but after the │ │ 1660 │ │ # base command has been invoked. 
Because at that point we do not │ │ │ │ /Users/fschulz/dev/learn-zenml/.venv/lib/python3.10/site-packages/click/core.py:1657 in invoke │ │ │ │ 1654 │ │ │ │ super().invoke(ctx) │ │ 1655 │ │ │ │ sub_ctx = cmd.make_context(cmd_name, args, parent=ctx) │ │ 1656 │ │ │ │ with sub_ctx: │ │ ❱ 1657 │ │ │ │ │ return _process_result(sub_ctx.command.invoke(sub_ctx)) │ │ 1658 │ │ │ │ 1659 │ │ # In chain mode we create the contexts step by step, but after the │ │ 1660 │ │ # base command has been invoked. Because at that point we do not │ │ │ │ /Users/fschulz/dev/learn-zenml/.venv/lib/python3.10/site-packages/click/core.py:1404 in invoke │ │ │ │ 1401 │ │ │ echo(style(message, fg="red"), err=True) │ │ 1402 │ │ │ │ 1403 │ │ if self.callback is not None: │ │ ❱ 1404 │ │ │ return ctx.invoke(self.callback, **ctx.params) │ │ 1405 │ │ │ 1406 │ def shell_complete(self, ctx: Context, incomplete: str) -> t.List["CompletionItem"]: │ │ 1407 │ │ """Return a list of completions for the incomplete value. Looks │ │ │ │ /Users/fschulz/dev/learn-zenml/.venv/lib/python3.10/site-packages/click/core.py:760 in invoke │ │ │ │ 757 │ │ │ │ 758 │ │ with augment_usage_errors(__self): │ │ 759 │ │ │ with ctx: │ │ ❱ 760 │ │ │ │ return __callback(*args, **kwargs) │ │ 761 │ │ │ 762 │ def forward( │ │ 763 │ │ __self, __cmd: "Command", *args: t.Any, **kwargs: t.Any # noqa: B902 │ │ │ │ /Users/fschulz/dev/learn-zenml/.venv/lib/python3.10/site-packages/click/decorators.py:84 in │ │ new_func │ │ │ │ 81 │ │ │ │ │ " existing." │ │ 82 │ │ │ │ ) │ │ 83 │ │ │ │ │ ❱ 84 │ │ │ return ctx.invoke(f, obj, *args, **kwargs) │ │ 85 │ │ │ │ 86 │ │ return update_wrapper(t.cast(F, new_func), f) │ │ 87 │ │ │ │ /Users/fschulz/dev/learn-zenml/.venv/lib/python3.10/site-packages/click/core.py:760 in invoke │ │ │ │ 757 │ │ │ │ 758 │ │ with augment_usage_errors(__self): │ │ 759 │ │ │ with ctx: │ │ ❱ 760 │ │ │ │ return __callback(*args, **kwargs) │ │ 761 │ │ │ 762 │ def forward( │ │ 763 │ │ __self, __cmd: "Command", *args: t.Any, **kwargs: t.Any # noqa: B902 │ │ │ │ /Users/fschulz/dev/learn-zenml/.venv/lib/python3.10/site-packages/click/decorators.py:26 in │ │ new_func │ │ │ │ 23 │ """ │ │ 24 │ │ │ 25 │ def new_func(*args, **kwargs): # type: ignore │ │ ❱ 26 │ │ return f(get_current_context(), *args, **kwargs) │ │ 27 │ │ │ 28 │ return update_wrapper(t.cast(F, new_func), f) │ │ 29 │ ╰──────────────────────────────────────────────────────────────────────────────────────────────────╯ TypeError: destroy() got multiple values for argument 'stack_recipe_name' ``` ### Code of Conduct - [X] I agree to follow this project's Code of Conduct
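The traceback above shows two Click decorator wrappers in the call chain: the `make_pass_decorator` wrapper (`decorators.py:84`, which calls `ctx.invoke(f, obj, *args, **kwargs)`) and the `pass_context` wrapper (`decorators.py:26`, which prepends the current context). When both decorators inject a positional argument but the command function reserves only one parameter slot before `stack_recipe_name`, the second injected object falls into the `stack_recipe_name` slot while Click also supplies it as a keyword, producing exactly the reported `TypeError`. The following is a minimal, hypothetical sketch (not the actual ZenML code; `Handler`, `destroy`, and `name` are illustrative names) that reproduces the same failure mode:

```python
# Hypothetical minimal reproduction of the failure mode suggested by the
# traceback; names (Handler, destroy, name) are illustrative, not ZenML's.
import click


class Handler:
    """Stand-in for the object injected by make_pass_decorator."""


pass_handler = click.make_pass_decorator(Handler, ensure=True)


@click.command()
@click.argument("name")
@pass_handler        # outer wrapper: calls ctx.invoke(f, handler_obj, **params)
@click.pass_context  # inner wrapper: prepends the current Context positionally
def destroy(handler, name):
    # Two positionals (context, handler object) are injected, but only one
    # slot exists before `name`, so the handler object fills `name` while
    # Click also passes name=... as a keyword argument.
    click.echo(name)


if __name__ == "__main__":
    destroy()  # e.g. `python repro.py azureml-minimal`
```

Running this sketch with any argument raises `TypeError: destroy() got multiple values for argument 'name'`; removing one of the injected positionals, or adding a matching parameter to the function signature, makes it pass, which is consistent with the decorator/signature mismatch implied by the log output.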
[ { "content": "# Copyright (c) ZenML GmbH 2022. All Rights Reserved.\n\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at:\n\n# https://www.apache.org/licenses/LICENSE-2.0\n\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express\n# or implied. See the License for the specific language governing\n# permissions and limitations under the License.\n\"\"\"Functionality to handle downloading ZenML stacks via the CLI.\"\"\"\n\nimport os\nimport shutil\nimport subprocess\nimport sys\nfrom pathlib import Path\nfrom typing import List, Optional, Tuple\n\nimport click\nfrom rich.text import Text\n\nimport zenml\nfrom zenml.cli import server\nfrom zenml.cli import utils as cli_utils\nfrom zenml.cli.stack import import_stack, stack\nfrom zenml.config.global_config import GlobalConfiguration\nfrom zenml.exceptions import GitNotFoundError\nfrom zenml.io import fileio\nfrom zenml.logger import get_logger\nfrom zenml.utils import io_utils, yaml_utils\nfrom zenml.utils.analytics_utils import AnalyticsEvent, event_handler\n\nlogger = get_logger(__name__)\n\nEXCLUDED_RECIPE_DIRS = [\"\"]\nSTACK_RECIPES_GITHUB_REPO = \"https://github.com/zenml-io/mlops-stacks.git\"\nSTACK_RECIPES_REPO_DIR = \"zenml_stack_recipes\"\nVARIABLES_FILE = \"values.tfvars.json\"\nALPHA_MESSAGE = (\n \"The stack recipes CLI is in alpha and actively being developed. \"\n \"Please avoid running mission-critical workloads on resources deployed \"\n \"through these commands. If you encounter any problems, create an issue \"\n f\"on the repository {STACK_RECIPES_GITHUB_REPO} and we'll help you out!\"\n)\nNOT_INSTALLED_MESSAGE = (\n \"The stack recipe commands seem to be unavailable on your machine. This \"\n \"is probably because ZenML was installed without the optional terraform \"\n \"dependencies. 
To install the missing dependencies: \\n\\n\"\n f'`pip install \"zenml[stacks]=={zenml.__version__}\"`.'\n)\n\n\nclass LocalStackRecipe:\n \"\"\"Class to encapsulate the local recipe that can be run from the CLI.\"\"\"\n\n def __init__(self, path: Path, name: str) -> None:\n \"\"\"Create a new LocalStack instance.\n\n Args:\n name: The name of the stack, specifically the name of the folder\n on git\n path: Path at which the stack is installed\n \"\"\"\n self.name = name\n self.path = path\n\n def is_present(self) -> bool:\n \"\"\"Checks if the stack_recipe exists at the given path.\n\n Returns:\n True if the stack_recipe exists at the given path, else False.\n \"\"\"\n return fileio.isdir(str(self.path))\n\n @property\n def locals_content(self) -> str:\n \"\"\"Returns the locals.tf content associated with a particular recipe.\n\n Returns:\n The locals.tf content associated with a particular recipe.\n\n Raises:\n ValueError: If the locals.tf file is not found.\n FileNotFoundError: If the locals.tf file is not one of the options.\n \"\"\"\n locals_file = os.path.join(self.path, \"locals.tf\")\n try:\n with open(locals_file) as locals:\n locals_content = locals.read()\n return locals_content\n except FileNotFoundError:\n if fileio.exists(str(self.path)) and fileio.isdir(str(self.path)):\n raise ValueError(f\"No locals.tf file found in \" f\"{self.path}\")\n else:\n raise FileNotFoundError(\n f\"Recipe {self.name} is not one of the available options.\"\n f\"\\n\"\n f\"To list all available recipes, type: `zenml stack recipe \"\n f\"list`\"\n )\n\n\nclass StackRecipe:\n \"\"\"Class for all stack recipe objects.\"\"\"\n\n def __init__(self, name: str, path_in_repo: Path) -> None:\n \"\"\"Create a new StackRecipe instance.\n\n Args:\n name: The name of the recipe, specifically the name of the folder\n on git\n path_in_repo: Path to the local recipe within the global zenml\n folder.\n \"\"\"\n self.name = name\n self.path_in_repo = path_in_repo\n\n @property\n def readme_content(self) -> str:\n \"\"\"Returns the README content associated with a particular recipe.\n\n Returns:\n The README content associated with a particular recipe.\n\n Raises:\n ValueError: If the README file is not found.\n FileNotFoundError: If the README file is not one of the options.\n \"\"\"\n readme_file = os.path.join(self.path_in_repo, \"README.md\")\n try:\n with open(readme_file) as readme:\n readme_content = readme.read()\n return readme_content\n except FileNotFoundError:\n if fileio.exists(str(self.path_in_repo)) and fileio.isdir(\n str(self.path_in_repo)\n ):\n raise ValueError(\n f\"No README.md file found in \" f\"{self.path_in_repo}\"\n )\n else:\n raise FileNotFoundError(\n f\"Recipe {self.name} is not one of the available options.\"\n f\"\\n\"\n f\"To list all available recipes, type: `zenml stack recipe \"\n f\"list`\"\n )\n\n\nclass StackRecipeRepo:\n \"\"\"Class that represents the stack recipes repo.\"\"\"\n\n def __init__(self, cloning_path: Path) -> None:\n \"\"\"Create a new StackRecipeRepo instance.\n\n Args:\n cloning_path: Path to the local stack recipe repository.\n\n Raises:\n GitNotFoundError: If git is not installed.\n \"\"\"\n self.cloning_path = cloning_path\n\n try:\n from git.exc import InvalidGitRepositoryError, NoSuchPathError\n from git.repo.base import Repo\n except ImportError as e:\n logger.error(\n \"In order to use the CLI tool to interact with our recipes, \"\n \"you need to have an installation of Git on your machine.\"\n )\n raise GitNotFoundError(e)\n\n try:\n self.repo = 
Repo(self.cloning_path)\n except (NoSuchPathError, InvalidGitRepositoryError):\n self.repo = None # type: ignore\n logger.debug(\n f\"`Cloning_path`: {self.cloning_path} was empty, \"\n \"Automatically cloning the recipes.\"\n )\n self.clone()\n self.checkout_latest_release()\n\n @property\n def active_version(self) -> Optional[str]:\n \"\"\"Returns the active version of the repository.\n\n In case a release branch is checked out, this property returns\n that version as a string, else `None` is returned.\n\n Returns:\n The active version of the repository.\n \"\"\"\n for branch in self.repo.heads:\n if (\n branch.name.startswith(\"release/\")\n and branch.commit == self.repo.head.commit\n ):\n return branch.name[len(\"release/\") :]\n\n return None\n\n @property\n def latest_release_branch(self) -> str:\n \"\"\"Returns the name of the latest release branch.\n\n Returns:\n The name of the latest release branch.\n \"\"\"\n from packaging.version import Version, parse\n\n tags = sorted(\n self.repo.tags,\n key=lambda t: t.commit.committed_datetime,\n )\n\n if not tags:\n return \"main\"\n\n latest_tag = parse(tags[-1].name)\n if type(latest_tag) is not Version:\n return \"main\"\n\n latest_release_version: str = tags[-1].name\n return f\"release/{latest_release_version}\"\n\n @property\n def is_cloned(self) -> bool:\n \"\"\"Returns whether we have already cloned the repository.\n\n Returns:\n Whether we have already cloned the repository.\n \"\"\"\n return self.cloning_path.exists()\n\n def clone(self) -> None:\n \"\"\"Clones repo to `cloning_path`.\n\n If you break off the operation with a `KeyBoardInterrupt` before the\n cloning is completed, this method will delete whatever was partially\n downloaded from your system.\n \"\"\"\n self.cloning_path.mkdir(parents=True, exist_ok=False)\n try:\n from git.repo.base import Repo\n\n logger.info(f\"Downloading recipes to {self.cloning_path}\")\n self.repo = Repo.clone_from(\n STACK_RECIPES_GITHUB_REPO, self.cloning_path, branch=\"main\"\n )\n except KeyboardInterrupt:\n self.delete()\n logger.error(\"Canceled download of recipes.. 
Rolled back.\")\n\n def delete(self) -> None:\n \"\"\"Delete `cloning_path` if it exists.\n\n Raises:\n AssertionError: If `cloning_path` does not exist.\n \"\"\"\n if self.cloning_path.exists():\n shutil.rmtree(self.cloning_path)\n else:\n raise AssertionError(\n f\"Cannot delete the stack recipes repository from \"\n f\"{self.cloning_path} as it does not exist.\"\n )\n\n def checkout(self, branch: str) -> None:\n \"\"\"Checks out a specific branch or tag of the repository.\n\n Args:\n branch: The name of the branch or tag to check out.\n \"\"\"\n logger.info(f\"Checking out branch: {branch}\")\n self.repo.git.checkout(branch)\n\n def checkout_latest_release(self) -> None:\n \"\"\"Checks out the latest release of the repository.\"\"\"\n self.checkout(branch=self.latest_release_branch)\n\n\nclass GitStackRecipesHandler(object):\n \"\"\"Class for the `GitStackRecipesHandler` that interfaces with the CLI.\"\"\"\n\n def __init__(self) -> None:\n \"\"\"Create a new GitStackRecipesHandler instance.\"\"\"\n self.repo_dir = io_utils.get_global_config_directory()\n self.stack_recipes_dir = Path(\n os.path.join(self.repo_dir, STACK_RECIPES_REPO_DIR)\n )\n self.stack_recipe_repo = StackRecipeRepo(self.stack_recipes_dir)\n\n @property\n def stack_recipes(self) -> List[StackRecipe]:\n \"\"\"Property that contains a list of stack recipes.\n\n Returns:\n A list of stack recipes.\n \"\"\"\n return [\n StackRecipe(name, Path(os.path.join(self.stack_recipes_dir, name)))\n for name in sorted(os.listdir(self.stack_recipes_dir))\n if (\n not name.startswith(\".\")\n and not name.startswith(\"__\")\n and not name == \"LICENSE\"\n and not name.endswith(\".md\")\n and not name.endswith(\".sh\")\n )\n ]\n\n def is_stack_recipe(self, stack_recipe_name: Optional[str] = None) -> bool:\n \"\"\"Checks if the given stack_recipe_name corresponds to a stack_recipe.\n\n Args:\n stack_recipe_name: The name of the stack_recipe to check.\n\n Returns:\n Whether the supplied stack_recipe_name corresponds to a\n stack recipe.\n \"\"\"\n stack_recipe_dict = {\n recipe.name: recipe for recipe in self.stack_recipes\n }\n if stack_recipe_name:\n if stack_recipe_name in stack_recipe_dict.keys():\n return True\n\n return False\n\n def get_stack_recipes(\n self, stack_recipe_name: Optional[str] = None\n ) -> List[StackRecipe]:\n \"\"\"Method that allows you to get a stack recipe by name.\n\n If no stack recipe is supplied, all stack recipes are returned.\n\n Args:\n stack_recipe_name: Name of an stack recipe.\n\n Returns:\n A list of stack recipes.\n\n Raises:\n KeyError: If the supplied stack_recipe_name is not found.\n \"\"\"\n stack_recipe_dict = {\n recipe.name: recipe\n for recipe in self.stack_recipes\n if recipe.name not in EXCLUDED_RECIPE_DIRS\n }\n if stack_recipe_name:\n if stack_recipe_name in stack_recipe_dict.keys():\n return [stack_recipe_dict[stack_recipe_name]]\n else:\n raise KeyError(\n f\"Stack recipe {stack_recipe_name} does not exist! 
\"\n f\"Available Stack Recipes: {list(stack_recipe_dict)}\"\n \"If you want to deploy a custom stack recipe available \"\n \"locally, please call deploy with the `--skip-pull` flag \"\n \"and specify the path to the stack recipe directory with \"\n \"the `--path` or `-p` flag.\"\n )\n else:\n return self.stack_recipes\n\n def pull(\n self,\n branch: str,\n force: bool = False,\n ) -> None:\n \"\"\"Pulls the stack recipes from the main git stack recipes repository.\n\n Args:\n branch: The name of the branch to pull from.\n force: Whether to force the pull.\n \"\"\"\n from git.exc import GitCommandError\n\n if not self.stack_recipe_repo.is_cloned:\n self.stack_recipe_repo.clone()\n elif force:\n self.stack_recipe_repo.delete()\n self.stack_recipe_repo.clone()\n\n try:\n self.stack_recipe_repo.checkout(branch=branch)\n except GitCommandError:\n cli_utils.warning(\n f\"The specified branch {branch} not found in \"\n \"repo, falling back to the latest release.\"\n )\n self.stack_recipe_repo.checkout_latest_release()\n\n def pull_latest_stack_recipes(self) -> None:\n \"\"\"Pulls the latest stack recipes from the stack recipes repository.\"\"\"\n self.pull(\n branch=self.stack_recipe_repo.latest_release_branch, force=True\n )\n\n def copy_stack_recipe(\n self, stack_recipe_instance: StackRecipe, destination_dir: str\n ) -> None:\n \"\"\"Copies a stack recipe to the destination_dir.\n\n Args:\n stack_recipe_instance: The stack recipe to copy.\n destination_dir: The destination directory to copy the recipe to.\n \"\"\"\n io_utils.create_dir_if_not_exists(destination_dir)\n io_utils.copy_dir(\n str(stack_recipe_instance.path_in_repo),\n destination_dir,\n overwrite=True,\n )\n\n @staticmethod\n def clean_current_stack_recipes() -> None:\n \"\"\"Deletes the stack recipes directory from your working directory.\"\"\"\n stack_recipes_directory = os.path.join(\n os.getcwd(), \"zenml_stack_recipes\"\n )\n shutil.rmtree(stack_recipes_directory)\n\n def get_active_version(self) -> Optional[str]:\n \"\"\"Returns the active version of the mlops-stacks repository.\n\n Returns:\n The active version of the repository.\n \"\"\"\n self.stack_recipe_repo.checkout_latest_release()\n return self.stack_recipe_repo.active_version\n\n\npass_git_stack_recipes_handler = click.make_pass_decorator(\n GitStackRecipesHandler, ensure=True\n)\n\n\[email protected](\n \"recipe\",\n help=\"Commands for using the stack recipes.\",\n invoke_without_command=True,\n)\ndef stack_recipe() -> None:\n \"\"\"Access all ZenML stack recipes.\"\"\"\n\n\n@stack_recipe.command(name=\"list\", help=\"List the available stack recipes.\")\n@pass_git_stack_recipes_handler\ndef list_stack_recipes(\n git_stack_recipes_handler: GitStackRecipesHandler,\n) -> None:\n \"\"\"List all available stack recipes.\n\n Args:\n git_stack_recipes_handler: The GitStackRecipesHandler instance.\n \"\"\"\n cli_utils.warning(ALPHA_MESSAGE)\n stack_recipes = [\n {\"stack_recipe_name\": stack_recipe_instance.name}\n for stack_recipe_instance in git_stack_recipes_handler.get_stack_recipes()\n ]\n cli_utils.print_table(stack_recipes)\n\n cli_utils.declare(\"\\n\" + \"To get the latest list of stack recipes, run: \")\n text = Text(\"zenml stack recipe pull -y\", style=\"markdown.code_block\")\n cli_utils.declare(text)\n\n cli_utils.declare(\"\\n\" + \"To pull any individual stack recipe, type: \")\n text = Text(\n \"zenml stack recipe pull RECIPE_NAME\", style=\"markdown.code_block\"\n )\n cli_utils.declare(text)\n\n\n@stack_recipe.command(help=\"Deletes the ZenML stack 
recipes directory.\")\[email protected](\n \"--path\",\n \"-p\",\n type=click.STRING,\n default=\"zenml_stack_recipes\",\n help=\"Relative path at which you want to clean the stack_recipe(s)\",\n)\n@pass_git_stack_recipes_handler\ndef clean(\n git_stack_recipes_handler: GitStackRecipesHandler, path: str\n) -> None:\n \"\"\"Deletes the stack recipes directory from your working directory.\n\n Args:\n git_stack_recipes_handler: The GitStackRecipesHandler instance.\n path: The path at which you want to clean the stack_recipe(s).\n \"\"\"\n stack_recipes_directory = os.path.join(os.getcwd(), path)\n if fileio.isdir(stack_recipes_directory) and cli_utils.confirmation(\n \"Do you wish to delete the stack recipes directory? \\n\"\n f\"{stack_recipes_directory}\"\n ):\n git_stack_recipes_handler.clean_current_stack_recipes()\n cli_utils.declare(\n \"Stack recipes directory was deleted from your current working \"\n \"directory.\"\n )\n elif not fileio.isdir(stack_recipes_directory):\n logger.error(\n f\"Unable to delete the stack recipes directory - \"\n f\"{stack_recipes_directory} - \"\n \"as it was not found in your current working directory.\"\n )\n\n\n@stack_recipe.command(help=\"Find out more about a stack recipe.\")\n@pass_git_stack_recipes_handler\[email protected](\"stack_recipe_name\")\ndef info(\n git_stack_recipes_handler: GitStackRecipesHandler,\n stack_recipe_name: str,\n) -> None:\n \"\"\"Find out more about a stack recipe.\n\n Outputs a pager view of the stack_recipe's README.md file.\n\n Args:\n git_stack_recipes_handler: The GitStackRecipesHandler instance.\n stack_recipe_name: The name of the stack recipe.\n \"\"\"\n try:\n stack_recipe_obj = git_stack_recipes_handler.get_stack_recipes(\n stack_recipe_name\n )[0]\n except KeyError as e:\n cli_utils.error(str(e))\n\n else:\n print(stack_recipe_obj.readme_content)\n\n\n@stack_recipe.command(\n help=\"Describe the stack components and their tools that are \"\n \"created as part of this recipe.\"\n)\n@pass_git_stack_recipes_handler\[email protected](\"stack_recipe_name\")\ndef describe(\n git_stack_recipes_handler: GitStackRecipesHandler,\n stack_recipe_name: str,\n) -> None:\n \"\"\"Describe the stack components and their tools that are created as part of this recipe.\n\n Outputs the \"Description\" section of the recipe metadata.\n\n Args:\n git_stack_recipes_handler: The GitStackRecipesHandler instance.\n stack_recipe_name: The name of the stack recipe.\n \"\"\"\n try:\n stack_recipe_obj = git_stack_recipes_handler.get_stack_recipes(\n stack_recipe_name\n )[0]\n except KeyError as e:\n cli_utils.error(str(e))\n\n else:\n metadata = yaml_utils.read_yaml(\n file_path=os.path.join(\n stack_recipe_obj.path_in_repo, \"metadata.yaml\"\n )\n )\n logger.info(metadata[\"Description\"])\n\n\n@stack_recipe.command(help=\"The active version of the mlops-stacks repository\")\n@pass_git_stack_recipes_handler\ndef version(\n git_stack_recipes_handler: GitStackRecipesHandler,\n) -> None:\n \"\"\"The active version of the mlops-stacks repository.\n\n Args:\n git_stack_recipes_handler: The GitStackRecipesHandler instance.\n \"\"\"\n active_version = git_stack_recipes_handler.get_active_version()\n if active_version:\n cli_utils.declare(active_version)\n else:\n cli_utils.warning(\"Unable to detect version.\")\n\n\n@stack_recipe.command(\n help=\"Pull stack recipes straight into your current working directory.\"\n)\n@pass_git_stack_recipes_handler\[email protected](\"stack_recipe_name\", required=False, default=None)\[email protected](\n 
\"--yes\",\n \"-y\",\n \"force\",\n is_flag=True,\n help=\"Force the redownload of the stack_recipes folder to the ZenML config \"\n \"folder.\",\n)\[email protected](\n \"--path\",\n \"-p\",\n type=click.STRING,\n default=\"zenml_stack_recipes\",\n help=\"Relative path at which you want to install the stack recipe(s)\",\n)\ndef pull(\n git_stack_recipes_handler: GitStackRecipesHandler,\n stack_recipe_name: str,\n force: bool,\n path: str,\n) -> None:\n \"\"\"Pull stack_recipes straight into your current working directory.\n\n Add the flag --yes or -y to redownload all the stack_recipes afresh.\n Use the flag --version or -v and the version number to specify\n which version of ZenML you wish to use for the stack_recipes.\n\n Args:\n git_stack_recipes_handler: The GitStackRecipesHandler instance.\n stack_recipe_name: The name of the stack_recipe.\n force: Force the redownload of the stack_recipes folder to the ZenML config\n folder.\n path: The path at which you want to install the stack_recipe(s).\n \"\"\"\n cli_utils.warning(ALPHA_MESSAGE)\n git_stack_recipes_handler.pull(branch=\"main\", force=force)\n\n stack_recipes_dir = os.path.join(os.getcwd(), path)\n io_utils.create_dir_if_not_exists(stack_recipes_dir)\n try:\n stack_recipes = git_stack_recipes_handler.get_stack_recipes(\n stack_recipe_name\n )\n except KeyError as e:\n cli_utils.error(str(e))\n\n else:\n for stack_recipe_instance in stack_recipes:\n with event_handler(\n event=AnalyticsEvent.PULL_STACK_RECIPE,\n metadata={\"stack_recipe_name\": stack_recipe_instance.name},\n ):\n destination_dir = os.path.join(\n os.getcwd(), path, stack_recipe_instance.name\n )\n if LocalStackRecipe(\n name=stack_recipe_instance.name, path=Path(destination_dir)\n ).is_present():\n if force or cli_utils.confirmation(\n f\"Stack recipe {stack_recipe_instance.name} is already \"\n f\"pulled at {destination_dir}.\\nOverwriting this \"\n f\"directory will delete all terraform state files \"\n f\"and the local configuration. 
We recommend that you \"\n f\"do this only once the remote resources have been \"\n f\"destroyed.Do you wish to proceed with overwriting?\"\n ):\n fileio.rmtree(destination_dir)\n else:\n cli_utils.warning(\n f\"Stack recipe {stack_recipe_instance.name} not \"\n \"overwritten.\"\n )\n continue\n\n cli_utils.declare(\n f\"Pulling stack recipe {stack_recipe_instance.name}...\"\n )\n\n io_utils.create_dir_if_not_exists(destination_dir)\n git_stack_recipes_handler.copy_stack_recipe(\n stack_recipe_instance, destination_dir\n )\n cli_utils.declare(\n f\"Stack recipe pulled in directory: {destination_dir}\"\n )\n cli_utils.declare(\n \"\\n Please edit the configuration values as you see fit, \"\n f\"in the file: {os.path.join(destination_dir, 'locals.tf')} \"\n \"before you run the deploy command.\"\n )\n # also copy the modules folder from the repo (if it exists)\n # this is a temporary fix until we have a proper module registry\n modules_dir = os.path.join(\n git_stack_recipes_handler.stack_recipes_dir, \"modules\"\n )\n if os.path.exists(modules_dir):\n cli_utils.declare(\"Copying modules folder...\")\n io_utils.copy_dir(\n modules_dir, os.path.join(stack_recipes_dir, \"modules\"), True\n )\n\n\n@stack_recipe.command(\n help=\"Run the stack_recipe that you previously pulled with \"\n \"`zenml stack recipe pull`\"\n)\[email protected](\"stack_recipe_name\", required=True)\[email protected](\n \"--path\",\n \"-p\",\n type=click.STRING,\n default=\"zenml_stack_recipes\",\n help=\"Relative path at which local stack recipe(s) should exist\",\n)\[email protected](\n \"--force\",\n \"-f\",\n \"force\",\n is_flag=True,\n help=\"Force pull the stack recipe. This overwrites any existing recipe \"\n \"files present locally, including the terraform state files and the \"\n \"local configuration.\",\n)\[email protected](\n \"--stack-name\",\n \"-n\",\n type=click.STRING,\n required=False,\n help=\"Set a name for the ZenML stack that will be imported from the YAML \"\n \"configuration file which gets generated after deploying the stack recipe. \"\n \"Defaults to the name of the stack recipe being deployed.\",\n)\[email protected](\n \"--import\",\n \"import_stack_flag\",\n is_flag=True,\n help=\"Import the stack automatically after the recipe is deployed.\",\n)\[email protected](\n \"--log-level\",\n type=click.Choice(\n [\"TRACE\", \"DEBUG\", \"INFO\", \"WARN\", \"ERROR\"], case_sensitive=False\n ),\n help=\"Choose one of TRACE, DEBUG, INFO, WARN or ERROR (case insensitive) as \"\n \"log level for the deploy operation.\",\n default=\"ERROR\",\n)\[email protected](\n \"--skip-check\",\n \"-s\",\n is_flag=True,\n help=\"Skip the checking of locals.tf file before executing the recipe.\",\n)\[email protected](\n \"--no-server\",\n is_flag=True,\n help=\"Don't deploy ZenML even if there's no active cloud deployment.\",\n)\[email protected](\n \"--skip-pull\",\n is_flag=True,\n help=\"Skip the pulling of the stack recipe before deploying. This should be used \"\n \"if you have a local copy of your recipe already. 
Use the `--path` or `-p` flag to \"\n \"specify the directory that hosts your recipe(s).\",\n)\[email protected](\n \"--install\",\n \"-i\",\n \"enabled_services\",\n multiple=True,\n)\n@pass_git_stack_recipes_handler\[email protected]_context\ndef deploy(\n ctx: click.Context,\n git_stack_recipes_handler: GitStackRecipesHandler,\n stack_recipe_name: str,\n path: str,\n force: bool,\n import_stack_flag: bool,\n log_level: str,\n skip_check: bool,\n no_server: bool,\n skip_pull: bool,\n stack_name: Optional[str],\n enabled_services: Tuple[str],\n) -> None:\n \"\"\"Run the stack_recipe at the specified relative path.\n\n `zenml stack_recipe pull <STACK_RECIPE_NAME>` has to be called with the\n same relative path before the `deploy` command.\n\n Args:\n ctx: The click context.\n git_stack_recipes_handler: The GitStackRecipesHandler instance.\n stack_recipe_name: The name of the stack_recipe.\n path: The path at which you want to install the stack_recipe(s).\n force: Force pull the stack recipe, overwriting any existing files.\n stack_name: A name for the ZenML stack that gets imported as a result\n of the recipe deployment.\n import_stack_flag: Import the stack automatically after the recipe is\n deployed. The stack configuration file is always generated and\n can be imported manually otherwise.\n log_level: Choose one of TRACE, DEBUG, INFO, WARN or ERROR (case\n insensitive) as log level for the `deploy` operation.\n skip_check: Skip the checking of locals.tf file before executing the\n recipe.\n no_server: Don't deploy ZenML even if there's no active cloud\n deployment.\n skip_pull: Skip the pull of the stack recipe before deploying. This\n should be used if you have a local copy of your recipe already.\n enabled_services: A list of services to install. Choose from mlflow, seldon,\n kserve, kubeflow, tekton.\n \"\"\"\n with event_handler(\n event=AnalyticsEvent.RUN_STACK_RECIPE,\n metadata={\"stack_recipe_name\": stack_recipe_name},\n ):\n\n import python_terraform\n\n cli_utils.warning(ALPHA_MESSAGE)\n stack_recipes_dir = Path(os.getcwd()) / path\n\n if sys.platform == \"win32\":\n logger.info(\n \"If you are running stack_recipes on Windows, make sure that \"\n \"you have an associated application with executing .sh files. \"\n \"If you don't have any and you see a pop-up during 'zenml \"\n \"stack_recipe run', we suggest to use the Git BASH: \"\n \"https://gitforwindows.org/\"\n )\n\n try:\n if skip_pull:\n pass\n else:\n _ = git_stack_recipes_handler.get_stack_recipes(\n stack_recipe_name\n )[0]\n except KeyError as e:\n cli_utils.error(str(e))\n else:\n stack_recipe_dir = stack_recipes_dir / stack_recipe_name\n local_stack_recipe = LocalStackRecipe(\n stack_recipe_dir, stack_recipe_name\n )\n\n if not local_stack_recipe.is_present():\n if skip_pull:\n cli_utils.error(\n \"You have specified the --skip-pull flag, but the \"\n \"stack recipe is not present locally at the specified \"\n f\"path. 
Please ensure the {stack_recipe_name} recipe is \"\n f\"present at {stack_recipe_dir} and try again.\"\n )\n else:\n ctx.invoke(\n pull,\n stack_recipe_name=stack_recipe_name,\n path=path,\n force=force,\n )\n\n try:\n # warn that prerequisites should be met\n metadata = yaml_utils.read_yaml(\n file_path=os.path.join(\n local_stack_recipe.path, \"metadata.yaml\"\n )\n )\n if not cli_utils.confirmation(\n \"\\nPrerequisites for running this recipe are as follows.\\n\"\n f\"{metadata['Prerequisites']}\"\n \"\\n\\n Are all of these conditions met?\"\n ):\n cli_utils.warning(\n \"Prerequisites are not installed. Please make sure \"\n \"they are met and run deploy again.\"\n )\n return\n\n if not skip_check:\n logger.info(\n \"The following values are selected for the \"\n \"configuration of your cloud resources. You can \"\n \"change it by modifying the contents of the locals.tf \"\n \"file here: \"\n f\"{os.path.join(local_stack_recipe.path, 'locals.tf')}\\n\"\n )\n\n print(local_stack_recipe.locals_content)\n\n if skip_check or cli_utils.confirmation(\n f\"\\nDo you wish to deploy the {stack_recipe_name} recipe \"\n \"with the above configuration? Please make sure that \"\n \"resources with the same values as above don't already \"\n \"exist on your cloud account.\"\n ):\n from zenml.recipes import StackRecipeService\n from zenml.services.terraform.terraform_service import (\n TerraformServiceConfig,\n )\n\n terraform_config = TerraformServiceConfig(\n root_runtime_path=str(\n StackRecipeService.STACK_RECIPES_CONFIG_PATH\n ),\n directory_path=str(local_stack_recipe.path),\n log_level=log_level,\n variables_file_path=VARIABLES_FILE,\n )\n # find an existing service with the same terraform path\n # create a new one if not found\n stack_recipe_service = StackRecipeService.get_service(\n str(local_stack_recipe.path)\n )\n if stack_recipe_service:\n cli_utils.declare(\n \"An existing deployment of the recipe found. \"\n f\"with path {local_stack_recipe.path}. \"\n \"Proceeding to update or create resources. \"\n )\n else:\n stack_recipe_service = StackRecipeService(\n config=terraform_config,\n enabled_services=enabled_services,\n )\n\n # start the service (the init and apply operation)\n stack_recipe_service.start()\n\n # invoke server deploy\n if no_server:\n logger.info(\n \"The --no-server flag was passed. \"\n \"Skipping the remote deployment of ZenML. \"\n \"Please note that if you wish to use the stack \"\n \"that you created through this recipe, you will \"\n \"need to deploy ZenML on the cloud.\"\n )\n else:\n if zen_server_exists():\n logger.info(\n \"A ZenML deployment exists already with URL: \"\n f\"{GlobalConfiguration().zen_store.url}. \"\n f\"The recipe will mot create a new \"\n f\"installation.\"\n )\n else:\n logger.info(\n \"No remote deployment of ZenML detected. \"\n )\n vars = stack_recipe_service.get_vars()\n filter = [\n \"aws-stores-minimal\",\n \"azureml-minimal\",\n \"vertex-ai\",\n ]\n if Path(\n stack_recipe_service.terraform_client.working_dir\n ).name in filter and (\n \"enable_mlflow\" not in vars\n or vars[\"enable_mlflow\"] is False\n ):\n logger.warning(\n \"This recipe doesn't create a Kubernetes \"\n \"cluster and as of now, an existing \"\n \"cluster is required for ZenML deployment. 
\"\n \"Please take a look at the \"\n \"guide for steps on how to proceed: \"\n \"https://docs.zenml.io/getting-started/deploying-zenml/cli#option-1-starting-from-scratch\"\n )\n logger.info(\n \"Not attempting to import the generated \"\n \"YAML file since there isn't any active \"\n \"ZenML deployment.\"\n )\n return\n else:\n ctx.invoke(\n server.deploy,\n config=stack_recipe_service.get_deployment_info(),\n connect=True,\n )\n\n # get the stack yaml path\n stack_yaml_file = os.path.join(\n local_stack_recipe.path,\n stack_recipe_service.stack_file_path[2:],\n )\n\n logger.info(\n \"\\nA stack configuration YAML file has been generated \"\n f\"as part of the deployment of the {stack_recipe_name} \"\n f\"recipe. Find it at {stack_yaml_file}.\"\n )\n\n if import_stack_flag:\n logger.info(\n \"\\nThe flag `--import` is set. Proceeding \"\n \"to import a new ZenML stack from the created \"\n \"resources.\"\n )\n import_stack_name = (\n stack_name if stack_name else stack_recipe_name\n )\n cli_utils.declare(\n \"Importing a new stack with the name \"\n f\"{import_stack_name}.\"\n )\n\n # import deployed resources as ZenML stack\n ctx.invoke(\n import_stack,\n stack_name=import_stack_name,\n filename=stack_yaml_file,\n ignore_version_mismatch=True,\n )\n\n cli_utils.declare(\n \"Please consider creating any secrets that your \"\n \"stack components like the metadata store might \"\n \"need. You can inspect the fields of a stack \"\n \"component by running a describe command on them.\"\n )\n cli_utils.declare(\n \"\\n Run 'terraform output' in the recipe's \"\n f\"directory at {local_stack_recipe.path} to get a \"\n f\"list of outputs. To now retrieve sensitive \"\n f\"outputs, for example, the metadata-db-password \"\n \"use the command 'terraform output \"\n \"metadata-db-password' to get the \"\n \"value in the command-line.\"\n )\n\n except RuntimeError as e:\n cli_utils.error(\n f\"Error running recipe {stack_recipe_name}: {str(e)} \"\n \"\\nPlease look at the error message to figure out \"\n \"why the command failed. If the error is due some wrong \"\n \"configuration, please consider checking the locals.tf \"\n \"file to verify if the inputs are correct. Most commonly, \"\n \"the command can fail due to a timeout error. In that \"\n \"case, please run zenml stack recipe deploy \"\n f\"{stack_recipe_name} again.\"\n )\n except python_terraform.TerraformCommandError as e:\n cli_utils.error(\n f\"Error running recipe {stack_recipe_name}: {str(e.err)} \"\n \"\\nPlease look at the error message to figure out why the \"\n \"command failed. If the error is due some wrong \"\n \"configuration, please consider checking the locals.tf \"\n \"file to verify if the inputs are correct. Most commonly, \"\n \"the command can fail due to a timeout error. 
In that \"\n \"case, please run zenml stack recipe deploy \"\n f\"{stack_recipe_name} again.\"\n )\n\n\ndef zen_server_exists() -> bool:\n \"\"\"Check if a remote ZenServer is active.\n\n Returns:\n True if active, false otherwise.\n \"\"\"\n return not GlobalConfiguration().zen_store.is_local_store()\n\n\n@stack_recipe.command(\n help=\"Destroy the stack components created previously with \"\n \"`zenml stack recipe deploy <name>`\"\n)\[email protected](\"stack_recipe_name\", required=True)\[email protected](\n \"--path\",\n \"-p\",\n type=click.STRING,\n default=\"zenml_stack_recipes\",\n help=\"Relative path at which you want to install the stack_recipe(s)\",\n)\n@pass_git_stack_recipes_handler\[email protected]_context\ndef destroy(\n git_stack_recipes_handler: GitStackRecipesHandler,\n stack_recipe_name: str,\n path: str,\n) -> None:\n \"\"\"Destroy all resources from the stack_recipe at the specified relative path.\n\n `zenml stack_recipe deploy stack_recipe_name` has to be called with the\n same relative path before the destroy command.\n\n Args:\n git_stack_recipes_handler: The GitStackRecipesHandler instance.\n stack_recipe_name: The name of the stack_recipe.\n path: The path of the stack recipe you want to destroy.\n\n Raises:\n ModuleNotFoundError: If the recipe is found at the given path.\n \"\"\"\n with event_handler(\n event=AnalyticsEvent.DESTROY_STACK_RECIPE,\n metadata={\"stack_recipe_name\": stack_recipe_name},\n ):\n import python_terraform\n\n cli_utils.warning(ALPHA_MESSAGE)\n\n stack_recipes_dir = Path(os.getcwd()) / path\n\n if sys.platform == \"win32\":\n logger.info(\n \"If you are running stack_recipes on Windows, make sure that \"\n \"you have an associated application with executing .sh files. \"\n \"If you don't have any and you see a pop-up during 'zenml \"\n \"stack_recipe run', we suggest to use the Git BASH: \"\n \"https://gitforwindows.org/\"\n )\n\n try:\n _ = git_stack_recipes_handler.get_stack_recipes(stack_recipe_name)[\n 0\n ]\n except KeyError as e:\n cli_utils.error(str(e))\n else:\n stack_recipe_dir = stack_recipes_dir / stack_recipe_name\n local_stack_recipe = LocalStackRecipe(\n stack_recipe_dir, stack_recipe_name\n )\n\n if not local_stack_recipe.is_present():\n raise ModuleNotFoundError(\n f\"The recipe {stack_recipe_name} \"\n \"has not been pulled at the specified path. \"\n f\"Run `zenml stack recipe pull {stack_recipe_name}` \"\n f\"followed by `zenml stack recipe deploy \"\n f\"{stack_recipe_name}` first.\"\n )\n\n try:\n # use the stack recipe directory path to find the service instance\n from zenml.recipes import StackRecipeService\n\n stack_recipe_service = StackRecipeService.get_service(\n str(local_stack_recipe.path)\n )\n if not stack_recipe_service:\n cli_utils.error(\n \"No stack recipe found with the path \"\n f\"{local_stack_recipe.path}. You need to first deploy \"\n \"the recipe by running \\nzenml stack recipe deploy \"\n f\"{stack_recipe_name}\"\n )\n # stop the service to destroy resources created by recipe\n stack_recipe_service.stop()\n\n cli_utils.declare(\n \"\\n\"\n + \"Your active stack might now be invalid. 
Please run:\"\n )\n text = Text(\n \"zenml stack describe\", style=\"markdown.code_block\"\n )\n cli_utils.declare(text)\n cli_utils.declare(\n \"\\n\"\n + \"to investigate and switch to a new stack if needed.\"\n )\n\n except python_terraform.TerraformCommandError as e:\n force_message = \"\"\n if stack_recipe_name == \"aws_minimal\":\n force_message = (\n \"If there are Kubernetes resources that aren't\"\n \"getting deleted, run 'kubectl delete node -all' to \"\n \"delete the nodes and consequently all Kubernetes \"\n \"resources. Run the destroy again after that, to \"\n \"remove any other remaining resources.\"\n )\n cli_utils.error(\n f\"Error destroying recipe {stack_recipe_name}: {str(e.err)}\"\n \"\\nMost commonly, the error occurs if there's some \"\n \"resource that can't be deleted instantly, for example, \"\n \"MySQL stores with backups. In such cases, please try \"\n \"again after around 30 minutes. If the issue persists, \"\n f\"kindly raise an issue at {STACK_RECIPES_GITHUB_REPO}. \"\n f\"\\n{force_message}\"\n )\n except subprocess.CalledProcessError as e:\n cli_utils.warning(\n f\"Error destroying recipe {stack_recipe_name}: {str(e)}\"\n \"\\nThe kubernetes cluster couldn't be removed due to the \"\n \"error above. Please verify if the cluster has already \"\n \"been deleted by running kubectl get nodes to check if \"\n \"there's any active nodes.Ignore this warning if there \"\n \"are no active nodes.\"\n )\n", "path": "src/zenml/cli/stack_recipes.py" } ]
[ { "content": "# Copyright (c) ZenML GmbH 2022. All Rights Reserved.\n\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at:\n\n# https://www.apache.org/licenses/LICENSE-2.0\n\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express\n# or implied. See the License for the specific language governing\n# permissions and limitations under the License.\n\"\"\"Functionality to handle downloading ZenML stacks via the CLI.\"\"\"\n\nimport os\nimport shutil\nimport subprocess\nimport sys\nfrom pathlib import Path\nfrom typing import List, Optional, Tuple\n\nimport click\nfrom rich.text import Text\n\nimport zenml\nfrom zenml.cli import server\nfrom zenml.cli import utils as cli_utils\nfrom zenml.cli.stack import import_stack, stack\nfrom zenml.config.global_config import GlobalConfiguration\nfrom zenml.exceptions import GitNotFoundError\nfrom zenml.io import fileio\nfrom zenml.logger import get_logger\nfrom zenml.utils import io_utils, yaml_utils\nfrom zenml.utils.analytics_utils import AnalyticsEvent, event_handler\n\nlogger = get_logger(__name__)\n\nEXCLUDED_RECIPE_DIRS = [\"\"]\nSTACK_RECIPES_GITHUB_REPO = \"https://github.com/zenml-io/mlops-stacks.git\"\nSTACK_RECIPES_REPO_DIR = \"zenml_stack_recipes\"\nVARIABLES_FILE = \"values.tfvars.json\"\nALPHA_MESSAGE = (\n \"The stack recipes CLI is in alpha and actively being developed. \"\n \"Please avoid running mission-critical workloads on resources deployed \"\n \"through these commands. If you encounter any problems, create an issue \"\n f\"on the repository {STACK_RECIPES_GITHUB_REPO} and we'll help you out!\"\n)\nNOT_INSTALLED_MESSAGE = (\n \"The stack recipe commands seem to be unavailable on your machine. This \"\n \"is probably because ZenML was installed without the optional terraform \"\n \"dependencies. 
To install the missing dependencies: \\n\\n\"\n f'`pip install \"zenml[stacks]=={zenml.__version__}\"`.'\n)\n\n\nclass LocalStackRecipe:\n \"\"\"Class to encapsulate the local recipe that can be run from the CLI.\"\"\"\n\n def __init__(self, path: Path, name: str) -> None:\n \"\"\"Create a new LocalStack instance.\n\n Args:\n name: The name of the stack, specifically the name of the folder\n on git\n path: Path at which the stack is installed\n \"\"\"\n self.name = name\n self.path = path\n\n def is_present(self) -> bool:\n \"\"\"Checks if the stack_recipe exists at the given path.\n\n Returns:\n True if the stack_recipe exists at the given path, else False.\n \"\"\"\n return fileio.isdir(str(self.path))\n\n @property\n def locals_content(self) -> str:\n \"\"\"Returns the locals.tf content associated with a particular recipe.\n\n Returns:\n The locals.tf content associated with a particular recipe.\n\n Raises:\n ValueError: If the locals.tf file is not found.\n FileNotFoundError: If the locals.tf file is not one of the options.\n \"\"\"\n locals_file = os.path.join(self.path, \"locals.tf\")\n try:\n with open(locals_file) as locals:\n locals_content = locals.read()\n return locals_content\n except FileNotFoundError:\n if fileio.exists(str(self.path)) and fileio.isdir(str(self.path)):\n raise ValueError(f\"No locals.tf file found in \" f\"{self.path}\")\n else:\n raise FileNotFoundError(\n f\"Recipe {self.name} is not one of the available options.\"\n f\"\\n\"\n f\"To list all available recipes, type: `zenml stack recipe \"\n f\"list`\"\n )\n\n\nclass StackRecipe:\n \"\"\"Class for all stack recipe objects.\"\"\"\n\n def __init__(self, name: str, path_in_repo: Path) -> None:\n \"\"\"Create a new StackRecipe instance.\n\n Args:\n name: The name of the recipe, specifically the name of the folder\n on git\n path_in_repo: Path to the local recipe within the global zenml\n folder.\n \"\"\"\n self.name = name\n self.path_in_repo = path_in_repo\n\n @property\n def readme_content(self) -> str:\n \"\"\"Returns the README content associated with a particular recipe.\n\n Returns:\n The README content associated with a particular recipe.\n\n Raises:\n ValueError: If the README file is not found.\n FileNotFoundError: If the README file is not one of the options.\n \"\"\"\n readme_file = os.path.join(self.path_in_repo, \"README.md\")\n try:\n with open(readme_file) as readme:\n readme_content = readme.read()\n return readme_content\n except FileNotFoundError:\n if fileio.exists(str(self.path_in_repo)) and fileio.isdir(\n str(self.path_in_repo)\n ):\n raise ValueError(\n f\"No README.md file found in \" f\"{self.path_in_repo}\"\n )\n else:\n raise FileNotFoundError(\n f\"Recipe {self.name} is not one of the available options.\"\n f\"\\n\"\n f\"To list all available recipes, type: `zenml stack recipe \"\n f\"list`\"\n )\n\n\nclass StackRecipeRepo:\n \"\"\"Class that represents the stack recipes repo.\"\"\"\n\n def __init__(self, cloning_path: Path) -> None:\n \"\"\"Create a new StackRecipeRepo instance.\n\n Args:\n cloning_path: Path to the local stack recipe repository.\n\n Raises:\n GitNotFoundError: If git is not installed.\n \"\"\"\n self.cloning_path = cloning_path\n\n try:\n from git.exc import InvalidGitRepositoryError, NoSuchPathError\n from git.repo.base import Repo\n except ImportError as e:\n logger.error(\n \"In order to use the CLI tool to interact with our recipes, \"\n \"you need to have an installation of Git on your machine.\"\n )\n raise GitNotFoundError(e)\n\n try:\n self.repo = 
Repo(self.cloning_path)\n except (NoSuchPathError, InvalidGitRepositoryError):\n self.repo = None # type: ignore\n logger.debug(\n f\"`Cloning_path`: {self.cloning_path} was empty, \"\n \"Automatically cloning the recipes.\"\n )\n self.clone()\n self.checkout_latest_release()\n\n @property\n def active_version(self) -> Optional[str]:\n \"\"\"Returns the active version of the repository.\n\n In case a release branch is checked out, this property returns\n that version as a string, else `None` is returned.\n\n Returns:\n The active version of the repository.\n \"\"\"\n for branch in self.repo.heads:\n if (\n branch.name.startswith(\"release/\")\n and branch.commit == self.repo.head.commit\n ):\n return branch.name[len(\"release/\") :]\n\n return None\n\n @property\n def latest_release_branch(self) -> str:\n \"\"\"Returns the name of the latest release branch.\n\n Returns:\n The name of the latest release branch.\n \"\"\"\n from packaging.version import Version, parse\n\n tags = sorted(\n self.repo.tags,\n key=lambda t: t.commit.committed_datetime,\n )\n\n if not tags:\n return \"main\"\n\n latest_tag = parse(tags[-1].name)\n if type(latest_tag) is not Version:\n return \"main\"\n\n latest_release_version: str = tags[-1].name\n return f\"release/{latest_release_version}\"\n\n @property\n def is_cloned(self) -> bool:\n \"\"\"Returns whether we have already cloned the repository.\n\n Returns:\n Whether we have already cloned the repository.\n \"\"\"\n return self.cloning_path.exists()\n\n def clone(self) -> None:\n \"\"\"Clones repo to `cloning_path`.\n\n If you break off the operation with a `KeyBoardInterrupt` before the\n cloning is completed, this method will delete whatever was partially\n downloaded from your system.\n \"\"\"\n self.cloning_path.mkdir(parents=True, exist_ok=False)\n try:\n from git.repo.base import Repo\n\n logger.info(f\"Downloading recipes to {self.cloning_path}\")\n self.repo = Repo.clone_from(\n STACK_RECIPES_GITHUB_REPO, self.cloning_path, branch=\"main\"\n )\n except KeyboardInterrupt:\n self.delete()\n logger.error(\"Canceled download of recipes.. 
Rolled back.\")\n\n def delete(self) -> None:\n \"\"\"Delete `cloning_path` if it exists.\n\n Raises:\n AssertionError: If `cloning_path` does not exist.\n \"\"\"\n if self.cloning_path.exists():\n shutil.rmtree(self.cloning_path)\n else:\n raise AssertionError(\n f\"Cannot delete the stack recipes repository from \"\n f\"{self.cloning_path} as it does not exist.\"\n )\n\n def checkout(self, branch: str) -> None:\n \"\"\"Checks out a specific branch or tag of the repository.\n\n Args:\n branch: The name of the branch or tag to check out.\n \"\"\"\n logger.info(f\"Checking out branch: {branch}\")\n self.repo.git.checkout(branch)\n\n def checkout_latest_release(self) -> None:\n \"\"\"Checks out the latest release of the repository.\"\"\"\n self.checkout(branch=self.latest_release_branch)\n\n\nclass GitStackRecipesHandler(object):\n \"\"\"Class for the `GitStackRecipesHandler` that interfaces with the CLI.\"\"\"\n\n def __init__(self) -> None:\n \"\"\"Create a new GitStackRecipesHandler instance.\"\"\"\n self.repo_dir = io_utils.get_global_config_directory()\n self.stack_recipes_dir = Path(\n os.path.join(self.repo_dir, STACK_RECIPES_REPO_DIR)\n )\n self.stack_recipe_repo = StackRecipeRepo(self.stack_recipes_dir)\n\n @property\n def stack_recipes(self) -> List[StackRecipe]:\n \"\"\"Property that contains a list of stack recipes.\n\n Returns:\n A list of stack recipes.\n \"\"\"\n return [\n StackRecipe(name, Path(os.path.join(self.stack_recipes_dir, name)))\n for name in sorted(os.listdir(self.stack_recipes_dir))\n if (\n not name.startswith(\".\")\n and not name.startswith(\"__\")\n and not name == \"LICENSE\"\n and not name.endswith(\".md\")\n and not name.endswith(\".sh\")\n )\n ]\n\n def is_stack_recipe(self, stack_recipe_name: Optional[str] = None) -> bool:\n \"\"\"Checks if the given stack_recipe_name corresponds to a stack_recipe.\n\n Args:\n stack_recipe_name: The name of the stack_recipe to check.\n\n Returns:\n Whether the supplied stack_recipe_name corresponds to a\n stack recipe.\n \"\"\"\n stack_recipe_dict = {\n recipe.name: recipe for recipe in self.stack_recipes\n }\n if stack_recipe_name:\n if stack_recipe_name in stack_recipe_dict.keys():\n return True\n\n return False\n\n def get_stack_recipes(\n self, stack_recipe_name: Optional[str] = None\n ) -> List[StackRecipe]:\n \"\"\"Method that allows you to get a stack recipe by name.\n\n If no stack recipe is supplied, all stack recipes are returned.\n\n Args:\n stack_recipe_name: Name of an stack recipe.\n\n Returns:\n A list of stack recipes.\n\n Raises:\n KeyError: If the supplied stack_recipe_name is not found.\n \"\"\"\n stack_recipe_dict = {\n recipe.name: recipe\n for recipe in self.stack_recipes\n if recipe.name not in EXCLUDED_RECIPE_DIRS\n }\n if stack_recipe_name:\n if stack_recipe_name in stack_recipe_dict.keys():\n return [stack_recipe_dict[stack_recipe_name]]\n else:\n raise KeyError(\n f\"Stack recipe {stack_recipe_name} does not exist! 
\"\n f\"Available Stack Recipes: {list(stack_recipe_dict)}\"\n \"If you want to deploy a custom stack recipe available \"\n \"locally, please call deploy with the `--skip-pull` flag \"\n \"and specify the path to the stack recipe directory with \"\n \"the `--path` or `-p` flag.\"\n )\n else:\n return self.stack_recipes\n\n def pull(\n self,\n branch: str,\n force: bool = False,\n ) -> None:\n \"\"\"Pulls the stack recipes from the main git stack recipes repository.\n\n Args:\n branch: The name of the branch to pull from.\n force: Whether to force the pull.\n \"\"\"\n from git.exc import GitCommandError\n\n if not self.stack_recipe_repo.is_cloned:\n self.stack_recipe_repo.clone()\n elif force:\n self.stack_recipe_repo.delete()\n self.stack_recipe_repo.clone()\n\n try:\n self.stack_recipe_repo.checkout(branch=branch)\n except GitCommandError:\n cli_utils.warning(\n f\"The specified branch {branch} not found in \"\n \"repo, falling back to the latest release.\"\n )\n self.stack_recipe_repo.checkout_latest_release()\n\n def pull_latest_stack_recipes(self) -> None:\n \"\"\"Pulls the latest stack recipes from the stack recipes repository.\"\"\"\n self.pull(\n branch=self.stack_recipe_repo.latest_release_branch, force=True\n )\n\n def copy_stack_recipe(\n self, stack_recipe_instance: StackRecipe, destination_dir: str\n ) -> None:\n \"\"\"Copies a stack recipe to the destination_dir.\n\n Args:\n stack_recipe_instance: The stack recipe to copy.\n destination_dir: The destination directory to copy the recipe to.\n \"\"\"\n io_utils.create_dir_if_not_exists(destination_dir)\n io_utils.copy_dir(\n str(stack_recipe_instance.path_in_repo),\n destination_dir,\n overwrite=True,\n )\n\n @staticmethod\n def clean_current_stack_recipes() -> None:\n \"\"\"Deletes the stack recipes directory from your working directory.\"\"\"\n stack_recipes_directory = os.path.join(\n os.getcwd(), \"zenml_stack_recipes\"\n )\n shutil.rmtree(stack_recipes_directory)\n\n def get_active_version(self) -> Optional[str]:\n \"\"\"Returns the active version of the mlops-stacks repository.\n\n Returns:\n The active version of the repository.\n \"\"\"\n self.stack_recipe_repo.checkout_latest_release()\n return self.stack_recipe_repo.active_version\n\n\npass_git_stack_recipes_handler = click.make_pass_decorator(\n GitStackRecipesHandler, ensure=True\n)\n\n\[email protected](\n \"recipe\",\n help=\"Commands for using the stack recipes.\",\n invoke_without_command=True,\n)\ndef stack_recipe() -> None:\n \"\"\"Access all ZenML stack recipes.\"\"\"\n\n\n@stack_recipe.command(name=\"list\", help=\"List the available stack recipes.\")\n@pass_git_stack_recipes_handler\ndef list_stack_recipes(\n git_stack_recipes_handler: GitStackRecipesHandler,\n) -> None:\n \"\"\"List all available stack recipes.\n\n Args:\n git_stack_recipes_handler: The GitStackRecipesHandler instance.\n \"\"\"\n cli_utils.warning(ALPHA_MESSAGE)\n stack_recipes = [\n {\"stack_recipe_name\": stack_recipe_instance.name}\n for stack_recipe_instance in git_stack_recipes_handler.get_stack_recipes()\n ]\n cli_utils.print_table(stack_recipes)\n\n cli_utils.declare(\"\\n\" + \"To get the latest list of stack recipes, run: \")\n text = Text(\"zenml stack recipe pull -y\", style=\"markdown.code_block\")\n cli_utils.declare(text)\n\n cli_utils.declare(\"\\n\" + \"To pull any individual stack recipe, type: \")\n text = Text(\n \"zenml stack recipe pull RECIPE_NAME\", style=\"markdown.code_block\"\n )\n cli_utils.declare(text)\n\n\n@stack_recipe.command(help=\"Deletes the ZenML stack 
recipes directory.\")\[email protected](\n \"--path\",\n \"-p\",\n type=click.STRING,\n default=\"zenml_stack_recipes\",\n help=\"Relative path at which you want to clean the stack_recipe(s)\",\n)\n@pass_git_stack_recipes_handler\ndef clean(\n git_stack_recipes_handler: GitStackRecipesHandler, path: str\n) -> None:\n \"\"\"Deletes the stack recipes directory from your working directory.\n\n Args:\n git_stack_recipes_handler: The GitStackRecipesHandler instance.\n path: The path at which you want to clean the stack_recipe(s).\n \"\"\"\n stack_recipes_directory = os.path.join(os.getcwd(), path)\n if fileio.isdir(stack_recipes_directory) and cli_utils.confirmation(\n \"Do you wish to delete the stack recipes directory? \\n\"\n f\"{stack_recipes_directory}\"\n ):\n git_stack_recipes_handler.clean_current_stack_recipes()\n cli_utils.declare(\n \"Stack recipes directory was deleted from your current working \"\n \"directory.\"\n )\n elif not fileio.isdir(stack_recipes_directory):\n logger.error(\n f\"Unable to delete the stack recipes directory - \"\n f\"{stack_recipes_directory} - \"\n \"as it was not found in your current working directory.\"\n )\n\n\n@stack_recipe.command(help=\"Find out more about a stack recipe.\")\n@pass_git_stack_recipes_handler\[email protected](\"stack_recipe_name\")\ndef info(\n git_stack_recipes_handler: GitStackRecipesHandler,\n stack_recipe_name: str,\n) -> None:\n \"\"\"Find out more about a stack recipe.\n\n Outputs a pager view of the stack_recipe's README.md file.\n\n Args:\n git_stack_recipes_handler: The GitStackRecipesHandler instance.\n stack_recipe_name: The name of the stack recipe.\n \"\"\"\n try:\n stack_recipe_obj = git_stack_recipes_handler.get_stack_recipes(\n stack_recipe_name\n )[0]\n except KeyError as e:\n cli_utils.error(str(e))\n\n else:\n print(stack_recipe_obj.readme_content)\n\n\n@stack_recipe.command(\n help=\"Describe the stack components and their tools that are \"\n \"created as part of this recipe.\"\n)\n@pass_git_stack_recipes_handler\[email protected](\"stack_recipe_name\")\ndef describe(\n git_stack_recipes_handler: GitStackRecipesHandler,\n stack_recipe_name: str,\n) -> None:\n \"\"\"Describe the stack components and their tools that are created as part of this recipe.\n\n Outputs the \"Description\" section of the recipe metadata.\n\n Args:\n git_stack_recipes_handler: The GitStackRecipesHandler instance.\n stack_recipe_name: The name of the stack recipe.\n \"\"\"\n try:\n stack_recipe_obj = git_stack_recipes_handler.get_stack_recipes(\n stack_recipe_name\n )[0]\n except KeyError as e:\n cli_utils.error(str(e))\n\n else:\n metadata = yaml_utils.read_yaml(\n file_path=os.path.join(\n stack_recipe_obj.path_in_repo, \"metadata.yaml\"\n )\n )\n logger.info(metadata[\"Description\"])\n\n\n@stack_recipe.command(help=\"The active version of the mlops-stacks repository\")\n@pass_git_stack_recipes_handler\ndef version(\n git_stack_recipes_handler: GitStackRecipesHandler,\n) -> None:\n \"\"\"The active version of the mlops-stacks repository.\n\n Args:\n git_stack_recipes_handler: The GitStackRecipesHandler instance.\n \"\"\"\n active_version = git_stack_recipes_handler.get_active_version()\n if active_version:\n cli_utils.declare(active_version)\n else:\n cli_utils.warning(\"Unable to detect version.\")\n\n\n@stack_recipe.command(\n help=\"Pull stack recipes straight into your current working directory.\"\n)\n@pass_git_stack_recipes_handler\[email protected](\"stack_recipe_name\", required=False, default=None)\[email protected](\n 
\"--yes\",\n \"-y\",\n \"force\",\n is_flag=True,\n help=\"Force the redownload of the stack_recipes folder to the ZenML config \"\n \"folder.\",\n)\[email protected](\n \"--path\",\n \"-p\",\n type=click.STRING,\n default=\"zenml_stack_recipes\",\n help=\"Relative path at which you want to install the stack recipe(s)\",\n)\ndef pull(\n git_stack_recipes_handler: GitStackRecipesHandler,\n stack_recipe_name: str,\n force: bool,\n path: str,\n) -> None:\n \"\"\"Pull stack_recipes straight into your current working directory.\n\n Add the flag --yes or -y to redownload all the stack_recipes afresh.\n Use the flag --version or -v and the version number to specify\n which version of ZenML you wish to use for the stack_recipes.\n\n Args:\n git_stack_recipes_handler: The GitStackRecipesHandler instance.\n stack_recipe_name: The name of the stack_recipe.\n force: Force the redownload of the stack_recipes folder to the ZenML config\n folder.\n path: The path at which you want to install the stack_recipe(s).\n \"\"\"\n cli_utils.warning(ALPHA_MESSAGE)\n git_stack_recipes_handler.pull(branch=\"main\", force=force)\n\n stack_recipes_dir = os.path.join(os.getcwd(), path)\n io_utils.create_dir_if_not_exists(stack_recipes_dir)\n try:\n stack_recipes = git_stack_recipes_handler.get_stack_recipes(\n stack_recipe_name\n )\n except KeyError as e:\n cli_utils.error(str(e))\n\n else:\n for stack_recipe_instance in stack_recipes:\n with event_handler(\n event=AnalyticsEvent.PULL_STACK_RECIPE,\n metadata={\"stack_recipe_name\": stack_recipe_instance.name},\n ):\n destination_dir = os.path.join(\n os.getcwd(), path, stack_recipe_instance.name\n )\n if LocalStackRecipe(\n name=stack_recipe_instance.name, path=Path(destination_dir)\n ).is_present():\n if force or cli_utils.confirmation(\n f\"Stack recipe {stack_recipe_instance.name} is already \"\n f\"pulled at {destination_dir}.\\nOverwriting this \"\n f\"directory will delete all terraform state files \"\n f\"and the local configuration. 
We recommend that you \"\n f\"do this only once the remote resources have been \"\n f\"destroyed.Do you wish to proceed with overwriting?\"\n ):\n fileio.rmtree(destination_dir)\n else:\n cli_utils.warning(\n f\"Stack recipe {stack_recipe_instance.name} not \"\n \"overwritten.\"\n )\n continue\n\n cli_utils.declare(\n f\"Pulling stack recipe {stack_recipe_instance.name}...\"\n )\n\n io_utils.create_dir_if_not_exists(destination_dir)\n git_stack_recipes_handler.copy_stack_recipe(\n stack_recipe_instance, destination_dir\n )\n cli_utils.declare(\n f\"Stack recipe pulled in directory: {destination_dir}\"\n )\n cli_utils.declare(\n \"\\n Please edit the configuration values as you see fit, \"\n f\"in the file: {os.path.join(destination_dir, 'locals.tf')} \"\n \"before you run the deploy command.\"\n )\n # also copy the modules folder from the repo (if it exists)\n # this is a temporary fix until we have a proper module registry\n modules_dir = os.path.join(\n git_stack_recipes_handler.stack_recipes_dir, \"modules\"\n )\n if os.path.exists(modules_dir):\n cli_utils.declare(\"Copying modules folder...\")\n io_utils.copy_dir(\n modules_dir, os.path.join(stack_recipes_dir, \"modules\"), True\n )\n\n\n@stack_recipe.command(\n help=\"Run the stack_recipe that you previously pulled with \"\n \"`zenml stack recipe pull`\"\n)\[email protected](\"stack_recipe_name\", required=True)\[email protected](\n \"--path\",\n \"-p\",\n type=click.STRING,\n default=\"zenml_stack_recipes\",\n help=\"Relative path at which local stack recipe(s) should exist\",\n)\[email protected](\n \"--force\",\n \"-f\",\n \"force\",\n is_flag=True,\n help=\"Force pull the stack recipe. This overwrites any existing recipe \"\n \"files present locally, including the terraform state files and the \"\n \"local configuration.\",\n)\[email protected](\n \"--stack-name\",\n \"-n\",\n type=click.STRING,\n required=False,\n help=\"Set a name for the ZenML stack that will be imported from the YAML \"\n \"configuration file which gets generated after deploying the stack recipe. \"\n \"Defaults to the name of the stack recipe being deployed.\",\n)\[email protected](\n \"--import\",\n \"import_stack_flag\",\n is_flag=True,\n help=\"Import the stack automatically after the recipe is deployed.\",\n)\[email protected](\n \"--log-level\",\n type=click.Choice(\n [\"TRACE\", \"DEBUG\", \"INFO\", \"WARN\", \"ERROR\"], case_sensitive=False\n ),\n help=\"Choose one of TRACE, DEBUG, INFO, WARN or ERROR (case insensitive) as \"\n \"log level for the deploy operation.\",\n default=\"ERROR\",\n)\[email protected](\n \"--skip-check\",\n \"-s\",\n is_flag=True,\n help=\"Skip the checking of locals.tf file before executing the recipe.\",\n)\[email protected](\n \"--no-server\",\n is_flag=True,\n help=\"Don't deploy ZenML even if there's no active cloud deployment.\",\n)\[email protected](\n \"--skip-pull\",\n is_flag=True,\n help=\"Skip the pulling of the stack recipe before deploying. This should be used \"\n \"if you have a local copy of your recipe already. 
Use the `--path` or `-p` flag to \"\n \"specify the directory that hosts your recipe(s).\",\n)\[email protected](\n \"--install\",\n \"-i\",\n \"enabled_services\",\n multiple=True,\n)\n@pass_git_stack_recipes_handler\[email protected]_context\ndef deploy(\n ctx: click.Context,\n git_stack_recipes_handler: GitStackRecipesHandler,\n stack_recipe_name: str,\n path: str,\n force: bool,\n import_stack_flag: bool,\n log_level: str,\n skip_check: bool,\n no_server: bool,\n skip_pull: bool,\n stack_name: Optional[str],\n enabled_services: Tuple[str],\n) -> None:\n \"\"\"Run the stack_recipe at the specified relative path.\n\n `zenml stack_recipe pull <STACK_RECIPE_NAME>` has to be called with the\n same relative path before the `deploy` command.\n\n Args:\n ctx: The click context.\n git_stack_recipes_handler: The GitStackRecipesHandler instance.\n stack_recipe_name: The name of the stack_recipe.\n path: The path at which you want to install the stack_recipe(s).\n force: Force pull the stack recipe, overwriting any existing files.\n stack_name: A name for the ZenML stack that gets imported as a result\n of the recipe deployment.\n import_stack_flag: Import the stack automatically after the recipe is\n deployed. The stack configuration file is always generated and\n can be imported manually otherwise.\n log_level: Choose one of TRACE, DEBUG, INFO, WARN or ERROR (case\n insensitive) as log level for the `deploy` operation.\n skip_check: Skip the checking of locals.tf file before executing the\n recipe.\n no_server: Don't deploy ZenML even if there's no active cloud\n deployment.\n skip_pull: Skip the pull of the stack recipe before deploying. This\n should be used if you have a local copy of your recipe already.\n enabled_services: A list of services to install. Choose from mlflow, seldon,\n kserve, kubeflow, tekton.\n \"\"\"\n with event_handler(\n event=AnalyticsEvent.RUN_STACK_RECIPE,\n metadata={\"stack_recipe_name\": stack_recipe_name},\n ):\n\n import python_terraform\n\n cli_utils.warning(ALPHA_MESSAGE)\n stack_recipes_dir = Path(os.getcwd()) / path\n\n if sys.platform == \"win32\":\n logger.info(\n \"If you are running stack_recipes on Windows, make sure that \"\n \"you have an associated application with executing .sh files. \"\n \"If you don't have any and you see a pop-up during 'zenml \"\n \"stack_recipe run', we suggest to use the Git BASH: \"\n \"https://gitforwindows.org/\"\n )\n\n try:\n if skip_pull:\n pass\n else:\n _ = git_stack_recipes_handler.get_stack_recipes(\n stack_recipe_name\n )[0]\n except KeyError as e:\n cli_utils.error(str(e))\n else:\n stack_recipe_dir = stack_recipes_dir / stack_recipe_name\n local_stack_recipe = LocalStackRecipe(\n stack_recipe_dir, stack_recipe_name\n )\n\n if not local_stack_recipe.is_present():\n if skip_pull:\n cli_utils.error(\n \"You have specified the --skip-pull flag, but the \"\n \"stack recipe is not present locally at the specified \"\n f\"path. 
Please ensure the {stack_recipe_name} recipe is \"\n f\"present at {stack_recipe_dir} and try again.\"\n )\n else:\n ctx.invoke(\n pull,\n stack_recipe_name=stack_recipe_name,\n path=path,\n force=force,\n )\n\n try:\n # warn that prerequisites should be met\n metadata = yaml_utils.read_yaml(\n file_path=os.path.join(\n local_stack_recipe.path, \"metadata.yaml\"\n )\n )\n if not cli_utils.confirmation(\n \"\\nPrerequisites for running this recipe are as follows.\\n\"\n f\"{metadata['Prerequisites']}\"\n \"\\n\\n Are all of these conditions met?\"\n ):\n cli_utils.warning(\n \"Prerequisites are not installed. Please make sure \"\n \"they are met and run deploy again.\"\n )\n return\n\n if not skip_check:\n logger.info(\n \"The following values are selected for the \"\n \"configuration of your cloud resources. You can \"\n \"change it by modifying the contents of the locals.tf \"\n \"file here: \"\n f\"{os.path.join(local_stack_recipe.path, 'locals.tf')}\\n\"\n )\n\n print(local_stack_recipe.locals_content)\n\n if skip_check or cli_utils.confirmation(\n f\"\\nDo you wish to deploy the {stack_recipe_name} recipe \"\n \"with the above configuration? Please make sure that \"\n \"resources with the same values as above don't already \"\n \"exist on your cloud account.\"\n ):\n from zenml.recipes import StackRecipeService\n from zenml.services.terraform.terraform_service import (\n TerraformServiceConfig,\n )\n\n terraform_config = TerraformServiceConfig(\n root_runtime_path=str(\n StackRecipeService.STACK_RECIPES_CONFIG_PATH\n ),\n directory_path=str(local_stack_recipe.path),\n log_level=log_level,\n variables_file_path=VARIABLES_FILE,\n )\n # find an existing service with the same terraform path\n # create a new one if not found\n stack_recipe_service = StackRecipeService.get_service(\n str(local_stack_recipe.path)\n )\n if stack_recipe_service:\n cli_utils.declare(\n \"An existing deployment of the recipe found. \"\n f\"with path {local_stack_recipe.path}. \"\n \"Proceeding to update or create resources. \"\n )\n else:\n stack_recipe_service = StackRecipeService(\n config=terraform_config,\n enabled_services=enabled_services,\n )\n\n # start the service (the init and apply operation)\n stack_recipe_service.start()\n\n # invoke server deploy\n if no_server:\n logger.info(\n \"The --no-server flag was passed. \"\n \"Skipping the remote deployment of ZenML. \"\n \"Please note that if you wish to use the stack \"\n \"that you created through this recipe, you will \"\n \"need to deploy ZenML on the cloud.\"\n )\n else:\n if zen_server_exists():\n logger.info(\n \"A ZenML deployment exists already with URL: \"\n f\"{GlobalConfiguration().zen_store.url}. \"\n f\"The recipe will mot create a new \"\n f\"installation.\"\n )\n else:\n logger.info(\n \"No remote deployment of ZenML detected. \"\n )\n vars = stack_recipe_service.get_vars()\n filter = [\n \"aws-stores-minimal\",\n \"azureml-minimal\",\n \"vertex-ai\",\n ]\n if Path(\n stack_recipe_service.terraform_client.working_dir\n ).name in filter and (\n \"enable_mlflow\" not in vars\n or vars[\"enable_mlflow\"] is False\n ):\n logger.warning(\n \"This recipe doesn't create a Kubernetes \"\n \"cluster and as of now, an existing \"\n \"cluster is required for ZenML deployment. 
\"\n \"Please take a look at the \"\n \"guide for steps on how to proceed: \"\n \"https://docs.zenml.io/getting-started/deploying-zenml/cli#option-1-starting-from-scratch\"\n )\n logger.info(\n \"Not attempting to import the generated \"\n \"YAML file since there isn't any active \"\n \"ZenML deployment.\"\n )\n return\n else:\n ctx.invoke(\n server.deploy,\n config=stack_recipe_service.get_deployment_info(),\n connect=True,\n )\n\n # get the stack yaml path\n stack_yaml_file = os.path.join(\n local_stack_recipe.path,\n stack_recipe_service.stack_file_path[2:],\n )\n\n logger.info(\n \"\\nA stack configuration YAML file has been generated \"\n f\"as part of the deployment of the {stack_recipe_name} \"\n f\"recipe. Find it at {stack_yaml_file}.\"\n )\n\n if import_stack_flag:\n logger.info(\n \"\\nThe flag `--import` is set. Proceeding \"\n \"to import a new ZenML stack from the created \"\n \"resources.\"\n )\n import_stack_name = (\n stack_name if stack_name else stack_recipe_name\n )\n cli_utils.declare(\n \"Importing a new stack with the name \"\n f\"{import_stack_name}.\"\n )\n\n # import deployed resources as ZenML stack\n ctx.invoke(\n import_stack,\n stack_name=import_stack_name,\n filename=stack_yaml_file,\n ignore_version_mismatch=True,\n )\n\n cli_utils.declare(\n \"Please consider creating any secrets that your \"\n \"stack components like the metadata store might \"\n \"need. You can inspect the fields of a stack \"\n \"component by running a describe command on them.\"\n )\n cli_utils.declare(\n \"\\n Run 'terraform output' in the recipe's \"\n f\"directory at {local_stack_recipe.path} to get a \"\n f\"list of outputs. To now retrieve sensitive \"\n f\"outputs, for example, the metadata-db-password \"\n \"use the command 'terraform output \"\n \"metadata-db-password' to get the \"\n \"value in the command-line.\"\n )\n\n except RuntimeError as e:\n cli_utils.error(\n f\"Error running recipe {stack_recipe_name}: {str(e)} \"\n \"\\nPlease look at the error message to figure out \"\n \"why the command failed. If the error is due some wrong \"\n \"configuration, please consider checking the locals.tf \"\n \"file to verify if the inputs are correct. Most commonly, \"\n \"the command can fail due to a timeout error. In that \"\n \"case, please run zenml stack recipe deploy \"\n f\"{stack_recipe_name} again.\"\n )\n except python_terraform.TerraformCommandError as e:\n cli_utils.error(\n f\"Error running recipe {stack_recipe_name}: {str(e.err)} \"\n \"\\nPlease look at the error message to figure out why the \"\n \"command failed. If the error is due some wrong \"\n \"configuration, please consider checking the locals.tf \"\n \"file to verify if the inputs are correct. Most commonly, \"\n \"the command can fail due to a timeout error. 
In that \"\n \"case, please run zenml stack recipe deploy \"\n f\"{stack_recipe_name} again.\"\n )\n\n\ndef zen_server_exists() -> bool:\n \"\"\"Check if a remote ZenServer is active.\n\n Returns:\n True if active, false otherwise.\n \"\"\"\n return not GlobalConfiguration().zen_store.is_local_store()\n\n\n@stack_recipe.command(\n help=\"Destroy the stack components created previously with \"\n \"`zenml stack recipe deploy <name>`\"\n)\[email protected](\"stack_recipe_name\", required=True)\[email protected](\n \"--path\",\n \"-p\",\n type=click.STRING,\n default=\"zenml_stack_recipes\",\n help=\"Relative path at which you want to install the stack_recipe(s)\",\n)\n@pass_git_stack_recipes_handler\ndef destroy(\n git_stack_recipes_handler: GitStackRecipesHandler,\n stack_recipe_name: str,\n path: str,\n) -> None:\n \"\"\"Destroy all resources from the stack_recipe at the specified relative path.\n\n `zenml stack_recipe deploy stack_recipe_name` has to be called with the\n same relative path before the destroy command.\n\n Args:\n git_stack_recipes_handler: The GitStackRecipesHandler instance.\n stack_recipe_name: The name of the stack_recipe.\n path: The path of the stack recipe you want to destroy.\n\n Raises:\n ModuleNotFoundError: If the recipe is found at the given path.\n \"\"\"\n with event_handler(\n event=AnalyticsEvent.DESTROY_STACK_RECIPE,\n metadata={\"stack_recipe_name\": stack_recipe_name},\n ):\n import python_terraform\n\n cli_utils.warning(ALPHA_MESSAGE)\n\n stack_recipes_dir = Path(os.getcwd()) / path\n\n if sys.platform == \"win32\":\n logger.info(\n \"If you are running stack_recipes on Windows, make sure that \"\n \"you have an associated application with executing .sh files. \"\n \"If you don't have any and you see a pop-up during 'zenml \"\n \"stack_recipe run', we suggest to use the Git BASH: \"\n \"https://gitforwindows.org/\"\n )\n\n try:\n _ = git_stack_recipes_handler.get_stack_recipes(stack_recipe_name)[\n 0\n ]\n except KeyError as e:\n cli_utils.error(str(e))\n else:\n stack_recipe_dir = stack_recipes_dir / stack_recipe_name\n local_stack_recipe = LocalStackRecipe(\n stack_recipe_dir, stack_recipe_name\n )\n\n if not local_stack_recipe.is_present():\n raise ModuleNotFoundError(\n f\"The recipe {stack_recipe_name} \"\n \"has not been pulled at the specified path. \"\n f\"Run `zenml stack recipe pull {stack_recipe_name}` \"\n f\"followed by `zenml stack recipe deploy \"\n f\"{stack_recipe_name}` first.\"\n )\n\n try:\n # use the stack recipe directory path to find the service instance\n from zenml.recipes import StackRecipeService\n\n stack_recipe_service = StackRecipeService.get_service(\n str(local_stack_recipe.path)\n )\n if not stack_recipe_service:\n cli_utils.error(\n \"No stack recipe found with the path \"\n f\"{local_stack_recipe.path}. You need to first deploy \"\n \"the recipe by running \\nzenml stack recipe deploy \"\n f\"{stack_recipe_name}\"\n )\n # stop the service to destroy resources created by recipe\n stack_recipe_service.stop()\n\n cli_utils.declare(\n \"\\n\"\n + \"Your active stack might now be invalid. 
Please run:\"\n )\n text = Text(\n \"zenml stack describe\", style=\"markdown.code_block\"\n )\n cli_utils.declare(text)\n cli_utils.declare(\n \"\\n\"\n + \"to investigate and switch to a new stack if needed.\"\n )\n\n except python_terraform.TerraformCommandError as e:\n force_message = \"\"\n if stack_recipe_name == \"aws_minimal\":\n force_message = (\n \"If there are Kubernetes resources that aren't\"\n \"getting deleted, run 'kubectl delete node -all' to \"\n \"delete the nodes and consequently all Kubernetes \"\n \"resources. Run the destroy again after that, to \"\n \"remove any other remaining resources.\"\n )\n cli_utils.error(\n f\"Error destroying recipe {stack_recipe_name}: {str(e.err)}\"\n \"\\nMost commonly, the error occurs if there's some \"\n \"resource that can't be deleted instantly, for example, \"\n \"MySQL stores with backups. In such cases, please try \"\n \"again after around 30 minutes. If the issue persists, \"\n f\"kindly raise an issue at {STACK_RECIPES_GITHUB_REPO}. \"\n f\"\\n{force_message}\"\n )\n except subprocess.CalledProcessError as e:\n cli_utils.warning(\n f\"Error destroying recipe {stack_recipe_name}: {str(e)}\"\n \"\\nThe kubernetes cluster couldn't be removed due to the \"\n \"error above. Please verify if the cluster has already \"\n \"been deleted by running kubectl get nodes to check if \"\n \"there's any active nodes.Ignore this warning if there \"\n \"are no active nodes.\"\n )\n", "path": "src/zenml/cli/stack_recipes.py" } ]
diff --git a/src/zenml/cli/stack_recipes.py b/src/zenml/cli/stack_recipes.py index e028bffd570..63c54e6773f 100644 --- a/src/zenml/cli/stack_recipes.py +++ b/src/zenml/cli/stack_recipes.py @@ -1080,7 +1080,6 @@ def zen_server_exists() -> bool: help="Relative path at which you want to install the stack_recipe(s)", ) @pass_git_stack_recipes_handler [email protected]_context def destroy( git_stack_recipes_handler: GitStackRecipesHandler, stack_recipe_name: str,
ivy-llc__ivy-20554
rfftn
[ { "content": "# global\nimport ivy\nfrom ivy.functional.frontends.scipy.func_wrapper import (\n to_ivy_arrays_and_back,\n)\n\n\n# fft\n@to_ivy_arrays_and_back\ndef fft(x, n=None, axis=-1, norm=\"backward\", overwrite_x=False):\n return ivy.fft(x, axis, norm=norm, n=n)\n\n\n# ifft\n@to_ivy_arrays_and_back\ndef ifft(x, n=None, axis=-1, norm=\"backward\", overwrite_x=False):\n return ivy.ifft(x, axis, norm=norm, n=n)\n\n\n# dct\n@to_ivy_arrays_and_back\ndef dct(x, type=2, n=None, axis=-1, norm=None, overwrite_x=False, orthogonalize=None):\n return ivy.dct(x, type=type, n=n, axis=axis, norm=norm)\n\n\n# idct\n@to_ivy_arrays_and_back\ndef idct(x, type=2, n=None, axis=-1, norm=None, overwrite_x=False, orthogonalize=None):\n inverse_type = {1: 1, 2: 3, 3: 2, 4: 4}[type]\n return ivy.dct(x, type=inverse_type, n=n, axis=axis, norm=norm)\n\n\n@to_ivy_arrays_and_back\ndef fft2(x, s=None, axes=(-2, -1), norm=None, overwrite_x=False):\n return ivy.fft2(x, s=s, dim=axes, norm=norm)\n\n\n@to_ivy_arrays_and_back\ndef ifftn(\n x, s=None, axes=None, norm=None, overwrite_x=False, workers=None, *, plan=None\n):\n return ivy.ifftn(x, s=s, dim=axes, norm=norm)\n", "path": "ivy/functional/frontends/scipy/fft/fft.py" } ]
[ { "content": "# global\nimport ivy\nfrom ivy.functional.frontends.scipy.func_wrapper import (\n to_ivy_arrays_and_back,\n)\n\n\n# fft\n@to_ivy_arrays_and_back\ndef fft(x, n=None, axis=-1, norm=\"backward\", overwrite_x=False):\n return ivy.fft(x, axis, norm=norm, n=n)\n\n\n# ifft\n@to_ivy_arrays_and_back\ndef ifft(x, n=None, axis=-1, norm=\"backward\", overwrite_x=False):\n return ivy.ifft(x, axis, norm=norm, n=n)\n\n\n# dct\n@to_ivy_arrays_and_back\ndef dct(x, type=2, n=None, axis=-1, norm=None, overwrite_x=False, orthogonalize=None):\n return ivy.dct(x, type=type, n=n, axis=axis, norm=norm)\n\n\n# idct\n@to_ivy_arrays_and_back\ndef idct(x, type=2, n=None, axis=-1, norm=None, overwrite_x=False, orthogonalize=None):\n inverse_type = {1: 1, 2: 3, 3: 2, 4: 4}[type]\n return ivy.dct(x, type=inverse_type, n=n, axis=axis, norm=norm)\n\n\n@to_ivy_arrays_and_back\ndef fft2(x, s=None, axes=(-2, -1), norm=None, overwrite_x=False):\n return ivy.fft2(x, s=s, dim=axes, norm=norm)\n\n\n@to_ivy_arrays_and_back\ndef ifftn(\n x, s=None, axes=None, norm=None, overwrite_x=False, workers=None, *, plan=None\n):\n return ivy.ifftn(x, s=s, dim=axes, norm=norm)\n\n\n@to_ivy_arrays_and_back\ndef rfftn(\n x, s=None, axes=None, norm=None, overwrite_x=False, workers=None, *, plan=None\n):\n return ivy.rfftn(x, s=s, dim=axes, norm=norm)\n", "path": "ivy/functional/frontends/scipy/fft/fft.py" } ]
diff --git a/ivy/functional/frontends/scipy/fft/fft.py b/ivy/functional/frontends/scipy/fft/fft.py index f4e5866a82f4b..27480b2c42eac 100644 --- a/ivy/functional/frontends/scipy/fft/fft.py +++ b/ivy/functional/frontends/scipy/fft/fft.py @@ -40,3 +40,10 @@ def ifftn( x, s=None, axes=None, norm=None, overwrite_x=False, workers=None, *, plan=None ): return ivy.ifftn(x, s=s, dim=axes, norm=norm) + + +@to_ivy_arrays_and_back +def rfftn( + x, s=None, axes=None, norm=None, overwrite_x=False, workers=None, *, plan=None +): + return ivy.rfftn(x, s=s, dim=axes, norm=norm) diff --git a/ivy_tests/test_ivy/test_frontends/test_scipy/test_fft/test_fft.py b/ivy_tests/test_ivy/test_frontends/test_scipy/test_fft/test_fft.py index bd664b4a6964a..d0f602d627cf8 100644 --- a/ivy_tests/test_ivy/test_frontends/test_scipy/test_fft/test_fft.py +++ b/ivy_tests/test_ivy/test_frontends/test_scipy/test_fft/test_fft.py @@ -305,3 +305,31 @@ # norm=norm, # workers=workers, # ) + + +# # rfftn +# @handle_frontend_test( +# fn_tree="scipy.fft.rfftn", +# d_x_d_s_n_workers=x_and_ifftn(), +# test_with_out=st.just(False), +# ) +# def test_scipy_rfftn( +# d_x_d_s_n_workers, +# frontend, +# test_flags, +# fn_tree, +# on_device, +# ): +# dtype, x, s, ax, norm, workers = d_x_d_s_n_workers +# helpers.test_frontend_function( +# input_dtypes=dtype, +# frontend=frontend, +# test_flags=test_flags, +# fn_tree=fn_tree, +# on_device=on_device, +# x=x[0], +# s=s, +# axes=ax, +# norm=norm, +# workers=workers, +# )
archlinux__archinstall-763
In Xfce, xarchiver is needed to create archives and to extract them.
[ { "content": "# A desktop environment using \"Xfce4\"\n\nimport archinstall\n\nis_top_level_profile = False\n\n__packages__ = [\n\t\"xfce4\",\n\t\"xfce4-goodies\",\n\t\"pavucontrol\",\n\t\"lightdm\",\n\t\"lightdm-gtk-greeter\",\n\t\"gvfs\",\n\t\"network-manager-applet\",\n]\n\n\ndef _prep_function(*args, **kwargs):\n\t\"\"\"\n\tMagic function called by the importing installer\n\tbefore continuing any further. It also avoids executing any\n\tother code in this stage. So it's a safe way to ask the user\n\tfor more input before any other installer steps start.\n\t\"\"\"\n\n\t# XFCE requires a functional xorg installation.\n\tprofile = archinstall.Profile(None, 'xorg')\n\twith profile.load_instructions(namespace='xorg.py') as imported:\n\t\tif hasattr(imported, '_prep_function'):\n\t\t\treturn imported._prep_function()\n\t\telse:\n\t\t\tprint('Deprecated (??): xorg profile has no _prep_function() anymore')\n\n\n# Ensures that this code only gets executed if executed\n# through importlib.util.spec_from_file_location(\"xfce4\", \"/somewhere/xfce4.py\")\n# or through conventional import xfce4\nif __name__ == 'xfce4':\n\t# Install dependency profiles\n\tarchinstall.storage['installation_session'].install_profile('xorg')\n\n\t# Install the XFCE4 packages\n\tarchinstall.storage['installation_session'].add_additional_packages(__packages__)\n\n\tarchinstall.storage['installation_session'].enable_service('lightdm') # Light Display Manager\n", "path": "profiles/xfce4.py" }, { "content": "# A desktop environment using \"KDE\".\n\nimport archinstall\n\nis_top_level_profile = False\n\n__packages__ = [\n\t\"plasma-meta\",\n\t\"konsole\",\n\t\"kate\",\n\t\"dolphin\",\n\t\"sddm\",\n\t\"plasma-wayland-session\",\n\t\"egl-wayland\",\n]\n\n\n# TODO: Remove hard dependency of bash (due to .bash_profile)\n\n\ndef _prep_function(*args, **kwargs):\n\t\"\"\"\n\tMagic function called by the importing installer\n\tbefore continuing any further. It also avoids executing any\n\tother code in this stage. So it's a safe way to ask the user\n\tfor more input before any other installer steps start.\n\t\"\"\"\n\n\t# KDE requires a functioning Xorg installation.\n\tprofile = archinstall.Profile(None, 'xorg')\n\twith profile.load_instructions(namespace='xorg.py') as imported:\n\t\tif hasattr(imported, '_prep_function'):\n\t\t\treturn imported._prep_function()\n\t\telse:\n\t\t\tprint('Deprecated (??): xorg profile has no _prep_function() anymore')\n\n\n\"\"\"\ndef _post_install(*args, **kwargs):\n\tif \"nvidia\" in _gfx_driver_packages:\n\t\tprint(\"Plasma Wayland has known compatibility issues with the proprietary Nvidia driver\")\n\tprint(\"After booting, you can choose between Wayland and Xorg using the drop-down menu\")\n\treturn True\n\"\"\"\n\n# Ensures that this code only gets executed if executed\n# through importlib.util.spec_from_file_location(\"kde\", \"/somewhere/kde.py\")\n# or through conventional import kde\nif __name__ == 'kde':\n\t# Install dependency profiles\n\tarchinstall.storage['installation_session'].install_profile('xorg')\n\n\t# Install the KDE packages\n\tarchinstall.storage['installation_session'].add_additional_packages(__packages__)\n\n\t# Enable autostart of KDE for all users\n\tarchinstall.storage['installation_session'].enable_service('sddm')\n", "path": "profiles/kde.py" } ]
[ { "content": "# A desktop environment using \"Xfce4\"\n\nimport archinstall\n\nis_top_level_profile = False\n\n__packages__ = [\n\t\"xfce4\",\n\t\"xfce4-goodies\",\n\t\"pavucontrol\",\n\t\"lightdm\",\n\t\"lightdm-gtk-greeter\",\n\t\"gvfs\",\n\t\"network-manager-applet\",\n\t\"xarchiver\"\n]\n\n\ndef _prep_function(*args, **kwargs):\n\t\"\"\"\n\tMagic function called by the importing installer\n\tbefore continuing any further. It also avoids executing any\n\tother code in this stage. So it's a safe way to ask the user\n\tfor more input before any other installer steps start.\n\t\"\"\"\n\n\t# XFCE requires a functional xorg installation.\n\tprofile = archinstall.Profile(None, 'xorg')\n\twith profile.load_instructions(namespace='xorg.py') as imported:\n\t\tif hasattr(imported, '_prep_function'):\n\t\t\treturn imported._prep_function()\n\t\telse:\n\t\t\tprint('Deprecated (??): xorg profile has no _prep_function() anymore')\n\n\n# Ensures that this code only gets executed if executed\n# through importlib.util.spec_from_file_location(\"xfce4\", \"/somewhere/xfce4.py\")\n# or through conventional import xfce4\nif __name__ == 'xfce4':\n\t# Install dependency profiles\n\tarchinstall.storage['installation_session'].install_profile('xorg')\n\n\t# Install the XFCE4 packages\n\tarchinstall.storage['installation_session'].add_additional_packages(__packages__)\n\n\tarchinstall.storage['installation_session'].enable_service('lightdm') # Light Display Manager\n", "path": "profiles/xfce4.py" }, { "content": "# A desktop environment using \"KDE\".\n\nimport archinstall\n\nis_top_level_profile = False\n\n__packages__ = [\n\t\"plasma-meta\",\n\t\"konsole\",\n\t\"kate\",\n\t\"dolphin\",\n\t\"ark\",\n\t\"sddm\",\n\t\"plasma-wayland-session\",\n\t\"egl-wayland\",\n]\n\n\n# TODO: Remove hard dependency of bash (due to .bash_profile)\n\n\ndef _prep_function(*args, **kwargs):\n\t\"\"\"\n\tMagic function called by the importing installer\n\tbefore continuing any further. It also avoids executing any\n\tother code in this stage. So it's a safe way to ask the user\n\tfor more input before any other installer steps start.\n\t\"\"\"\n\n\t# KDE requires a functioning Xorg installation.\n\tprofile = archinstall.Profile(None, 'xorg')\n\twith profile.load_instructions(namespace='xorg.py') as imported:\n\t\tif hasattr(imported, '_prep_function'):\n\t\t\treturn imported._prep_function()\n\t\telse:\n\t\t\tprint('Deprecated (??): xorg profile has no _prep_function() anymore')\n\n\n\"\"\"\ndef _post_install(*args, **kwargs):\n\tif \"nvidia\" in _gfx_driver_packages:\n\t\tprint(\"Plasma Wayland has known compatibility issues with the proprietary Nvidia driver\")\n\tprint(\"After booting, you can choose between Wayland and Xorg using the drop-down menu\")\n\treturn True\n\"\"\"\n\n# Ensures that this code only gets executed if executed\n# through importlib.util.spec_from_file_location(\"kde\", \"/somewhere/kde.py\")\n# or through conventional import kde\nif __name__ == 'kde':\n\t# Install dependency profiles\n\tarchinstall.storage['installation_session'].install_profile('xorg')\n\n\t# Install the KDE packages\n\tarchinstall.storage['installation_session'].add_additional_packages(__packages__)\n\n\t# Enable autostart of KDE for all users\n\tarchinstall.storage['installation_session'].enable_service('sddm')\n", "path": "profiles/kde.py" } ]
diff --git a/profiles/kde.py b/profiles/kde.py index c58f4f45dd..0679859372 100644 --- a/profiles/kde.py +++ b/profiles/kde.py @@ -9,6 +9,7 @@ "konsole", "kate", "dolphin", + "ark", "sddm", "plasma-wayland-session", "egl-wayland", diff --git a/profiles/xfce4.py b/profiles/xfce4.py index 89c04f7cb5..2a4280864c 100644 --- a/profiles/xfce4.py +++ b/profiles/xfce4.py @@ -12,6 +12,7 @@ "lightdm-gtk-greeter", "gvfs", "network-manager-applet", + "xarchiver" ]
iterative__dvc-2457
dvc remove CLI documentation inconsistency `dvc remove` (without `targets`) prints help which states that `targets` are optional, and if not specified will remove all DVC-files. Clearly not the case. ```bash $ dvc remove [...] targets DVC-files to remove. Optional. (Finds all DVC-files in the workspace by default.) ```
[ { "content": "from __future__ import unicode_literals\n\nimport argparse\nimport logging\n\nimport dvc.prompt as prompt\nfrom dvc.exceptions import DvcException\nfrom dvc.command.base import CmdBase, append_doc_link\n\n\nlogger = logging.getLogger(__name__)\n\n\nclass CmdRemove(CmdBase):\n def _is_outs_only(self, target):\n if not self.args.purge:\n return True\n\n if self.args.force:\n return False\n\n msg = \"Are you sure you want to remove {} with its outputs?\".format(\n target\n )\n\n if prompt.confirm(msg):\n return False\n\n raise DvcException(\n \"Cannot purge without a confirmation from the user.\"\n \" Use '-f' to force.\"\n )\n\n def run(self):\n for target in self.args.targets:\n try:\n outs_only = self._is_outs_only(target)\n self.repo.remove(target, outs_only=outs_only)\n except DvcException:\n logger.exception(\"failed to remove {}\".format(target))\n return 1\n return 0\n\n\ndef add_parser(subparsers, parent_parser):\n REMOVE_HELP = \"Remove DVC-file outputs.\"\n remove_parser = subparsers.add_parser(\n \"remove\",\n parents=[parent_parser],\n description=append_doc_link(REMOVE_HELP, \"remove\"),\n help=REMOVE_HELP,\n formatter_class=argparse.RawDescriptionHelpFormatter,\n )\n remove_parser_group = remove_parser.add_mutually_exclusive_group()\n remove_parser_group.add_argument(\n \"-o\",\n \"--outs\",\n action=\"store_true\",\n default=True,\n help=\"Only remove DVC-file outputs. (Default)\",\n )\n remove_parser_group.add_argument(\n \"-p\",\n \"--purge\",\n action=\"store_true\",\n default=False,\n help=\"Remove DVC-file and all its outputs.\",\n )\n remove_parser.add_argument(\n \"-f\",\n \"--force\",\n action=\"store_true\",\n default=False,\n help=\"Force purge.\",\n )\n remove_parser.add_argument(\n \"targets\",\n nargs=\"+\",\n help=\"DVC-files to remove. Optional. \"\n \"(Finds all DVC-files in the workspace by default.)\",\n )\n remove_parser.set_defaults(func=CmdRemove)\n", "path": "dvc/command/remove.py" } ]
[ { "content": "from __future__ import unicode_literals\n\nimport argparse\nimport logging\n\nimport dvc.prompt as prompt\nfrom dvc.exceptions import DvcException\nfrom dvc.command.base import CmdBase, append_doc_link\n\n\nlogger = logging.getLogger(__name__)\n\n\nclass CmdRemove(CmdBase):\n def _is_outs_only(self, target):\n if not self.args.purge:\n return True\n\n if self.args.force:\n return False\n\n msg = \"Are you sure you want to remove {} with its outputs?\".format(\n target\n )\n\n if prompt.confirm(msg):\n return False\n\n raise DvcException(\n \"Cannot purge without a confirmation from the user.\"\n \" Use '-f' to force.\"\n )\n\n def run(self):\n for target in self.args.targets:\n try:\n outs_only = self._is_outs_only(target)\n self.repo.remove(target, outs_only=outs_only)\n except DvcException:\n logger.exception(\"failed to remove {}\".format(target))\n return 1\n return 0\n\n\ndef add_parser(subparsers, parent_parser):\n REMOVE_HELP = \"Remove DVC-file outputs.\"\n remove_parser = subparsers.add_parser(\n \"remove\",\n parents=[parent_parser],\n description=append_doc_link(REMOVE_HELP, \"remove\"),\n help=REMOVE_HELP,\n formatter_class=argparse.RawDescriptionHelpFormatter,\n )\n remove_parser_group = remove_parser.add_mutually_exclusive_group()\n remove_parser_group.add_argument(\n \"-o\",\n \"--outs\",\n action=\"store_true\",\n default=True,\n help=\"Only remove DVC-file outputs. (Default)\",\n )\n remove_parser_group.add_argument(\n \"-p\",\n \"--purge\",\n action=\"store_true\",\n default=False,\n help=\"Remove DVC-file and all its outputs.\",\n )\n remove_parser.add_argument(\n \"-f\",\n \"--force\",\n action=\"store_true\",\n default=False,\n help=\"Force purge.\",\n )\n remove_parser.add_argument(\n \"targets\", nargs=\"+\", help=\"DVC-files to remove.\"\n )\n remove_parser.set_defaults(func=CmdRemove)\n", "path": "dvc/command/remove.py" } ]
diff --git a/dvc/command/remove.py b/dvc/command/remove.py index b8e98200c9..814adcfae3 100644 --- a/dvc/command/remove.py +++ b/dvc/command/remove.py @@ -74,9 +74,6 @@ def add_parser(subparsers, parent_parser): help="Force purge.", ) remove_parser.add_argument( - "targets", - nargs="+", - help="DVC-files to remove. Optional. " - "(Finds all DVC-files in the workspace by default.)", + "targets", nargs="+", help="DVC-files to remove." ) remove_parser.set_defaults(func=CmdRemove)
PrefectHQ__prefect-1168
flow.update(flow) doesn't maintain mapped edges ``` from prefect import task, Flow, Parameter @task def add_one(x): return x + 1 @task def printit(p): print(p) with Flow("Test Flow") as test_flow: test_list = Parameter("test_list") add_one_list = add_one.map(test_list) printit(add_one_list) with Flow("Second Test") as second_test_flow: second_test_flow.update(test_flow) test_flow.run(test_list=[1, 2, 3]) second_test_flow.run(test_list=[1, 2, 3]) ``` In this example, the `second_test_flow.run` will fail with the following error in the `add_one` task: ``` Unexpected error: TypeError('can only concatenate list (not "int") to list') ``` Is this the intended effect? If not, it should be fixable by updating `add_edge` call in `Flow.update` [here](https://github.com/PrefectHQ/prefect/blob/cfb186a4e6fb387e610d35cb530d10f4032f2da2/src/prefect/core/flow.py#L547-L552) to: ``` self.add_edge( upstream_task=edge.upstream_task, downstream_task=edge.downstream_task, key=edge.key, mapped=edge.mapped, validate=validate, ) ```
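If the proposed change is applied, one quick way to confirm it (a minimal sketch only, assuming the same Prefect 0.x API as the example above; `source_flow` and `dest_flow` are hypothetical names introduced here for illustration) is to check that an edge created via `.map()` still carries `mapped=True` after `update()`:

```python
# Sketch: verify that Flow.update preserves the `mapped` flag on edges.
# Assumes the Prefect 0.x API used in the report (Flow, task, Parameter, .map).
from prefect import task, Flow, Parameter

@task
def add_one(x):
    return x + 1

with Flow("source") as source_flow:
    nums = Parameter("nums")
    add_one.map(nums)  # creates a mapped edge from `nums` to `add_one`

dest_flow = Flow("destination")
dest_flow.update(source_flow)

# With the proposed `mapped=edge.mapped` argument in Flow.update, this holds;
# with the current code it fails, because the mapped flag is dropped on copy.
assert any(e.mapped for e in dest_flow.edges), "mapped flag lost by update()"
```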
[ { "content": "import collections\nimport copy\nimport functools\nimport inspect\nimport json\nimport os\nimport tempfile\nimport time\nimport uuid\nimport warnings\nfrom collections import Counter\nfrom typing import (\n Any,\n Callable,\n Dict,\n Iterable,\n List,\n Mapping,\n Optional,\n Set,\n Tuple,\n Union,\n cast,\n)\n\nimport pendulum\nfrom mypy_extensions import TypedDict\n\nimport prefect\nimport prefect.schedules\nfrom prefect.core.edge import Edge\nfrom prefect.core.task import Parameter, Task\nfrom prefect.engine.result import NoResult\nfrom prefect.engine.result_handlers import ResultHandler\nfrom prefect.environments import CloudEnvironment, Environment\nfrom prefect.environments.storage import Storage\nfrom prefect.utilities import logging\nfrom prefect.utilities.notifications import callback_factory\nfrom prefect.utilities.serialization import to_qualified_name\nfrom prefect.utilities.tasks import as_task, unmapped\n\nParameterDetails = TypedDict(\"ParameterDetails\", {\"default\": Any, \"required\": bool})\n\n\ndef cache(method: Callable) -> Callable:\n \"\"\"\n Decorator for caching Flow methods.\n\n Each Flow has a _cache dict that can be used to memoize expensive functions. This\n decorator automatically compares a hash of the Flow's current tasks, edges, and reference_tasks\n to a cached hash; if the hash is the same, it attempts to retrieve a value from the cache.\n If the hash is different, it invalidates the cache.\n \"\"\"\n\n @functools.wraps(method)\n def wrapper(self, *args, **kwargs): # type: ignore\n\n cache_check = dict(\n tasks=self.tasks.copy(),\n edges=self.edges.copy(),\n reference_tasks=copy.copy(self._reference_tasks),\n )\n if any(self._cache.get(k) != v for k, v in cache_check.items()):\n self._cache.clear()\n self._cache.update(cache_check)\n\n callargs = inspect.signature(method).bind(self, *args, **kwargs).arguments\n key = (method.__name__, tuple(callargs.items())[1:])\n if key not in self._cache:\n self._cache[key] = method(self, *args, **kwargs)\n return self._cache[key]\n\n return wrapper\n\n\nclass Flow:\n \"\"\"\n The Flow class is used as the representation of a collection of dependent Tasks.\n Flows track Task dependencies, parameters and provide the main API for constructing and managing workflows.\n\n Initializing Flow example:\n ```python\n class MyTask(Task):\n def run(self):\n return \"hello\"\n\n task_1 = MyTask()\n flow = Flow(name=\"my_flow\", tasks=[task_1])\n\n flow.run()\n ```\n\n Initializing Flow as context manager example:\n ```python\n @task\n def my_task():\n return \"hello\"\n\n with Flow(\"my_flow\") as flow:\n task_1 = my_task()\n\n flow.run()\n ```\n\n Args:\n - name (str): The name of the flow. Cannot be `None` or an empty string\n - schedule (prefect.schedules.Schedule, optional): A default schedule for the flow\n - environment (prefect.environments.Environment, optional): The environment\n that the flow should be run in. 
If `None`, a `CloudEnvironment` will be created.\n - storage (prefect.environments.storage.Storage, optional): The unit of storage\n that the flow will be written into.\n - tasks ([Task], optional): If provided, a list of tasks that will initialize the flow\n - edges ([Edge], optional): A list of edges between tasks\n - reference_tasks ([Task], optional): A list of tasks which determine the final\n state of a flow\n - state_handlers (Iterable[Callable], optional): A list of state change handlers\n that will be called whenever the flow changes state, providing an\n opportunity to inspect or modify the new state. The handler\n will be passed the flow instance, the old (prior) state, and the new\n (current) state, with the following signature:\n `state_handler(flow: Flow, old_state: State, new_state: State) -> Optional[State]`\n If multiple functions are passed, then the `new_state` argument will be the\n result of the previous handler.\n - on_failure (Callable, optional): A function with signature `fn(flow: Flow, state: State) -> None`\n which will be called anytime this Flow enters a failure state\n - validate (bool, optional): Whether or not to check the validity of\n the flow (e.g., presence of cycles and illegal keys) after adding the edges passed\n in the `edges` argument. Defaults to the value of `eager_edge_validation` in\n your prefect configuration file.\n - result_handler (ResultHandler, optional): the handler to use for\n retrieving and storing state results during execution; if not provided, will default\n to the one specified in your config\n\n \"\"\"\n\n def __init__(\n self,\n name: str,\n schedule: prefect.schedules.Schedule = None,\n environment: Environment = None,\n storage: Storage = None,\n tasks: Iterable[Task] = None,\n edges: Iterable[Edge] = None,\n reference_tasks: Iterable[Task] = None,\n state_handlers: List[Callable] = None,\n on_failure: Callable = None,\n validate: bool = None,\n result_handler: ResultHandler = None,\n ):\n self._cache = {} # type: dict\n\n self.logger = logging.get_logger(\"Flow\")\n\n if not name:\n raise ValueError(\"A name must be provided for the flow.\")\n\n self.name = name\n self.schedule = schedule\n self.environment = environment or prefect.environments.CloudEnvironment()\n self.storage = storage\n self.result_handler = (\n result_handler or prefect.engine.get_default_result_handler_class()()\n )\n\n self.tasks = set() # type: Set[Task]\n self.edges = set() # type: Set[Edge]\n\n for t in tasks or []:\n self.add_task(t)\n\n self.set_reference_tasks(reference_tasks or [])\n for e in edges or []:\n self.add_edge(\n upstream_task=e.upstream_task,\n downstream_task=e.downstream_task,\n key=e.key,\n mapped=e.mapped,\n validate=validate,\n )\n\n self._prefect_version = prefect.__version__\n\n if state_handlers and not isinstance(state_handlers, collections.Sequence):\n raise TypeError(\"state_handlers should be iterable.\")\n self.state_handlers = state_handlers or []\n if on_failure is not None:\n self.state_handlers.append(\n callback_factory(on_failure, check=lambda s: s.is_failed())\n )\n\n super().__init__()\n\n def __eq__(self, other: Any) -> bool:\n if type(self) == type(other):\n s = (self.name, self.tasks, self.edges, self.reference_tasks())\n o = (other.name, other.tasks, other.edges, other.reference_tasks())\n return s == o\n return False\n\n def __repr__(self) -> str:\n template = '<{cls}: name=\"{self.name}\">'\n return template.format(cls=type(self).__name__, self=self)\n\n def __iter__(self) -> Iterable[Task]:\n yield from 
self.sorted_tasks()\n\n def copy(self) -> \"Flow\":\n \"\"\"\n Create and returns a copy of the current Flow.\n \"\"\"\n new = copy.copy(self)\n # create a new cache\n new._cache = dict()\n new.tasks = self.tasks.copy()\n new.edges = self.edges.copy()\n new.set_reference_tasks(self._reference_tasks)\n return new\n\n # Identification -----------------------------------------------------------\n\n def get_tasks(\n self,\n name: str = None,\n slug: str = None,\n tags: Iterable[str] = None,\n task_type: type = None,\n ) -> List[Task]:\n \"\"\"\n Helper method for retrieving tasks from this flow based on certain attributes.\n The _intersection_ of all provided attributes is taken, i.e., only those tasks\n which match _all_ provided conditions are returned.\n\n Args:\n - name (str, optional): the name of the task\n - slug (str, optional): the slug of the task\n - tags ([str], optional): an iterable of task tags\n - task_type (type, optional): a possible task class type\n\n Returns:\n - [Task]: a list of tasks which meet the required conditions\n \"\"\"\n\n def sieve(t: Task) -> bool:\n keep = True\n if name is not None:\n keep &= t.name == name\n if slug is not None:\n keep &= t.slug == slug\n if tags is not None:\n keep &= t.tags.issuperset(tags)\n if task_type is not None:\n keep &= isinstance(t, task_type)\n return keep\n\n keep_tasks = filter(sieve, self.tasks)\n return list(keep_tasks)\n\n def replace(self, old: Task, new: Task, validate: bool = True) -> None:\n \"\"\"\n Performs an inplace replacement of the old task with the provided new task.\n\n Args:\n - old (Task): the old task to replace\n - new (Task): the new task to replace the old with; if not a Prefect\n Task, Prefect will attempt to convert it to one\n - validate (boolean, optional): whether to validate the Flow after\n the replace has been completed; defaults to `True`\n\n Raises:\n - ValueError: if the `old` task is not a part of this flow\n \"\"\"\n if old not in self.tasks:\n raise ValueError(\"Task {t} was not found in Flow {f}\".format(t=old, f=self))\n\n new = as_task(new)\n\n # update tasks\n self.tasks.remove(old)\n self.add_task(new)\n\n self._cache.clear()\n\n affected_edges = {e for e in self.edges if old in e.tasks}\n\n # remove old edges\n for edge in affected_edges:\n self.edges.remove(edge)\n\n # replace with new edges\n for edge in affected_edges:\n upstream = new if edge.upstream_task == old else edge.upstream_task\n downstream = new if edge.downstream_task == old else edge.downstream_task\n self.add_edge(\n upstream_task=upstream,\n downstream_task=downstream,\n key=edge.key,\n mapped=edge.mapped,\n validate=False,\n )\n\n # update auxiliary task collections\n ref_tasks = self.reference_tasks()\n new_refs = [t for t in ref_tasks if t != old] + (\n [new] if old in ref_tasks else []\n )\n self.set_reference_tasks(new_refs)\n\n if validate:\n self.validate()\n\n # Context Manager ----------------------------------------------------------\n\n def __enter__(self) -> \"Flow\":\n self.__previous_flow = prefect.context.get(\"flow\")\n prefect.context.update(flow=self)\n return self\n\n def __exit__(self, _type, _value, _tb) -> None: # type: ignore\n del prefect.context.flow\n if self.__previous_flow is not None:\n prefect.context.update(flow=self.__previous_flow)\n\n del self.__previous_flow\n\n # Introspection ------------------------------------------------------------\n\n @cache\n def root_tasks(self) -> Set[Task]:\n \"\"\"\n Get the tasks in the flow that have no upstream dependencies; these are\n the tasks 
which, by default, flow execution begins with.\n\n Returns:\n - set of Task objects that have no upstream dependencies\n \"\"\"\n return set(t for t in self.tasks if not self.edges_to(t))\n\n @cache\n def terminal_tasks(self) -> Set[Task]:\n \"\"\"\n Get the tasks in the flow that have no downstream dependencies\n\n Returns:\n - set of Task objects that have no downstream dependencies\n \"\"\"\n return set(t for t in self.tasks if not self.edges_from(t))\n\n def parameters(self) -> Set[Parameter]:\n \"\"\"\n Returns any parameters of the flow.\n\n Returns:\n - set: a set of any Parameters in this flow\n \"\"\"\n return {p for p in self.tasks if isinstance(p, Parameter)}\n\n def reference_tasks(self) -> Set[Task]:\n \"\"\"\n A flow's \"reference tasks\" are used to determine its state when it runs. If all the reference\n tasks are successful, then the flow run is considered successful. However, if\n any of the reference tasks fail, the flow is considered to fail. (Note that skips are\n counted as successes; see [the state documentation](../engine/state.html) for a full description\n of what is considered failure, success, etc.)\n\n By default, a flow's reference tasks are its terminal tasks. This means the state of a\n flow is determined by those tasks which have no downstream dependencies.\n\n In some situations, users may want to customize this behavior; for example, if a\n flow's terminal tasks are \"clean up\" tasks for the rest of the flow that only run\n if certain (more relevant) tasks fail, we might not want them determining the overall\n state of the flow run. The `flow.set_reference_tasks()` method can be used to set such custom `reference_tasks`.\n\n Please note that even if `reference_tasks` are provided that are not terminal tasks, the flow\n will not be considered \"finished\" until all terminal tasks have completed. Only then\n will state be determined, using the reference tasks.\n\n Returns:\n - set of Task objects which are the reference tasks in the flow\n \"\"\"\n if self._reference_tasks:\n return set(self._reference_tasks)\n else:\n return self.terminal_tasks()\n\n def set_reference_tasks(self, tasks: Iterable[Task]) -> None:\n \"\"\"\n Sets the `reference_tasks` for the flow. See `flow.reference_tasks` for more details.\n\n Args:\n - tasks ([Task]): the tasks that should be set as a flow's reference tasks\n\n Returns:\n - None\n \"\"\"\n self._cache.clear()\n reference_tasks = set(tasks)\n if any(t not in self.tasks for t in reference_tasks):\n raise ValueError(\"reference tasks must be part of the flow.\")\n self._reference_tasks = reference_tasks\n\n # Graph --------------------------------------------------------------------\n\n def add_task(self, task: Task) -> Task:\n \"\"\"\n Add a task to the flow if the task does not already exist. 
The tasks are\n uniquely identified by their `slug`.\n\n Args:\n - task (Task): the new Task to be added to the flow\n\n Returns:\n - Task: the `Task` object passed in if the task was successfully added\n\n Raises:\n - TypeError: if the `task` is not of type `Task`\n - ValueError: if the `task.slug` matches that of a task already in the flow\n \"\"\"\n if not isinstance(task, Task):\n raise TypeError(\n \"Tasks must be Task instances (received {})\".format(type(task))\n )\n elif task not in self.tasks:\n if task.slug and any(task.slug == t.slug for t in self.tasks):\n raise ValueError(\n 'A task with the slug \"{}\" already exists in this '\n \"flow.\".format(task.slug)\n )\n\n if task not in self.tasks:\n self.tasks.add(task)\n self._cache.clear()\n\n return task\n\n def add_edge(\n self,\n upstream_task: Task,\n downstream_task: Task,\n key: str = None,\n mapped: bool = False,\n validate: bool = None,\n ) -> Edge:\n \"\"\"\n Add an edge in the flow between two tasks. All edges are directed beginning with\n an upstream task and ending with a downstream task.\n\n Args:\n - upstream_task (Task): The task that the edge should start from\n - downstream_task (Task): The task that the edge should end with\n - key (str, optional): The key to be set for the new edge; the result of the upstream task\n will be passed to the downstream task's `run()` method under this keyword argument\n - mapped (bool, optional): Whether this edge represents a call to `Task.map()`; defaults to `False`\n - validate (bool, optional): Whether or not to check the validity of\n the flow (e.g., presence of cycles and illegal keys). Defaults to the value\n of `eager_edge_validation` in your prefect configuration file.\n\n Returns:\n - prefect.core.edge.Edge: The `Edge` object that was successfully added to the flow\n\n Raises:\n - ValueError: if the `downstream_task` is of type `Parameter`\n - ValueError: if the edge exists with this `key` and `downstream_task`\n \"\"\"\n if validate is None:\n validate = cast(bool, prefect.config.flows.eager_edge_validation)\n if isinstance(downstream_task, Parameter):\n raise ValueError(\n \"Parameters must be root tasks and can not have upstream dependencies.\"\n )\n\n self.add_task(upstream_task)\n self.add_task(downstream_task)\n\n # we can only check the downstream task's edges once it has been added to the\n # flow, so we need to perform this check here and not earlier.\n if validate and key and key in {e.key for e in self.edges_to(downstream_task)}:\n raise ValueError(\n 'Argument \"{a}\" for task {t} has already been assigned in '\n \"this flow. 
If you are trying to call the task again with \"\n \"new arguments, call Task.copy() before adding the result \"\n \"to this flow.\".format(a=key, t=downstream_task)\n )\n\n edge = Edge(\n upstream_task=upstream_task,\n downstream_task=downstream_task,\n key=key,\n mapped=mapped,\n )\n self.edges.add(edge)\n\n # check that the edges are valid keywords by binding them\n if validate and key is not None:\n edge_keys = {\n e.key: None for e in self.edges_to(downstream_task) if e.key is not None\n }\n inspect.signature(downstream_task.run).bind_partial(**edge_keys)\n\n self._cache.clear()\n\n # check for cycles\n if validate:\n self.validate()\n\n return edge\n\n def chain(self, *tasks: Task, validate: bool = None) -> List[Edge]:\n \"\"\"\n Adds a sequence of dependent tasks to the flow; each task should be provided\n as an argument (or splatted from a list).\n\n Args:\n - *tasks (list): A list of tasks to chain together\n - validate (bool, optional): Whether or not to check the validity of\n the flow (e.g., presence of cycles). Defaults to the value of `eager_edge_validation`\n in your prefect configuration file.\n\n Returns:\n - A list of Edge objects added to the flow\n \"\"\"\n edges = []\n for u_task, d_task in zip(tasks, tasks[1:]):\n edges.append(\n self.add_edge(\n upstream_task=u_task, downstream_task=d_task, validate=validate\n )\n )\n return edges\n\n def update(self, flow: \"Flow\", validate: bool = None) -> None:\n \"\"\"\n Take all tasks and edges in another flow and add it to this flow\n\n Args:\n - flow (Flow): A flow which is used to update this flow\n - validate (bool, optional): Whether or not to check the validity of the flow\n\n Returns:\n - None\n \"\"\"\n for task in flow.tasks:\n if task not in self.tasks:\n self.add_task(task)\n\n for edge in flow.edges:\n if edge not in self.edges:\n self.add_edge(\n upstream_task=edge.upstream_task,\n downstream_task=edge.downstream_task,\n key=edge.key,\n validate=validate,\n )\n\n @cache\n def all_upstream_edges(self) -> Dict[Task, Set[Edge]]:\n \"\"\"\n Returns a dictionary relating each task in the Flow to the set of\n all _upstream_ edges for the task\n\n Returns:\n - dict with the key as tasks and the value as a set of upstream edges\n \"\"\"\n edges = {t: set() for t in self.tasks} # type: Dict[Task, Set[Edge]]\n for edge in self.edges:\n edges[edge.downstream_task].add(edge)\n return edges\n\n @cache\n def all_downstream_edges(self) -> Dict[Task, Set[Edge]]:\n \"\"\"\n Returns a dictionary relating each task in the Flow to the set of\n all _downstream_ edges for the task\n\n Returns:\n - dict with the key as tasks and the value as a set of downstream edges\n \"\"\"\n edges = {t: set() for t in self.tasks} # type: Dict[Task, Set[Edge]]\n for edge in self.edges:\n edges[edge.upstream_task].add(edge)\n return edges\n\n def edges_to(self, task: Task) -> Set[Edge]:\n \"\"\"\n Get all of the edges leading to a task (i.e., the upstream edges)\n\n Args:\n - task (Task): The task that we want to find edges leading to\n\n Returns:\n - dict with the key as the task passed in and the value as a set of all edges\n leading to that task\n\n Raises:\n - ValueError: if `task` is not found in this flow\n \"\"\"\n if task not in self.tasks:\n raise ValueError(\n \"Task {t} was not found in Flow {f}\".format(t=task, f=self)\n )\n return self.all_upstream_edges()[task]\n\n def edges_from(self, task: Task) -> Set[Edge]:\n \"\"\"\n Get all of the edges leading from a task (i.e., the downstream edges)\n\n Args:\n - task (Task): The task that we want 
to find edges leading from\n\n Returns:\n - dict with the key as the task passed in and the value as a set of all edges\n leading from that task\n\n Raises:\n - ValueError: if `task` is not found in this flow\n \"\"\"\n if task not in self.tasks:\n raise ValueError(\n \"Task {t} was not found in Flow {f}\".format(t=task, f=self)\n )\n return self.all_downstream_edges()[task]\n\n def upstream_tasks(self, task: Task) -> Set[Task]:\n \"\"\"\n Get all of the tasks upstream of a task\n\n Args:\n - task (Task): The task that we want to find upstream tasks of\n\n Returns:\n - set of Task objects which are upstream of `task`\n \"\"\"\n return set(e.upstream_task for e in self.edges_to(task))\n\n def downstream_tasks(self, task: Task) -> Set[Task]:\n \"\"\"\n Get all of the tasks downstream of a task\n\n Args:\n - task (Task): The task that we want to find downstream tasks from\n\n Returns:\n - set of Task objects which are downstream of `task`\n \"\"\"\n return set(e.downstream_task for e in self.edges_from(task))\n\n def validate(self) -> None:\n \"\"\"\n Checks that the flow is valid.\n\n Returns:\n - None\n\n Raises:\n - ValueError: if edges refer to tasks that are not in this flow\n - ValueError: if specified reference tasks are not in this flow\n - ValueError: if any tasks do not have assigned IDs\n \"\"\"\n\n self._cache.clear()\n\n if any(e.upstream_task not in self.tasks for e in self.edges) or any(\n e.downstream_task not in self.tasks for e in self.edges\n ):\n raise ValueError(\"Some edges refer to tasks not contained in this flow.\")\n\n self.sorted_tasks()\n\n if any(t not in self.tasks for t in self.reference_tasks()):\n raise ValueError(\"Some reference tasks are not contained in this flow.\")\n\n def sorted_tasks(self, root_tasks: Iterable[Task] = None) -> Tuple[Task, ...]:\n \"\"\"\n Get the tasks in this flow in a sorted manner. This allows us to find if any\n cycles exist in this flow's DAG.\n\n Args:\n - root_tasks ([Tasks], optional): an `Iterable` of `Task` objects to\n start the sorting from\n\n Returns:\n - tuple of task objects that were sorted\n\n Raises:\n - ValueError: if a cycle is found in the flow's DAG\n \"\"\"\n return self._sorted_tasks(root_tasks=tuple(root_tasks or []))\n\n @cache\n def _sorted_tasks(self, root_tasks: Tuple[Task, ...] 
= None) -> Tuple[Task, ...]:\n \"\"\"\n Computes a topological sort of the flow's tasks.\n\n Flow.sorted_tasks() can accept non-hashable arguments and therefore can't be\n cached, so this private method is called and cached instead.\n \"\"\"\n\n # begin by getting all tasks under consideration (root tasks and all\n # downstream tasks)\n if root_tasks:\n tasks = set(root_tasks)\n seen = set() # type: Set[Task]\n\n # while the set of tasks is different from the seen tasks...\n while tasks.difference(seen):\n # iterate over the new tasks...\n for t in list(tasks.difference(seen)):\n # add its downstream tasks to the task list\n tasks.update(self.downstream_tasks(t))\n # mark it as seen\n seen.add(t)\n else:\n tasks = self.tasks\n\n # build the list of sorted tasks\n remaining_tasks = list(tasks)\n sorted_tasks = []\n while remaining_tasks:\n # mark the flow as cyclic unless we prove otherwise\n cyclic = True\n\n # iterate over each remaining task\n for task in remaining_tasks.copy():\n # check all the upstream tasks of that task\n for upstream_task in self.upstream_tasks(task):\n # if the upstream task is also remaining, it means it\n # hasn't been sorted, so we can't sort this task either\n if upstream_task in remaining_tasks:\n break\n else:\n # but if all upstream tasks have been sorted, we can sort\n # this one too. We note that we found no cycle this time.\n cyclic = False\n remaining_tasks.remove(task)\n sorted_tasks.append(task)\n\n # if we were unable to match any upstream tasks, we have a cycle\n if cyclic:\n raise ValueError(\"Cycle found; flows must be acyclic!\")\n\n return tuple(sorted_tasks)\n\n # Dependencies ------------------------------------------------------------\n\n def set_dependencies(\n self,\n task: object,\n upstream_tasks: Iterable[object] = None,\n downstream_tasks: Iterable[object] = None,\n keyword_tasks: Mapping[str, object] = None,\n mapped: bool = False,\n validate: bool = None,\n ) -> None:\n \"\"\"\n Convenience function for adding task dependencies.\n\n Args:\n - task (object): a Task that will become part of the Flow. If the task is not a\n Task subclass, Prefect will attempt to convert it to one.\n - upstream_tasks ([object], optional): Tasks that will run before the task runs. If any task\n is not a Task subclass, Prefect will attempt to convert it to one.\n - downstream_tasks ([object], optional): Tasks that will run after the task runs. If any task\n is not a Task subclass, Prefect will attempt to convert it to one.\n - keyword_tasks ({key: object}, optional): The results of these tasks\n will be provided to the task under the specified keyword\n arguments. If any task is not a Task subclass, Prefect will attempt to\n convert it to one.\n - mapped (bool, optional): Whether the upstream tasks (both keyed\n and non-keyed) should be mapped over; defaults to `False`. If `True`, any\n tasks wrapped in the `prefect.utilities.tasks.unmapped` container will\n _not_ be mapped over.\n - validate (bool, optional): Whether or not to check the validity of\n the flow (e.g., presence of cycles). 
Defaults to the value of `eager_edge_validation`\n in your Prefect configuration file.\n\n Returns:\n - None\n \"\"\"\n\n task = as_task(task)\n assert isinstance(task, Task) # mypy assert\n\n # add the main task (in case it was called with no arguments)\n self.add_task(task)\n\n # add upstream tasks\n for t in upstream_tasks or []:\n is_mapped = mapped & (not isinstance(t, unmapped))\n t = as_task(t)\n assert isinstance(t, Task) # mypy assert\n self.add_edge(\n upstream_task=t,\n downstream_task=task,\n validate=validate,\n mapped=is_mapped,\n )\n\n # add downstream tasks\n for t in downstream_tasks or []:\n t = as_task(t)\n assert isinstance(t, Task) # mypy assert\n self.add_edge(upstream_task=task, downstream_task=t, validate=validate)\n\n # add data edges to upstream tasks\n for key, t in (keyword_tasks or {}).items():\n is_mapped = mapped & (not isinstance(t, unmapped))\n t = as_task(t)\n assert isinstance(t, Task) # mypy assert\n self.add_edge(\n upstream_task=t,\n downstream_task=task,\n key=key,\n validate=validate,\n mapped=is_mapped,\n )\n\n # Execution ---------------------------------------------------------------\n\n def _run_on_schedule(\n self, parameters: Dict[str, Any], runner_cls: type, **kwargs: Any\n ) -> \"prefect.engine.state.State\":\n\n ## determine time of first run\n try:\n if self.schedule is not None:\n next_run_time = self.schedule.next(1)[0]\n else:\n next_run_time = pendulum.now(\"utc\")\n except IndexError:\n raise ValueError(\"Flow has no more scheduled runs.\") from None\n\n ## setup initial states\n flow_state = prefect.engine.state.Scheduled(start_time=next_run_time, result={})\n flow_state = kwargs.pop(\"state\", flow_state)\n if not isinstance(flow_state.result, dict):\n flow_state.result = {}\n task_states = kwargs.pop(\"task_states\", {})\n flow_state.result.update(task_states)\n prefect.context.caches = {}\n\n ## run this flow indefinitely, so long as its schedule has future dates\n while True:\n if flow_state.is_scheduled():\n next_run_time = flow_state.start_time\n now = pendulum.now(\"utc\")\n naptime = max((next_run_time - now).total_seconds(), 0)\n if naptime > 0:\n self.logger.info(\n \"Waiting for next scheduled run at {}\".format(next_run_time)\n )\n time.sleep(naptime)\n\n ## begin a single flow run\n while not flow_state.is_finished():\n runner = runner_cls(flow=self)\n flow_state = runner.run(\n parameters=parameters,\n return_tasks=self.tasks,\n state=flow_state,\n task_states=flow_state.result,\n **kwargs\n )\n if not isinstance(flow_state.result, dict):\n return flow_state # something went wrong\n\n task_states = list(flow_state.result.values())\n for s in filter(lambda x: x.is_mapped(), task_states):\n task_states.extend(s.map_states)\n earliest_start = min(\n [s.start_time for s in task_states if s.is_scheduled()],\n default=pendulum.now(\"utc\"),\n )\n\n ## wait until first task is ready for retry\n now = pendulum.now(\"utc\")\n naptime = max((earliest_start - now).total_seconds(), 0)\n if naptime > 0:\n self.logger.info(\n \"Waiting for next available Task run at {}\".format(\n earliest_start\n )\n )\n time.sleep(naptime)\n\n ## create next scheduled run\n try:\n # update context cache\n for t, s in flow_state.result.items():\n if s.is_cached():\n cached_sub_states = [s]\n elif s.is_mapped() and any(\n sub_state.is_cached() for sub_state in s.map_states\n ):\n cached_sub_states = [\n sub_state\n for sub_state in s.map_states\n if sub_state.is_cached()\n ]\n else:\n cached_sub_states = []\n\n fresh_states = [\n s\n for s in 
prefect.context.caches.get(t.name, [])\n + cached_sub_states\n if s.cached_result_expiration > now\n ]\n prefect.context.caches[t.name] = fresh_states\n if self.schedule is not None:\n next_run_time = self.schedule.next(1)[0]\n else:\n break\n except IndexError:\n break\n flow_state = prefect.engine.state.Scheduled(\n start_time=next_run_time, result={}\n )\n return flow_state\n\n def run(\n self,\n parameters: Dict[str, Any] = None,\n run_on_schedule: bool = None,\n runner_cls: type = None,\n **kwargs: Any\n ) -> \"prefect.engine.state.State\":\n \"\"\"\n Run the flow on its schedule using an instance of a FlowRunner. If the Flow has no schedule,\n a single stateful run will occur (including retries).\n\n Note that this command will block and run this Flow on its schedule indefinitely (if it has one);\n all task states will be stored in memory, and task retries will not occur until every Task in the Flow has had a chance\n to run.\n\n Args:\n - parameters (Dict[str, Any], optional): values to pass into the runner\n - run_on_schedule (bool, optional): whether to run this flow on its schedule, or simply run a single execution;\n if not provided, will default to the value set in your user config\n - runner_cls (type): an optional FlowRunner class (will use the default if not provided)\n - **kwargs: additional keyword arguments; if any provided keywords\n match known parameter names, they will be used as such. Otherwise they will be passed to the\n `FlowRunner.run()` method\n\n Raises:\n - ValueError: if this Flow has a Schedule with no more scheduled runs\n - ValueError: if the `return_tasks` keyword argument is provided\n\n Returns:\n - State: the state of the flow after its final run\n \"\"\"\n # protect against old behavior\n if \"return_tasks\" in kwargs:\n raise ValueError(\n \"The `return_tasks` keyword cannot be provided to `flow.run()`; \"\n \"all task states are always returned. If you want to receive a subset \"\n \"of task states, use a FlowRunner directly.\"\n )\n\n if runner_cls is None:\n runner_cls = prefect.engine.get_default_flow_runner_class()\n\n # build parameters from passed dictionary and also kwargs\n parameters = parameters or {}\n for p in self.parameters():\n if p.name in kwargs:\n parameters[p.name] = kwargs.pop(p.name)\n\n # check for parameters that don't match the flow\n unknown_params = [\n p for p in parameters if p not in {fp.name for fp in self.parameters()}\n ]\n if unknown_params:\n fmt_params = \", \".join(unknown_params)\n raise ValueError(\n \"Flow.run received the following unexpected parameters: {}\".format(\n fmt_params\n )\n )\n\n # check for parameters that are required by the flow, but weren't passed\n missing_params = [\n p.name for p in self.parameters() if p.required and p.name not in parameters\n ]\n if missing_params:\n fmt_params = \", \".join(missing_params)\n raise ValueError(\n \"Flow.run did not receive the following required parameters: {}\".format(\n fmt_params\n )\n )\n\n if run_on_schedule is None:\n run_on_schedule = cast(bool, prefect.config.flows.run_on_schedule)\n if run_on_schedule is False:\n runner = runner_cls(flow=self)\n state = runner.run(parameters=parameters, **kwargs)\n else:\n state = self._run_on_schedule(\n parameters=parameters, runner_cls=runner_cls, **kwargs\n )\n\n # state always should return a dict of tasks. 
If it's NoResult (meaning the run was\n # interrupted before any tasks were executed), we set the dict manually.\n if state.result == NoResult:\n state.result = {}\n elif isinstance(state.result, Exception):\n self.logger.error(\n \"Unexpected error occured in {runner}: {exc}\".format(\n runner=runner_cls.__name__, exc=repr(state.result)\n )\n )\n return state\n\n for task in self.tasks or []:\n if task not in state.result:\n state.result[task] = prefect.engine.state.Pending(\n message=\"Task not run.\"\n )\n return state\n\n # Visualization ------------------------------------------------------------\n\n def visualize(\n self, flow_state: \"prefect.engine.state.State\" = None, filename: str = None\n ) -> object:\n \"\"\"\n Creates graphviz object for representing the current flow; this graphviz\n object will be rendered inline if called from an IPython notebook, otherwise\n it will be rendered in a new window. If a `filename` is provided, the object\n will not be rendered and instead saved to the location specified.\n\n Args:\n - flow_state (State, optional): flow state object used to optionally color the nodes\n - filename (str, optional): a filename specifying a location to save this visualization to; if provided,\n the visualization will not be rendered automatically\n\n Raises:\n - ImportError: if `graphviz` is not installed\n \"\"\"\n\n try:\n import graphviz\n except ImportError:\n msg = (\n \"This feature requires graphviz.\\n\"\n \"Try re-installing prefect with `pip install prefect[viz]`\"\n )\n raise ImportError(msg)\n\n def get_color(task: Task, map_index: int = None) -> str:\n assert flow_state\n assert isinstance(flow_state.result, dict)\n\n if map_index is not None:\n state = flow_state.result[task].map_states[map_index]\n else:\n state = flow_state.result.get(task)\n if state is not None:\n assert state is not None # mypy assert\n return state.color + \"80\"\n return \"#00000080\"\n\n graph = graphviz.Digraph()\n\n for t in self.tasks:\n is_mapped = any(edge.mapped for edge in self.edges_to(t))\n shape = \"box\" if is_mapped else \"ellipse\"\n name = \"{} <map>\".format(t.name) if is_mapped else t.name\n if is_mapped and flow_state:\n assert isinstance(flow_state.result, dict)\n for map_index, _ in enumerate(flow_state.result[t].map_states):\n kwargs = dict(\n color=get_color(t, map_index=map_index),\n style=\"filled\",\n colorscheme=\"svg\",\n )\n graph.node(str(id(t)) + str(map_index), name, shape=shape, **kwargs)\n else:\n kwargs = (\n {}\n if not flow_state\n else dict(color=get_color(t), style=\"filled\", colorscheme=\"svg\")\n )\n graph.node(str(id(t)), name, shape=shape, **kwargs)\n\n for e in self.edges:\n style = \"dashed\" if e.mapped else None\n if (\n e.mapped\n or any(edge.mapped for edge in self.edges_to(e.downstream_task))\n ) and flow_state:\n assert isinstance(flow_state.result, dict)\n for map_index, _ in enumerate(\n flow_state.result[e.downstream_task].map_states\n ):\n upstream_id = str(id(e.upstream_task))\n if any(edge.mapped for edge in self.edges_to(e.upstream_task)):\n upstream_id += str(map_index)\n graph.edge(\n upstream_id,\n str(id(e.downstream_task)) + str(map_index),\n e.key,\n style=style,\n )\n else:\n graph.edge(\n str(id(e.upstream_task)),\n str(id(e.downstream_task)),\n e.key,\n style=style,\n )\n\n if filename:\n graph.render(filename, view=False)\n else:\n try:\n from IPython import get_ipython\n\n assert get_ipython().config.get(\"IPKernelApp\") is not None\n except Exception:\n with tempfile.NamedTemporaryFile(delete=False) as tmp:\n 
tmp.close()\n try:\n graph.render(tmp.name, view=True)\n finally:\n os.unlink(tmp.name)\n\n return graph\n\n # Building / Serialization ----------------------------------------------------\n\n def serialize(self, build: bool = False) -> dict:\n \"\"\"\n Creates a serialized representation of the flow.\n\n Args:\n - build (bool, optional): if `True`, the flow's environment is built\n prior to serialization\n\n Returns:\n - dict representing the flow\n \"\"\"\n\n self.validate()\n schema = prefect.serialization.flow.FlowSchema\n serialized = schema(exclude=[\"storage\"]).dump(self)\n\n if build:\n if not self.storage:\n raise ValueError(\"This flow has no storage to build\")\n if self.name not in self.storage:\n self.storage.add_flow(self)\n else:\n warnings.warn(\n \"A flow with the same name is already contained in storage; if you changed your Flow since\"\n \" the last build, you might experience unexpected issues and should re-create your storage object.\"\n )\n storage = self.storage.build() # type: Optional[Storage]\n else:\n storage = self.storage\n\n serialized.update(schema(only=[\"storage\"]).dump({\"storage\": storage}))\n\n return serialized\n\n # Deployment ------------------------------------------------------------------\n\n def deploy(\n self, project_name: str, build: bool = True, set_schedule_active: bool = True\n ) -> str:\n \"\"\"\n Deploy the flow to Prefect Cloud\n\n Args:\n - project_name (str): the project that should contain this flow.\n - build (bool, optional): if `True`, the flow's environment is built\n prior to serialization; defaults to `True`\n - set_schedule_active (bool, optional): if `False`, will set the\n schedule to inactive in the database to prevent auto-scheduling runs (if the Flow has a schedule).\n Defaults to `True`. This can be changed later.\n\n Returns:\n - str: the ID of the flow that was deployed\n \"\"\"\n client = prefect.Client()\n deployed_flow = client.deploy(\n flow=self,\n build=build,\n project_name=project_name,\n set_schedule_active=set_schedule_active,\n )\n return deployed_flow\n\n def __mifflin__(self) -> None: # coverage: ignore\n \"Calls Dunder Mifflin\"\n import webbrowser\n\n webbrowser.open(\"https://cicdw.github.io/welcome.html\")\n", "path": "src/prefect/core/flow.py" } ]
[ { "content": "import collections\nimport copy\nimport functools\nimport inspect\nimport json\nimport os\nimport tempfile\nimport time\nimport uuid\nimport warnings\nfrom collections import Counter\nfrom typing import (\n Any,\n Callable,\n Dict,\n Iterable,\n List,\n Mapping,\n Optional,\n Set,\n Tuple,\n Union,\n cast,\n)\n\nimport pendulum\nfrom mypy_extensions import TypedDict\n\nimport prefect\nimport prefect.schedules\nfrom prefect.core.edge import Edge\nfrom prefect.core.task import Parameter, Task\nfrom prefect.engine.result import NoResult\nfrom prefect.engine.result_handlers import ResultHandler\nfrom prefect.environments import CloudEnvironment, Environment\nfrom prefect.environments.storage import Storage\nfrom prefect.utilities import logging\nfrom prefect.utilities.notifications import callback_factory\nfrom prefect.utilities.serialization import to_qualified_name\nfrom prefect.utilities.tasks import as_task, unmapped\n\nParameterDetails = TypedDict(\"ParameterDetails\", {\"default\": Any, \"required\": bool})\n\n\ndef cache(method: Callable) -> Callable:\n \"\"\"\n Decorator for caching Flow methods.\n\n Each Flow has a _cache dict that can be used to memoize expensive functions. This\n decorator automatically compares a hash of the Flow's current tasks, edges, and reference_tasks\n to a cached hash; if the hash is the same, it attempts to retrieve a value from the cache.\n If the hash is different, it invalidates the cache.\n \"\"\"\n\n @functools.wraps(method)\n def wrapper(self, *args, **kwargs): # type: ignore\n\n cache_check = dict(\n tasks=self.tasks.copy(),\n edges=self.edges.copy(),\n reference_tasks=copy.copy(self._reference_tasks),\n )\n if any(self._cache.get(k) != v for k, v in cache_check.items()):\n self._cache.clear()\n self._cache.update(cache_check)\n\n callargs = inspect.signature(method).bind(self, *args, **kwargs).arguments\n key = (method.__name__, tuple(callargs.items())[1:])\n if key not in self._cache:\n self._cache[key] = method(self, *args, **kwargs)\n return self._cache[key]\n\n return wrapper\n\n\nclass Flow:\n \"\"\"\n The Flow class is used as the representation of a collection of dependent Tasks.\n Flows track Task dependencies, parameters and provide the main API for constructing and managing workflows.\n\n Initializing Flow example:\n ```python\n class MyTask(Task):\n def run(self):\n return \"hello\"\n\n task_1 = MyTask()\n flow = Flow(name=\"my_flow\", tasks=[task_1])\n\n flow.run()\n ```\n\n Initializing Flow as context manager example:\n ```python\n @task\n def my_task():\n return \"hello\"\n\n with Flow(\"my_flow\") as flow:\n task_1 = my_task()\n\n flow.run()\n ```\n\n Args:\n - name (str): The name of the flow. Cannot be `None` or an empty string\n - schedule (prefect.schedules.Schedule, optional): A default schedule for the flow\n - environment (prefect.environments.Environment, optional): The environment\n that the flow should be run in. 
If `None`, a `CloudEnvironment` will be created.\n - storage (prefect.environments.storage.Storage, optional): The unit of storage\n that the flow will be written into.\n - tasks ([Task], optional): If provided, a list of tasks that will initialize the flow\n - edges ([Edge], optional): A list of edges between tasks\n - reference_tasks ([Task], optional): A list of tasks which determine the final\n state of a flow\n - state_handlers (Iterable[Callable], optional): A list of state change handlers\n that will be called whenever the flow changes state, providing an\n opportunity to inspect or modify the new state. The handler\n will be passed the flow instance, the old (prior) state, and the new\n (current) state, with the following signature:\n `state_handler(flow: Flow, old_state: State, new_state: State) -> Optional[State]`\n If multiple functions are passed, then the `new_state` argument will be the\n result of the previous handler.\n - on_failure (Callable, optional): A function with signature `fn(flow: Flow, state: State) -> None`\n which will be called anytime this Flow enters a failure state\n - validate (bool, optional): Whether or not to check the validity of\n the flow (e.g., presence of cycles and illegal keys) after adding the edges passed\n in the `edges` argument. Defaults to the value of `eager_edge_validation` in\n your prefect configuration file.\n - result_handler (ResultHandler, optional): the handler to use for\n retrieving and storing state results during execution; if not provided, will default\n to the one specified in your config\n\n \"\"\"\n\n def __init__(\n self,\n name: str,\n schedule: prefect.schedules.Schedule = None,\n environment: Environment = None,\n storage: Storage = None,\n tasks: Iterable[Task] = None,\n edges: Iterable[Edge] = None,\n reference_tasks: Iterable[Task] = None,\n state_handlers: List[Callable] = None,\n on_failure: Callable = None,\n validate: bool = None,\n result_handler: ResultHandler = None,\n ):\n self._cache = {} # type: dict\n\n self.logger = logging.get_logger(\"Flow\")\n\n if not name:\n raise ValueError(\"A name must be provided for the flow.\")\n\n self.name = name\n self.schedule = schedule\n self.environment = environment or prefect.environments.CloudEnvironment()\n self.storage = storage\n self.result_handler = (\n result_handler or prefect.engine.get_default_result_handler_class()()\n )\n\n self.tasks = set() # type: Set[Task]\n self.edges = set() # type: Set[Edge]\n\n for t in tasks or []:\n self.add_task(t)\n\n self.set_reference_tasks(reference_tasks or [])\n for e in edges or []:\n self.add_edge(\n upstream_task=e.upstream_task,\n downstream_task=e.downstream_task,\n key=e.key,\n mapped=e.mapped,\n validate=validate,\n )\n\n self._prefect_version = prefect.__version__\n\n if state_handlers and not isinstance(state_handlers, collections.Sequence):\n raise TypeError(\"state_handlers should be iterable.\")\n self.state_handlers = state_handlers or []\n if on_failure is not None:\n self.state_handlers.append(\n callback_factory(on_failure, check=lambda s: s.is_failed())\n )\n\n super().__init__()\n\n def __eq__(self, other: Any) -> bool:\n if type(self) == type(other):\n s = (self.name, self.tasks, self.edges, self.reference_tasks())\n o = (other.name, other.tasks, other.edges, other.reference_tasks())\n return s == o\n return False\n\n def __repr__(self) -> str:\n template = '<{cls}: name=\"{self.name}\">'\n return template.format(cls=type(self).__name__, self=self)\n\n def __iter__(self) -> Iterable[Task]:\n yield from 
self.sorted_tasks()\n\n def copy(self) -> \"Flow\":\n \"\"\"\n Create and returns a copy of the current Flow.\n \"\"\"\n new = copy.copy(self)\n # create a new cache\n new._cache = dict()\n new.tasks = self.tasks.copy()\n new.edges = self.edges.copy()\n new.set_reference_tasks(self._reference_tasks)\n return new\n\n # Identification -----------------------------------------------------------\n\n def get_tasks(\n self,\n name: str = None,\n slug: str = None,\n tags: Iterable[str] = None,\n task_type: type = None,\n ) -> List[Task]:\n \"\"\"\n Helper method for retrieving tasks from this flow based on certain attributes.\n The _intersection_ of all provided attributes is taken, i.e., only those tasks\n which match _all_ provided conditions are returned.\n\n Args:\n - name (str, optional): the name of the task\n - slug (str, optional): the slug of the task\n - tags ([str], optional): an iterable of task tags\n - task_type (type, optional): a possible task class type\n\n Returns:\n - [Task]: a list of tasks which meet the required conditions\n \"\"\"\n\n def sieve(t: Task) -> bool:\n keep = True\n if name is not None:\n keep &= t.name == name\n if slug is not None:\n keep &= t.slug == slug\n if tags is not None:\n keep &= t.tags.issuperset(tags)\n if task_type is not None:\n keep &= isinstance(t, task_type)\n return keep\n\n keep_tasks = filter(sieve, self.tasks)\n return list(keep_tasks)\n\n def replace(self, old: Task, new: Task, validate: bool = True) -> None:\n \"\"\"\n Performs an inplace replacement of the old task with the provided new task.\n\n Args:\n - old (Task): the old task to replace\n - new (Task): the new task to replace the old with; if not a Prefect\n Task, Prefect will attempt to convert it to one\n - validate (boolean, optional): whether to validate the Flow after\n the replace has been completed; defaults to `True`\n\n Raises:\n - ValueError: if the `old` task is not a part of this flow\n \"\"\"\n if old not in self.tasks:\n raise ValueError(\"Task {t} was not found in Flow {f}\".format(t=old, f=self))\n\n new = as_task(new)\n\n # update tasks\n self.tasks.remove(old)\n self.add_task(new)\n\n self._cache.clear()\n\n affected_edges = {e for e in self.edges if old in e.tasks}\n\n # remove old edges\n for edge in affected_edges:\n self.edges.remove(edge)\n\n # replace with new edges\n for edge in affected_edges:\n upstream = new if edge.upstream_task == old else edge.upstream_task\n downstream = new if edge.downstream_task == old else edge.downstream_task\n self.add_edge(\n upstream_task=upstream,\n downstream_task=downstream,\n key=edge.key,\n mapped=edge.mapped,\n validate=False,\n )\n\n # update auxiliary task collections\n ref_tasks = self.reference_tasks()\n new_refs = [t for t in ref_tasks if t != old] + (\n [new] if old in ref_tasks else []\n )\n self.set_reference_tasks(new_refs)\n\n if validate:\n self.validate()\n\n # Context Manager ----------------------------------------------------------\n\n def __enter__(self) -> \"Flow\":\n self.__previous_flow = prefect.context.get(\"flow\")\n prefect.context.update(flow=self)\n return self\n\n def __exit__(self, _type, _value, _tb) -> None: # type: ignore\n del prefect.context.flow\n if self.__previous_flow is not None:\n prefect.context.update(flow=self.__previous_flow)\n\n del self.__previous_flow\n\n # Introspection ------------------------------------------------------------\n\n @cache\n def root_tasks(self) -> Set[Task]:\n \"\"\"\n Get the tasks in the flow that have no upstream dependencies; these are\n the tasks 
which, by default, flow execution begins with.\n\n Returns:\n - set of Task objects that have no upstream dependencies\n \"\"\"\n return set(t for t in self.tasks if not self.edges_to(t))\n\n @cache\n def terminal_tasks(self) -> Set[Task]:\n \"\"\"\n Get the tasks in the flow that have no downstream dependencies\n\n Returns:\n - set of Task objects that have no downstream dependencies\n \"\"\"\n return set(t for t in self.tasks if not self.edges_from(t))\n\n def parameters(self) -> Set[Parameter]:\n \"\"\"\n Returns any parameters of the flow.\n\n Returns:\n - set: a set of any Parameters in this flow\n \"\"\"\n return {p for p in self.tasks if isinstance(p, Parameter)}\n\n def reference_tasks(self) -> Set[Task]:\n \"\"\"\n A flow's \"reference tasks\" are used to determine its state when it runs. If all the reference\n tasks are successful, then the flow run is considered successful. However, if\n any of the reference tasks fail, the flow is considered to fail. (Note that skips are\n counted as successes; see [the state documentation](../engine/state.html) for a full description\n of what is considered failure, success, etc.)\n\n By default, a flow's reference tasks are its terminal tasks. This means the state of a\n flow is determined by those tasks which have no downstream dependencies.\n\n In some situations, users may want to customize this behavior; for example, if a\n flow's terminal tasks are \"clean up\" tasks for the rest of the flow that only run\n if certain (more relevant) tasks fail, we might not want them determining the overall\n state of the flow run. The `flow.set_reference_tasks()` method can be used to set such custom `reference_tasks`.\n\n Please note that even if `reference_tasks` are provided that are not terminal tasks, the flow\n will not be considered \"finished\" until all terminal tasks have completed. Only then\n will state be determined, using the reference tasks.\n\n Returns:\n - set of Task objects which are the reference tasks in the flow\n \"\"\"\n if self._reference_tasks:\n return set(self._reference_tasks)\n else:\n return self.terminal_tasks()\n\n def set_reference_tasks(self, tasks: Iterable[Task]) -> None:\n \"\"\"\n Sets the `reference_tasks` for the flow. See `flow.reference_tasks` for more details.\n\n Args:\n - tasks ([Task]): the tasks that should be set as a flow's reference tasks\n\n Returns:\n - None\n \"\"\"\n self._cache.clear()\n reference_tasks = set(tasks)\n if any(t not in self.tasks for t in reference_tasks):\n raise ValueError(\"reference tasks must be part of the flow.\")\n self._reference_tasks = reference_tasks\n\n # Graph --------------------------------------------------------------------\n\n def add_task(self, task: Task) -> Task:\n \"\"\"\n Add a task to the flow if the task does not already exist. 
The tasks are\n uniquely identified by their `slug`.\n\n Args:\n - task (Task): the new Task to be added to the flow\n\n Returns:\n - Task: the `Task` object passed in if the task was successfully added\n\n Raises:\n - TypeError: if the `task` is not of type `Task`\n - ValueError: if the `task.slug` matches that of a task already in the flow\n \"\"\"\n if not isinstance(task, Task):\n raise TypeError(\n \"Tasks must be Task instances (received {})\".format(type(task))\n )\n elif task not in self.tasks:\n if task.slug and any(task.slug == t.slug for t in self.tasks):\n raise ValueError(\n 'A task with the slug \"{}\" already exists in this '\n \"flow.\".format(task.slug)\n )\n\n if task not in self.tasks:\n self.tasks.add(task)\n self._cache.clear()\n\n return task\n\n def add_edge(\n self,\n upstream_task: Task,\n downstream_task: Task,\n key: str = None,\n mapped: bool = False,\n validate: bool = None,\n ) -> Edge:\n \"\"\"\n Add an edge in the flow between two tasks. All edges are directed beginning with\n an upstream task and ending with a downstream task.\n\n Args:\n - upstream_task (Task): The task that the edge should start from\n - downstream_task (Task): The task that the edge should end with\n - key (str, optional): The key to be set for the new edge; the result of the upstream task\n will be passed to the downstream task's `run()` method under this keyword argument\n - mapped (bool, optional): Whether this edge represents a call to `Task.map()`; defaults to `False`\n - validate (bool, optional): Whether or not to check the validity of\n the flow (e.g., presence of cycles and illegal keys). Defaults to the value\n of `eager_edge_validation` in your prefect configuration file.\n\n Returns:\n - prefect.core.edge.Edge: The `Edge` object that was successfully added to the flow\n\n Raises:\n - ValueError: if the `downstream_task` is of type `Parameter`\n - ValueError: if the edge exists with this `key` and `downstream_task`\n \"\"\"\n if validate is None:\n validate = cast(bool, prefect.config.flows.eager_edge_validation)\n if isinstance(downstream_task, Parameter):\n raise ValueError(\n \"Parameters must be root tasks and can not have upstream dependencies.\"\n )\n\n self.add_task(upstream_task)\n self.add_task(downstream_task)\n\n # we can only check the downstream task's edges once it has been added to the\n # flow, so we need to perform this check here and not earlier.\n if validate and key and key in {e.key for e in self.edges_to(downstream_task)}:\n raise ValueError(\n 'Argument \"{a}\" for task {t} has already been assigned in '\n \"this flow. 
If you are trying to call the task again with \"\n \"new arguments, call Task.copy() before adding the result \"\n \"to this flow.\".format(a=key, t=downstream_task)\n )\n\n edge = Edge(\n upstream_task=upstream_task,\n downstream_task=downstream_task,\n key=key,\n mapped=mapped,\n )\n self.edges.add(edge)\n\n # check that the edges are valid keywords by binding them\n if validate and key is not None:\n edge_keys = {\n e.key: None for e in self.edges_to(downstream_task) if e.key is not None\n }\n inspect.signature(downstream_task.run).bind_partial(**edge_keys)\n\n self._cache.clear()\n\n # check for cycles\n if validate:\n self.validate()\n\n return edge\n\n def chain(self, *tasks: Task, validate: bool = None) -> List[Edge]:\n \"\"\"\n Adds a sequence of dependent tasks to the flow; each task should be provided\n as an argument (or splatted from a list).\n\n Args:\n - *tasks (list): A list of tasks to chain together\n - validate (bool, optional): Whether or not to check the validity of\n the flow (e.g., presence of cycles). Defaults to the value of `eager_edge_validation`\n in your prefect configuration file.\n\n Returns:\n - A list of Edge objects added to the flow\n \"\"\"\n edges = []\n for u_task, d_task in zip(tasks, tasks[1:]):\n edges.append(\n self.add_edge(\n upstream_task=u_task, downstream_task=d_task, validate=validate\n )\n )\n return edges\n\n def update(self, flow: \"Flow\", validate: bool = None) -> None:\n \"\"\"\n Take all tasks and edges in another flow and add it to this flow\n\n Args:\n - flow (Flow): A flow which is used to update this flow\n - validate (bool, optional): Whether or not to check the validity of the flow\n\n Returns:\n - None\n \"\"\"\n for task in flow.tasks:\n if task not in self.tasks:\n self.add_task(task)\n\n for edge in flow.edges:\n if edge not in self.edges:\n self.add_edge(\n upstream_task=edge.upstream_task,\n downstream_task=edge.downstream_task,\n key=edge.key,\n mapped=edge.mapped,\n validate=validate,\n )\n\n @cache\n def all_upstream_edges(self) -> Dict[Task, Set[Edge]]:\n \"\"\"\n Returns a dictionary relating each task in the Flow to the set of\n all _upstream_ edges for the task\n\n Returns:\n - dict with the key as tasks and the value as a set of upstream edges\n \"\"\"\n edges = {t: set() for t in self.tasks} # type: Dict[Task, Set[Edge]]\n for edge in self.edges:\n edges[edge.downstream_task].add(edge)\n return edges\n\n @cache\n def all_downstream_edges(self) -> Dict[Task, Set[Edge]]:\n \"\"\"\n Returns a dictionary relating each task in the Flow to the set of\n all _downstream_ edges for the task\n\n Returns:\n - dict with the key as tasks and the value as a set of downstream edges\n \"\"\"\n edges = {t: set() for t in self.tasks} # type: Dict[Task, Set[Edge]]\n for edge in self.edges:\n edges[edge.upstream_task].add(edge)\n return edges\n\n def edges_to(self, task: Task) -> Set[Edge]:\n \"\"\"\n Get all of the edges leading to a task (i.e., the upstream edges)\n\n Args:\n - task (Task): The task that we want to find edges leading to\n\n Returns:\n - dict with the key as the task passed in and the value as a set of all edges\n leading to that task\n\n Raises:\n - ValueError: if `task` is not found in this flow\n \"\"\"\n if task not in self.tasks:\n raise ValueError(\n \"Task {t} was not found in Flow {f}\".format(t=task, f=self)\n )\n return self.all_upstream_edges()[task]\n\n def edges_from(self, task: Task) -> Set[Edge]:\n \"\"\"\n Get all of the edges leading from a task (i.e., the downstream edges)\n\n Args:\n - task (Task): 
The task that we want to find edges leading from\n\n Returns:\n - dict with the key as the task passed in and the value as a set of all edges\n leading from that task\n\n Raises:\n - ValueError: if `task` is not found in this flow\n \"\"\"\n if task not in self.tasks:\n raise ValueError(\n \"Task {t} was not found in Flow {f}\".format(t=task, f=self)\n )\n return self.all_downstream_edges()[task]\n\n def upstream_tasks(self, task: Task) -> Set[Task]:\n \"\"\"\n Get all of the tasks upstream of a task\n\n Args:\n - task (Task): The task that we want to find upstream tasks of\n\n Returns:\n - set of Task objects which are upstream of `task`\n \"\"\"\n return set(e.upstream_task for e in self.edges_to(task))\n\n def downstream_tasks(self, task: Task) -> Set[Task]:\n \"\"\"\n Get all of the tasks downstream of a task\n\n Args:\n - task (Task): The task that we want to find downstream tasks from\n\n Returns:\n - set of Task objects which are downstream of `task`\n \"\"\"\n return set(e.downstream_task for e in self.edges_from(task))\n\n def validate(self) -> None:\n \"\"\"\n Checks that the flow is valid.\n\n Returns:\n - None\n\n Raises:\n - ValueError: if edges refer to tasks that are not in this flow\n - ValueError: if specified reference tasks are not in this flow\n - ValueError: if any tasks do not have assigned IDs\n \"\"\"\n\n self._cache.clear()\n\n if any(e.upstream_task not in self.tasks for e in self.edges) or any(\n e.downstream_task not in self.tasks for e in self.edges\n ):\n raise ValueError(\"Some edges refer to tasks not contained in this flow.\")\n\n self.sorted_tasks()\n\n if any(t not in self.tasks for t in self.reference_tasks()):\n raise ValueError(\"Some reference tasks are not contained in this flow.\")\n\n def sorted_tasks(self, root_tasks: Iterable[Task] = None) -> Tuple[Task, ...]:\n \"\"\"\n Get the tasks in this flow in a sorted manner. This allows us to find if any\n cycles exist in this flow's DAG.\n\n Args:\n - root_tasks ([Tasks], optional): an `Iterable` of `Task` objects to\n start the sorting from\n\n Returns:\n - tuple of task objects that were sorted\n\n Raises:\n - ValueError: if a cycle is found in the flow's DAG\n \"\"\"\n return self._sorted_tasks(root_tasks=tuple(root_tasks or []))\n\n @cache\n def _sorted_tasks(self, root_tasks: Tuple[Task, ...] 
= None) -> Tuple[Task, ...]:\n \"\"\"\n Computes a topological sort of the flow's tasks.\n\n Flow.sorted_tasks() can accept non-hashable arguments and therefore can't be\n cached, so this private method is called and cached instead.\n \"\"\"\n\n # begin by getting all tasks under consideration (root tasks and all\n # downstream tasks)\n if root_tasks:\n tasks = set(root_tasks)\n seen = set() # type: Set[Task]\n\n # while the set of tasks is different from the seen tasks...\n while tasks.difference(seen):\n # iterate over the new tasks...\n for t in list(tasks.difference(seen)):\n # add its downstream tasks to the task list\n tasks.update(self.downstream_tasks(t))\n # mark it as seen\n seen.add(t)\n else:\n tasks = self.tasks\n\n # build the list of sorted tasks\n remaining_tasks = list(tasks)\n sorted_tasks = []\n while remaining_tasks:\n # mark the flow as cyclic unless we prove otherwise\n cyclic = True\n\n # iterate over each remaining task\n for task in remaining_tasks.copy():\n # check all the upstream tasks of that task\n for upstream_task in self.upstream_tasks(task):\n # if the upstream task is also remaining, it means it\n # hasn't been sorted, so we can't sort this task either\n if upstream_task in remaining_tasks:\n break\n else:\n # but if all upstream tasks have been sorted, we can sort\n # this one too. We note that we found no cycle this time.\n cyclic = False\n remaining_tasks.remove(task)\n sorted_tasks.append(task)\n\n # if we were unable to match any upstream tasks, we have a cycle\n if cyclic:\n raise ValueError(\"Cycle found; flows must be acyclic!\")\n\n return tuple(sorted_tasks)\n\n # Dependencies ------------------------------------------------------------\n\n def set_dependencies(\n self,\n task: object,\n upstream_tasks: Iterable[object] = None,\n downstream_tasks: Iterable[object] = None,\n keyword_tasks: Mapping[str, object] = None,\n mapped: bool = False,\n validate: bool = None,\n ) -> None:\n \"\"\"\n Convenience function for adding task dependencies.\n\n Args:\n - task (object): a Task that will become part of the Flow. If the task is not a\n Task subclass, Prefect will attempt to convert it to one.\n - upstream_tasks ([object], optional): Tasks that will run before the task runs. If any task\n is not a Task subclass, Prefect will attempt to convert it to one.\n - downstream_tasks ([object], optional): Tasks that will run after the task runs. If any task\n is not a Task subclass, Prefect will attempt to convert it to one.\n - keyword_tasks ({key: object}, optional): The results of these tasks\n will be provided to the task under the specified keyword\n arguments. If any task is not a Task subclass, Prefect will attempt to\n convert it to one.\n - mapped (bool, optional): Whether the upstream tasks (both keyed\n and non-keyed) should be mapped over; defaults to `False`. If `True`, any\n tasks wrapped in the `prefect.utilities.tasks.unmapped` container will\n _not_ be mapped over.\n - validate (bool, optional): Whether or not to check the validity of\n the flow (e.g., presence of cycles). 
Defaults to the value of `eager_edge_validation`\n in your Prefect configuration file.\n\n Returns:\n - None\n \"\"\"\n\n task = as_task(task)\n assert isinstance(task, Task) # mypy assert\n\n # add the main task (in case it was called with no arguments)\n self.add_task(task)\n\n # add upstream tasks\n for t in upstream_tasks or []:\n is_mapped = mapped & (not isinstance(t, unmapped))\n t = as_task(t)\n assert isinstance(t, Task) # mypy assert\n self.add_edge(\n upstream_task=t,\n downstream_task=task,\n validate=validate,\n mapped=is_mapped,\n )\n\n # add downstream tasks\n for t in downstream_tasks or []:\n t = as_task(t)\n assert isinstance(t, Task) # mypy assert\n self.add_edge(upstream_task=task, downstream_task=t, validate=validate)\n\n # add data edges to upstream tasks\n for key, t in (keyword_tasks or {}).items():\n is_mapped = mapped & (not isinstance(t, unmapped))\n t = as_task(t)\n assert isinstance(t, Task) # mypy assert\n self.add_edge(\n upstream_task=t,\n downstream_task=task,\n key=key,\n validate=validate,\n mapped=is_mapped,\n )\n\n # Execution ---------------------------------------------------------------\n\n def _run_on_schedule(\n self, parameters: Dict[str, Any], runner_cls: type, **kwargs: Any\n ) -> \"prefect.engine.state.State\":\n\n ## determine time of first run\n try:\n if self.schedule is not None:\n next_run_time = self.schedule.next(1)[0]\n else:\n next_run_time = pendulum.now(\"utc\")\n except IndexError:\n raise ValueError(\"Flow has no more scheduled runs.\") from None\n\n ## setup initial states\n flow_state = prefect.engine.state.Scheduled(start_time=next_run_time, result={})\n flow_state = kwargs.pop(\"state\", flow_state)\n if not isinstance(flow_state.result, dict):\n flow_state.result = {}\n task_states = kwargs.pop(\"task_states\", {})\n flow_state.result.update(task_states)\n prefect.context.caches = {}\n\n ## run this flow indefinitely, so long as its schedule has future dates\n while True:\n if flow_state.is_scheduled():\n next_run_time = flow_state.start_time\n now = pendulum.now(\"utc\")\n naptime = max((next_run_time - now).total_seconds(), 0)\n if naptime > 0:\n self.logger.info(\n \"Waiting for next scheduled run at {}\".format(next_run_time)\n )\n time.sleep(naptime)\n\n ## begin a single flow run\n while not flow_state.is_finished():\n runner = runner_cls(flow=self)\n flow_state = runner.run(\n parameters=parameters,\n return_tasks=self.tasks,\n state=flow_state,\n task_states=flow_state.result,\n **kwargs\n )\n if not isinstance(flow_state.result, dict):\n return flow_state # something went wrong\n\n task_states = list(flow_state.result.values())\n for s in filter(lambda x: x.is_mapped(), task_states):\n task_states.extend(s.map_states)\n earliest_start = min(\n [s.start_time for s in task_states if s.is_scheduled()],\n default=pendulum.now(\"utc\"),\n )\n\n ## wait until first task is ready for retry\n now = pendulum.now(\"utc\")\n naptime = max((earliest_start - now).total_seconds(), 0)\n if naptime > 0:\n self.logger.info(\n \"Waiting for next available Task run at {}\".format(\n earliest_start\n )\n )\n time.sleep(naptime)\n\n ## create next scheduled run\n try:\n # update context cache\n for t, s in flow_state.result.items():\n if s.is_cached():\n cached_sub_states = [s]\n elif s.is_mapped() and any(\n sub_state.is_cached() for sub_state in s.map_states\n ):\n cached_sub_states = [\n sub_state\n for sub_state in s.map_states\n if sub_state.is_cached()\n ]\n else:\n cached_sub_states = []\n\n fresh_states = [\n s\n for s in 
prefect.context.caches.get(t.name, [])\n + cached_sub_states\n if s.cached_result_expiration > now\n ]\n prefect.context.caches[t.name] = fresh_states\n if self.schedule is not None:\n next_run_time = self.schedule.next(1)[0]\n else:\n break\n except IndexError:\n break\n flow_state = prefect.engine.state.Scheduled(\n start_time=next_run_time, result={}\n )\n return flow_state\n\n def run(\n self,\n parameters: Dict[str, Any] = None,\n run_on_schedule: bool = None,\n runner_cls: type = None,\n **kwargs: Any\n ) -> \"prefect.engine.state.State\":\n \"\"\"\n Run the flow on its schedule using an instance of a FlowRunner. If the Flow has no schedule,\n a single stateful run will occur (including retries).\n\n Note that this command will block and run this Flow on its schedule indefinitely (if it has one);\n all task states will be stored in memory, and task retries will not occur until every Task in the Flow has had a chance\n to run.\n\n Args:\n - parameters (Dict[str, Any], optional): values to pass into the runner\n - run_on_schedule (bool, optional): whether to run this flow on its schedule, or simply run a single execution;\n if not provided, will default to the value set in your user config\n - runner_cls (type): an optional FlowRunner class (will use the default if not provided)\n - **kwargs: additional keyword arguments; if any provided keywords\n match known parameter names, they will be used as such. Otherwise they will be passed to the\n `FlowRunner.run()` method\n\n Raises:\n - ValueError: if this Flow has a Schedule with no more scheduled runs\n - ValueError: if the `return_tasks` keyword argument is provided\n\n Returns:\n - State: the state of the flow after its final run\n \"\"\"\n # protect against old behavior\n if \"return_tasks\" in kwargs:\n raise ValueError(\n \"The `return_tasks` keyword cannot be provided to `flow.run()`; \"\n \"all task states are always returned. If you want to receive a subset \"\n \"of task states, use a FlowRunner directly.\"\n )\n\n if runner_cls is None:\n runner_cls = prefect.engine.get_default_flow_runner_class()\n\n # build parameters from passed dictionary and also kwargs\n parameters = parameters or {}\n for p in self.parameters():\n if p.name in kwargs:\n parameters[p.name] = kwargs.pop(p.name)\n\n # check for parameters that don't match the flow\n unknown_params = [\n p for p in parameters if p not in {fp.name for fp in self.parameters()}\n ]\n if unknown_params:\n fmt_params = \", \".join(unknown_params)\n raise ValueError(\n \"Flow.run received the following unexpected parameters: {}\".format(\n fmt_params\n )\n )\n\n # check for parameters that are required by the flow, but weren't passed\n missing_params = [\n p.name for p in self.parameters() if p.required and p.name not in parameters\n ]\n if missing_params:\n fmt_params = \", \".join(missing_params)\n raise ValueError(\n \"Flow.run did not receive the following required parameters: {}\".format(\n fmt_params\n )\n )\n\n if run_on_schedule is None:\n run_on_schedule = cast(bool, prefect.config.flows.run_on_schedule)\n if run_on_schedule is False:\n runner = runner_cls(flow=self)\n state = runner.run(parameters=parameters, **kwargs)\n else:\n state = self._run_on_schedule(\n parameters=parameters, runner_cls=runner_cls, **kwargs\n )\n\n # state always should return a dict of tasks. 
If it's NoResult (meaning the run was\n # interrupted before any tasks were executed), we set the dict manually.\n if state.result == NoResult:\n state.result = {}\n elif isinstance(state.result, Exception):\n self.logger.error(\n \"Unexpected error occured in {runner}: {exc}\".format(\n runner=runner_cls.__name__, exc=repr(state.result)\n )\n )\n return state\n\n for task in self.tasks or []:\n if task not in state.result:\n state.result[task] = prefect.engine.state.Pending(\n message=\"Task not run.\"\n )\n return state\n\n # Visualization ------------------------------------------------------------\n\n def visualize(\n self, flow_state: \"prefect.engine.state.State\" = None, filename: str = None\n ) -> object:\n \"\"\"\n Creates graphviz object for representing the current flow; this graphviz\n object will be rendered inline if called from an IPython notebook, otherwise\n it will be rendered in a new window. If a `filename` is provided, the object\n will not be rendered and instead saved to the location specified.\n\n Args:\n - flow_state (State, optional): flow state object used to optionally color the nodes\n - filename (str, optional): a filename specifying a location to save this visualization to; if provided,\n the visualization will not be rendered automatically\n\n Raises:\n - ImportError: if `graphviz` is not installed\n \"\"\"\n\n try:\n import graphviz\n except ImportError:\n msg = (\n \"This feature requires graphviz.\\n\"\n \"Try re-installing prefect with `pip install prefect[viz]`\"\n )\n raise ImportError(msg)\n\n def get_color(task: Task, map_index: int = None) -> str:\n assert flow_state\n assert isinstance(flow_state.result, dict)\n\n if map_index is not None:\n state = flow_state.result[task].map_states[map_index]\n else:\n state = flow_state.result.get(task)\n if state is not None:\n assert state is not None # mypy assert\n return state.color + \"80\"\n return \"#00000080\"\n\n graph = graphviz.Digraph()\n\n for t in self.tasks:\n is_mapped = any(edge.mapped for edge in self.edges_to(t))\n shape = \"box\" if is_mapped else \"ellipse\"\n name = \"{} <map>\".format(t.name) if is_mapped else t.name\n if is_mapped and flow_state:\n assert isinstance(flow_state.result, dict)\n for map_index, _ in enumerate(flow_state.result[t].map_states):\n kwargs = dict(\n color=get_color(t, map_index=map_index),\n style=\"filled\",\n colorscheme=\"svg\",\n )\n graph.node(str(id(t)) + str(map_index), name, shape=shape, **kwargs)\n else:\n kwargs = (\n {}\n if not flow_state\n else dict(color=get_color(t), style=\"filled\", colorscheme=\"svg\")\n )\n graph.node(str(id(t)), name, shape=shape, **kwargs)\n\n for e in self.edges:\n style = \"dashed\" if e.mapped else None\n if (\n e.mapped\n or any(edge.mapped for edge in self.edges_to(e.downstream_task))\n ) and flow_state:\n assert isinstance(flow_state.result, dict)\n for map_index, _ in enumerate(\n flow_state.result[e.downstream_task].map_states\n ):\n upstream_id = str(id(e.upstream_task))\n if any(edge.mapped for edge in self.edges_to(e.upstream_task)):\n upstream_id += str(map_index)\n graph.edge(\n upstream_id,\n str(id(e.downstream_task)) + str(map_index),\n e.key,\n style=style,\n )\n else:\n graph.edge(\n str(id(e.upstream_task)),\n str(id(e.downstream_task)),\n e.key,\n style=style,\n )\n\n if filename:\n graph.render(filename, view=False)\n else:\n try:\n from IPython import get_ipython\n\n assert get_ipython().config.get(\"IPKernelApp\") is not None\n except Exception:\n with tempfile.NamedTemporaryFile(delete=False) as tmp:\n 
tmp.close()\n try:\n graph.render(tmp.name, view=True)\n finally:\n os.unlink(tmp.name)\n\n return graph\n\n # Building / Serialization ----------------------------------------------------\n\n def serialize(self, build: bool = False) -> dict:\n \"\"\"\n Creates a serialized representation of the flow.\n\n Args:\n - build (bool, optional): if `True`, the flow's environment is built\n prior to serialization\n\n Returns:\n - dict representing the flow\n \"\"\"\n\n self.validate()\n schema = prefect.serialization.flow.FlowSchema\n serialized = schema(exclude=[\"storage\"]).dump(self)\n\n if build:\n if not self.storage:\n raise ValueError(\"This flow has no storage to build\")\n if self.name not in self.storage:\n self.storage.add_flow(self)\n else:\n warnings.warn(\n \"A flow with the same name is already contained in storage; if you changed your Flow since\"\n \" the last build, you might experience unexpected issues and should re-create your storage object.\"\n )\n storage = self.storage.build() # type: Optional[Storage]\n else:\n storage = self.storage\n\n serialized.update(schema(only=[\"storage\"]).dump({\"storage\": storage}))\n\n return serialized\n\n # Deployment ------------------------------------------------------------------\n\n def deploy(\n self, project_name: str, build: bool = True, set_schedule_active: bool = True\n ) -> str:\n \"\"\"\n Deploy the flow to Prefect Cloud\n\n Args:\n - project_name (str): the project that should contain this flow.\n - build (bool, optional): if `True`, the flow's environment is built\n prior to serialization; defaults to `True`\n - set_schedule_active (bool, optional): if `False`, will set the\n schedule to inactive in the database to prevent auto-scheduling runs (if the Flow has a schedule).\n Defaults to `True`. This can be changed later.\n\n Returns:\n - str: the ID of the flow that was deployed\n \"\"\"\n client = prefect.Client()\n deployed_flow = client.deploy(\n flow=self,\n build=build,\n project_name=project_name,\n set_schedule_active=set_schedule_active,\n )\n return deployed_flow\n\n def __mifflin__(self) -> None: # coverage: ignore\n \"Calls Dunder Mifflin\"\n import webbrowser\n\n webbrowser.open(\"https://cicdw.github.io/welcome.html\")\n", "path": "src/prefect/core/flow.py" } ]
diff --git a/CHANGELOG.md b/CHANGELOG.md index 6aff68b6a3d3..50eddd5ca746 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -28,6 +28,7 @@ These changes are available in the [master branch](https://github.com/PrefectHQ/ - Fix issue with Result Handlers deserializing incorrectly in Cloud - [#1112](https://github.com/PrefectHQ/prefect/issues/1112) - Fix issue caused by breaking change in `marshmallow==3.0.0rc7` - [#1151](https://github.com/PrefectHQ/prefect/pull/1151) - Fix issue with passing results to Prefect signals - [#1163](https://github.com/PrefectHQ/prefect/issues/1163) +- Fix issue with `flow.update` not preserving mapped edges - [#1164](https://github.com/PrefectHQ/prefect/issues/1164) ### Breaking Changes diff --git a/src/prefect/core/flow.py b/src/prefect/core/flow.py index b86c97db35fe..5a97c97e47a6 100644 --- a/src/prefect/core/flow.py +++ b/src/prefect/core/flow.py @@ -548,6 +548,7 @@ def update(self, flow: "Flow", validate: bool = None) -> None: upstream_task=edge.upstream_task, downstream_task=edge.downstream_task, key=edge.key, + mapped=edge.mapped, validate=validate, ) diff --git a/tests/core/test_flow.py b/tests/core/test_flow.py index aca3d7e38526..e03fbfc8bf1c 100644 --- a/tests/core/test_flow.py +++ b/tests/core/test_flow.py @@ -697,7 +697,7 @@ def test_equality_based_on_reference_tasks(self): assert f1 == f2 -def test_merge(): +def test_update(): f1 = Flow(name="test") f2 = Flow(name="test") @@ -709,10 +709,27 @@ def test_merge(): f2.add_edge(t2, t3) f2.update(f1) - assert f2.tasks == set([t1, t2, t3]) + assert f2.tasks == {t1, t2, t3} assert len(f2.edges) == 2 +def test_update_with_mapped_edges(): + t1 = Task() + t2 = Task() + t3 = Task() + + with Flow(name="test") as f1: + m = t2.map(upstream_tasks=[t1]) + + f2 = Flow(name="test") + f2.add_edge(t2, t3) + + f2.update(f1) + assert f2.tasks == {m, t1, t2, t3} + assert len(f2.edges) == 2 + assert len([e for e in f2.edges if e.mapped]) == 1 + + def test_upstream_and_downstream_error_msgs_when_task_is_not_in_flow(): f = Flow(name="test") t = Task()
Bitmessage__PyBitmessage-726
Trouble sending on multicore machines on 0.4.4

I've seen this on both an OSX box (8 cores) and a linux box (4 cores). I was only able to do the full repro on linux, as my `keys.dat` file prevented me from going back to 0.4.3 on the OSX box.

1. Check out v0.4.3.
2. Open top.
3. Open bitmessage.
4. Send a message.
5. Processes will start up for each core in top to calculate the PoW more quickly. Message will send.
6. Close bitmessage.
7. Check out `ProtoV3`.
8. Send a message.
9. Processes will fire up in top. They'll consume 100% CPU for a few minutes. One by one, the CPU usage on each process will drop to zero.
10. The bitmessage app will still say that we're doing work to calculate the PoW. The message never sends.
[ { "content": "#!/usr/bin/env python2.7\n# Copyright (c) 2012 Jonathan Warren\n# Copyright (c) 2012 The Bitmessage developers\n# Distributed under the MIT/X11 software license. See the accompanying\n# file COPYING or http://www.opensource.org/licenses/mit-license.php.\n\n# Right now, PyBitmessage only support connecting to stream 1. It doesn't\n# yet contain logic to expand into further streams.\n\n# The software version variable is now held in shared.py\n\n\nimport sys\n#Version check\n#Older versions of Python don't support the print function while Python 3 doesn't\n#like the print statement, so we use sys.stdout for the version check. After this\n#check we can then use the print function in the remainder of this file. Currently\n#in order to use logging, a lot of unnecessary code needs to be executed which could\n#potentially render this version check useless. So logging won't be used here until\n#there is a more efficient way to configure logging\nif sys.hexversion >= 0x3000000:\n msg = \"PyBitmessage does not support Python 3. Python 2.7.3 or later is required. Your version: %s\" % sys.version\n #logger.critical(msg)\n sys.stdout.write(msg)\n sys.exit(0)\nif sys.hexversion < 0x20703F0:\n msg = \"You should use Python 2.7.3 or greater (but not Python 3). Your version: %s\" % sys.version\n #logger.critical(msg)\n sys.stdout.write(msg)\n sys.exit(0)\n\nimport signal # Used to capture a Ctrl-C keypress so that Bitmessage can shutdown gracefully.\n# The next 3 are used for the API\nimport singleton\nimport os\nimport socket\nimport ctypes\nfrom struct import pack\n\nfrom SimpleXMLRPCServer import SimpleXMLRPCServer\nfrom api import MySimpleXMLRPCRequestHandler\nfrom helper_startup import isOurOperatingSystemLimitedToHavingVeryFewHalfOpenConnections\n\nimport shared\nfrom helper_sql import sqlQuery\nimport threading\n\n# Classes\n#from helper_sql import *\n#from class_sqlThread import *\nfrom class_sqlThread import sqlThread\nfrom class_singleCleaner import singleCleaner\n#from class_singleWorker import *\nfrom class_objectProcessor import objectProcessor\nfrom class_outgoingSynSender import outgoingSynSender\nfrom class_singleListener import singleListener\nfrom class_singleWorker import singleWorker\n#from class_addressGenerator import *\nfrom class_addressGenerator import addressGenerator\nfrom debug import logger\n\n# Helper Functions\nimport helper_bootstrap\nimport helper_generic\n\nfrom subprocess import call\nimport time\n \n\ndef connectToStream(streamNumber):\n shared.streamsInWhichIAmParticipating[streamNumber] = 'no data'\n selfInitiatedConnections[streamNumber] = {}\n shared.inventorySets[streamNumber] = set()\n queryData = sqlQuery('''SELECT hash FROM inventory WHERE streamnumber=?''', streamNumber)\n for row in queryData:\n shared.inventorySets[streamNumber].add(row[0])\n\n \n if isOurOperatingSystemLimitedToHavingVeryFewHalfOpenConnections():\n # Some XP and Vista systems can only have 10 outgoing connections at a time.\n maximumNumberOfHalfOpenConnections = 9\n else:\n maximumNumberOfHalfOpenConnections = 64\n for i in range(maximumNumberOfHalfOpenConnections):\n a = outgoingSynSender()\n a.setup(streamNumber, selfInitiatedConnections)\n a.start()\n\ndef _fixWinsock():\n if not ('win32' in sys.platform) and not ('win64' in sys.platform):\n return\n\n # Python 2 on Windows doesn't define a wrapper for\n # socket.inet_ntop but we can make one ourselves using ctypes\n if not hasattr(socket, 'inet_ntop'):\n addressToString = ctypes.windll.ws2_32.WSAAddressToStringA\n def 
inet_ntop(family, host):\n if family == socket.AF_INET:\n if len(host) != 4:\n raise ValueError(\"invalid IPv4 host\")\n host = pack(\"hH4s8s\", socket.AF_INET, 0, host, \"\\0\" * 8)\n elif family == socket.AF_INET6:\n if len(host) != 16:\n raise ValueError(\"invalid IPv6 host\")\n host = pack(\"hHL16sL\", socket.AF_INET6, 0, 0, host, 0)\n else:\n raise ValueError(\"invalid address family\")\n buf = \"\\0\" * 64\n lengthBuf = pack(\"I\", len(buf))\n addressToString(host, len(host), None, buf, lengthBuf)\n return buf[0:buf.index(\"\\0\")]\n socket.inet_ntop = inet_ntop\n\n # Same for inet_pton\n if not hasattr(socket, 'inet_pton'):\n stringToAddress = ctypes.windll.ws2_32.WSAStringToAddressA\n def inet_pton(family, host):\n buf = \"\\0\" * 28\n lengthBuf = pack(\"I\", len(buf))\n if stringToAddress(str(host),\n int(family),\n None,\n buf,\n lengthBuf) != 0:\n raise socket.error(\"illegal IP address passed to inet_pton\")\n if family == socket.AF_INET:\n return buf[4:8]\n elif family == socket.AF_INET6:\n return buf[8:24]\n else:\n raise ValueError(\"invalid address family\")\n socket.inet_pton = inet_pton\n\n # These sockopts are needed on for IPv6 support\n if not hasattr(socket, 'IPPROTO_IPV6'):\n socket.IPPROTO_IPV6 = 41\n if not hasattr(socket, 'IPV6_V6ONLY'):\n socket.IPV6_V6ONLY = 27\n\n# This thread, of which there is only one, runs the API.\nclass singleAPI(threading.Thread):\n\n def __init__(self):\n threading.Thread.__init__(self)\n\n def run(self):\n se = SimpleXMLRPCServer((shared.config.get('bitmessagesettings', 'apiinterface'), shared.config.getint(\n 'bitmessagesettings', 'apiport')), MySimpleXMLRPCRequestHandler, True, True)\n se.register_introspection_functions()\n se.serve_forever()\n\n# This is a list of current connections (the thread pointers at least)\nselfInitiatedConnections = {}\n\nif shared.useVeryEasyProofOfWorkForTesting:\n shared.networkDefaultProofOfWorkNonceTrialsPerByte = int(\n shared.networkDefaultProofOfWorkNonceTrialsPerByte / 16)\n shared.networkDefaultPayloadLengthExtraBytes = int(\n shared.networkDefaultPayloadLengthExtraBytes / 7000)\n\nclass Main:\n def start(self, daemon=False):\n _fixWinsock()\n\n shared.daemon = daemon\n # is the application already running? If yes then exit.\n thisapp = singleton.singleinstance()\n\n # get curses flag\n curses = False\n if '-c' in sys.argv:\n curses = True\n\n signal.signal(signal.SIGINT, helper_generic.signal_handler)\n signal.signal(signal.SIGTERM, helper_generic.signal_handler)\n # signal.signal(signal.SIGINT, signal.SIG_DFL)\n\n helper_bootstrap.knownNodes()\n # Start the address generation thread\n addressGeneratorThread = addressGenerator()\n addressGeneratorThread.daemon = True # close the main program even if there are threads left\n addressGeneratorThread.start()\n\n # Start the thread that calculates POWs\n singleWorkerThread = singleWorker()\n singleWorkerThread.daemon = True # close the main program even if there are threads left\n singleWorkerThread.start()\n\n # Start the SQL thread\n sqlLookup = sqlThread()\n sqlLookup.daemon = False # DON'T close the main program even if there are threads left. The closeEvent should command this thread to exit gracefully.\n sqlLookup.start()\n\n # Start the thread that calculates POWs\n objectProcessorThread = objectProcessor()\n objectProcessorThread.daemon = False # DON'T close the main program even the thread remains. 
This thread checks the shutdown variable after processing each object.\n objectProcessorThread.start()\n\n # Start the cleanerThread\n singleCleanerThread = singleCleaner()\n singleCleanerThread.daemon = True # close the main program even if there are threads left\n singleCleanerThread.start()\n\n shared.reloadMyAddressHashes()\n shared.reloadBroadcastSendersForWhichImWatching()\n\n if shared.safeConfigGetBoolean('bitmessagesettings', 'apienabled'):\n try:\n apiNotifyPath = shared.config.get(\n 'bitmessagesettings', 'apinotifypath')\n except:\n apiNotifyPath = ''\n if apiNotifyPath != '':\n with shared.printLock:\n print('Trying to call', apiNotifyPath)\n\n call([apiNotifyPath, \"startingUp\"])\n singleAPIThread = singleAPI()\n singleAPIThread.daemon = True # close the main program even if there are threads left\n singleAPIThread.start()\n\n connectToStream(1)\n\n singleListenerThread = singleListener()\n singleListenerThread.setup(selfInitiatedConnections)\n singleListenerThread.daemon = True # close the main program even if there are threads left\n singleListenerThread.start()\n\n if daemon == False and shared.safeConfigGetBoolean('bitmessagesettings', 'daemon') == False:\n if curses == False:\n try:\n from PyQt4 import QtCore, QtGui\n except Exception as err:\n print('PyBitmessage requires PyQt unless you want to run it as a daemon and interact with it using the API. You can download PyQt from http://www.riverbankcomputing.com/software/pyqt/download or by searching Google for \\'PyQt Download\\'. If you want to run in daemon mode, see https://bitmessage.org/wiki/Daemon')\n print('Error message:', err)\n print('You can also run PyBitmessage with the new curses interface by providing \\'-c\\' as a commandline argument.')\n os._exit(0)\n\n import bitmessageqt\n bitmessageqt.run()\n else:\n print('Running with curses')\n import bitmessagecurses\n bitmessagecurses.runwrapper()\n else:\n shared.config.remove_option('bitmessagesettings', 'dontconnect')\n\n if daemon:\n with shared.printLock:\n print('Running as a daemon. The main program should exit this thread.')\n else:\n with shared.printLock:\n print('Running as a daemon. You can use Ctrl+C to exit.')\n while True:\n time.sleep(20)\n\n def stop(self):\n with shared.printLock:\n print('Stopping Bitmessage Deamon.')\n shared.doCleanShutdown()\n\n\n #TODO: nice function but no one is using this \n def getApiAddress(self):\n if not shared.safeConfigGetBoolean('bitmessagesettings', 'apienabled'):\n return None\n address = shared.config.get('bitmessagesettings', 'apiinterface')\n port = shared.config.getint('bitmessagesettings', 'apiport')\n return {'address':address,'port':port}\n\nif __name__ == \"__main__\":\n mainprogram = Main()\n mainprogram.start()\n\n\n# So far, the creation of and management of the Bitmessage protocol and this\n# client is a one-man operation. Bitcoin tips are quite appreciated.\n# 1H5XaDA6fYENLbknwZyjiYXYPQaFjjLX2u\n", "path": "src/bitmessagemain.py" } ]
[ { "content": "#!/usr/bin/env python2.7\n# Copyright (c) 2012 Jonathan Warren\n# Copyright (c) 2012 The Bitmessage developers\n# Distributed under the MIT/X11 software license. See the accompanying\n# file COPYING or http://www.opensource.org/licenses/mit-license.php.\n\n# Right now, PyBitmessage only support connecting to stream 1. It doesn't\n# yet contain logic to expand into further streams.\n\n# The software version variable is now held in shared.py\n\n\nimport sys\n#Version check\n#Older versions of Python don't support the print function while Python 3 doesn't\n#like the print statement, so we use sys.stdout for the version check. After this\n#check we can then use the print function in the remainder of this file. Currently\n#in order to use logging, a lot of unnecessary code needs to be executed which could\n#potentially render this version check useless. So logging won't be used here until\n#there is a more efficient way to configure logging\nif sys.hexversion >= 0x3000000:\n msg = \"PyBitmessage does not support Python 3. Python 2.7.3 or later is required. Your version: %s\" % sys.version\n #logger.critical(msg)\n sys.stdout.write(msg)\n sys.exit(0)\nif sys.hexversion < 0x20703F0:\n msg = \"You should use Python 2.7.3 or greater (but not Python 3). Your version: %s\" % sys.version\n #logger.critical(msg)\n sys.stdout.write(msg)\n sys.exit(0)\n\nimport signal # Used to capture a Ctrl-C keypress so that Bitmessage can shutdown gracefully.\n# The next 3 are used for the API\nimport singleton\nimport os\nimport socket\nimport ctypes\nfrom struct import pack\n\nfrom SimpleXMLRPCServer import SimpleXMLRPCServer\nfrom api import MySimpleXMLRPCRequestHandler\nfrom helper_startup import isOurOperatingSystemLimitedToHavingVeryFewHalfOpenConnections\n\nimport shared\nfrom helper_sql import sqlQuery\nimport threading\n\n# Classes\n#from helper_sql import *\n#from class_sqlThread import *\nfrom class_sqlThread import sqlThread\nfrom class_singleCleaner import singleCleaner\n#from class_singleWorker import *\nfrom class_objectProcessor import objectProcessor\nfrom class_outgoingSynSender import outgoingSynSender\nfrom class_singleListener import singleListener\nfrom class_singleWorker import singleWorker\n#from class_addressGenerator import *\nfrom class_addressGenerator import addressGenerator\nfrom debug import logger\n\n# Helper Functions\nimport helper_bootstrap\nimport helper_generic\n\nfrom subprocess import call\nimport time\n \n\ndef connectToStream(streamNumber):\n shared.streamsInWhichIAmParticipating[streamNumber] = 'no data'\n selfInitiatedConnections[streamNumber] = {}\n shared.inventorySets[streamNumber] = set()\n queryData = sqlQuery('''SELECT hash FROM inventory WHERE streamnumber=?''', streamNumber)\n for row in queryData:\n shared.inventorySets[streamNumber].add(row[0])\n\n \n if isOurOperatingSystemLimitedToHavingVeryFewHalfOpenConnections():\n # Some XP and Vista systems can only have 10 outgoing connections at a time.\n maximumNumberOfHalfOpenConnections = 9\n else:\n maximumNumberOfHalfOpenConnections = 64\n for i in range(maximumNumberOfHalfOpenConnections):\n a = outgoingSynSender()\n a.setup(streamNumber, selfInitiatedConnections)\n a.start()\n\ndef _fixWinsock():\n if not ('win32' in sys.platform) and not ('win64' in sys.platform):\n return\n\n # Python 2 on Windows doesn't define a wrapper for\n # socket.inet_ntop but we can make one ourselves using ctypes\n if not hasattr(socket, 'inet_ntop'):\n addressToString = ctypes.windll.ws2_32.WSAAddressToStringA\n def 
inet_ntop(family, host):\n if family == socket.AF_INET:\n if len(host) != 4:\n raise ValueError(\"invalid IPv4 host\")\n host = pack(\"hH4s8s\", socket.AF_INET, 0, host, \"\\0\" * 8)\n elif family == socket.AF_INET6:\n if len(host) != 16:\n raise ValueError(\"invalid IPv6 host\")\n host = pack(\"hHL16sL\", socket.AF_INET6, 0, 0, host, 0)\n else:\n raise ValueError(\"invalid address family\")\n buf = \"\\0\" * 64\n lengthBuf = pack(\"I\", len(buf))\n addressToString(host, len(host), None, buf, lengthBuf)\n return buf[0:buf.index(\"\\0\")]\n socket.inet_ntop = inet_ntop\n\n # Same for inet_pton\n if not hasattr(socket, 'inet_pton'):\n stringToAddress = ctypes.windll.ws2_32.WSAStringToAddressA\n def inet_pton(family, host):\n buf = \"\\0\" * 28\n lengthBuf = pack(\"I\", len(buf))\n if stringToAddress(str(host),\n int(family),\n None,\n buf,\n lengthBuf) != 0:\n raise socket.error(\"illegal IP address passed to inet_pton\")\n if family == socket.AF_INET:\n return buf[4:8]\n elif family == socket.AF_INET6:\n return buf[8:24]\n else:\n raise ValueError(\"invalid address family\")\n socket.inet_pton = inet_pton\n\n # These sockopts are needed on for IPv6 support\n if not hasattr(socket, 'IPPROTO_IPV6'):\n socket.IPPROTO_IPV6 = 41\n if not hasattr(socket, 'IPV6_V6ONLY'):\n socket.IPV6_V6ONLY = 27\n\n# This thread, of which there is only one, runs the API.\nclass singleAPI(threading.Thread):\n\n def __init__(self):\n threading.Thread.__init__(self)\n\n def run(self):\n se = SimpleXMLRPCServer((shared.config.get('bitmessagesettings', 'apiinterface'), shared.config.getint(\n 'bitmessagesettings', 'apiport')), MySimpleXMLRPCRequestHandler, True, True)\n se.register_introspection_functions()\n se.serve_forever()\n\n# This is a list of current connections (the thread pointers at least)\nselfInitiatedConnections = {}\n\nif shared.useVeryEasyProofOfWorkForTesting:\n shared.networkDefaultProofOfWorkNonceTrialsPerByte = int(\n shared.networkDefaultProofOfWorkNonceTrialsPerByte / 16)\n shared.networkDefaultPayloadLengthExtraBytes = int(\n shared.networkDefaultPayloadLengthExtraBytes / 7000)\n\nclass Main:\n def start(self, daemon=False):\n _fixWinsock()\n\n shared.daemon = daemon\n # is the application already running? If yes then exit.\n thisapp = singleton.singleinstance()\n\n # get curses flag\n curses = False\n if '-c' in sys.argv:\n curses = True\n\n signal.signal(signal.SIGINT, helper_generic.signal_handler)\n # signal.signal(signal.SIGINT, signal.SIG_DFL)\n\n helper_bootstrap.knownNodes()\n # Start the address generation thread\n addressGeneratorThread = addressGenerator()\n addressGeneratorThread.daemon = True # close the main program even if there are threads left\n addressGeneratorThread.start()\n\n # Start the thread that calculates POWs\n singleWorkerThread = singleWorker()\n singleWorkerThread.daemon = True # close the main program even if there are threads left\n singleWorkerThread.start()\n\n # Start the SQL thread\n sqlLookup = sqlThread()\n sqlLookup.daemon = False # DON'T close the main program even if there are threads left. The closeEvent should command this thread to exit gracefully.\n sqlLookup.start()\n\n # Start the thread that calculates POWs\n objectProcessorThread = objectProcessor()\n objectProcessorThread.daemon = False # DON'T close the main program even the thread remains. 
This thread checks the shutdown variable after processing each object.\n objectProcessorThread.start()\n\n # Start the cleanerThread\n singleCleanerThread = singleCleaner()\n singleCleanerThread.daemon = True # close the main program even if there are threads left\n singleCleanerThread.start()\n\n shared.reloadMyAddressHashes()\n shared.reloadBroadcastSendersForWhichImWatching()\n\n if shared.safeConfigGetBoolean('bitmessagesettings', 'apienabled'):\n try:\n apiNotifyPath = shared.config.get(\n 'bitmessagesettings', 'apinotifypath')\n except:\n apiNotifyPath = ''\n if apiNotifyPath != '':\n with shared.printLock:\n print('Trying to call', apiNotifyPath)\n\n call([apiNotifyPath, \"startingUp\"])\n singleAPIThread = singleAPI()\n singleAPIThread.daemon = True # close the main program even if there are threads left\n singleAPIThread.start()\n\n connectToStream(1)\n\n singleListenerThread = singleListener()\n singleListenerThread.setup(selfInitiatedConnections)\n singleListenerThread.daemon = True # close the main program even if there are threads left\n singleListenerThread.start()\n\n if daemon == False and shared.safeConfigGetBoolean('bitmessagesettings', 'daemon') == False:\n if curses == False:\n try:\n from PyQt4 import QtCore, QtGui\n except Exception as err:\n print('PyBitmessage requires PyQt unless you want to run it as a daemon and interact with it using the API. You can download PyQt from http://www.riverbankcomputing.com/software/pyqt/download or by searching Google for \\'PyQt Download\\'. If you want to run in daemon mode, see https://bitmessage.org/wiki/Daemon')\n print('Error message:', err)\n print('You can also run PyBitmessage with the new curses interface by providing \\'-c\\' as a commandline argument.')\n os._exit(0)\n\n import bitmessageqt\n bitmessageqt.run()\n else:\n print('Running with curses')\n import bitmessagecurses\n bitmessagecurses.runwrapper()\n else:\n shared.config.remove_option('bitmessagesettings', 'dontconnect')\n\n if daemon:\n with shared.printLock:\n print('Running as a daemon. The main program should exit this thread.')\n else:\n with shared.printLock:\n print('Running as a daemon. You can use Ctrl+C to exit.')\n while True:\n time.sleep(20)\n\n def stop(self):\n with shared.printLock:\n print('Stopping Bitmessage Deamon.')\n shared.doCleanShutdown()\n\n\n #TODO: nice function but no one is using this \n def getApiAddress(self):\n if not shared.safeConfigGetBoolean('bitmessagesettings', 'apienabled'):\n return None\n address = shared.config.get('bitmessagesettings', 'apiinterface')\n port = shared.config.getint('bitmessagesettings', 'apiport')\n return {'address':address,'port':port}\n\nif __name__ == \"__main__\":\n mainprogram = Main()\n mainprogram.start()\n\n\n# So far, the creation of and management of the Bitmessage protocol and this\n# client is a one-man operation. Bitcoin tips are quite appreciated.\n# 1H5XaDA6fYENLbknwZyjiYXYPQaFjjLX2u\n", "path": "src/bitmessagemain.py" } ]
diff --git a/src/bitmessagemain.py b/src/bitmessagemain.py index fffe99e722..491b82f045 100755 --- a/src/bitmessagemain.py +++ b/src/bitmessagemain.py @@ -172,7 +172,6 @@ def start(self, daemon=False): curses = True signal.signal(signal.SIGINT, helper_generic.signal_handler) - signal.signal(signal.SIGTERM, helper_generic.signal_handler) # signal.signal(signal.SIGINT, signal.SIG_DFL) helper_bootstrap.knownNodes()
python-discord__bot-935
Modlog Events Should Check for DM
At least one of the event handlers in the modlog does not properly account for messages being sent in DMs and attempts a guild ID comparison, even though a DM has no guild. This should be guarded against across the cog.
```
AttributeError: 'NoneType' object has no attribute 'id'
  File "discord/client.py", line 312, in _run_event
    await coro(*args, **kwargs)
  File "bot/cogs/moderation/modlog.py", line 555, in on_message_delete
    if message.guild.id != GuildConstant.id or channel.id in GuildConstant.modlog_blacklist:
Unhandled exception in on_message_delete.
```
Sentry Issue: [BOT-44](https://sentry.io/organizations/python-discord/issues/1656406094/?referrer=github_integration)
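For clarity, here is a minimal sketch of the kind of guard being requested. It is only an illustration written against the `ModLog` cog from the file below, not the exact patch; the merged diff further down uses an equivalent `if not message.guild` check.

```python
# Sketch only: inside the ModLog cog, bail out early for DM messages,
# which have no guild, before any guild ID comparison is attempted.
@Cog.listener()
async def on_message_delete(self, message: discord.Message) -> None:
    """Log message delete event to message change log."""
    if message.guild is None:  # message was sent in a DM
        return
    if message.guild.id != GuildConstant.id or message.channel.id in GuildConstant.modlog_blacklist:
        return
    # ... rest of the handler unchanged ...
```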
[ { "content": "import asyncio\nimport difflib\nimport itertools\nimport logging\nimport typing as t\nfrom datetime import datetime\nfrom itertools import zip_longest\n\nimport discord\nfrom dateutil.relativedelta import relativedelta\nfrom deepdiff import DeepDiff\nfrom discord import Colour\nfrom discord.abc import GuildChannel\nfrom discord.ext.commands import Cog, Context\nfrom discord.utils import escape_markdown\n\nfrom bot.bot import Bot\nfrom bot.constants import Categories, Channels, Colours, Emojis, Event, Guild as GuildConstant, Icons, URLs\nfrom bot.utils.time import humanize_delta\n\nlog = logging.getLogger(__name__)\n\nGUILD_CHANNEL = t.Union[discord.CategoryChannel, discord.TextChannel, discord.VoiceChannel]\n\nCHANNEL_CHANGES_UNSUPPORTED = (\"permissions\",)\nCHANNEL_CHANGES_SUPPRESSED = (\"_overwrites\", \"position\")\nMEMBER_CHANGES_SUPPRESSED = (\"status\", \"activities\", \"_client_status\", \"nick\")\nROLE_CHANGES_UNSUPPORTED = (\"colour\", \"permissions\")\n\nVOICE_STATE_ATTRIBUTES = {\n \"channel.name\": \"Channel\",\n \"self_stream\": \"Streaming\",\n \"self_video\": \"Broadcasting\",\n}\n\n\nclass ModLog(Cog, name=\"ModLog\"):\n \"\"\"Logging for server events and staff actions.\"\"\"\n\n def __init__(self, bot: Bot):\n self.bot = bot\n self._ignored = {event: [] for event in Event}\n\n self._cached_deletes = []\n self._cached_edits = []\n\n async def upload_log(\n self,\n messages: t.Iterable[discord.Message],\n actor_id: int,\n attachments: t.Iterable[t.List[str]] = None\n ) -> str:\n \"\"\"Upload message logs to the database and return a URL to a page for viewing the logs.\"\"\"\n if attachments is None:\n attachments = []\n\n response = await self.bot.api_client.post(\n 'bot/deleted-messages',\n json={\n 'actor': actor_id,\n 'creation': datetime.utcnow().isoformat(),\n 'deletedmessage_set': [\n {\n 'id': message.id,\n 'author': message.author.id,\n 'channel_id': message.channel.id,\n 'content': message.content,\n 'embeds': [embed.to_dict() for embed in message.embeds],\n 'attachments': attachment,\n }\n for message, attachment in zip_longest(messages, attachments, fillvalue=[])\n ]\n }\n )\n\n return f\"{URLs.site_logs_view}/{response['id']}\"\n\n def ignore(self, event: Event, *items: int) -> None:\n \"\"\"Add event to ignored events to suppress log emission.\"\"\"\n for item in items:\n if item not in self._ignored[event]:\n self._ignored[event].append(item)\n\n async def send_log_message(\n self,\n icon_url: t.Optional[str],\n colour: t.Union[discord.Colour, int],\n title: t.Optional[str],\n text: str,\n thumbnail: t.Optional[t.Union[str, discord.Asset]] = None,\n channel_id: int = Channels.mod_log,\n ping_everyone: bool = False,\n files: t.Optional[t.List[discord.File]] = None,\n content: t.Optional[str] = None,\n additional_embeds: t.Optional[t.List[discord.Embed]] = None,\n additional_embeds_msg: t.Optional[str] = None,\n timestamp_override: t.Optional[datetime] = None,\n footer: t.Optional[str] = None,\n ) -> Context:\n \"\"\"Generate log embed and send to logging channel.\"\"\"\n # Truncate string directly here to avoid removing newlines\n embed = discord.Embed(\n description=text[:2045] + \"...\" if len(text) > 2048 else text\n )\n\n if title and icon_url:\n embed.set_author(name=title, icon_url=icon_url)\n\n embed.colour = colour\n embed.timestamp = timestamp_override or datetime.utcnow()\n\n if footer:\n embed.set_footer(text=footer)\n\n if thumbnail:\n embed.set_thumbnail(url=thumbnail)\n\n if ping_everyone:\n if content:\n content = 
f\"@everyone\\n{content}\"\n else:\n content = \"@everyone\"\n\n channel = self.bot.get_channel(channel_id)\n log_message = await channel.send(content=content, embed=embed, files=files)\n\n if additional_embeds:\n if additional_embeds_msg:\n await channel.send(additional_embeds_msg)\n for additional_embed in additional_embeds:\n await channel.send(embed=additional_embed)\n\n return await self.bot.get_context(log_message) # Optionally return for use with antispam\n\n @Cog.listener()\n async def on_guild_channel_create(self, channel: GUILD_CHANNEL) -> None:\n \"\"\"Log channel create event to mod log.\"\"\"\n if channel.guild.id != GuildConstant.id:\n return\n\n if isinstance(channel, discord.CategoryChannel):\n title = \"Category created\"\n message = f\"{channel.name} (`{channel.id}`)\"\n elif isinstance(channel, discord.VoiceChannel):\n title = \"Voice channel created\"\n\n if channel.category:\n message = f\"{channel.category}/{channel.name} (`{channel.id}`)\"\n else:\n message = f\"{channel.name} (`{channel.id}`)\"\n else:\n title = \"Text channel created\"\n\n if channel.category:\n message = f\"{channel.category}/{channel.name} (`{channel.id}`)\"\n else:\n message = f\"{channel.name} (`{channel.id}`)\"\n\n await self.send_log_message(Icons.hash_green, Colours.soft_green, title, message)\n\n @Cog.listener()\n async def on_guild_channel_delete(self, channel: GUILD_CHANNEL) -> None:\n \"\"\"Log channel delete event to mod log.\"\"\"\n if channel.guild.id != GuildConstant.id:\n return\n\n if isinstance(channel, discord.CategoryChannel):\n title = \"Category deleted\"\n elif isinstance(channel, discord.VoiceChannel):\n title = \"Voice channel deleted\"\n else:\n title = \"Text channel deleted\"\n\n if channel.category and not isinstance(channel, discord.CategoryChannel):\n message = f\"{channel.category}/{channel.name} (`{channel.id}`)\"\n else:\n message = f\"{channel.name} (`{channel.id}`)\"\n\n await self.send_log_message(\n Icons.hash_red, Colours.soft_red,\n title, message\n )\n\n @Cog.listener()\n async def on_guild_channel_update(self, before: GUILD_CHANNEL, after: GuildChannel) -> None:\n \"\"\"Log channel update event to mod log.\"\"\"\n if before.guild.id != GuildConstant.id:\n return\n\n if before.id in self._ignored[Event.guild_channel_update]:\n self._ignored[Event.guild_channel_update].remove(before.id)\n return\n\n # Two channel updates are sent for a single edit: 1 for topic and 1 for category change.\n # TODO: remove once support is added for ignoring multiple occurrences for the same channel.\n help_categories = (Categories.help_available, Categories.help_dormant, Categories.help_in_use)\n if after.category and after.category.id in help_categories:\n return\n\n diff = DeepDiff(before, after)\n changes = []\n done = []\n\n diff_values = diff.get(\"values_changed\", {})\n diff_values.update(diff.get(\"type_changes\", {}))\n\n for key, value in diff_values.items():\n if not key: # Not sure why, but it happens\n continue\n\n key = key[5:] # Remove \"root.\" prefix\n\n if \"[\" in key:\n key = key.split(\"[\", 1)[0]\n\n if \".\" in key:\n key = key.split(\".\", 1)[0]\n\n if key in done or key in CHANNEL_CHANGES_SUPPRESSED:\n continue\n\n if key in CHANNEL_CHANGES_UNSUPPORTED:\n changes.append(f\"**{key.title()}** updated\")\n else:\n new = value[\"new_value\"]\n old = value[\"old_value\"]\n\n # Discord does not treat consecutive backticks (\"``\") as an empty inline code block, so the markdown\n # formatting is broken when `new` and/or `old` are empty values. 
\"None\" is used for these cases so\n # formatting is preserved.\n changes.append(f\"**{key.title()}:** `{old or 'None'}` **→** `{new or 'None'}`\")\n\n done.append(key)\n\n if not changes:\n return\n\n message = \"\"\n\n for item in sorted(changes):\n message += f\"{Emojis.bullet} {item}\\n\"\n\n if after.category:\n message = f\"**{after.category}/#{after.name} (`{after.id}`)**\\n{message}\"\n else:\n message = f\"**#{after.name}** (`{after.id}`)\\n{message}\"\n\n await self.send_log_message(\n Icons.hash_blurple, Colour.blurple(),\n \"Channel updated\", message\n )\n\n @Cog.listener()\n async def on_guild_role_create(self, role: discord.Role) -> None:\n \"\"\"Log role create event to mod log.\"\"\"\n if role.guild.id != GuildConstant.id:\n return\n\n await self.send_log_message(\n Icons.crown_green, Colours.soft_green,\n \"Role created\", f\"`{role.id}`\"\n )\n\n @Cog.listener()\n async def on_guild_role_delete(self, role: discord.Role) -> None:\n \"\"\"Log role delete event to mod log.\"\"\"\n if role.guild.id != GuildConstant.id:\n return\n\n await self.send_log_message(\n Icons.crown_red, Colours.soft_red,\n \"Role removed\", f\"{role.name} (`{role.id}`)\"\n )\n\n @Cog.listener()\n async def on_guild_role_update(self, before: discord.Role, after: discord.Role) -> None:\n \"\"\"Log role update event to mod log.\"\"\"\n if before.guild.id != GuildConstant.id:\n return\n\n diff = DeepDiff(before, after)\n changes = []\n done = []\n\n diff_values = diff.get(\"values_changed\", {})\n diff_values.update(diff.get(\"type_changes\", {}))\n\n for key, value in diff_values.items():\n if not key: # Not sure why, but it happens\n continue\n\n key = key[5:] # Remove \"root.\" prefix\n\n if \"[\" in key:\n key = key.split(\"[\", 1)[0]\n\n if \".\" in key:\n key = key.split(\".\", 1)[0]\n\n if key in done or key == \"color\":\n continue\n\n if key in ROLE_CHANGES_UNSUPPORTED:\n changes.append(f\"**{key.title()}** updated\")\n else:\n new = value[\"new_value\"]\n old = value[\"old_value\"]\n\n changes.append(f\"**{key.title()}:** `{old}` **→** `{new}`\")\n\n done.append(key)\n\n if not changes:\n return\n\n message = \"\"\n\n for item in sorted(changes):\n message += f\"{Emojis.bullet} {item}\\n\"\n\n message = f\"**{after.name}** (`{after.id}`)\\n{message}\"\n\n await self.send_log_message(\n Icons.crown_blurple, Colour.blurple(),\n \"Role updated\", message\n )\n\n @Cog.listener()\n async def on_guild_update(self, before: discord.Guild, after: discord.Guild) -> None:\n \"\"\"Log guild update event to mod log.\"\"\"\n if before.id != GuildConstant.id:\n return\n\n diff = DeepDiff(before, after)\n changes = []\n done = []\n\n diff_values = diff.get(\"values_changed\", {})\n diff_values.update(diff.get(\"type_changes\", {}))\n\n for key, value in diff_values.items():\n if not key: # Not sure why, but it happens\n continue\n\n key = key[5:] # Remove \"root.\" prefix\n\n if \"[\" in key:\n key = key.split(\"[\", 1)[0]\n\n if \".\" in key:\n key = key.split(\".\", 1)[0]\n\n if key in done:\n continue\n\n new = value[\"new_value\"]\n old = value[\"old_value\"]\n\n changes.append(f\"**{key.title()}:** `{old}` **→** `{new}`\")\n\n done.append(key)\n\n if not changes:\n return\n\n message = \"\"\n\n for item in sorted(changes):\n message += f\"{Emojis.bullet} {item}\\n\"\n\n message = f\"**{after.name}** (`{after.id}`)\\n{message}\"\n\n await self.send_log_message(\n Icons.guild_update, Colour.blurple(),\n \"Guild updated\", message,\n thumbnail=after.icon_url_as(format=\"png\")\n )\n\n @Cog.listener()\n 
async def on_member_ban(self, guild: discord.Guild, member: discord.Member) -> None:\n \"\"\"Log ban event to user log.\"\"\"\n if guild.id != GuildConstant.id:\n return\n\n if member.id in self._ignored[Event.member_ban]:\n self._ignored[Event.member_ban].remove(member.id)\n return\n\n await self.send_log_message(\n Icons.user_ban, Colours.soft_red,\n \"User banned\", f\"{member} (`{member.id}`)\",\n thumbnail=member.avatar_url_as(static_format=\"png\"),\n channel_id=Channels.user_log\n )\n\n @Cog.listener()\n async def on_member_join(self, member: discord.Member) -> None:\n \"\"\"Log member join event to user log.\"\"\"\n if member.guild.id != GuildConstant.id:\n return\n\n member_str = escape_markdown(str(member))\n message = f\"{member_str} (`{member.id}`)\"\n now = datetime.utcnow()\n difference = abs(relativedelta(now, member.created_at))\n\n message += \"\\n\\n**Account age:** \" + humanize_delta(difference)\n\n if difference.days < 1 and difference.months < 1 and difference.years < 1: # New user account!\n message = f\"{Emojis.new} {message}\"\n\n await self.send_log_message(\n Icons.sign_in, Colours.soft_green,\n \"User joined\", message,\n thumbnail=member.avatar_url_as(static_format=\"png\"),\n channel_id=Channels.user_log\n )\n\n @Cog.listener()\n async def on_member_remove(self, member: discord.Member) -> None:\n \"\"\"Log member leave event to user log.\"\"\"\n if member.guild.id != GuildConstant.id:\n return\n\n if member.id in self._ignored[Event.member_remove]:\n self._ignored[Event.member_remove].remove(member.id)\n return\n\n member_str = escape_markdown(str(member))\n await self.send_log_message(\n Icons.sign_out, Colours.soft_red,\n \"User left\", f\"{member_str} (`{member.id}`)\",\n thumbnail=member.avatar_url_as(static_format=\"png\"),\n channel_id=Channels.user_log\n )\n\n @Cog.listener()\n async def on_member_unban(self, guild: discord.Guild, member: discord.User) -> None:\n \"\"\"Log member unban event to mod log.\"\"\"\n if guild.id != GuildConstant.id:\n return\n\n if member.id in self._ignored[Event.member_unban]:\n self._ignored[Event.member_unban].remove(member.id)\n return\n\n member_str = escape_markdown(str(member))\n await self.send_log_message(\n Icons.user_unban, Colour.blurple(),\n \"User unbanned\", f\"{member_str} (`{member.id}`)\",\n thumbnail=member.avatar_url_as(static_format=\"png\"),\n channel_id=Channels.mod_log\n )\n\n @Cog.listener()\n async def on_member_update(self, before: discord.Member, after: discord.Member) -> None:\n \"\"\"Log member update event to user log.\"\"\"\n if before.guild.id != GuildConstant.id:\n return\n\n if before.id in self._ignored[Event.member_update]:\n self._ignored[Event.member_update].remove(before.id)\n return\n\n diff = DeepDiff(before, after)\n changes = []\n done = []\n\n diff_values = {}\n\n diff_values.update(diff.get(\"values_changed\", {}))\n diff_values.update(diff.get(\"type_changes\", {}))\n diff_values.update(diff.get(\"iterable_item_removed\", {}))\n diff_values.update(diff.get(\"iterable_item_added\", {}))\n\n diff_user = DeepDiff(before._user, after._user)\n\n diff_values.update(diff_user.get(\"values_changed\", {}))\n diff_values.update(diff_user.get(\"type_changes\", {}))\n diff_values.update(diff_user.get(\"iterable_item_removed\", {}))\n diff_values.update(diff_user.get(\"iterable_item_added\", {}))\n\n for key, value in diff_values.items():\n if not key: # Not sure why, but it happens\n continue\n\n key = key[5:] # Remove \"root.\" prefix\n\n if \"[\" in key:\n key = key.split(\"[\", 1)[0]\n\n 
if \".\" in key:\n key = key.split(\".\", 1)[0]\n\n if key in done or key in MEMBER_CHANGES_SUPPRESSED:\n continue\n\n if key == \"_roles\":\n new_roles = after.roles\n old_roles = before.roles\n\n for role in old_roles:\n if role not in new_roles:\n changes.append(f\"**Role removed:** {role.name} (`{role.id}`)\")\n\n for role in new_roles:\n if role not in old_roles:\n changes.append(f\"**Role added:** {role.name} (`{role.id}`)\")\n\n else:\n new = value.get(\"new_value\")\n old = value.get(\"old_value\")\n\n if new and old:\n changes.append(f\"**{key.title()}:** `{old}` **→** `{new}`\")\n\n done.append(key)\n\n if before.name != after.name:\n changes.append(\n f\"**Username:** `{before.name}` **→** `{after.name}`\"\n )\n\n if before.discriminator != after.discriminator:\n changes.append(\n f\"**Discriminator:** `{before.discriminator}` **→** `{after.discriminator}`\"\n )\n\n if before.display_name != after.display_name:\n changes.append(\n f\"**Display name:** `{before.display_name}` **→** `{after.display_name}`\"\n )\n\n if not changes:\n return\n\n message = \"\"\n\n for item in sorted(changes):\n message += f\"{Emojis.bullet} {item}\\n\"\n\n member_str = escape_markdown(str(after))\n message = f\"**{member_str}** (`{after.id}`)\\n{message}\"\n\n await self.send_log_message(\n Icons.user_update, Colour.blurple(),\n \"Member updated\", message,\n thumbnail=after.avatar_url_as(static_format=\"png\"),\n channel_id=Channels.user_log\n )\n\n @Cog.listener()\n async def on_message_delete(self, message: discord.Message) -> None:\n \"\"\"Log message delete event to message change log.\"\"\"\n channel = message.channel\n author = message.author\n\n if message.guild.id != GuildConstant.id or channel.id in GuildConstant.modlog_blacklist:\n return\n\n self._cached_deletes.append(message.id)\n\n if message.id in self._ignored[Event.message_delete]:\n self._ignored[Event.message_delete].remove(message.id)\n return\n\n if author.bot:\n return\n\n author_str = escape_markdown(str(author))\n if channel.category:\n response = (\n f\"**Author:** {author_str} (`{author.id}`)\\n\"\n f\"**Channel:** {channel.category}/#{channel.name} (`{channel.id}`)\\n\"\n f\"**Message ID:** `{message.id}`\\n\"\n \"\\n\"\n )\n else:\n response = (\n f\"**Author:** {author_str} (`{author.id}`)\\n\"\n f\"**Channel:** #{channel.name} (`{channel.id}`)\\n\"\n f\"**Message ID:** `{message.id}`\\n\"\n \"\\n\"\n )\n\n if message.attachments:\n # Prepend the message metadata with the number of attachments\n response = f\"**Attachments:** {len(message.attachments)}\\n\" + response\n\n # Shorten the message content if necessary\n content = message.clean_content\n remaining_chars = 2040 - len(response)\n\n if len(content) > remaining_chars:\n botlog_url = await self.upload_log(messages=[message], actor_id=message.author.id)\n ending = f\"\\n\\nMessage truncated, [full message here]({botlog_url}).\"\n truncation_point = remaining_chars - len(ending)\n content = f\"{content[:truncation_point]}...{ending}\"\n\n response += f\"{content}\"\n\n await self.send_log_message(\n Icons.message_delete, Colours.soft_red,\n \"Message deleted\",\n response,\n channel_id=Channels.message_log\n )\n\n @Cog.listener()\n async def on_raw_message_delete(self, event: discord.RawMessageDeleteEvent) -> None:\n \"\"\"Log raw message delete event to message change log.\"\"\"\n if event.guild_id != GuildConstant.id or event.channel_id in GuildConstant.modlog_blacklist:\n return\n\n await asyncio.sleep(1) # Wait here in case the normal event was fired\n\n if 
event.message_id in self._cached_deletes:\n # It was in the cache and the normal event was fired, so we can just ignore it\n self._cached_deletes.remove(event.message_id)\n return\n\n if event.message_id in self._ignored[Event.message_delete]:\n self._ignored[Event.message_delete].remove(event.message_id)\n return\n\n channel = self.bot.get_channel(event.channel_id)\n\n if channel.category:\n response = (\n f\"**Channel:** {channel.category}/#{channel.name} (`{channel.id}`)\\n\"\n f\"**Message ID:** `{event.message_id}`\\n\"\n \"\\n\"\n \"This message was not cached, so the message content cannot be displayed.\"\n )\n else:\n response = (\n f\"**Channel:** #{channel.name} (`{channel.id}`)\\n\"\n f\"**Message ID:** `{event.message_id}`\\n\"\n \"\\n\"\n \"This message was not cached, so the message content cannot be displayed.\"\n )\n\n await self.send_log_message(\n Icons.message_delete, Colours.soft_red,\n \"Message deleted\",\n response,\n channel_id=Channels.message_log\n )\n\n @Cog.listener()\n async def on_message_edit(self, msg_before: discord.Message, msg_after: discord.Message) -> None:\n \"\"\"Log message edit event to message change log.\"\"\"\n if (\n not msg_before.guild\n or msg_before.guild.id != GuildConstant.id\n or msg_before.channel.id in GuildConstant.modlog_blacklist\n or msg_before.author.bot\n ):\n return\n\n self._cached_edits.append(msg_before.id)\n\n if msg_before.content == msg_after.content:\n return\n\n author = msg_before.author\n author_str = escape_markdown(str(author))\n\n channel = msg_before.channel\n channel_name = f\"{channel.category}/#{channel.name}\" if channel.category else f\"#{channel.name}\"\n\n # Getting the difference per words and group them by type - add, remove, same\n # Note that this is intended grouping without sorting\n diff = difflib.ndiff(msg_before.clean_content.split(), msg_after.clean_content.split())\n diff_groups = tuple(\n (diff_type, tuple(s[2:] for s in diff_words))\n for diff_type, diff_words in itertools.groupby(diff, key=lambda s: s[0])\n )\n\n content_before: t.List[str] = []\n content_after: t.List[str] = []\n\n for index, (diff_type, words) in enumerate(diff_groups):\n sub = ' '.join(words)\n if diff_type == '-':\n content_before.append(f\"[{sub}](http://o.hi)\")\n elif diff_type == '+':\n content_after.append(f\"[{sub}](http://o.hi)\")\n elif diff_type == ' ':\n if len(words) > 2:\n sub = (\n f\"{words[0] if index > 0 else ''}\"\n \" ... 
\"\n f\"{words[-1] if index < len(diff_groups) - 1 else ''}\"\n )\n content_before.append(sub)\n content_after.append(sub)\n\n response = (\n f\"**Author:** {author_str} (`{author.id}`)\\n\"\n f\"**Channel:** {channel_name} (`{channel.id}`)\\n\"\n f\"**Message ID:** `{msg_before.id}`\\n\"\n \"\\n\"\n f\"**Before**:\\n{' '.join(content_before)}\\n\"\n f\"**After**:\\n{' '.join(content_after)}\\n\"\n \"\\n\"\n f\"[Jump to message]({msg_after.jump_url})\"\n )\n\n if msg_before.edited_at:\n # Message was previously edited, to assist with self-bot detection, use the edited_at\n # datetime as the baseline and create a human-readable delta between this edit event\n # and the last time the message was edited\n timestamp = msg_before.edited_at\n delta = humanize_delta(relativedelta(msg_after.edited_at, msg_before.edited_at))\n footer = f\"Last edited {delta} ago\"\n else:\n # Message was not previously edited, use the created_at datetime as the baseline, no\n # delta calculation needed\n timestamp = msg_before.created_at\n footer = None\n\n await self.send_log_message(\n Icons.message_edit, Colour.blurple(), \"Message edited\", response,\n channel_id=Channels.message_log, timestamp_override=timestamp, footer=footer\n )\n\n @Cog.listener()\n async def on_raw_message_edit(self, event: discord.RawMessageUpdateEvent) -> None:\n \"\"\"Log raw message edit event to message change log.\"\"\"\n try:\n channel = self.bot.get_channel(int(event.data[\"channel_id\"]))\n message = await channel.fetch_message(event.message_id)\n except discord.NotFound: # Was deleted before we got the event\n return\n\n if (\n not message.guild\n or message.guild.id != GuildConstant.id\n or message.channel.id in GuildConstant.modlog_blacklist\n or message.author.bot\n ):\n return\n\n await asyncio.sleep(1) # Wait here in case the normal event was fired\n\n if event.message_id in self._cached_edits:\n # It was in the cache and the normal event was fired, so we can just ignore it\n self._cached_edits.remove(event.message_id)\n return\n\n author = message.author\n channel = message.channel\n channel_name = f\"{channel.category}/#{channel.name}\" if channel.category else f\"#{channel.name}\"\n\n before_response = (\n f\"**Author:** {author} (`{author.id}`)\\n\"\n f\"**Channel:** {channel_name} (`{channel.id}`)\\n\"\n f\"**Message ID:** `{message.id}`\\n\"\n \"\\n\"\n \"This message was not cached, so the message content cannot be displayed.\"\n )\n\n after_response = (\n f\"**Author:** {author} (`{author.id}`)\\n\"\n f\"**Channel:** {channel_name} (`{channel.id}`)\\n\"\n f\"**Message ID:** `{message.id}`\\n\"\n \"\\n\"\n f\"{message.clean_content}\"\n )\n\n await self.send_log_message(\n Icons.message_edit, Colour.blurple(), \"Message edited (Before)\",\n before_response, channel_id=Channels.message_log\n )\n\n await self.send_log_message(\n Icons.message_edit, Colour.blurple(), \"Message edited (After)\",\n after_response, channel_id=Channels.message_log\n )\n\n @Cog.listener()\n async def on_voice_state_update(\n self,\n member: discord.Member,\n before: discord.VoiceState,\n after: discord.VoiceState\n ) -> None:\n \"\"\"Log member voice state changes to the voice log channel.\"\"\"\n if (\n member.guild.id != GuildConstant.id\n or (before.channel and before.channel.id in GuildConstant.modlog_blacklist)\n ):\n return\n\n if member.id in self._ignored[Event.voice_state_update]:\n self._ignored[Event.voice_state_update].remove(member.id)\n return\n\n # Exclude all channel attributes except the name.\n diff = DeepDiff(\n before,\n 
after,\n exclude_paths=(\"root.session_id\", \"root.afk\"),\n exclude_regex_paths=r\"root\\.channel\\.(?!name)\",\n )\n\n # A type change seems to always take precedent over a value change. Furthermore, it will\n # include the value change along with the type change anyway. Therefore, it's OK to\n # \"overwrite\" values_changed; in practice there will never even be anything to overwrite.\n diff_values = {**diff.get(\"values_changed\", {}), **diff.get(\"type_changes\", {})}\n\n icon = Icons.voice_state_blue\n colour = Colour.blurple()\n changes = []\n\n for attr, values in diff_values.items():\n if not attr: # Not sure why, but it happens.\n continue\n\n old = values[\"old_value\"]\n new = values[\"new_value\"]\n\n attr = attr[5:] # Remove \"root.\" prefix.\n attr = VOICE_STATE_ATTRIBUTES.get(attr, attr.replace(\"_\", \" \").capitalize())\n\n changes.append(f\"**{attr}:** `{old}` **→** `{new}`\")\n\n # Set the embed icon and colour depending on which attribute changed.\n if any(name in attr for name in (\"Channel\", \"deaf\", \"mute\")):\n if new is None or new is True:\n # Left a channel or was muted/deafened.\n icon = Icons.voice_state_red\n colour = Colours.soft_red\n elif old is None or old is True:\n # Joined a channel or was unmuted/undeafened.\n icon = Icons.voice_state_green\n colour = Colours.soft_green\n\n if not changes:\n return\n\n member_str = escape_markdown(str(member))\n message = \"\\n\".join(f\"{Emojis.bullet} {item}\" for item in sorted(changes))\n message = f\"**{member_str}** (`{member.id}`)\\n{message}\"\n\n await self.send_log_message(\n icon_url=icon,\n colour=colour,\n title=\"Voice state updated\",\n text=message,\n thumbnail=member.avatar_url_as(static_format=\"png\"),\n channel_id=Channels.voice_log\n )\n", "path": "bot/cogs/moderation/modlog.py" } ]
[ { "content": "import asyncio\nimport difflib\nimport itertools\nimport logging\nimport typing as t\nfrom datetime import datetime\nfrom itertools import zip_longest\n\nimport discord\nfrom dateutil.relativedelta import relativedelta\nfrom deepdiff import DeepDiff\nfrom discord import Colour\nfrom discord.abc import GuildChannel\nfrom discord.ext.commands import Cog, Context\nfrom discord.utils import escape_markdown\n\nfrom bot.bot import Bot\nfrom bot.constants import Categories, Channels, Colours, Emojis, Event, Guild as GuildConstant, Icons, URLs\nfrom bot.utils.time import humanize_delta\n\nlog = logging.getLogger(__name__)\n\nGUILD_CHANNEL = t.Union[discord.CategoryChannel, discord.TextChannel, discord.VoiceChannel]\n\nCHANNEL_CHANGES_UNSUPPORTED = (\"permissions\",)\nCHANNEL_CHANGES_SUPPRESSED = (\"_overwrites\", \"position\")\nMEMBER_CHANGES_SUPPRESSED = (\"status\", \"activities\", \"_client_status\", \"nick\")\nROLE_CHANGES_UNSUPPORTED = (\"colour\", \"permissions\")\n\nVOICE_STATE_ATTRIBUTES = {\n \"channel.name\": \"Channel\",\n \"self_stream\": \"Streaming\",\n \"self_video\": \"Broadcasting\",\n}\n\n\nclass ModLog(Cog, name=\"ModLog\"):\n \"\"\"Logging for server events and staff actions.\"\"\"\n\n def __init__(self, bot: Bot):\n self.bot = bot\n self._ignored = {event: [] for event in Event}\n\n self._cached_deletes = []\n self._cached_edits = []\n\n async def upload_log(\n self,\n messages: t.Iterable[discord.Message],\n actor_id: int,\n attachments: t.Iterable[t.List[str]] = None\n ) -> str:\n \"\"\"Upload message logs to the database and return a URL to a page for viewing the logs.\"\"\"\n if attachments is None:\n attachments = []\n\n response = await self.bot.api_client.post(\n 'bot/deleted-messages',\n json={\n 'actor': actor_id,\n 'creation': datetime.utcnow().isoformat(),\n 'deletedmessage_set': [\n {\n 'id': message.id,\n 'author': message.author.id,\n 'channel_id': message.channel.id,\n 'content': message.content,\n 'embeds': [embed.to_dict() for embed in message.embeds],\n 'attachments': attachment,\n }\n for message, attachment in zip_longest(messages, attachments, fillvalue=[])\n ]\n }\n )\n\n return f\"{URLs.site_logs_view}/{response['id']}\"\n\n def ignore(self, event: Event, *items: int) -> None:\n \"\"\"Add event to ignored events to suppress log emission.\"\"\"\n for item in items:\n if item not in self._ignored[event]:\n self._ignored[event].append(item)\n\n async def send_log_message(\n self,\n icon_url: t.Optional[str],\n colour: t.Union[discord.Colour, int],\n title: t.Optional[str],\n text: str,\n thumbnail: t.Optional[t.Union[str, discord.Asset]] = None,\n channel_id: int = Channels.mod_log,\n ping_everyone: bool = False,\n files: t.Optional[t.List[discord.File]] = None,\n content: t.Optional[str] = None,\n additional_embeds: t.Optional[t.List[discord.Embed]] = None,\n additional_embeds_msg: t.Optional[str] = None,\n timestamp_override: t.Optional[datetime] = None,\n footer: t.Optional[str] = None,\n ) -> Context:\n \"\"\"Generate log embed and send to logging channel.\"\"\"\n # Truncate string directly here to avoid removing newlines\n embed = discord.Embed(\n description=text[:2045] + \"...\" if len(text) > 2048 else text\n )\n\n if title and icon_url:\n embed.set_author(name=title, icon_url=icon_url)\n\n embed.colour = colour\n embed.timestamp = timestamp_override or datetime.utcnow()\n\n if footer:\n embed.set_footer(text=footer)\n\n if thumbnail:\n embed.set_thumbnail(url=thumbnail)\n\n if ping_everyone:\n if content:\n content = 
f\"@everyone\\n{content}\"\n else:\n content = \"@everyone\"\n\n channel = self.bot.get_channel(channel_id)\n log_message = await channel.send(content=content, embed=embed, files=files)\n\n if additional_embeds:\n if additional_embeds_msg:\n await channel.send(additional_embeds_msg)\n for additional_embed in additional_embeds:\n await channel.send(embed=additional_embed)\n\n return await self.bot.get_context(log_message) # Optionally return for use with antispam\n\n @Cog.listener()\n async def on_guild_channel_create(self, channel: GUILD_CHANNEL) -> None:\n \"\"\"Log channel create event to mod log.\"\"\"\n if channel.guild.id != GuildConstant.id:\n return\n\n if isinstance(channel, discord.CategoryChannel):\n title = \"Category created\"\n message = f\"{channel.name} (`{channel.id}`)\"\n elif isinstance(channel, discord.VoiceChannel):\n title = \"Voice channel created\"\n\n if channel.category:\n message = f\"{channel.category}/{channel.name} (`{channel.id}`)\"\n else:\n message = f\"{channel.name} (`{channel.id}`)\"\n else:\n title = \"Text channel created\"\n\n if channel.category:\n message = f\"{channel.category}/{channel.name} (`{channel.id}`)\"\n else:\n message = f\"{channel.name} (`{channel.id}`)\"\n\n await self.send_log_message(Icons.hash_green, Colours.soft_green, title, message)\n\n @Cog.listener()\n async def on_guild_channel_delete(self, channel: GUILD_CHANNEL) -> None:\n \"\"\"Log channel delete event to mod log.\"\"\"\n if channel.guild.id != GuildConstant.id:\n return\n\n if isinstance(channel, discord.CategoryChannel):\n title = \"Category deleted\"\n elif isinstance(channel, discord.VoiceChannel):\n title = \"Voice channel deleted\"\n else:\n title = \"Text channel deleted\"\n\n if channel.category and not isinstance(channel, discord.CategoryChannel):\n message = f\"{channel.category}/{channel.name} (`{channel.id}`)\"\n else:\n message = f\"{channel.name} (`{channel.id}`)\"\n\n await self.send_log_message(\n Icons.hash_red, Colours.soft_red,\n title, message\n )\n\n @Cog.listener()\n async def on_guild_channel_update(self, before: GUILD_CHANNEL, after: GuildChannel) -> None:\n \"\"\"Log channel update event to mod log.\"\"\"\n if before.guild.id != GuildConstant.id:\n return\n\n if before.id in self._ignored[Event.guild_channel_update]:\n self._ignored[Event.guild_channel_update].remove(before.id)\n return\n\n # Two channel updates are sent for a single edit: 1 for topic and 1 for category change.\n # TODO: remove once support is added for ignoring multiple occurrences for the same channel.\n help_categories = (Categories.help_available, Categories.help_dormant, Categories.help_in_use)\n if after.category and after.category.id in help_categories:\n return\n\n diff = DeepDiff(before, after)\n changes = []\n done = []\n\n diff_values = diff.get(\"values_changed\", {})\n diff_values.update(diff.get(\"type_changes\", {}))\n\n for key, value in diff_values.items():\n if not key: # Not sure why, but it happens\n continue\n\n key = key[5:] # Remove \"root.\" prefix\n\n if \"[\" in key:\n key = key.split(\"[\", 1)[0]\n\n if \".\" in key:\n key = key.split(\".\", 1)[0]\n\n if key in done or key in CHANNEL_CHANGES_SUPPRESSED:\n continue\n\n if key in CHANNEL_CHANGES_UNSUPPORTED:\n changes.append(f\"**{key.title()}** updated\")\n else:\n new = value[\"new_value\"]\n old = value[\"old_value\"]\n\n # Discord does not treat consecutive backticks (\"``\") as an empty inline code block, so the markdown\n # formatting is broken when `new` and/or `old` are empty values. 
\"None\" is used for these cases so\n # formatting is preserved.\n changes.append(f\"**{key.title()}:** `{old or 'None'}` **→** `{new or 'None'}`\")\n\n done.append(key)\n\n if not changes:\n return\n\n message = \"\"\n\n for item in sorted(changes):\n message += f\"{Emojis.bullet} {item}\\n\"\n\n if after.category:\n message = f\"**{after.category}/#{after.name} (`{after.id}`)**\\n{message}\"\n else:\n message = f\"**#{after.name}** (`{after.id}`)\\n{message}\"\n\n await self.send_log_message(\n Icons.hash_blurple, Colour.blurple(),\n \"Channel updated\", message\n )\n\n @Cog.listener()\n async def on_guild_role_create(self, role: discord.Role) -> None:\n \"\"\"Log role create event to mod log.\"\"\"\n if role.guild.id != GuildConstant.id:\n return\n\n await self.send_log_message(\n Icons.crown_green, Colours.soft_green,\n \"Role created\", f\"`{role.id}`\"\n )\n\n @Cog.listener()\n async def on_guild_role_delete(self, role: discord.Role) -> None:\n \"\"\"Log role delete event to mod log.\"\"\"\n if role.guild.id != GuildConstant.id:\n return\n\n await self.send_log_message(\n Icons.crown_red, Colours.soft_red,\n \"Role removed\", f\"{role.name} (`{role.id}`)\"\n )\n\n @Cog.listener()\n async def on_guild_role_update(self, before: discord.Role, after: discord.Role) -> None:\n \"\"\"Log role update event to mod log.\"\"\"\n if before.guild.id != GuildConstant.id:\n return\n\n diff = DeepDiff(before, after)\n changes = []\n done = []\n\n diff_values = diff.get(\"values_changed\", {})\n diff_values.update(diff.get(\"type_changes\", {}))\n\n for key, value in diff_values.items():\n if not key: # Not sure why, but it happens\n continue\n\n key = key[5:] # Remove \"root.\" prefix\n\n if \"[\" in key:\n key = key.split(\"[\", 1)[0]\n\n if \".\" in key:\n key = key.split(\".\", 1)[0]\n\n if key in done or key == \"color\":\n continue\n\n if key in ROLE_CHANGES_UNSUPPORTED:\n changes.append(f\"**{key.title()}** updated\")\n else:\n new = value[\"new_value\"]\n old = value[\"old_value\"]\n\n changes.append(f\"**{key.title()}:** `{old}` **→** `{new}`\")\n\n done.append(key)\n\n if not changes:\n return\n\n message = \"\"\n\n for item in sorted(changes):\n message += f\"{Emojis.bullet} {item}\\n\"\n\n message = f\"**{after.name}** (`{after.id}`)\\n{message}\"\n\n await self.send_log_message(\n Icons.crown_blurple, Colour.blurple(),\n \"Role updated\", message\n )\n\n @Cog.listener()\n async def on_guild_update(self, before: discord.Guild, after: discord.Guild) -> None:\n \"\"\"Log guild update event to mod log.\"\"\"\n if before.id != GuildConstant.id:\n return\n\n diff = DeepDiff(before, after)\n changes = []\n done = []\n\n diff_values = diff.get(\"values_changed\", {})\n diff_values.update(diff.get(\"type_changes\", {}))\n\n for key, value in diff_values.items():\n if not key: # Not sure why, but it happens\n continue\n\n key = key[5:] # Remove \"root.\" prefix\n\n if \"[\" in key:\n key = key.split(\"[\", 1)[0]\n\n if \".\" in key:\n key = key.split(\".\", 1)[0]\n\n if key in done:\n continue\n\n new = value[\"new_value\"]\n old = value[\"old_value\"]\n\n changes.append(f\"**{key.title()}:** `{old}` **→** `{new}`\")\n\n done.append(key)\n\n if not changes:\n return\n\n message = \"\"\n\n for item in sorted(changes):\n message += f\"{Emojis.bullet} {item}\\n\"\n\n message = f\"**{after.name}** (`{after.id}`)\\n{message}\"\n\n await self.send_log_message(\n Icons.guild_update, Colour.blurple(),\n \"Guild updated\", message,\n thumbnail=after.icon_url_as(format=\"png\")\n )\n\n @Cog.listener()\n 
async def on_member_ban(self, guild: discord.Guild, member: discord.Member) -> None:\n \"\"\"Log ban event to user log.\"\"\"\n if guild.id != GuildConstant.id:\n return\n\n if member.id in self._ignored[Event.member_ban]:\n self._ignored[Event.member_ban].remove(member.id)\n return\n\n await self.send_log_message(\n Icons.user_ban, Colours.soft_red,\n \"User banned\", f\"{member} (`{member.id}`)\",\n thumbnail=member.avatar_url_as(static_format=\"png\"),\n channel_id=Channels.user_log\n )\n\n @Cog.listener()\n async def on_member_join(self, member: discord.Member) -> None:\n \"\"\"Log member join event to user log.\"\"\"\n if member.guild.id != GuildConstant.id:\n return\n\n member_str = escape_markdown(str(member))\n message = f\"{member_str} (`{member.id}`)\"\n now = datetime.utcnow()\n difference = abs(relativedelta(now, member.created_at))\n\n message += \"\\n\\n**Account age:** \" + humanize_delta(difference)\n\n if difference.days < 1 and difference.months < 1 and difference.years < 1: # New user account!\n message = f\"{Emojis.new} {message}\"\n\n await self.send_log_message(\n Icons.sign_in, Colours.soft_green,\n \"User joined\", message,\n thumbnail=member.avatar_url_as(static_format=\"png\"),\n channel_id=Channels.user_log\n )\n\n @Cog.listener()\n async def on_member_remove(self, member: discord.Member) -> None:\n \"\"\"Log member leave event to user log.\"\"\"\n if member.guild.id != GuildConstant.id:\n return\n\n if member.id in self._ignored[Event.member_remove]:\n self._ignored[Event.member_remove].remove(member.id)\n return\n\n member_str = escape_markdown(str(member))\n await self.send_log_message(\n Icons.sign_out, Colours.soft_red,\n \"User left\", f\"{member_str} (`{member.id}`)\",\n thumbnail=member.avatar_url_as(static_format=\"png\"),\n channel_id=Channels.user_log\n )\n\n @Cog.listener()\n async def on_member_unban(self, guild: discord.Guild, member: discord.User) -> None:\n \"\"\"Log member unban event to mod log.\"\"\"\n if guild.id != GuildConstant.id:\n return\n\n if member.id in self._ignored[Event.member_unban]:\n self._ignored[Event.member_unban].remove(member.id)\n return\n\n member_str = escape_markdown(str(member))\n await self.send_log_message(\n Icons.user_unban, Colour.blurple(),\n \"User unbanned\", f\"{member_str} (`{member.id}`)\",\n thumbnail=member.avatar_url_as(static_format=\"png\"),\n channel_id=Channels.mod_log\n )\n\n @Cog.listener()\n async def on_member_update(self, before: discord.Member, after: discord.Member) -> None:\n \"\"\"Log member update event to user log.\"\"\"\n if before.guild.id != GuildConstant.id:\n return\n\n if before.id in self._ignored[Event.member_update]:\n self._ignored[Event.member_update].remove(before.id)\n return\n\n diff = DeepDiff(before, after)\n changes = []\n done = []\n\n diff_values = {}\n\n diff_values.update(diff.get(\"values_changed\", {}))\n diff_values.update(diff.get(\"type_changes\", {}))\n diff_values.update(diff.get(\"iterable_item_removed\", {}))\n diff_values.update(diff.get(\"iterable_item_added\", {}))\n\n diff_user = DeepDiff(before._user, after._user)\n\n diff_values.update(diff_user.get(\"values_changed\", {}))\n diff_values.update(diff_user.get(\"type_changes\", {}))\n diff_values.update(diff_user.get(\"iterable_item_removed\", {}))\n diff_values.update(diff_user.get(\"iterable_item_added\", {}))\n\n for key, value in diff_values.items():\n if not key: # Not sure why, but it happens\n continue\n\n key = key[5:] # Remove \"root.\" prefix\n\n if \"[\" in key:\n key = key.split(\"[\", 1)[0]\n\n 
if \".\" in key:\n key = key.split(\".\", 1)[0]\n\n if key in done or key in MEMBER_CHANGES_SUPPRESSED:\n continue\n\n if key == \"_roles\":\n new_roles = after.roles\n old_roles = before.roles\n\n for role in old_roles:\n if role not in new_roles:\n changes.append(f\"**Role removed:** {role.name} (`{role.id}`)\")\n\n for role in new_roles:\n if role not in old_roles:\n changes.append(f\"**Role added:** {role.name} (`{role.id}`)\")\n\n else:\n new = value.get(\"new_value\")\n old = value.get(\"old_value\")\n\n if new and old:\n changes.append(f\"**{key.title()}:** `{old}` **→** `{new}`\")\n\n done.append(key)\n\n if before.name != after.name:\n changes.append(\n f\"**Username:** `{before.name}` **→** `{after.name}`\"\n )\n\n if before.discriminator != after.discriminator:\n changes.append(\n f\"**Discriminator:** `{before.discriminator}` **→** `{after.discriminator}`\"\n )\n\n if before.display_name != after.display_name:\n changes.append(\n f\"**Display name:** `{before.display_name}` **→** `{after.display_name}`\"\n )\n\n if not changes:\n return\n\n message = \"\"\n\n for item in sorted(changes):\n message += f\"{Emojis.bullet} {item}\\n\"\n\n member_str = escape_markdown(str(after))\n message = f\"**{member_str}** (`{after.id}`)\\n{message}\"\n\n await self.send_log_message(\n Icons.user_update, Colour.blurple(),\n \"Member updated\", message,\n thumbnail=after.avatar_url_as(static_format=\"png\"),\n channel_id=Channels.user_log\n )\n\n @Cog.listener()\n async def on_message_delete(self, message: discord.Message) -> None:\n \"\"\"Log message delete event to message change log.\"\"\"\n channel = message.channel\n author = message.author\n\n # Ignore DMs.\n if not message.guild:\n return\n\n if message.guild.id != GuildConstant.id or channel.id in GuildConstant.modlog_blacklist:\n return\n\n self._cached_deletes.append(message.id)\n\n if message.id in self._ignored[Event.message_delete]:\n self._ignored[Event.message_delete].remove(message.id)\n return\n\n if author.bot:\n return\n\n author_str = escape_markdown(str(author))\n if channel.category:\n response = (\n f\"**Author:** {author_str} (`{author.id}`)\\n\"\n f\"**Channel:** {channel.category}/#{channel.name} (`{channel.id}`)\\n\"\n f\"**Message ID:** `{message.id}`\\n\"\n \"\\n\"\n )\n else:\n response = (\n f\"**Author:** {author_str} (`{author.id}`)\\n\"\n f\"**Channel:** #{channel.name} (`{channel.id}`)\\n\"\n f\"**Message ID:** `{message.id}`\\n\"\n \"\\n\"\n )\n\n if message.attachments:\n # Prepend the message metadata with the number of attachments\n response = f\"**Attachments:** {len(message.attachments)}\\n\" + response\n\n # Shorten the message content if necessary\n content = message.clean_content\n remaining_chars = 2040 - len(response)\n\n if len(content) > remaining_chars:\n botlog_url = await self.upload_log(messages=[message], actor_id=message.author.id)\n ending = f\"\\n\\nMessage truncated, [full message here]({botlog_url}).\"\n truncation_point = remaining_chars - len(ending)\n content = f\"{content[:truncation_point]}...{ending}\"\n\n response += f\"{content}\"\n\n await self.send_log_message(\n Icons.message_delete, Colours.soft_red,\n \"Message deleted\",\n response,\n channel_id=Channels.message_log\n )\n\n @Cog.listener()\n async def on_raw_message_delete(self, event: discord.RawMessageDeleteEvent) -> None:\n \"\"\"Log raw message delete event to message change log.\"\"\"\n if event.guild_id != GuildConstant.id or event.channel_id in GuildConstant.modlog_blacklist:\n return\n\n await asyncio.sleep(1) # Wait 
here in case the normal event was fired\n\n if event.message_id in self._cached_deletes:\n # It was in the cache and the normal event was fired, so we can just ignore it\n self._cached_deletes.remove(event.message_id)\n return\n\n if event.message_id in self._ignored[Event.message_delete]:\n self._ignored[Event.message_delete].remove(event.message_id)\n return\n\n channel = self.bot.get_channel(event.channel_id)\n\n if channel.category:\n response = (\n f\"**Channel:** {channel.category}/#{channel.name} (`{channel.id}`)\\n\"\n f\"**Message ID:** `{event.message_id}`\\n\"\n \"\\n\"\n \"This message was not cached, so the message content cannot be displayed.\"\n )\n else:\n response = (\n f\"**Channel:** #{channel.name} (`{channel.id}`)\\n\"\n f\"**Message ID:** `{event.message_id}`\\n\"\n \"\\n\"\n \"This message was not cached, so the message content cannot be displayed.\"\n )\n\n await self.send_log_message(\n Icons.message_delete, Colours.soft_red,\n \"Message deleted\",\n response,\n channel_id=Channels.message_log\n )\n\n @Cog.listener()\n async def on_message_edit(self, msg_before: discord.Message, msg_after: discord.Message) -> None:\n \"\"\"Log message edit event to message change log.\"\"\"\n if (\n not msg_before.guild\n or msg_before.guild.id != GuildConstant.id\n or msg_before.channel.id in GuildConstant.modlog_blacklist\n or msg_before.author.bot\n ):\n return\n\n self._cached_edits.append(msg_before.id)\n\n if msg_before.content == msg_after.content:\n return\n\n author = msg_before.author\n author_str = escape_markdown(str(author))\n\n channel = msg_before.channel\n channel_name = f\"{channel.category}/#{channel.name}\" if channel.category else f\"#{channel.name}\"\n\n # Getting the difference per words and group them by type - add, remove, same\n # Note that this is intended grouping without sorting\n diff = difflib.ndiff(msg_before.clean_content.split(), msg_after.clean_content.split())\n diff_groups = tuple(\n (diff_type, tuple(s[2:] for s in diff_words))\n for diff_type, diff_words in itertools.groupby(diff, key=lambda s: s[0])\n )\n\n content_before: t.List[str] = []\n content_after: t.List[str] = []\n\n for index, (diff_type, words) in enumerate(diff_groups):\n sub = ' '.join(words)\n if diff_type == '-':\n content_before.append(f\"[{sub}](http://o.hi)\")\n elif diff_type == '+':\n content_after.append(f\"[{sub}](http://o.hi)\")\n elif diff_type == ' ':\n if len(words) > 2:\n sub = (\n f\"{words[0] if index > 0 else ''}\"\n \" ... 
\"\n f\"{words[-1] if index < len(diff_groups) - 1 else ''}\"\n )\n content_before.append(sub)\n content_after.append(sub)\n\n response = (\n f\"**Author:** {author_str} (`{author.id}`)\\n\"\n f\"**Channel:** {channel_name} (`{channel.id}`)\\n\"\n f\"**Message ID:** `{msg_before.id}`\\n\"\n \"\\n\"\n f\"**Before**:\\n{' '.join(content_before)}\\n\"\n f\"**After**:\\n{' '.join(content_after)}\\n\"\n \"\\n\"\n f\"[Jump to message]({msg_after.jump_url})\"\n )\n\n if msg_before.edited_at:\n # Message was previously edited, to assist with self-bot detection, use the edited_at\n # datetime as the baseline and create a human-readable delta between this edit event\n # and the last time the message was edited\n timestamp = msg_before.edited_at\n delta = humanize_delta(relativedelta(msg_after.edited_at, msg_before.edited_at))\n footer = f\"Last edited {delta} ago\"\n else:\n # Message was not previously edited, use the created_at datetime as the baseline, no\n # delta calculation needed\n timestamp = msg_before.created_at\n footer = None\n\n await self.send_log_message(\n Icons.message_edit, Colour.blurple(), \"Message edited\", response,\n channel_id=Channels.message_log, timestamp_override=timestamp, footer=footer\n )\n\n @Cog.listener()\n async def on_raw_message_edit(self, event: discord.RawMessageUpdateEvent) -> None:\n \"\"\"Log raw message edit event to message change log.\"\"\"\n try:\n channel = self.bot.get_channel(int(event.data[\"channel_id\"]))\n message = await channel.fetch_message(event.message_id)\n except discord.NotFound: # Was deleted before we got the event\n return\n\n if (\n not message.guild\n or message.guild.id != GuildConstant.id\n or message.channel.id in GuildConstant.modlog_blacklist\n or message.author.bot\n ):\n return\n\n await asyncio.sleep(1) # Wait here in case the normal event was fired\n\n if event.message_id in self._cached_edits:\n # It was in the cache and the normal event was fired, so we can just ignore it\n self._cached_edits.remove(event.message_id)\n return\n\n author = message.author\n channel = message.channel\n channel_name = f\"{channel.category}/#{channel.name}\" if channel.category else f\"#{channel.name}\"\n\n before_response = (\n f\"**Author:** {author} (`{author.id}`)\\n\"\n f\"**Channel:** {channel_name} (`{channel.id}`)\\n\"\n f\"**Message ID:** `{message.id}`\\n\"\n \"\\n\"\n \"This message was not cached, so the message content cannot be displayed.\"\n )\n\n after_response = (\n f\"**Author:** {author} (`{author.id}`)\\n\"\n f\"**Channel:** {channel_name} (`{channel.id}`)\\n\"\n f\"**Message ID:** `{message.id}`\\n\"\n \"\\n\"\n f\"{message.clean_content}\"\n )\n\n await self.send_log_message(\n Icons.message_edit, Colour.blurple(), \"Message edited (Before)\",\n before_response, channel_id=Channels.message_log\n )\n\n await self.send_log_message(\n Icons.message_edit, Colour.blurple(), \"Message edited (After)\",\n after_response, channel_id=Channels.message_log\n )\n\n @Cog.listener()\n async def on_voice_state_update(\n self,\n member: discord.Member,\n before: discord.VoiceState,\n after: discord.VoiceState\n ) -> None:\n \"\"\"Log member voice state changes to the voice log channel.\"\"\"\n if (\n member.guild.id != GuildConstant.id\n or (before.channel and before.channel.id in GuildConstant.modlog_blacklist)\n ):\n return\n\n if member.id in self._ignored[Event.voice_state_update]:\n self._ignored[Event.voice_state_update].remove(member.id)\n return\n\n # Exclude all channel attributes except the name.\n diff = DeepDiff(\n before,\n 
after,\n exclude_paths=(\"root.session_id\", \"root.afk\"),\n exclude_regex_paths=r\"root\\.channel\\.(?!name)\",\n )\n\n # A type change seems to always take precedent over a value change. Furthermore, it will\n # include the value change along with the type change anyway. Therefore, it's OK to\n # \"overwrite\" values_changed; in practice there will never even be anything to overwrite.\n diff_values = {**diff.get(\"values_changed\", {}), **diff.get(\"type_changes\", {})}\n\n icon = Icons.voice_state_blue\n colour = Colour.blurple()\n changes = []\n\n for attr, values in diff_values.items():\n if not attr: # Not sure why, but it happens.\n continue\n\n old = values[\"old_value\"]\n new = values[\"new_value\"]\n\n attr = attr[5:] # Remove \"root.\" prefix.\n attr = VOICE_STATE_ATTRIBUTES.get(attr, attr.replace(\"_\", \" \").capitalize())\n\n changes.append(f\"**{attr}:** `{old}` **→** `{new}`\")\n\n # Set the embed icon and colour depending on which attribute changed.\n if any(name in attr for name in (\"Channel\", \"deaf\", \"mute\")):\n if new is None or new is True:\n # Left a channel or was muted/deafened.\n icon = Icons.voice_state_red\n colour = Colours.soft_red\n elif old is None or old is True:\n # Joined a channel or was unmuted/undeafened.\n icon = Icons.voice_state_green\n colour = Colours.soft_green\n\n if not changes:\n return\n\n member_str = escape_markdown(str(member))\n message = \"\\n\".join(f\"{Emojis.bullet} {item}\" for item in sorted(changes))\n message = f\"**{member_str}** (`{member.id}`)\\n{message}\"\n\n await self.send_log_message(\n icon_url=icon,\n colour=colour,\n title=\"Voice state updated\",\n text=message,\n thumbnail=member.avatar_url_as(static_format=\"png\"),\n channel_id=Channels.voice_log\n )\n", "path": "bot/cogs/moderation/modlog.py" } ]
diff --git a/bot/cogs/moderation/modlog.py b/bot/cogs/moderation/modlog.py index 9d28030d90..41472c64cc 100644 --- a/bot/cogs/moderation/modlog.py +++ b/bot/cogs/moderation/modlog.py @@ -555,6 +555,10 @@ async def on_message_delete(self, message: discord.Message) -> None: channel = message.channel author = message.author + # Ignore DMs. + if not message.guild: + return + if message.guild.id != GuildConstant.id or channel.id in GuildConstant.modlog_blacklist: return
lightly-ai__lightly-656
Incorrect input_size for BarlowTwins Lightning Example Code
Should the input_size in [1] be `32` instead of `224`? In [2], we use `input_size=32`.

[1] https://github.com/lightly-ai/lightly/blob/master/examples/pytorch_lightning/barlowtwins.py#L44
[2] https://github.com/lightly-ai/lightly/blob/master/examples/pytorch/barlowtwins.py#L35
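For reference, this is the one-line change the question points at in the Lightning example's collate function; it mirrors what the plain PyTorch example in [2] already does and matches the diff below.

```python
# CIFAR-10 images are 32x32, so the collate function in the Lightning example
# should use the same input size as the plain PyTorch example.
collate_fn = ImageCollateFunction(input_size=32)
```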
[ { "content": "import torch\nfrom torch import nn\nimport torchvision\nimport pytorch_lightning as pl\n\nfrom lightly.data import LightlyDataset\nfrom lightly.data import ImageCollateFunction\nfrom lightly.loss import BarlowTwinsLoss\nfrom lightly.models.modules import BarlowTwinsProjectionHead\n\n\nclass BarlowTwins(pl.LightningModule):\n def __init__(self):\n super().__init__()\n resnet = torchvision.models.resnet18()\n self.backbone = nn.Sequential(*list(resnet.children())[:-1])\n self.projection_head = BarlowTwinsProjectionHead(512, 2048, 2048)\n self.criterion = BarlowTwinsLoss()\n\n def forward(self, x):\n x = self.backbone(x).flatten(start_dim=1)\n z = self.projection_head(x)\n return z\n\n def training_step(self, batch, batch_index):\n (x0, x1), _, _ = batch\n z0 = self.forward(x0)\n z1 = self.forward(x1)\n loss = self.criterion(z0, z1)\n return loss\n\n def configure_optimizers(self):\n optim = torch.optim.SGD(self.parameters(), lr=0.06)\n return optim\n\n\nmodel = BarlowTwins()\n\ncifar10 = torchvision.datasets.CIFAR10(\"datasets/cifar10\", download=True)\ndataset = LightlyDataset.from_torch_dataset(cifar10)\n# or create a dataset from a folder containing images or videos:\n# dataset = LightlyDataset(\"path/to/folder\")\n\ncollate_fn = ImageCollateFunction(input_size=224)\n\ndataloader = torch.utils.data.DataLoader(\n dataset,\n batch_size=256,\n collate_fn=collate_fn,\n shuffle=True,\n drop_last=True,\n num_workers=8,\n)\n\ngpus = 1 if torch.cuda.is_available() else 0\n\ntrainer = pl.Trainer(max_epochs=10, gpus=gpus)\ntrainer.fit(model=model, train_dataloaders=dataloader)\n", "path": "examples/pytorch_lightning/barlowtwins.py" } ]
[ { "content": "import torch\nfrom torch import nn\nimport torchvision\nimport pytorch_lightning as pl\n\nfrom lightly.data import LightlyDataset\nfrom lightly.data import ImageCollateFunction\nfrom lightly.loss import BarlowTwinsLoss\nfrom lightly.models.modules import BarlowTwinsProjectionHead\n\n\nclass BarlowTwins(pl.LightningModule):\n def __init__(self):\n super().__init__()\n resnet = torchvision.models.resnet18()\n self.backbone = nn.Sequential(*list(resnet.children())[:-1])\n self.projection_head = BarlowTwinsProjectionHead(512, 2048, 2048)\n self.criterion = BarlowTwinsLoss()\n\n def forward(self, x):\n x = self.backbone(x).flatten(start_dim=1)\n z = self.projection_head(x)\n return z\n\n def training_step(self, batch, batch_index):\n (x0, x1), _, _ = batch\n z0 = self.forward(x0)\n z1 = self.forward(x1)\n loss = self.criterion(z0, z1)\n return loss\n\n def configure_optimizers(self):\n optim = torch.optim.SGD(self.parameters(), lr=0.06)\n return optim\n\n\nmodel = BarlowTwins()\n\ncifar10 = torchvision.datasets.CIFAR10(\"datasets/cifar10\", download=True)\ndataset = LightlyDataset.from_torch_dataset(cifar10)\n# or create a dataset from a folder containing images or videos:\n# dataset = LightlyDataset(\"path/to/folder\")\n\ncollate_fn = ImageCollateFunction(input_size=32)\n\ndataloader = torch.utils.data.DataLoader(\n dataset,\n batch_size=256,\n collate_fn=collate_fn,\n shuffle=True,\n drop_last=True,\n num_workers=8,\n)\n\ngpus = 1 if torch.cuda.is_available() else 0\n\ntrainer = pl.Trainer(max_epochs=10, gpus=gpus)\ntrainer.fit(model=model, train_dataloaders=dataloader)\n", "path": "examples/pytorch_lightning/barlowtwins.py" } ]
diff --git a/examples/pytorch_lightning/barlowtwins.py b/examples/pytorch_lightning/barlowtwins.py index fa896134f..697c77bc4 100644 --- a/examples/pytorch_lightning/barlowtwins.py +++ b/examples/pytorch_lightning/barlowtwins.py @@ -41,7 +41,7 @@ def configure_optimizers(self): # or create a dataset from a folder containing images or videos: # dataset = LightlyDataset("path/to/folder") -collate_fn = ImageCollateFunction(input_size=224) +collate_fn = ImageCollateFunction(input_size=32) dataloader = torch.utils.data.DataLoader( dataset,
qtile__qtile-1624
widget.WindowTabs default selected task indicator produces invalid pango markup # Issue description The default _selected task indicator_ (``("<", ">")``) for ``widget.WindowTabs`` produces invalid pango markup and thus the call to ``pango_parse_markup`` fails. It leads to invalid tag names for single word window names (e.g. ``<terminal>``) or invalid syntax for multiword names (e.g. ``<qtile - Mozilla Firefox>``). Possible fixes: - change default to e.g. ``('[', ']')`` or different foreground color - default to no markup - at least add a note in the documentation, but defaults should be working If this is wanted, I'm happy to prepare a PR based on the outcome of the discussion here. # Qtile version Qtile version ``0.15.1``. Also [latest revision of libqtile/widget/windowtabs.py](https://github.com/qtile/qtile/blob/d47347ad0f37b4a5735faa8b7061f484e8cf81d9/libqtile/widget/windowtabs.py) (d47347a) # Configuration Use default ``widget.WindowTabs()``
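The fix adopted in the accompanying diff is to make the default indicator a pair of valid pango tags, `("<b>", "</b>")`. Until that lands, a user config can sidestep the parse error by overriding `selected` itself; a minimal sketch (any markup-free pair such as `("[", "]")` works too):

```python
from libqtile import widget

# Wrap the focused window's name in valid pango markup instead of bare angle
# brackets, which would otherwise turn e.g. "terminal" into the pseudo-tag <terminal>.
window_tabs = widget.WindowTabs(selected=("<b>", "</b>"))
```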
[ { "content": "# Copyright (c) 2012-2013 Craig Barnes\n# Copyright (c) 2012 roger\n# Copyright (c) 2012, 2014 Tycho Andersen\n# Copyright (c) 2014 Sean Vig\n# Copyright (c) 2014 Adi Sieker\n#\n# Permission is hereby granted, free of charge, to any person obtaining a copy\n# of this software and associated documentation files (the \"Software\"), to deal\n# in the Software without restriction, including without limitation the rights\n# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n# copies of the Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\n# SOFTWARE.\n\nfrom .. import hook, bar\nfrom . import base\n\n\nclass WindowTabs(base._TextBox):\n \"\"\"\n Displays the name of each window in the current group.\n Contrary to TaskList this is not an interactive widget.\n The window that currently has focus is highlighted.\n \"\"\"\n orientations = base.ORIENTATION_HORIZONTAL\n defaults = [\n (\"separator\", \" | \", \"Task separator text.\"),\n (\"selected\", (\"<\", \">\"), \"Selected task indicator\"),\n ]\n\n def __init__(self, **config):\n base._TextBox.__init__(self, width=bar.STRETCH, **config)\n self.add_defaults(WindowTabs.defaults)\n if not isinstance(self.selected, (tuple, list)):\n self.selected = (self.selected, self.selected)\n\n def _configure(self, qtile, bar):\n base._TextBox._configure(self, qtile, bar)\n hook.subscribe.client_name_updated(self.update)\n hook.subscribe.focus_change(self.update)\n hook.subscribe.float_change(self.update)\n\n def button_press(self, x, y, button):\n self.bar.screen.group.cmd_next_window()\n\n def update(self, *args):\n names = []\n for w in self.bar.screen.group.windows:\n state = ''\n if w is None:\n pass\n elif w.maximized:\n state = '[] '\n elif w.minimized:\n state = '_ '\n elif w.floating:\n state = 'V '\n task = \"%s%s\" % (state, w.name if w and w.name else \" \")\n if w is self.bar.screen.group.current_window:\n task = task.join(self.selected)\n names.append(task)\n self.text = self.separator.join(names)\n self.bar.draw()\n", "path": "libqtile/widget/windowtabs.py" } ]
[ { "content": "# Copyright (c) 2012-2013 Craig Barnes\n# Copyright (c) 2012 roger\n# Copyright (c) 2012, 2014 Tycho Andersen\n# Copyright (c) 2014 Sean Vig\n# Copyright (c) 2014 Adi Sieker\n#\n# Permission is hereby granted, free of charge, to any person obtaining a copy\n# of this software and associated documentation files (the \"Software\"), to deal\n# in the Software without restriction, including without limitation the rights\n# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n# copies of the Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\n# SOFTWARE.\n\nfrom .. import hook, bar\nfrom . import base\n\n\nclass WindowTabs(base._TextBox):\n \"\"\"\n Displays the name of each window in the current group.\n Contrary to TaskList this is not an interactive widget.\n The window that currently has focus is highlighted.\n \"\"\"\n orientations = base.ORIENTATION_HORIZONTAL\n defaults = [\n (\"separator\", \" | \", \"Task separator text.\"),\n (\"selected\", (\"<b>\", \"</b>\"), \"Selected task indicator\"),\n ]\n\n def __init__(self, **config):\n base._TextBox.__init__(self, width=bar.STRETCH, **config)\n self.add_defaults(WindowTabs.defaults)\n if not isinstance(self.selected, (tuple, list)):\n self.selected = (self.selected, self.selected)\n\n def _configure(self, qtile, bar):\n base._TextBox._configure(self, qtile, bar)\n hook.subscribe.client_name_updated(self.update)\n hook.subscribe.focus_change(self.update)\n hook.subscribe.float_change(self.update)\n\n def button_press(self, x, y, button):\n self.bar.screen.group.cmd_next_window()\n\n def update(self, *args):\n names = []\n for w in self.bar.screen.group.windows:\n state = ''\n if w is None:\n pass\n elif w.maximized:\n state = '[] '\n elif w.minimized:\n state = '_ '\n elif w.floating:\n state = 'V '\n task = \"%s%s\" % (state, w.name if w and w.name else \" \")\n if w is self.bar.screen.group.current_window:\n task = task.join(self.selected)\n names.append(task)\n self.text = self.separator.join(names)\n self.bar.draw()\n", "path": "libqtile/widget/windowtabs.py" } ]
diff --git a/libqtile/widget/windowtabs.py b/libqtile/widget/windowtabs.py index d6ec7e4c8a..6261cb19d0 100644 --- a/libqtile/widget/windowtabs.py +++ b/libqtile/widget/windowtabs.py @@ -35,7 +35,7 @@ class WindowTabs(base._TextBox): orientations = base.ORIENTATION_HORIZONTAL defaults = [ ("separator", " | ", "Task separator text."), - ("selected", ("<", ">"), "Selected task indicator"), + ("selected", ("<b>", "</b>"), "Selected task indicator"), ] def __init__(self, **config):
gratipay__gratipay.com-3792
log spam during test What's this `TypeError` about? Seems spurious ...

```
pid-13897 thread-4384100352 (Thread-1) Traceback (most recent call last):
pid-13897 thread-4384100352 (Thread-1) File "/Users/whit537/personal/gratipay/gratipay.com/gratipay/cron.py", line 26, in f
pid-13897 thread-4384100352 (Thread-1) func()
pid-13897 thread-4384100352 (Thread-1) File "/Users/whit537/personal/gratipay/gratipay.com/gratipay/main.py", line 82, in <lambda>
pid-13897 thread-4384100352 (Thread-1) cron(env.update_cta_every, lambda: utils.update_cta(website))
pid-13897 thread-4384100352 (Thread-1) File "/Users/whit537/personal/gratipay/gratipay.com/gratipay/utils/__init__.py", line 145, in update_cta
pid-13897 thread-4384100352 (Thread-1) website.support_current = cur = int(round(nreceiving_from / nusers * 100)) if nusers else 0
pid-13897 thread-4384100352 (Thread-1) TypeError: unsupported operand type(s) for /: 'int' and 'tuple'
```
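The `TypeError` traces back to the `default=(0.0, 0)` passed to `db.one()` in `update_cta`: when the paydays table has no rows yet (presumably the situation during tests), that tuple is returned as `nusers`, it is truthy, and the division then mixes an `int` with a `tuple`. The accompanying diff changes the default to a plain `0`. A standalone reproduction of the failure mode:

```python
# nusers falls back to the tuple default because no payday row exists yet.
nusers = (0.0, 0)
nreceiving_from = 0

# A non-empty tuple is truthy, so the `if nusers` guard does not short-circuit to 0,
# and the division raises:
# TypeError: unsupported operand type(s) for /: 'int' and 'tuple'
cur = int(round(nreceiving_from / nusers * 100)) if nusers else 0
```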
[ { "content": "# encoding: utf8\n\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nfrom datetime import datetime, timedelta\n\nfrom aspen import Response, json\nfrom aspen.utils import to_rfc822, utcnow\nfrom dependency_injection import resolve_dependencies\nfrom postgres.cursors import SimpleCursorBase\n\nimport gratipay\n\n\nBEGINNING_OF_EPOCH = to_rfc822(datetime(1970, 1, 1)).encode('ascii')\n\n# Difference between current time and credit card expiring date when\n# card is considered as expiring\nEXPIRING_DELTA = timedelta(days = 30)\n\n\ndef dict_to_querystring(mapping):\n if not mapping:\n return u''\n\n arguments = []\n for key, values in mapping.iteritems():\n for val in values:\n arguments.append(u'='.join([key, val]))\n\n return u'?' + u'&'.join(arguments)\n\n\ndef use_tildes_for_participants(website, request):\n if request.path.raw.startswith('/~/'):\n to = '/~' + request.path.raw[3:]\n if request.qs.raw:\n to += '?' + request.qs.raw\n website.redirect(to)\n elif request.path.raw.startswith('/~'):\n request.path.__init__('/~/' + request.path.raw[2:])\n\n\ndef canonicalize(redirect, path, base, canonical, given, arguments=None):\n if given != canonical:\n assert canonical.lower() == given.lower() # sanity check\n remainder = path[len(base + given):]\n\n if arguments is not None:\n arguments = dict_to_querystring(arguments)\n\n newpath = base + canonical + remainder + arguments or ''\n redirect(newpath)\n\n\ndef get_participant(state, restrict=True, resolve_unclaimed=True):\n \"\"\"Given a Request, raise Response or return Participant.\n\n If restrict is True then we'll restrict access to owners and admins.\n\n \"\"\"\n redirect = state['website'].redirect\n request = state['request']\n user = state['user']\n slug = request.line.uri.path['username']\n qs = request.line.uri.querystring\n _ = state['_']\n\n if restrict:\n if user.ANON:\n raise Response(403, _(\"You need to log in to access this page.\"))\n\n from gratipay.models.participant import Participant # avoid circular import\n participant = Participant.from_username(slug)\n\n if participant is None:\n raise Response(404)\n\n canonicalize(redirect, request.line.uri.path.raw, '/~/', participant.username, slug, qs)\n\n if participant.is_closed:\n if user.ADMIN:\n return participant\n raise Response(410)\n\n if participant.claimed_time is None and resolve_unclaimed:\n to = participant.resolve_unclaimed()\n if to:\n # This is a stub account (someone on another platform who hasn't\n # actually registered with Gratipay yet)\n redirect(to)\n else:\n # This is an archived account (result of take_over)\n if user.ADMIN:\n return participant\n raise Response(404)\n\n if restrict:\n if participant != user.participant:\n if not user.ADMIN:\n raise Response(403, _(\"You are not authorized to access this page.\"))\n\n return participant\n\n\ndef get_team(state):\n \"\"\"Given a Request, raise Response or return Team.\n \"\"\"\n redirect = state['website'].redirect\n request = state['request']\n user = state['user']\n slug = request.line.uri.path['team']\n qs = request.line.uri.querystring\n\n from gratipay.models.team import Team # avoid circular import\n team = Team.from_slug(slug)\n\n if team is None:\n # Try to redirect to a Participant.\n from gratipay.models.participant import Participant # avoid circular import\n participant = Participant.from_username(slug)\n if participant is not None:\n qs = '?' 
+ request.qs.raw if request.qs.raw else ''\n redirect('/~' + request.path.raw[1:] + qs)\n raise Response(404)\n\n canonicalize(redirect, request.line.uri.path.raw, '/', team.slug, slug, qs)\n\n if team.is_closed and not user.ADMIN:\n raise Response(410)\n\n return team\n\n\ndef update_cta(website):\n nusers = website.db.one(\"\"\"\n SELECT nusers FROM paydays\n ORDER BY ts_end DESC LIMIT 1\n \"\"\", default=(0.0, 0))\n nreceiving_from = website.db.one(\"\"\"\n SELECT nreceiving_from\n FROM teams\n WHERE slug = 'Gratipay'\n \"\"\", default=0)\n website.support_current = cur = int(round(nreceiving_from / nusers * 100)) if nusers else 0\n if cur < 10: goal = 20\n elif cur < 15: goal = 30\n elif cur < 25: goal = 40\n elif cur < 35: goal = 50\n elif cur < 45: goal = 60\n elif cur < 55: goal = 70\n elif cur < 65: goal = 80\n elif cur > 70: goal = None\n website.support_goal = goal\n\n\ndef _execute(this, sql, params=[]):\n print(sql.strip(), params)\n super(SimpleCursorBase, this).execute(sql, params)\n\ndef log_cursor(f):\n \"Prints sql and params to stdout. Works globaly so watch for threaded use.\"\n def wrapper(*a, **kw):\n try:\n SimpleCursorBase.execute = _execute\n ret = f(*a, **kw)\n finally:\n del SimpleCursorBase.execute\n return ret\n return wrapper\n\n\ndef format_money(money):\n format = '%.2f' if money < 1000 else '%.0f'\n return format % money\n\n\ndef excerpt_intro(text, length=175, append=u'…'):\n if not text:\n return ''\n if len(text) > length:\n return text[:length] + append\n return text\n\n\ndef is_card_expiring(expiration_year, expiration_month):\n now = datetime.utcnow()\n expiring_date = datetime(expiration_year, expiration_month, 1)\n delta = expiring_date - now\n return delta < EXPIRING_DELTA\n\n\ndef set_cookie(cookies, key, value, expires=None, httponly=True, path=b'/'):\n cookies[key] = value\n cookie = cookies[key]\n if expires:\n if isinstance(expires, timedelta):\n expires += utcnow()\n if isinstance(expires, datetime):\n expires = to_rfc822(expires).encode('ascii')\n cookie[b'expires'] = expires\n if httponly:\n cookie[b'httponly'] = True\n if path:\n cookie[b'path'] = path\n if gratipay.use_secure_cookies:\n cookie[b'secure'] = True\n\n\ndef erase_cookie(cookies, key, **kw):\n set_cookie(cookies, key, '', BEGINNING_OF_EPOCH, **kw)\n\n\ndef filter_profile_nav(user, participant, pages):\n out = []\n for foo, bar, show_them, show_others in pages:\n if (user.participant == participant and show_them) \\\n or (user.participant != participant and show_others) \\\n or user.ADMIN:\n out.append((foo, bar, show_them, show_others))\n return out\n\n\ndef to_javascript(obj):\n \"\"\"For when you want to inject an object into a <script> tag.\n \"\"\"\n return json.dumps(obj).replace('</', '<\\\\/')\n\n\nclass LazyResponse(Response):\n\n def __init__(self, code, lazy_body, **kw):\n Response.__init__(self, code, '', **kw)\n self.lazy_body = lazy_body\n\n def render_body(self, state):\n f = self.lazy_body\n self.body = f(*resolve_dependencies(f, state).as_args)\n", "path": "gratipay/utils/__init__.py" } ]
[ { "content": "# encoding: utf8\n\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nfrom datetime import datetime, timedelta\n\nfrom aspen import Response, json\nfrom aspen.utils import to_rfc822, utcnow\nfrom dependency_injection import resolve_dependencies\nfrom postgres.cursors import SimpleCursorBase\n\nimport gratipay\n\n\nBEGINNING_OF_EPOCH = to_rfc822(datetime(1970, 1, 1)).encode('ascii')\n\n# Difference between current time and credit card expiring date when\n# card is considered as expiring\nEXPIRING_DELTA = timedelta(days = 30)\n\n\ndef dict_to_querystring(mapping):\n if not mapping:\n return u''\n\n arguments = []\n for key, values in mapping.iteritems():\n for val in values:\n arguments.append(u'='.join([key, val]))\n\n return u'?' + u'&'.join(arguments)\n\n\ndef use_tildes_for_participants(website, request):\n if request.path.raw.startswith('/~/'):\n to = '/~' + request.path.raw[3:]\n if request.qs.raw:\n to += '?' + request.qs.raw\n website.redirect(to)\n elif request.path.raw.startswith('/~'):\n request.path.__init__('/~/' + request.path.raw[2:])\n\n\ndef canonicalize(redirect, path, base, canonical, given, arguments=None):\n if given != canonical:\n assert canonical.lower() == given.lower() # sanity check\n remainder = path[len(base + given):]\n\n if arguments is not None:\n arguments = dict_to_querystring(arguments)\n\n newpath = base + canonical + remainder + arguments or ''\n redirect(newpath)\n\n\ndef get_participant(state, restrict=True, resolve_unclaimed=True):\n \"\"\"Given a Request, raise Response or return Participant.\n\n If restrict is True then we'll restrict access to owners and admins.\n\n \"\"\"\n redirect = state['website'].redirect\n request = state['request']\n user = state['user']\n slug = request.line.uri.path['username']\n qs = request.line.uri.querystring\n _ = state['_']\n\n if restrict:\n if user.ANON:\n raise Response(403, _(\"You need to log in to access this page.\"))\n\n from gratipay.models.participant import Participant # avoid circular import\n participant = Participant.from_username(slug)\n\n if participant is None:\n raise Response(404)\n\n canonicalize(redirect, request.line.uri.path.raw, '/~/', participant.username, slug, qs)\n\n if participant.is_closed:\n if user.ADMIN:\n return participant\n raise Response(410)\n\n if participant.claimed_time is None and resolve_unclaimed:\n to = participant.resolve_unclaimed()\n if to:\n # This is a stub account (someone on another platform who hasn't\n # actually registered with Gratipay yet)\n redirect(to)\n else:\n # This is an archived account (result of take_over)\n if user.ADMIN:\n return participant\n raise Response(404)\n\n if restrict:\n if participant != user.participant:\n if not user.ADMIN:\n raise Response(403, _(\"You are not authorized to access this page.\"))\n\n return participant\n\n\ndef get_team(state):\n \"\"\"Given a Request, raise Response or return Team.\n \"\"\"\n redirect = state['website'].redirect\n request = state['request']\n user = state['user']\n slug = request.line.uri.path['team']\n qs = request.line.uri.querystring\n\n from gratipay.models.team import Team # avoid circular import\n team = Team.from_slug(slug)\n\n if team is None:\n # Try to redirect to a Participant.\n from gratipay.models.participant import Participant # avoid circular import\n participant = Participant.from_username(slug)\n if participant is not None:\n qs = '?' 
+ request.qs.raw if request.qs.raw else ''\n redirect('/~' + request.path.raw[1:] + qs)\n raise Response(404)\n\n canonicalize(redirect, request.line.uri.path.raw, '/', team.slug, slug, qs)\n\n if team.is_closed and not user.ADMIN:\n raise Response(410)\n\n return team\n\n\ndef update_cta(website):\n nusers = website.db.one(\"\"\"\n SELECT nusers FROM paydays\n ORDER BY ts_end DESC LIMIT 1\n \"\"\", default=0)\n nreceiving_from = website.db.one(\"\"\"\n SELECT nreceiving_from\n FROM teams\n WHERE slug = 'Gratipay'\n \"\"\", default=0)\n website.support_current = cur = int(round(nreceiving_from / nusers * 100)) if nusers else 0\n if cur < 10: goal = 20\n elif cur < 15: goal = 30\n elif cur < 25: goal = 40\n elif cur < 35: goal = 50\n elif cur < 45: goal = 60\n elif cur < 55: goal = 70\n elif cur < 65: goal = 80\n elif cur > 70: goal = None\n website.support_goal = goal\n\n\ndef _execute(this, sql, params=[]):\n print(sql.strip(), params)\n super(SimpleCursorBase, this).execute(sql, params)\n\ndef log_cursor(f):\n \"Prints sql and params to stdout. Works globaly so watch for threaded use.\"\n def wrapper(*a, **kw):\n try:\n SimpleCursorBase.execute = _execute\n ret = f(*a, **kw)\n finally:\n del SimpleCursorBase.execute\n return ret\n return wrapper\n\n\ndef format_money(money):\n format = '%.2f' if money < 1000 else '%.0f'\n return format % money\n\n\ndef excerpt_intro(text, length=175, append=u'…'):\n if not text:\n return ''\n if len(text) > length:\n return text[:length] + append\n return text\n\n\ndef is_card_expiring(expiration_year, expiration_month):\n now = datetime.utcnow()\n expiring_date = datetime(expiration_year, expiration_month, 1)\n delta = expiring_date - now\n return delta < EXPIRING_DELTA\n\n\ndef set_cookie(cookies, key, value, expires=None, httponly=True, path=b'/'):\n cookies[key] = value\n cookie = cookies[key]\n if expires:\n if isinstance(expires, timedelta):\n expires += utcnow()\n if isinstance(expires, datetime):\n expires = to_rfc822(expires).encode('ascii')\n cookie[b'expires'] = expires\n if httponly:\n cookie[b'httponly'] = True\n if path:\n cookie[b'path'] = path\n if gratipay.use_secure_cookies:\n cookie[b'secure'] = True\n\n\ndef erase_cookie(cookies, key, **kw):\n set_cookie(cookies, key, '', BEGINNING_OF_EPOCH, **kw)\n\n\ndef filter_profile_nav(user, participant, pages):\n out = []\n for foo, bar, show_them, show_others in pages:\n if (user.participant == participant and show_them) \\\n or (user.participant != participant and show_others) \\\n or user.ADMIN:\n out.append((foo, bar, show_them, show_others))\n return out\n\n\ndef to_javascript(obj):\n \"\"\"For when you want to inject an object into a <script> tag.\n \"\"\"\n return json.dumps(obj).replace('</', '<\\\\/')\n\n\nclass LazyResponse(Response):\n\n def __init__(self, code, lazy_body, **kw):\n Response.__init__(self, code, '', **kw)\n self.lazy_body = lazy_body\n\n def render_body(self, state):\n f = self.lazy_body\n self.body = f(*resolve_dependencies(f, state).as_args)\n", "path": "gratipay/utils/__init__.py" } ]
diff --git a/gratipay/utils/__init__.py b/gratipay/utils/__init__.py index ed2a5adb80..8624a6b076 100644 --- a/gratipay/utils/__init__.py +++ b/gratipay/utils/__init__.py @@ -136,7 +136,7 @@ def update_cta(website): nusers = website.db.one(""" SELECT nusers FROM paydays ORDER BY ts_end DESC LIMIT 1 - """, default=(0.0, 0)) + """, default=0) nreceiving_from = website.db.one(""" SELECT nreceiving_from FROM teams
jazzband__django-oauth-toolkit-1090
Default value for CLEAR_EXPIRED_TOKENS_BATCH_INTERVAL @merito Sorry this is after the fact but wouldn't a default value of 0 be best, especially since the sleep is always executed even if the batch is tiny. https://github.com/merito/django-oauth-toolkit/blob/725c3c9d8927379c9808abd1badb4fcd9ff1cbaa/oauth2_provider/models.py#L636 _Originally posted by @n2ygk in https://github.com/jazzband/django-oauth-toolkit/pull/969#discussion_r782459085_
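With the default at `0`, deployments that still want to throttle `cleartokens` can opt back into a pause explicitly; both settings remain configurable through the usual `OAUTH2_PROVIDER` dict. A sketch of such an override (the values are illustrative):

```python
# settings.py -- re-introduce a short sleep between delete batches if the
# database needs breathing room while clearing large numbers of expired tokens.
OAUTH2_PROVIDER = {
    "CLEAR_EXPIRED_TOKENS_BATCH_SIZE": 10000,
    "CLEAR_EXPIRED_TOKENS_BATCH_INTERVAL": 0.1,  # seconds between batches
}
```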
[ { "content": "\"\"\"\nThis module is largely inspired by django-rest-framework settings.\n\nSettings for the OAuth2 Provider are all namespaced in the OAUTH2_PROVIDER setting.\nFor example your project's `settings.py` file might look like this:\n\nOAUTH2_PROVIDER = {\n \"CLIENT_ID_GENERATOR_CLASS\":\n \"oauth2_provider.generators.ClientIdGenerator\",\n \"CLIENT_SECRET_GENERATOR_CLASS\":\n \"oauth2_provider.generators.ClientSecretGenerator\",\n}\n\nThis module provides the `oauth2_settings` object, that is used to access\nOAuth2 Provider settings, checking for user settings first, then falling\nback to the defaults.\n\"\"\"\n\nfrom django.conf import settings\nfrom django.core.exceptions import ImproperlyConfigured\nfrom django.http import HttpRequest\nfrom django.test.signals import setting_changed\nfrom django.urls import reverse\nfrom django.utils.module_loading import import_string\nfrom oauthlib.common import Request\n\n\nUSER_SETTINGS = getattr(settings, \"OAUTH2_PROVIDER\", None)\n\nAPPLICATION_MODEL = getattr(settings, \"OAUTH2_PROVIDER_APPLICATION_MODEL\", \"oauth2_provider.Application\")\nACCESS_TOKEN_MODEL = getattr(settings, \"OAUTH2_PROVIDER_ACCESS_TOKEN_MODEL\", \"oauth2_provider.AccessToken\")\nID_TOKEN_MODEL = getattr(settings, \"OAUTH2_PROVIDER_ID_TOKEN_MODEL\", \"oauth2_provider.IDToken\")\nGRANT_MODEL = getattr(settings, \"OAUTH2_PROVIDER_GRANT_MODEL\", \"oauth2_provider.Grant\")\nREFRESH_TOKEN_MODEL = getattr(settings, \"OAUTH2_PROVIDER_REFRESH_TOKEN_MODEL\", \"oauth2_provider.RefreshToken\")\n\nDEFAULTS = {\n \"CLIENT_ID_GENERATOR_CLASS\": \"oauth2_provider.generators.ClientIdGenerator\",\n \"CLIENT_SECRET_GENERATOR_CLASS\": \"oauth2_provider.generators.ClientSecretGenerator\",\n \"CLIENT_SECRET_GENERATOR_LENGTH\": 128,\n \"ACCESS_TOKEN_GENERATOR\": None,\n \"REFRESH_TOKEN_GENERATOR\": None,\n \"EXTRA_SERVER_KWARGS\": {},\n \"OAUTH2_SERVER_CLASS\": \"oauthlib.oauth2.Server\",\n \"OIDC_SERVER_CLASS\": \"oauthlib.openid.Server\",\n \"OAUTH2_VALIDATOR_CLASS\": \"oauth2_provider.oauth2_validators.OAuth2Validator\",\n \"OAUTH2_BACKEND_CLASS\": \"oauth2_provider.oauth2_backends.OAuthLibCore\",\n \"SCOPES\": {\"read\": \"Reading scope\", \"write\": \"Writing scope\"},\n \"DEFAULT_SCOPES\": [\"__all__\"],\n \"SCOPES_BACKEND_CLASS\": \"oauth2_provider.scopes.SettingsScopes\",\n \"READ_SCOPE\": \"read\",\n \"WRITE_SCOPE\": \"write\",\n \"AUTHORIZATION_CODE_EXPIRE_SECONDS\": 60,\n \"ACCESS_TOKEN_EXPIRE_SECONDS\": 36000,\n \"ID_TOKEN_EXPIRE_SECONDS\": 36000,\n \"REFRESH_TOKEN_EXPIRE_SECONDS\": None,\n \"REFRESH_TOKEN_GRACE_PERIOD_SECONDS\": 0,\n \"ROTATE_REFRESH_TOKEN\": True,\n \"ERROR_RESPONSE_WITH_SCOPES\": False,\n \"APPLICATION_MODEL\": APPLICATION_MODEL,\n \"ACCESS_TOKEN_MODEL\": ACCESS_TOKEN_MODEL,\n \"ID_TOKEN_MODEL\": ID_TOKEN_MODEL,\n \"GRANT_MODEL\": GRANT_MODEL,\n \"REFRESH_TOKEN_MODEL\": REFRESH_TOKEN_MODEL,\n \"APPLICATION_ADMIN_CLASS\": \"oauth2_provider.admin.ApplicationAdmin\",\n \"ACCESS_TOKEN_ADMIN_CLASS\": \"oauth2_provider.admin.AccessTokenAdmin\",\n \"GRANT_ADMIN_CLASS\": \"oauth2_provider.admin.GrantAdmin\",\n \"ID_TOKEN_ADMIN_CLASS\": \"oauth2_provider.admin.IDTokenAdmin\",\n \"REFRESH_TOKEN_ADMIN_CLASS\": \"oauth2_provider.admin.RefreshTokenAdmin\",\n \"REQUEST_APPROVAL_PROMPT\": \"force\",\n \"ALLOWED_REDIRECT_URI_SCHEMES\": [\"http\", \"https\"],\n \"OIDC_ENABLED\": False,\n \"OIDC_ISS_ENDPOINT\": \"\",\n \"OIDC_USERINFO_ENDPOINT\": \"\",\n \"OIDC_RSA_PRIVATE_KEY\": \"\",\n \"OIDC_RSA_PRIVATE_KEYS_INACTIVE\": [],\n \"OIDC_JWKS_MAX_AGE_SECONDS\": 3600,\n 
\"OIDC_RESPONSE_TYPES_SUPPORTED\": [\n \"code\",\n \"token\",\n \"id_token\",\n \"id_token token\",\n \"code token\",\n \"code id_token\",\n \"code id_token token\",\n ],\n \"OIDC_SUBJECT_TYPES_SUPPORTED\": [\"public\"],\n \"OIDC_TOKEN_ENDPOINT_AUTH_METHODS_SUPPORTED\": [\n \"client_secret_post\",\n \"client_secret_basic\",\n ],\n # Special settings that will be evaluated at runtime\n \"_SCOPES\": [],\n \"_DEFAULT_SCOPES\": [],\n # Resource Server with Token Introspection\n \"RESOURCE_SERVER_INTROSPECTION_URL\": None,\n \"RESOURCE_SERVER_AUTH_TOKEN\": None,\n \"RESOURCE_SERVER_INTROSPECTION_CREDENTIALS\": None,\n \"RESOURCE_SERVER_TOKEN_CACHING_SECONDS\": 36000,\n # Whether or not PKCE is required\n \"PKCE_REQUIRED\": False,\n # Whether to re-create OAuthlibCore on every request.\n # Should only be required in testing.\n \"ALWAYS_RELOAD_OAUTHLIB_CORE\": False,\n \"CLEAR_EXPIRED_TOKENS_BATCH_SIZE\": 10000,\n \"CLEAR_EXPIRED_TOKENS_BATCH_INTERVAL\": 0.1,\n}\n\n# List of settings that cannot be empty\nMANDATORY = (\n \"CLIENT_ID_GENERATOR_CLASS\",\n \"CLIENT_SECRET_GENERATOR_CLASS\",\n \"OAUTH2_SERVER_CLASS\",\n \"OAUTH2_VALIDATOR_CLASS\",\n \"OAUTH2_BACKEND_CLASS\",\n \"SCOPES\",\n \"ALLOWED_REDIRECT_URI_SCHEMES\",\n \"OIDC_RESPONSE_TYPES_SUPPORTED\",\n \"OIDC_SUBJECT_TYPES_SUPPORTED\",\n \"OIDC_TOKEN_ENDPOINT_AUTH_METHODS_SUPPORTED\",\n)\n\n# List of settings that may be in string import notation.\nIMPORT_STRINGS = (\n \"CLIENT_ID_GENERATOR_CLASS\",\n \"CLIENT_SECRET_GENERATOR_CLASS\",\n \"ACCESS_TOKEN_GENERATOR\",\n \"REFRESH_TOKEN_GENERATOR\",\n \"OAUTH2_SERVER_CLASS\",\n \"OAUTH2_VALIDATOR_CLASS\",\n \"OAUTH2_BACKEND_CLASS\",\n \"SCOPES_BACKEND_CLASS\",\n \"APPLICATION_ADMIN_CLASS\",\n \"ACCESS_TOKEN_ADMIN_CLASS\",\n \"GRANT_ADMIN_CLASS\",\n \"ID_TOKEN_ADMIN_CLASS\",\n \"REFRESH_TOKEN_ADMIN_CLASS\",\n)\n\n\ndef perform_import(val, setting_name):\n \"\"\"\n If the given setting is a string import notation,\n then perform the necessary import or imports.\n \"\"\"\n if val is None:\n return None\n elif isinstance(val, str):\n return import_from_string(val, setting_name)\n elif isinstance(val, (list, tuple)):\n return [import_from_string(item, setting_name) for item in val]\n return val\n\n\ndef import_from_string(val, setting_name):\n \"\"\"\n Attempt to import a class from a string representation.\n \"\"\"\n try:\n return import_string(val)\n except ImportError as e:\n msg = \"Could not import %r for setting %r. 
%s: %s.\" % (val, setting_name, e.__class__.__name__, e)\n raise ImportError(msg)\n\n\nclass _PhonyHttpRequest(HttpRequest):\n _scheme = \"http\"\n\n def _get_scheme(self):\n return self._scheme\n\n\nclass OAuth2ProviderSettings:\n \"\"\"\n A settings object, that allows OAuth2 Provider settings to be accessed as properties.\n\n Any setting with string import paths will be automatically resolved\n and return the class, rather than the string literal.\n \"\"\"\n\n def __init__(self, user_settings=None, defaults=None, import_strings=None, mandatory=None):\n self._user_settings = user_settings or {}\n self.defaults = defaults or DEFAULTS\n self.import_strings = import_strings or IMPORT_STRINGS\n self.mandatory = mandatory or ()\n self._cached_attrs = set()\n\n @property\n def user_settings(self):\n if not hasattr(self, \"_user_settings\"):\n self._user_settings = getattr(settings, \"OAUTH2_PROVIDER\", {})\n return self._user_settings\n\n def __getattr__(self, attr):\n if attr not in self.defaults:\n raise AttributeError(\"Invalid OAuth2Provider setting: %s\" % attr)\n try:\n # Check if present in user settings\n val = self.user_settings[attr]\n except KeyError:\n # Fall back to defaults\n # Special case OAUTH2_SERVER_CLASS - if not specified, and OIDC is\n # enabled, use the OIDC_SERVER_CLASS setting instead\n if attr == \"OAUTH2_SERVER_CLASS\" and self.OIDC_ENABLED:\n val = self.defaults[\"OIDC_SERVER_CLASS\"]\n else:\n val = self.defaults[attr]\n\n # Coerce import strings into classes\n if val and attr in self.import_strings:\n val = perform_import(val, attr)\n\n # Overriding special settings\n if attr == \"_SCOPES\":\n val = list(self.SCOPES.keys())\n if attr == \"_DEFAULT_SCOPES\":\n if \"__all__\" in self.DEFAULT_SCOPES:\n # If DEFAULT_SCOPES is set to [\"__all__\"] the whole set of scopes is returned\n val = list(self._SCOPES)\n else:\n # Otherwise we return a subset (that can be void) of SCOPES\n val = []\n for scope in self.DEFAULT_SCOPES:\n if scope in self._SCOPES:\n val.append(scope)\n else:\n raise ImproperlyConfigured(\"Defined DEFAULT_SCOPES not present in SCOPES\")\n\n self.validate_setting(attr, val)\n\n # Cache the result\n self._cached_attrs.add(attr)\n setattr(self, attr, val)\n return val\n\n def validate_setting(self, attr, val):\n if not val and attr in self.mandatory:\n raise AttributeError(\"OAuth2Provider setting: %s is mandatory\" % attr)\n\n @property\n def server_kwargs(self):\n \"\"\"\n This is used to communicate settings to oauth server.\n\n Takes relevant settings and format them accordingly.\n There's also EXTRA_SERVER_KWARGS that can override every value\n and is more flexible regarding keys and acceptable values\n but doesn't have import string magic or any additional\n processing, callables have to be assigned directly.\n For the likes of signed_token_generator it means something like\n\n {\"token_generator\": signed_token_generator(privkey, **kwargs)}\n \"\"\"\n kwargs = {\n key: getattr(self, value)\n for key, value in [\n (\"token_expires_in\", \"ACCESS_TOKEN_EXPIRE_SECONDS\"),\n (\"refresh_token_expires_in\", \"REFRESH_TOKEN_EXPIRE_SECONDS\"),\n (\"token_generator\", \"ACCESS_TOKEN_GENERATOR\"),\n (\"refresh_token_generator\", \"REFRESH_TOKEN_GENERATOR\"),\n ]\n }\n kwargs.update(self.EXTRA_SERVER_KWARGS)\n return kwargs\n\n def reload(self):\n for attr in self._cached_attrs:\n delattr(self, attr)\n self._cached_attrs.clear()\n if hasattr(self, \"_user_settings\"):\n delattr(self, \"_user_settings\")\n\n def oidc_issuer(self, request):\n \"\"\"\n Helper 
function to get the OIDC issuer URL, either from the settings\n or constructing it from the passed request.\n\n If only an oauthlib request is available, a dummy django request is\n built from that and used to generate the URL.\n \"\"\"\n if self.OIDC_ISS_ENDPOINT:\n return self.OIDC_ISS_ENDPOINT\n if isinstance(request, HttpRequest):\n django_request = request\n elif isinstance(request, Request):\n django_request = _PhonyHttpRequest()\n django_request.META = request.headers\n if request.headers.get(\"X_DJANGO_OAUTH_TOOLKIT_SECURE\", False):\n django_request._scheme = \"https\"\n else:\n raise TypeError(\"request must be a django or oauthlib request: got %r\" % request)\n abs_url = django_request.build_absolute_uri(reverse(\"oauth2_provider:oidc-connect-discovery-info\"))\n return abs_url[: -len(\"/.well-known/openid-configuration/\")]\n\n\noauth2_settings = OAuth2ProviderSettings(USER_SETTINGS, DEFAULTS, IMPORT_STRINGS, MANDATORY)\n\n\ndef reload_oauth2_settings(*args, **kwargs):\n setting = kwargs[\"setting\"]\n if setting == \"OAUTH2_PROVIDER\":\n oauth2_settings.reload()\n\n\nsetting_changed.connect(reload_oauth2_settings)\n", "path": "oauth2_provider/settings.py" } ]
[ { "content": "\"\"\"\nThis module is largely inspired by django-rest-framework settings.\n\nSettings for the OAuth2 Provider are all namespaced in the OAUTH2_PROVIDER setting.\nFor example your project's `settings.py` file might look like this:\n\nOAUTH2_PROVIDER = {\n \"CLIENT_ID_GENERATOR_CLASS\":\n \"oauth2_provider.generators.ClientIdGenerator\",\n \"CLIENT_SECRET_GENERATOR_CLASS\":\n \"oauth2_provider.generators.ClientSecretGenerator\",\n}\n\nThis module provides the `oauth2_settings` object, that is used to access\nOAuth2 Provider settings, checking for user settings first, then falling\nback to the defaults.\n\"\"\"\n\nfrom django.conf import settings\nfrom django.core.exceptions import ImproperlyConfigured\nfrom django.http import HttpRequest\nfrom django.test.signals import setting_changed\nfrom django.urls import reverse\nfrom django.utils.module_loading import import_string\nfrom oauthlib.common import Request\n\n\nUSER_SETTINGS = getattr(settings, \"OAUTH2_PROVIDER\", None)\n\nAPPLICATION_MODEL = getattr(settings, \"OAUTH2_PROVIDER_APPLICATION_MODEL\", \"oauth2_provider.Application\")\nACCESS_TOKEN_MODEL = getattr(settings, \"OAUTH2_PROVIDER_ACCESS_TOKEN_MODEL\", \"oauth2_provider.AccessToken\")\nID_TOKEN_MODEL = getattr(settings, \"OAUTH2_PROVIDER_ID_TOKEN_MODEL\", \"oauth2_provider.IDToken\")\nGRANT_MODEL = getattr(settings, \"OAUTH2_PROVIDER_GRANT_MODEL\", \"oauth2_provider.Grant\")\nREFRESH_TOKEN_MODEL = getattr(settings, \"OAUTH2_PROVIDER_REFRESH_TOKEN_MODEL\", \"oauth2_provider.RefreshToken\")\n\nDEFAULTS = {\n \"CLIENT_ID_GENERATOR_CLASS\": \"oauth2_provider.generators.ClientIdGenerator\",\n \"CLIENT_SECRET_GENERATOR_CLASS\": \"oauth2_provider.generators.ClientSecretGenerator\",\n \"CLIENT_SECRET_GENERATOR_LENGTH\": 128,\n \"ACCESS_TOKEN_GENERATOR\": None,\n \"REFRESH_TOKEN_GENERATOR\": None,\n \"EXTRA_SERVER_KWARGS\": {},\n \"OAUTH2_SERVER_CLASS\": \"oauthlib.oauth2.Server\",\n \"OIDC_SERVER_CLASS\": \"oauthlib.openid.Server\",\n \"OAUTH2_VALIDATOR_CLASS\": \"oauth2_provider.oauth2_validators.OAuth2Validator\",\n \"OAUTH2_BACKEND_CLASS\": \"oauth2_provider.oauth2_backends.OAuthLibCore\",\n \"SCOPES\": {\"read\": \"Reading scope\", \"write\": \"Writing scope\"},\n \"DEFAULT_SCOPES\": [\"__all__\"],\n \"SCOPES_BACKEND_CLASS\": \"oauth2_provider.scopes.SettingsScopes\",\n \"READ_SCOPE\": \"read\",\n \"WRITE_SCOPE\": \"write\",\n \"AUTHORIZATION_CODE_EXPIRE_SECONDS\": 60,\n \"ACCESS_TOKEN_EXPIRE_SECONDS\": 36000,\n \"ID_TOKEN_EXPIRE_SECONDS\": 36000,\n \"REFRESH_TOKEN_EXPIRE_SECONDS\": None,\n \"REFRESH_TOKEN_GRACE_PERIOD_SECONDS\": 0,\n \"ROTATE_REFRESH_TOKEN\": True,\n \"ERROR_RESPONSE_WITH_SCOPES\": False,\n \"APPLICATION_MODEL\": APPLICATION_MODEL,\n \"ACCESS_TOKEN_MODEL\": ACCESS_TOKEN_MODEL,\n \"ID_TOKEN_MODEL\": ID_TOKEN_MODEL,\n \"GRANT_MODEL\": GRANT_MODEL,\n \"REFRESH_TOKEN_MODEL\": REFRESH_TOKEN_MODEL,\n \"APPLICATION_ADMIN_CLASS\": \"oauth2_provider.admin.ApplicationAdmin\",\n \"ACCESS_TOKEN_ADMIN_CLASS\": \"oauth2_provider.admin.AccessTokenAdmin\",\n \"GRANT_ADMIN_CLASS\": \"oauth2_provider.admin.GrantAdmin\",\n \"ID_TOKEN_ADMIN_CLASS\": \"oauth2_provider.admin.IDTokenAdmin\",\n \"REFRESH_TOKEN_ADMIN_CLASS\": \"oauth2_provider.admin.RefreshTokenAdmin\",\n \"REQUEST_APPROVAL_PROMPT\": \"force\",\n \"ALLOWED_REDIRECT_URI_SCHEMES\": [\"http\", \"https\"],\n \"OIDC_ENABLED\": False,\n \"OIDC_ISS_ENDPOINT\": \"\",\n \"OIDC_USERINFO_ENDPOINT\": \"\",\n \"OIDC_RSA_PRIVATE_KEY\": \"\",\n \"OIDC_RSA_PRIVATE_KEYS_INACTIVE\": [],\n \"OIDC_JWKS_MAX_AGE_SECONDS\": 3600,\n 
\"OIDC_RESPONSE_TYPES_SUPPORTED\": [\n \"code\",\n \"token\",\n \"id_token\",\n \"id_token token\",\n \"code token\",\n \"code id_token\",\n \"code id_token token\",\n ],\n \"OIDC_SUBJECT_TYPES_SUPPORTED\": [\"public\"],\n \"OIDC_TOKEN_ENDPOINT_AUTH_METHODS_SUPPORTED\": [\n \"client_secret_post\",\n \"client_secret_basic\",\n ],\n # Special settings that will be evaluated at runtime\n \"_SCOPES\": [],\n \"_DEFAULT_SCOPES\": [],\n # Resource Server with Token Introspection\n \"RESOURCE_SERVER_INTROSPECTION_URL\": None,\n \"RESOURCE_SERVER_AUTH_TOKEN\": None,\n \"RESOURCE_SERVER_INTROSPECTION_CREDENTIALS\": None,\n \"RESOURCE_SERVER_TOKEN_CACHING_SECONDS\": 36000,\n # Whether or not PKCE is required\n \"PKCE_REQUIRED\": False,\n # Whether to re-create OAuthlibCore on every request.\n # Should only be required in testing.\n \"ALWAYS_RELOAD_OAUTHLIB_CORE\": False,\n \"CLEAR_EXPIRED_TOKENS_BATCH_SIZE\": 10000,\n \"CLEAR_EXPIRED_TOKENS_BATCH_INTERVAL\": 0,\n}\n\n# List of settings that cannot be empty\nMANDATORY = (\n \"CLIENT_ID_GENERATOR_CLASS\",\n \"CLIENT_SECRET_GENERATOR_CLASS\",\n \"OAUTH2_SERVER_CLASS\",\n \"OAUTH2_VALIDATOR_CLASS\",\n \"OAUTH2_BACKEND_CLASS\",\n \"SCOPES\",\n \"ALLOWED_REDIRECT_URI_SCHEMES\",\n \"OIDC_RESPONSE_TYPES_SUPPORTED\",\n \"OIDC_SUBJECT_TYPES_SUPPORTED\",\n \"OIDC_TOKEN_ENDPOINT_AUTH_METHODS_SUPPORTED\",\n)\n\n# List of settings that may be in string import notation.\nIMPORT_STRINGS = (\n \"CLIENT_ID_GENERATOR_CLASS\",\n \"CLIENT_SECRET_GENERATOR_CLASS\",\n \"ACCESS_TOKEN_GENERATOR\",\n \"REFRESH_TOKEN_GENERATOR\",\n \"OAUTH2_SERVER_CLASS\",\n \"OAUTH2_VALIDATOR_CLASS\",\n \"OAUTH2_BACKEND_CLASS\",\n \"SCOPES_BACKEND_CLASS\",\n \"APPLICATION_ADMIN_CLASS\",\n \"ACCESS_TOKEN_ADMIN_CLASS\",\n \"GRANT_ADMIN_CLASS\",\n \"ID_TOKEN_ADMIN_CLASS\",\n \"REFRESH_TOKEN_ADMIN_CLASS\",\n)\n\n\ndef perform_import(val, setting_name):\n \"\"\"\n If the given setting is a string import notation,\n then perform the necessary import or imports.\n \"\"\"\n if val is None:\n return None\n elif isinstance(val, str):\n return import_from_string(val, setting_name)\n elif isinstance(val, (list, tuple)):\n return [import_from_string(item, setting_name) for item in val]\n return val\n\n\ndef import_from_string(val, setting_name):\n \"\"\"\n Attempt to import a class from a string representation.\n \"\"\"\n try:\n return import_string(val)\n except ImportError as e:\n msg = \"Could not import %r for setting %r. 
%s: %s.\" % (val, setting_name, e.__class__.__name__, e)\n raise ImportError(msg)\n\n\nclass _PhonyHttpRequest(HttpRequest):\n _scheme = \"http\"\n\n def _get_scheme(self):\n return self._scheme\n\n\nclass OAuth2ProviderSettings:\n \"\"\"\n A settings object, that allows OAuth2 Provider settings to be accessed as properties.\n\n Any setting with string import paths will be automatically resolved\n and return the class, rather than the string literal.\n \"\"\"\n\n def __init__(self, user_settings=None, defaults=None, import_strings=None, mandatory=None):\n self._user_settings = user_settings or {}\n self.defaults = defaults or DEFAULTS\n self.import_strings = import_strings or IMPORT_STRINGS\n self.mandatory = mandatory or ()\n self._cached_attrs = set()\n\n @property\n def user_settings(self):\n if not hasattr(self, \"_user_settings\"):\n self._user_settings = getattr(settings, \"OAUTH2_PROVIDER\", {})\n return self._user_settings\n\n def __getattr__(self, attr):\n if attr not in self.defaults:\n raise AttributeError(\"Invalid OAuth2Provider setting: %s\" % attr)\n try:\n # Check if present in user settings\n val = self.user_settings[attr]\n except KeyError:\n # Fall back to defaults\n # Special case OAUTH2_SERVER_CLASS - if not specified, and OIDC is\n # enabled, use the OIDC_SERVER_CLASS setting instead\n if attr == \"OAUTH2_SERVER_CLASS\" and self.OIDC_ENABLED:\n val = self.defaults[\"OIDC_SERVER_CLASS\"]\n else:\n val = self.defaults[attr]\n\n # Coerce import strings into classes\n if val and attr in self.import_strings:\n val = perform_import(val, attr)\n\n # Overriding special settings\n if attr == \"_SCOPES\":\n val = list(self.SCOPES.keys())\n if attr == \"_DEFAULT_SCOPES\":\n if \"__all__\" in self.DEFAULT_SCOPES:\n # If DEFAULT_SCOPES is set to [\"__all__\"] the whole set of scopes is returned\n val = list(self._SCOPES)\n else:\n # Otherwise we return a subset (that can be void) of SCOPES\n val = []\n for scope in self.DEFAULT_SCOPES:\n if scope in self._SCOPES:\n val.append(scope)\n else:\n raise ImproperlyConfigured(\"Defined DEFAULT_SCOPES not present in SCOPES\")\n\n self.validate_setting(attr, val)\n\n # Cache the result\n self._cached_attrs.add(attr)\n setattr(self, attr, val)\n return val\n\n def validate_setting(self, attr, val):\n if not val and attr in self.mandatory:\n raise AttributeError(\"OAuth2Provider setting: %s is mandatory\" % attr)\n\n @property\n def server_kwargs(self):\n \"\"\"\n This is used to communicate settings to oauth server.\n\n Takes relevant settings and format them accordingly.\n There's also EXTRA_SERVER_KWARGS that can override every value\n and is more flexible regarding keys and acceptable values\n but doesn't have import string magic or any additional\n processing, callables have to be assigned directly.\n For the likes of signed_token_generator it means something like\n\n {\"token_generator\": signed_token_generator(privkey, **kwargs)}\n \"\"\"\n kwargs = {\n key: getattr(self, value)\n for key, value in [\n (\"token_expires_in\", \"ACCESS_TOKEN_EXPIRE_SECONDS\"),\n (\"refresh_token_expires_in\", \"REFRESH_TOKEN_EXPIRE_SECONDS\"),\n (\"token_generator\", \"ACCESS_TOKEN_GENERATOR\"),\n (\"refresh_token_generator\", \"REFRESH_TOKEN_GENERATOR\"),\n ]\n }\n kwargs.update(self.EXTRA_SERVER_KWARGS)\n return kwargs\n\n def reload(self):\n for attr in self._cached_attrs:\n delattr(self, attr)\n self._cached_attrs.clear()\n if hasattr(self, \"_user_settings\"):\n delattr(self, \"_user_settings\")\n\n def oidc_issuer(self, request):\n \"\"\"\n Helper 
function to get the OIDC issuer URL, either from the settings\n or constructing it from the passed request.\n\n If only an oauthlib request is available, a dummy django request is\n built from that and used to generate the URL.\n \"\"\"\n if self.OIDC_ISS_ENDPOINT:\n return self.OIDC_ISS_ENDPOINT\n if isinstance(request, HttpRequest):\n django_request = request\n elif isinstance(request, Request):\n django_request = _PhonyHttpRequest()\n django_request.META = request.headers\n if request.headers.get(\"X_DJANGO_OAUTH_TOOLKIT_SECURE\", False):\n django_request._scheme = \"https\"\n else:\n raise TypeError(\"request must be a django or oauthlib request: got %r\" % request)\n abs_url = django_request.build_absolute_uri(reverse(\"oauth2_provider:oidc-connect-discovery-info\"))\n return abs_url[: -len(\"/.well-known/openid-configuration/\")]\n\n\noauth2_settings = OAuth2ProviderSettings(USER_SETTINGS, DEFAULTS, IMPORT_STRINGS, MANDATORY)\n\n\ndef reload_oauth2_settings(*args, **kwargs):\n setting = kwargs[\"setting\"]\n if setting == \"OAUTH2_PROVIDER\":\n oauth2_settings.reload()\n\n\nsetting_changed.connect(reload_oauth2_settings)\n", "path": "oauth2_provider/settings.py" } ]
diff --git a/docs/settings.rst b/docs/settings.rst index 49460bc0e..01baaaf4b 100644 --- a/docs/settings.rst +++ b/docs/settings.rst @@ -345,10 +345,13 @@ The size of delete batches used by ``cleartokens`` management command. CLEAR_EXPIRED_TOKENS_BATCH_INTERVAL ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -Default: ``0.1`` +Default: ``0`` Time of sleep in seconds used by ``cleartokens`` management command between batch deletions. +Set this to a non-zero value (e.g. `0.1`) to add a pause between batch sizes to reduce system +load when clearing large batches of expired tokens. + Settings imported from Django project -------------------------- diff --git a/oauth2_provider/settings.py b/oauth2_provider/settings.py index 22e067716..3b7dea3f8 100644 --- a/oauth2_provider/settings.py +++ b/oauth2_provider/settings.py @@ -102,7 +102,7 @@ # Should only be required in testing. "ALWAYS_RELOAD_OAUTHLIB_CORE": False, "CLEAR_EXPIRED_TOKENS_BATCH_SIZE": 10000, - "CLEAR_EXPIRED_TOKENS_BATCH_INTERVAL": 0.1, + "CLEAR_EXPIRED_TOKENS_BATCH_INTERVAL": 0, } # List of settings that cannot be empty
googleapis__google-api-python-client-1629
Python 3.10 compatibility issue #### Environment details - OS type and version: Windows 10 - Python version: `python --version` 3.10.1 - pip version: `pip --version` 21.2.4 - `google-api-python-client` version: `pip show google-api-python-client` - 2.33.0 uritemplate package 3.0.0 is not compatible with python 3.10. Need to update the requirements. Partial Stack Trace service = build('gmail', 'v1', credentials=creds) File "C:\JA\Envs\GIC\lib\site-packages\googleapiclient\_helpers.py", line 130, in positional_wrapper return wrapped(*args, **kwargs) File "C:\JA\Envs\GIC\lib\site-packages\googleapiclient\discovery.py", line 219, in build requested_url = uritemplate.expand(discovery_url, params) File "C:\JA\Envs\GIC\lib\site-packages\uritemplate\api.py", line 33, in expand return URITemplate(uri).expand(var_dict, **kwargs) File "C:\JA\Envs\GIC\lib\site-packages\uritemplate\template.py", line 132, in expand return self._expand(_merge(var_dict, kwargs), False) File "C:\JA\Envs\GIC\lib\site-packages\uritemplate\template.py", line 97, in _expand expanded.update(v.expand(expansion)) File "C:\JA\Envs\GIC\lib\site-packages\uritemplate\variable.py", line 338, in expand expanded = expansion(name, value, opts['explode'], opts['prefix']) File "C:\JA\Envs\GIC\lib\site-packages\uritemplate\variable.py", line 278, in _string_expansion if dict_test(value) or tuples: File "C:\JA\Envs\GIC\lib\site-packages\uritemplate\variable.py", line 363, in dict_test return isinstance(value, (dict, collections.MutableMapping)) AttributeError: module 'collections' has no attribute 'MutableMapping'
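The failure comes from uritemplate 3.0.0 still touching `collections.MutableMapping`, an alias that Python 3.10 removed; later uritemplate releases import it from `collections.abc` instead. The accompanying diff therefore raises the floor in `setup.py`:

```python
# setup.py -- minimum uritemplate version bumped so Python 3.10 environments
# pull a release that no longer uses the removed collections.MutableMapping alias.
install_requires = [
    # ... other pins unchanged ...
    "uritemplate>=3.0.1,<5",
]
```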
[ { "content": "# Copyright 2014 Google Inc. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Setup script for Google API Python client.\n\nAlso installs included versions of third party libraries, if those libraries\nare not already installed.\n\"\"\"\nfrom __future__ import print_function\n\nimport sys\n\nif sys.version_info < (3, 6):\n print(\"google-api-python-client requires python3 version >= 3.6.\", file=sys.stderr)\n sys.exit(1)\n\nimport io\nimport os\nfrom setuptools import setup\n\npackages = [\"apiclient\", \"googleapiclient\", \"googleapiclient/discovery_cache\"]\n\ninstall_requires = [\n \"httplib2>=0.15.0,<1dev\",\n # NOTE: Maintainers, please do not require google-auth>=2.x.x\n # Until this issue is closed\n # https://github.com/googleapis/google-cloud-python/issues/10566\n \"google-auth>=1.16.0,<3.0.0dev\",\n \"google-auth-httplib2>=0.1.0\",\n # NOTE: Maintainers, please do not require google-api-core>=2.x.x\n # Until this issue is closed\n # https://github.com/googleapis/google-cloud-python/issues/10566\n \"google-api-core>=1.21.0,<3.0.0dev\",\n \"uritemplate>=3.0.0,<5\",\n]\n\npackage_root = os.path.abspath(os.path.dirname(__file__))\n\nreadme_filename = os.path.join(package_root, \"README.md\")\nwith io.open(readme_filename, encoding=\"utf-8\") as readme_file:\n readme = readme_file.read()\n\npackage_root = os.path.abspath(os.path.dirname(__file__))\n\nversion = {}\nwith open(os.path.join(package_root, \"googleapiclient/version.py\")) as fp:\n exec(fp.read(), version)\nversion = version[\"__version__\"]\n\nsetup(\n name=\"google-api-python-client\",\n version=version,\n description=\"Google API Client Library for Python\",\n long_description=readme,\n long_description_content_type='text/markdown',\n author=\"Google LLC\",\n author_email=\"[email protected]\",\n url=\"https://github.com/googleapis/google-api-python-client/\",\n install_requires=install_requires,\n python_requires=\">=3.6\",\n packages=packages,\n package_data={\"googleapiclient\": [\"discovery_cache/documents/*.json\"]},\n license=\"Apache 2.0\",\n keywords=\"google api client\",\n classifiers=[\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: 3.10\",\n \"Development Status :: 5 - Production/Stable\",\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Operating System :: OS Independent\",\n \"Topic :: Internet :: WWW/HTTP\",\n ],\n)\n", "path": "setup.py" } ]
[ { "content": "# Copyright 2014 Google Inc. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Setup script for Google API Python client.\n\nAlso installs included versions of third party libraries, if those libraries\nare not already installed.\n\"\"\"\nfrom __future__ import print_function\n\nimport sys\n\nif sys.version_info < (3, 6):\n print(\"google-api-python-client requires python3 version >= 3.6.\", file=sys.stderr)\n sys.exit(1)\n\nimport io\nimport os\nfrom setuptools import setup\n\npackages = [\"apiclient\", \"googleapiclient\", \"googleapiclient/discovery_cache\"]\n\ninstall_requires = [\n \"httplib2>=0.15.0,<1dev\",\n # NOTE: Maintainers, please do not require google-auth>=2.x.x\n # Until this issue is closed\n # https://github.com/googleapis/google-cloud-python/issues/10566\n \"google-auth>=1.16.0,<3.0.0dev\",\n \"google-auth-httplib2>=0.1.0\",\n # NOTE: Maintainers, please do not require google-api-core>=2.x.x\n # Until this issue is closed\n # https://github.com/googleapis/google-cloud-python/issues/10566\n \"google-api-core>=1.21.0,<3.0.0dev\",\n \"uritemplate>=3.0.1,<5\",\n]\n\npackage_root = os.path.abspath(os.path.dirname(__file__))\n\nreadme_filename = os.path.join(package_root, \"README.md\")\nwith io.open(readme_filename, encoding=\"utf-8\") as readme_file:\n readme = readme_file.read()\n\npackage_root = os.path.abspath(os.path.dirname(__file__))\n\nversion = {}\nwith open(os.path.join(package_root, \"googleapiclient/version.py\")) as fp:\n exec(fp.read(), version)\nversion = version[\"__version__\"]\n\nsetup(\n name=\"google-api-python-client\",\n version=version,\n description=\"Google API Client Library for Python\",\n long_description=readme,\n long_description_content_type='text/markdown',\n author=\"Google LLC\",\n author_email=\"[email protected]\",\n url=\"https://github.com/googleapis/google-api-python-client/\",\n install_requires=install_requires,\n python_requires=\">=3.6\",\n packages=packages,\n package_data={\"googleapiclient\": [\"discovery_cache/documents/*.json\"]},\n license=\"Apache 2.0\",\n keywords=\"google api client\",\n classifiers=[\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: 3.10\",\n \"Development Status :: 5 - Production/Stable\",\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Operating System :: OS Independent\",\n \"Topic :: Internet :: WWW/HTTP\",\n ],\n)\n", "path": "setup.py" } ]
diff --git a/setup.py b/setup.py index a311f168514..344ba624a40 100644 --- a/setup.py +++ b/setup.py @@ -42,7 +42,7 @@ # Until this issue is closed # https://github.com/googleapis/google-cloud-python/issues/10566 "google-api-core>=1.21.0,<3.0.0dev", - "uritemplate>=3.0.0,<5", + "uritemplate>=3.0.1,<5", ] package_root = os.path.abspath(os.path.dirname(__file__)) diff --git a/testing/constraints-3.6.txt b/testing/constraints-3.6.txt index 0c0e7a2e53b..35fb5748093 100644 --- a/testing/constraints-3.6.txt +++ b/testing/constraints-3.6.txt @@ -9,4 +9,4 @@ httplib2==0.15.0 google-auth==1.16.0 google-auth-httplib2==0.0.3 google-api-core==1.21.0 -uritemplate==3.0.0 \ No newline at end of file +uritemplate==3.0.1 \ No newline at end of file
internetarchive__openlibrary-5923
Reversion: author searches with wildcards fail <!-- What problem are we solving? What does the experience look like today? What are the symptoms? --> Wildcard asterisk fails in author name search, but works in "All" search. This used to work. ### Evidence / Screenshot (if possible) ![A097E003-93BE-454E-8696-6BD66A64068D](https://user-images.githubusercontent.com/11435431/138561672-765afb2c-3f70-4db6-840e-d4e6f4c97ffb.png) ![22BF5133-458D-463E-B26B-DB2B43E9519C](https://user-images.githubusercontent.com/11435431/138561674-16449abc-5473-4c9c-aed4-67c4ad4db603.png) ### Relevant url? <!-- `https://openlibrary.org/...` --> https://openlibrary.org/search?q=Jon+kabat*&mode=everything ### Steps to Reproduce <!-- What steps caused you to find the bug? --> 1. Go to search bar 2. Enter a partial name ending with * 3. Select "Author" as search type 4. Run query <!-- What actually happened after these steps? What did you expect to happen? --> * Actual: nothing found * Expected: list of matching authors ### Details - **Logged in (Y/N)?** y - **Browser type/version?** Safari or Chrome - **Operating system?** iPadOS - **Environment (prod/dev/local)?** prod <!-- If not sure, put prod --> ### Proposal & Constraints <!-- What is the proposed solution / implementation? Is there a precedent of this approach succeeding elsewhere? --> ### Related files <!-- Files related to this issue; this is super useful for new contributors who might want to help! If you're not sure, leave this blank; a maintainer will add them. --> ### Stakeholders <!-- @ tag stakeholders of this bug -->
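One plausible cause of a regression like this is the author-scoped query path escaping Solr's special characters before the query is sent, which turns the trailing `*` into a literal character. A hypothetical, self-contained illustration of that failure mode (not Open Library code; the helper name and character set are assumptions):

```python
import re

# If every Solr special character is escaped, including '*', the query only
# matches the literal string "Jon kabat*" and returns nothing.
SOLR_SPECIAL = r'([+\-!(){}\[\]^"~*?:\\/])'

def escape_all(term: str) -> str:
    return re.sub(SOLR_SPECIAL, r'\\\1', term)

print(escape_all("Jon kabat*"))   # -> Jon kabat\*  (wildcard lost)
```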
[ { "content": "from datetime import datetime\nimport copy\nimport json\nimport logging\nimport random\nimport re\nimport string\nfrom typing import List, Tuple, Any, Union, Optional, Iterable, Dict\nfrom unicodedata import normalize\nfrom json import JSONDecodeError\nimport requests\nimport web\nfrom lxml.etree import XML, XMLSyntaxError\nfrom requests import Response\nfrom six.moves import urllib\n\nfrom infogami import config\nfrom infogami.utils import delegate, stats\nfrom infogami.utils.view import public, render, render_template, safeint\nfrom openlibrary.core.lending import add_availability, get_availability_of_ocaids\nfrom openlibrary.core.models import Edition # noqa: E402\nfrom openlibrary.plugins.inside.code import fulltext_search\nfrom openlibrary.plugins.openlibrary.lists import get_list_editions\nfrom openlibrary.plugins.openlibrary.processors import urlsafe\nfrom openlibrary.plugins.upstream.utils import urlencode\nfrom openlibrary.utils import escape_bracket\nfrom openlibrary.utils.ddc import (\n normalize_ddc,\n normalize_ddc_prefix,\n normalize_ddc_range,\n)\nfrom openlibrary.utils.isbn import normalize_isbn\nfrom openlibrary.utils.lcc import (\n normalize_lcc_prefix,\n normalize_lcc_range,\n short_lcc_to_sortable_lcc,\n)\n\nlogger = logging.getLogger(\"openlibrary.worksearch\")\n\nif hasattr(config, 'plugin_worksearch'):\n solr_select_url = (\n config.plugin_worksearch.get('solr_base_url', 'localhost') + '/select'\n )\n\n default_spellcheck_count = config.plugin_worksearch.get('spellcheck_count', 10)\n\n\nALL_FIELDS = [\n \"key\",\n \"redirects\",\n \"title\",\n \"subtitle\",\n \"alternative_title\",\n \"alternative_subtitle\",\n \"edition_key\",\n \"by_statement\",\n \"publish_date\",\n \"lccn\",\n \"ia\",\n \"oclc\",\n \"isbn\",\n \"contributor\",\n \"publish_place\",\n \"publisher\",\n \"first_sentence\",\n \"author_key\",\n \"author_name\",\n \"author_alternative_name\",\n \"subject\",\n \"person\",\n \"place\",\n \"time\",\n \"has_fulltext\",\n \"title_suggest\",\n \"edition_count\",\n \"publish_year\",\n \"language\",\n \"number_of_pages\",\n \"ia_count\",\n \"publisher_facet\",\n \"author_facet\",\n \"first_publish_year\",\n # Subjects\n \"subject_key\",\n \"person_key\",\n \"place_key\",\n \"time_key\",\n # Classifications\n \"lcc\",\n \"ddc\",\n \"lcc_sort\",\n \"ddc_sort\",\n]\nFACET_FIELDS = [\n \"has_fulltext\",\n \"author_facet\",\n \"language\",\n \"first_publish_year\",\n \"publisher_facet\",\n \"subject_facet\",\n \"person_facet\",\n \"place_facet\",\n \"time_facet\",\n \"public_scan_b\",\n]\nFIELD_NAME_MAP = {\n 'author': 'author_name',\n 'authors': 'author_name',\n 'by': 'author_name',\n 'publishers': 'publisher',\n # \"Private\" fields\n # This is private because we'll change it to a multi-valued field instead of a\n # plain string at the next opportunity, which will make it much more usable.\n '_ia_collection': 'ia_collection_s',\n}\nSORTS = {\n 'editions': 'edition_count desc',\n 'old': 'first_publish_year asc',\n 'new': 'first_publish_year desc',\n 'scans': 'ia_count desc',\n # Classifications\n 'lcc_sort': 'lcc_sort asc',\n 'lcc_sort asc': 'lcc_sort asc',\n 'lcc_sort desc': 'lcc_sort desc',\n 'ddc_sort': 'ddc_sort asc',\n 'ddc_sort asc': 'ddc_sort asc',\n 'ddc_sort desc': 'ddc_sort desc',\n # Random\n 'random': 'random_1 asc',\n 'random asc': 'random_1 asc',\n 'random desc': 'random_1 desc',\n 'random.hourly': lambda: f'random_{datetime.now():%Y%m%dT%H} asc',\n 'random.daily': lambda: f'random_{datetime.now():%Y%m%d} 
asc',\n}\nDEFAULT_SEARCH_FIELDS = {\n 'key',\n 'author_name',\n 'author_key',\n 'title',\n 'subtitle',\n 'edition_count',\n 'ia',\n 'has_fulltext',\n 'first_publish_year',\n 'cover_i',\n 'cover_edition_key',\n 'public_scan_b',\n 'lending_edition_s',\n 'lending_identifier_s',\n 'language',\n 'ia_collection_s',\n # FIXME: These should be fetched from book_providers, but can't cause circular dep\n 'id_project_gutenberg',\n 'id_librivox',\n 'id_standard_ebooks',\n}\nOLID_URLS = {'A': 'authors', 'M': 'books', 'W': 'works'}\n\nre_to_esc = re.compile(r'[\\[\\]:/]')\nre_isbn_field = re.compile(r'^\\s*(?:isbn[:\\s]*)?([-0-9X]{9,})\\s*$', re.I)\nre_author_key = re.compile(r'(OL\\d+A)')\nre_fields = re.compile(r'(-?%s):' % '|'.join(ALL_FIELDS + list(FIELD_NAME_MAP)), re.I)\nre_op = re.compile(' +(OR|AND)$')\nre_range = re.compile(r'\\[(?P<start>.*) TO (?P<end>.*)\\]')\nre_author_facet = re.compile(r'^(OL\\d+A) (.*)$')\nre_pre = re.compile(r'<pre>(.*)</pre>', re.S)\nre_subject_types = re.compile('^(places|times|people)/(.*)')\nre_olid = re.compile(r'^OL\\d+([AMW])$')\n\nplurals = {f + 's': f for f in ('publisher', 'author')}\n\n\n@public\ndef get_solr_works(work_key: Iterable[str]) -> dict[str, dict]:\n from openlibrary.plugins.worksearch.search import get_solr\n\n return {\n doc['key']: doc\n for doc in get_solr().get_many(set(work_key), fields=DEFAULT_SEARCH_FIELDS)\n }\n\n\ndef process_sort(raw_sort):\n \"\"\"\n :param str raw_sort:\n :rtype: str\n\n >>> process_sort('editions')\n 'edition_count desc'\n >>> process_sort('editions, new')\n 'edition_count desc,first_publish_year desc'\n >>> process_sort('random')\n 'random_1 asc'\n >>> process_sort('random_custom_seed')\n 'random_custom_seed asc'\n >>> process_sort('random_custom_seed desc')\n 'random_custom_seed desc'\n >>> process_sort('random_custom_seed asc')\n 'random_custom_seed asc'\n \"\"\"\n\n def process_individual_sort(sort):\n if sort.startswith('random_'):\n return sort if ' ' in sort else sort + ' asc'\n else:\n solr_sort = SORTS[sort]\n return solr_sort() if callable(solr_sort) else solr_sort\n\n return ','.join(process_individual_sort(s.strip()) for s in raw_sort.split(','))\n\n\ndef read_author_facet(af):\n # example input: \"OL26783A Leo Tolstoy\"\n return re_author_facet.match(af).groups()\n\n\ndef get_language_name(code):\n lang = web.ctx.site.get('/languages/' + code)\n return lang.name if lang else \"'%s' unknown\" % code\n\n\ndef read_facets(root):\n e_facet_counts = root.find(\"lst[@name='facet_counts']\")\n e_facet_fields = e_facet_counts.find(\"lst[@name='facet_fields']\")\n facets = {}\n for e_lst in e_facet_fields:\n assert e_lst.tag == 'lst'\n name = e_lst.attrib['name']\n if name == 'author_facet':\n name = 'author_key'\n if name == 'has_fulltext': # boolean facets\n e_true = e_lst.find(\"int[@name='true']\")\n true_count = e_true.text if e_true is not None else 0\n e_false = e_lst.find(\"int[@name='false']\")\n false_count = e_false.text if e_false is not None else 0\n facets[name] = [\n ('true', 'yes', true_count),\n ('false', 'no', false_count),\n ]\n continue\n facets[name] = []\n for e in e_lst:\n if e.text == '0':\n continue\n k = e.attrib['name']\n if name == 'author_key':\n k, display = read_author_facet(k)\n elif name == 'language':\n display = get_language_name(k)\n else:\n display = k\n facets[name].append((k, display, e.text))\n return facets\n\n\ndef lcc_transform(raw):\n \"\"\"\n Transform the lcc search field value\n :param str raw:\n :rtype: str\n \"\"\"\n # e.g. 
lcc:[NC1 TO NC1000] to lcc:[NC-0001.00000000 TO NC-1000.00000000]\n # for proper range search\n m = re_range.match(raw)\n if m:\n lcc_range = [m.group('start').strip(), m.group('end').strip()]\n normed = normalize_lcc_range(*lcc_range)\n return f'[{normed[0] or lcc_range[0]} TO {normed[1] or lcc_range[1]}]'\n elif '*' in raw and not raw.startswith('*'):\n # Marshals human repr into solr repr\n # lcc:A720* should become A--0720*\n parts = raw.split('*', 1)\n lcc_prefix = normalize_lcc_prefix(parts[0])\n return (lcc_prefix or parts[0]) + '*' + parts[1]\n else:\n normed = short_lcc_to_sortable_lcc(raw.strip('\"'))\n if normed:\n use_quotes = ' ' in normed or raw.startswith('\"')\n return ('\"%s\"' if use_quotes else '%s*') % normed\n\n # If none of the transforms took\n return raw\n\n\ndef ddc_transform(raw):\n \"\"\"\n Transform the ddc search field value\n :param str raw:\n :rtype: str\n \"\"\"\n m = re_range.match(raw)\n if m:\n raw = [m.group('start').strip(), m.group('end').strip()]\n normed = normalize_ddc_range(*raw)\n return f'[{normed[0] or raw[0]} TO {normed[1] or raw[1]}]'\n elif raw.endswith('*'):\n return normalize_ddc_prefix(raw[:-1]) + '*'\n else:\n normed = normalize_ddc(raw.strip('\"'))\n if normed:\n return normed[0]\n\n # if none of the transforms took\n return raw\n\n\ndef ia_collection_s_transform(raw):\n \"\"\"\n Because this field is not a multi-valued field in solr, but a simple ;-separate\n string, we have to do searches like this for now.\n \"\"\"\n result = raw\n if not result.startswith('*'):\n result = '*' + result\n if not result.endswith('*'):\n result += '*'\n return result\n\n\ndef parse_query_fields(q):\n found = [(m.start(), m.end()) for m in re_fields.finditer(q)]\n first = q[: found[0][0]].strip() if found else q.strip()\n if first:\n yield {'field': 'text', 'value': first.replace(':', r'\\:')}\n for field_num in range(len(found)):\n op_found = None\n f = found[field_num]\n field_name = q[f[0] : f[1] - 1].lower()\n if field_name in FIELD_NAME_MAP:\n field_name = FIELD_NAME_MAP[field_name]\n if field_num == len(found) - 1:\n v = q[f[1] :].strip()\n else:\n v = q[f[1] : found[field_num + 1][0]].strip()\n m = re_op.search(v)\n if m:\n v = v[: -len(m.group(0))]\n op_found = m.group(1)\n if field_name == 'isbn':\n isbn = normalize_isbn(v)\n if isbn:\n v = isbn\n if field_name in ('lcc', 'lcc_sort'):\n v = lcc_transform(v)\n if field_name == ('ddc', 'ddc_sort'):\n v = ddc_transform(v)\n if field_name == 'ia_collection_s':\n v = ia_collection_s_transform(v)\n\n yield {'field': field_name, 'value': v.replace(':', r'\\:')}\n if op_found:\n yield {'op': op_found}\n\n\ndef build_q_list(param):\n q_list = []\n if 'q' in param:\n # Solr 4+ has support for regexes (eg `key:/foo.*/`)! But for now, let's not\n # expose that and escape all '/'. 
Otherwise `key:/works/OL1W` is interpreted as\n # a regex.\n q_param = param['q'].strip().replace('/', '\\\\/')\n else:\n q_param = None\n use_dismax = False\n if q_param:\n if q_param == '*:*':\n q_list.append(q_param)\n elif 'NOT ' in q_param: # this is a hack\n q_list.append(q_param.strip())\n elif re_fields.search(q_param):\n q_list.extend(\n i['op'] if 'op' in i else '{}:({})'.format(i['field'], i['value'])\n for i in parse_query_fields(q_param)\n )\n else:\n isbn = normalize_isbn(q_param)\n if isbn and len(isbn) in (10, 13):\n q_list.append('isbn:(%s)' % isbn)\n else:\n q_list.append(q_param.strip().replace(':', r'\\:'))\n use_dismax = True\n else:\n if 'author' in param:\n v = param['author'].strip()\n m = re_author_key.search(v)\n if m:\n q_list.append(\"author_key:(%s)\" % m.group(1))\n else:\n v = re_to_esc.sub(r'\\\\\\g<0>', v)\n # Somehow v can be empty at this point,\n # passing the following with empty strings causes a severe error in SOLR\n if v:\n q_list.append(\n \"(author_name:({name}) OR author_alternative_name:({name}))\".format(\n name=v\n )\n )\n\n check_params = [\n 'title',\n 'publisher',\n 'oclc',\n 'lccn',\n 'contributor',\n 'subject',\n 'place',\n 'person',\n 'time',\n ]\n q_list += [\n '{}:({})'.format(k, re_to_esc.sub(r'\\\\\\g<0>', param[k]))\n for k in check_params\n if k in param\n ]\n if param.get('isbn'):\n q_list.append(\n 'isbn:(%s)' % (normalize_isbn(param['isbn']) or param['isbn'])\n )\n return (q_list, use_dismax)\n\n\ndef execute_solr_query(\n solr_path: str, params: Union[dict, list[tuple[str, Any]]]\n) -> Optional[Response]:\n stats.begin(\"solr\", url=f'{solr_path}?{urlencode(params)}')\n try:\n response = requests.get(solr_path, params=params, timeout=10)\n response.raise_for_status()\n except requests.HTTPError:\n logger.exception(\"Failed solr query\")\n return None\n finally:\n stats.end()\n return response\n\n\ndef parse_json_from_solr_query(\n solr_path: str, params: Union[dict, list[tuple[str, Any]]]\n) -> Optional[dict]:\n \"\"\"\n Returns a json.loaded Python object or None\n \"\"\"\n response = execute_solr_query(solr_path, params)\n if not response:\n logger.error(\"Error parsing empty search engine response\")\n return None\n try:\n return response.json()\n except JSONDecodeError:\n logger.exception(\"Error parsing search engine response\")\n return None\n\n\ndef run_solr_query(\n param=None,\n rows=100,\n page=1,\n sort=None,\n spellcheck_count=None,\n offset=None,\n fields=None,\n facet=True,\n):\n param = param or {}\n\n # use page when offset is not specified\n if offset is None:\n offset = rows * (page - 1)\n\n (q_list, use_dismax) = build_q_list(param)\n params = [\n ('fl', ','.join(fields or DEFAULT_SEARCH_FIELDS)),\n ('fq', 'type:work'),\n ('q.op', 'AND'),\n ('start', offset),\n ('rows', rows),\n ]\n\n if spellcheck_count is None:\n spellcheck_count = default_spellcheck_count\n\n if spellcheck_count:\n params.append(('spellcheck', 'true'))\n params.append(('spellcheck.count', spellcheck_count))\n\n if facet:\n params.append(('facet', 'true'))\n for facet in FACET_FIELDS:\n params.append(('facet.field', facet))\n\n if q_list:\n if use_dismax:\n params.append(('q', ' '.join(q_list)))\n params.append(('defType', 'dismax'))\n params.append(('qf', 'text title^20 author_name^20'))\n params.append(('bf', 'min(100,edition_count)'))\n else:\n params.append(('q', ' '.join(q_list + ['_val_:\"sqrt(edition_count)\"^10'])))\n\n if 'public_scan' in param:\n v = param.pop('public_scan').lower()\n if v in ('true', 'false'):\n if v == 'false':\n 
# also constrain on print disabled since the index may not be in sync\n param.setdefault('print_disabled', 'false')\n params.append(('fq', 'public_scan_b:%s' % v))\n\n if 'print_disabled' in param:\n v = param.pop('print_disabled').lower()\n if v in ('true', 'false'):\n minus = '-' if v == 'false' else ''\n params.append(('fq', '%ssubject_key:protected_daisy' % minus))\n\n if 'has_fulltext' in param:\n v = param['has_fulltext'].lower()\n if v not in ('true', 'false'):\n del param['has_fulltext']\n params.append(('fq', 'has_fulltext:%s' % v))\n\n for field in FACET_FIELDS:\n if field == 'has_fulltext':\n continue\n if field == 'author_facet':\n field = 'author_key'\n if field not in param:\n continue\n values = param[field]\n params += [('fq', f'{field}:\"{val}\"') for val in values if val]\n\n if sort:\n params.append(('sort', sort))\n\n if 'wt' in param:\n params.append(('wt', param.get('wt')))\n url = f'{solr_select_url}?{urlencode(params)}'\n\n response = execute_solr_query(solr_select_url, params)\n solr_result = response.content if response else None # bytes or None\n return (solr_result, url, q_list)\n\n\ndef do_search(param, sort, page=1, rows=100, spellcheck_count=None):\n if sort:\n sort = process_sort(sort)\n (solr_result, solr_select, q_list) = run_solr_query(\n param, rows, page, sort, spellcheck_count\n )\n is_bad = False\n if not solr_result or solr_result.startswith(b'<html'):\n is_bad = True\n if not is_bad:\n try:\n root = XML(solr_result)\n except XMLSyntaxError:\n is_bad = True\n if is_bad:\n m = re_pre.search(solr_result)\n return web.storage(\n facet_counts=None,\n docs=[],\n is_advanced=bool(param.get('q')),\n num_found=None,\n solr_select=solr_select,\n q_list=q_list,\n error=(web.htmlunquote(m.group(1)) if m else solr_result),\n )\n\n spellcheck = root.find(\"lst[@name='spellcheck']\")\n spell_map = {}\n if spellcheck is not None and len(spellcheck):\n for e in spellcheck.find(\"lst[@name='suggestions']\"):\n assert e.tag == 'lst'\n a = e.attrib['name']\n if a in spell_map or a in ('sqrt', 'edition_count'):\n continue\n spell_map[a] = [i.text for i in e.find(\"arr[@name='suggestion']\")]\n\n docs = root.find('result')\n return web.storage(\n facet_counts=read_facets(root),\n docs=docs,\n is_advanced=bool(param.get('q')),\n num_found=(int(docs.attrib['numFound']) if docs is not None else None),\n solr_select=solr_select,\n q_list=q_list,\n error=None,\n spellcheck=spell_map,\n )\n\n\ndef get_doc(doc): # called from work_search template\n e_ia = doc.find(\"arr[@name='ia']\")\n e_id_project_gutenberg = doc.find(\"arr[@name='id_project_gutenberg']\") or []\n e_id_librivox = doc.find(\"arr[@name='id_librivox']\") or []\n e_id_standard_ebooks = doc.find(\"arr[@name='id_standard_ebooks']\") or []\n\n first_pub = None\n e_first_pub = doc.find(\"int[@name='first_publish_year']\")\n if e_first_pub is not None:\n first_pub = e_first_pub.text\n e_first_edition = doc.find(\"str[@name='first_edition']\")\n first_edition = None\n if e_first_edition is not None:\n first_edition = e_first_edition.text\n\n work_subtitle = None\n e_subtitle = doc.find(\"str[@name='subtitle']\")\n if e_subtitle is not None:\n work_subtitle = e_subtitle.text\n\n if doc.find(\"arr[@name='author_key']\") is None:\n assert doc.find(\"arr[@name='author_name']\") is None\n authors = []\n else:\n ak = [e.text for e in doc.find(\"arr[@name='author_key']\")]\n an = [e.text for e in doc.find(\"arr[@name='author_name']\")]\n authors = [\n web.storage(\n key=key,\n name=name,\n url=\"/authors/{}/{}\".format(\n 
key, (urlsafe(name) if name is not None else 'noname')\n ),\n )\n for key, name in zip(ak, an)\n ]\n cover = doc.find(\"str[@name='cover_edition_key']\")\n languages = doc.find(\"arr[@name='language']\")\n e_public_scan = doc.find(\"bool[@name='public_scan_b']\")\n e_lending_edition = doc.find(\"str[@name='lending_edition_s']\")\n e_lending_identifier = doc.find(\"str[@name='lending_identifier_s']\")\n e_collection = doc.find(\"str[@name='ia_collection_s']\")\n collections = set()\n if e_collection is not None:\n collections = set(e_collection.text.split(';'))\n\n doc = web.storage(\n key=doc.find(\"str[@name='key']\").text,\n title=doc.find(\"str[@name='title']\").text,\n edition_count=int(doc.find(\"int[@name='edition_count']\").text),\n ia=[e.text for e in (e_ia if e_ia is not None else [])],\n has_fulltext=(doc.find(\"bool[@name='has_fulltext']\").text == 'true'),\n public_scan=(\n (e_public_scan.text == 'true')\n if e_public_scan is not None\n else (e_ia is not None)\n ),\n lending_edition=(\n e_lending_edition.text if e_lending_edition is not None else None\n ),\n lending_identifier=(\n e_lending_identifier.text if e_lending_identifier is not None else None\n ),\n collections=collections,\n authors=authors,\n first_publish_year=first_pub,\n first_edition=first_edition,\n subtitle=work_subtitle,\n cover_edition_key=(cover.text if cover is not None else None),\n languages=languages and [lang.text for lang in languages],\n id_project_gutenberg=[e.text for e in e_id_project_gutenberg],\n id_librivox=[e.text for e in e_id_librivox],\n id_standard_ebooks=[e.text for e in e_id_standard_ebooks],\n )\n\n doc.url = doc.key + '/' + urlsafe(doc.title)\n return doc\n\n\ndef work_object(w): # called by works_by_author\n ia = w.get('ia', [])\n obj = dict(\n authors=[\n web.storage(key='/authors/' + k, name=n)\n for k, n in zip(w['author_key'], w['author_name'])\n ],\n edition_count=w['edition_count'],\n key=w['key'],\n title=w['title'],\n public_scan=w.get('public_scan_b', bool(ia)),\n lending_edition=w.get('lending_edition_s', ''),\n lending_identifier=w.get('lending_identifier_s', ''),\n collections=set(\n w['ia_collection_s'].split(';') if 'ia_collection_s' in w else []\n ),\n url=w['key'] + '/' + urlsafe(w['title']),\n cover_edition_key=w.get('cover_edition_key'),\n first_publish_year=(\n w['first_publish_year'] if 'first_publish_year' in w else None\n ),\n ia=w.get('ia', []),\n cover_i=w.get('cover_i'),\n id_project_gutenberg=w.get('id_project_gutenberg'),\n id_librivox=w.get('id_librivox'),\n id_standard_ebooks=w.get('id_standard_ebooks'),\n )\n\n for f in 'has_fulltext', 'subtitle':\n if w.get(f):\n obj[f] = w[f]\n return web.storage(obj)\n\n\nclass scan(delegate.page):\n \"\"\"\n Experimental EAN barcode scanner page to scan and add/view books by their barcodes.\n \"\"\"\n\n path = \"/barcodescanner\"\n\n def GET(self):\n return render.barcodescanner()\n\n\nclass search(delegate.page):\n def redirect_if_needed(self, i):\n params = {}\n need_redirect = False\n for k, v in i.items():\n if k in plurals:\n params[k] = None\n k = plurals[k]\n need_redirect = True\n if isinstance(v, list):\n if v == []:\n continue\n clean = [normalize('NFC', b.strip()) for b in v]\n if clean != v:\n need_redirect = True\n if len(clean) == 1 and clean[0] == '':\n clean = None\n else:\n clean = normalize('NFC', v.strip())\n if clean == '':\n need_redirect = True\n clean = None\n if clean != v:\n need_redirect = True\n params[k] = clean\n if need_redirect:\n raise web.seeother(web.changequery(**params))\n\n def 
isbn_redirect(self, isbn_param):\n isbn = normalize_isbn(isbn_param)\n if not isbn:\n return\n\n ed = Edition.from_isbn(isbn)\n if ed:\n web.seeother(ed.key)\n\n def GET(self):\n # Enable patrons to search for query q2 within collection q\n # q2 param gets removed and prepended to q via a redirect\n _i = web.input(q='', q2='')\n if _i.q.strip() and _i.q2.strip():\n _i.q = _i.q2.strip() + ' ' + _i.q.strip()\n _i.pop('q2')\n raise web.seeother('/search?' + urllib.parse.urlencode(_i))\n\n i = web.input(\n author_key=[],\n language=[],\n first_publish_year=[],\n publisher_facet=[],\n subject_facet=[],\n person_facet=[],\n place_facet=[],\n time_facet=[],\n public_scan_b=[],\n )\n\n # Send to full-text Search Inside if checkbox checked\n if i.get('search-fulltext'):\n raise web.seeother(\n '/search/inside?' + urllib.parse.urlencode({'q': i.get('q', '')})\n )\n\n if i.get('wisbn'):\n i.isbn = i.wisbn\n\n self.redirect_if_needed(i)\n\n if 'isbn' in i:\n self.isbn_redirect(i.isbn)\n\n q_list = []\n q = i.get('q', '').strip()\n if q:\n m = re_olid.match(q)\n if m:\n raise web.seeother(f'/{OLID_URLS[m.group(1)]}/{q}')\n m = re_isbn_field.match(q)\n if m:\n self.isbn_redirect(m.group(1))\n q_list.append(q)\n for k in ('title', 'author', 'isbn', 'subject', 'place', 'person', 'publisher'):\n if k in i:\n v = re_to_esc.sub(r'\\\\\\g<0>', i[k].strip())\n q_list.append(k + ':' + v)\n return render.work_search(\n i,\n ' '.join(q_list),\n do_search,\n get_doc,\n get_availability_of_ocaids,\n fulltext_search,\n FACET_FIELDS,\n )\n\n\ndef works_by_author(\n akey, sort='editions', page=1, rows=100, has_fulltext=False, query=None\n):\n # called by merge_author_works\n q = 'author_key:' + akey\n if query:\n q = query\n\n offset = rows * (page - 1)\n params = [\n ('fq', 'author_key:' + akey),\n ('fq', 'type:work'),\n ('q', q),\n ('start', offset),\n ('rows', rows),\n (\n 'fl',\n ','.join(\n [\n 'key',\n 'author_name',\n 'author_key',\n 'title',\n 'subtitle',\n 'edition_count',\n 'ia',\n 'cover_edition_key',\n 'has_fulltext',\n 'language',\n 'first_publish_year',\n 'public_scan_b',\n 'lending_edition_s',\n 'lending_identifier_s',\n 'ia_collection_s',\n 'id_project_gutenberg',\n 'id_librivox',\n 'id_standard_ebooks',\n 'cover_i',\n ]\n ),\n ),\n ('wt', 'json'),\n ('q.op', 'AND'),\n ('facet', 'true'),\n ('facet.mincount', 1),\n ('f.author_facet.facet.sort', 'count'),\n ('f.publish_year.facet.limit', -1),\n ('facet.limit', 25),\n ]\n\n if has_fulltext:\n params.append(('fq', 'has_fulltext:true'))\n\n if sort == \"editions\":\n params.append(('sort', 'edition_count desc'))\n elif sort.startswith('old'):\n params.append(('sort', 'first_publish_year asc'))\n elif sort.startswith('new'):\n params.append(('sort', 'first_publish_year desc'))\n elif sort.startswith('title'):\n params.append(('sort', 'title asc'))\n\n facet_fields = [\n \"author_facet\",\n \"language\",\n \"publish_year\",\n \"publisher_facet\",\n \"subject_facet\",\n \"person_facet\",\n \"place_facet\",\n \"time_facet\",\n ]\n for f in facet_fields:\n params.append((\"facet.field\", f))\n\n reply = parse_json_from_solr_query(solr_select_url, params)\n if reply is None:\n return web.storage(\n num_found=0,\n works=[],\n years=[],\n get_facet=[],\n sort=sort,\n )\n # TODO: Deep JSON structure defense - for now, let it blow up so easier to detect\n facets = reply['facet_counts']['facet_fields']\n works = [work_object(w) for w in reply['response']['docs']]\n\n def get_facet(f, limit=None):\n return list(web.group(facets[f][: limit * 2] if limit else facets[f], 
2))\n\n return web.storage(\n num_found=int(reply['response']['numFound']),\n works=add_availability(works),\n years=[(int(k), v) for k, v in get_facet('publish_year')],\n get_facet=get_facet,\n sort=sort,\n )\n\n\ndef sorted_work_editions(wkey, json_data=None):\n \"\"\"Setting json_data to a real value simulates getting SOLR data back, i.e. for testing (but ick!)\"\"\"\n q = 'key:' + wkey\n if json_data:\n reply = json.loads(json_data)\n else:\n reply = parse_json_from_solr_query(\n solr_select_url,\n {\n 'q.op': 'AND',\n 'q': q,\n 'rows': 10,\n 'fl': 'edition_key',\n 'qt': 'standard',\n 'wt': 'json',\n },\n )\n if reply is None or reply.get('response', {}).get('numFound', 0) == 0:\n return []\n # TODO: Deep JSON structure defense - for now, let it blow up so easier to detect\n return reply[\"response\"]['docs'][0].get('edition_key', [])\n\n\ndef top_books_from_author(akey, rows=5, offset=0):\n q = 'author_key:(' + akey + ')'\n json_result = parse_json_from_solr_query(\n solr_select_url,\n {\n 'q': q,\n 'start': offset,\n 'rows': rows,\n 'fl': 'key,title,edition_count,first_publish_year',\n 'sort': 'edition_count desc',\n 'wt': 'json',\n },\n )\n if json_result is None:\n return {'books': [], 'total': 0}\n # TODO: Deep JSON structure defense - for now, let it blow up so easier to detect\n response = json_result['response']\n return {\n 'books': [web.storage(doc) for doc in response['docs']],\n 'total': response['numFound'],\n }\n\n\nclass advancedsearch(delegate.page):\n path = \"/advancedsearch\"\n\n def GET(self):\n return render_template(\"search/advancedsearch.html\")\n\n\ndef escape_colon(q, vf):\n if ':' not in q:\n return q\n parts = q.split(':')\n result = parts.pop(0)\n while parts:\n if not any(result.endswith(f) for f in vf):\n result += '\\\\'\n result += ':' + parts.pop(0)\n return result\n\n\ndef run_solr_search(solr_select: str, params: dict):\n response = execute_solr_query(solr_select, params)\n json_data = response.content if response else None # bytes or None\n return parse_search_response(json_data)\n\n\ndef parse_search_response(json_data):\n \"\"\"Construct response for any input\"\"\"\n if json_data is None:\n return {'error': 'Error parsing empty search engine response'}\n try:\n return json.loads(json_data)\n except json.JSONDecodeError:\n logger.exception(\"Error parsing search engine response\")\n m = re_pre.search(json_data)\n if m is None:\n return {'error': 'Error parsing search engine response'}\n error = web.htmlunquote(m.group(1))\n solr_error = 'org.apache.lucene.queryParser.ParseException: '\n if error.startswith(solr_error):\n error = error[len(solr_error) :]\n return {'error': error}\n\n\nclass list_search(delegate.page):\n path = '/search/lists'\n\n def GET(self):\n i = web.input(q='', offset='0', limit='10')\n\n lists = self.get_results(i.q, i.offset, i.limit)\n\n return render_template('search/lists.tmpl', q=i.q, lists=lists)\n\n def get_results(self, q, offset=0, limit=100):\n if 'env' not in web.ctx:\n delegate.fakeload()\n\n keys = web.ctx.site.things(\n {\n \"type\": \"/type/list\",\n \"name~\": q,\n \"limit\": int(limit),\n \"offset\": int(offset),\n }\n )\n\n return web.ctx.site.get_many(keys)\n\n\nclass list_search_json(list_search):\n path = '/search/lists'\n encoding = 'json'\n\n def GET(self):\n i = web.input(q='', offset=0, limit=10)\n offset = safeint(i.offset, 0)\n limit = safeint(i.limit, 10)\n limit = min(100, limit)\n\n docs = self.get_results(i.q, offset=offset, limit=limit)\n\n response = {'start': offset, 'docs': [doc.preview() 
for doc in docs]}\n\n web.header('Content-Type', 'application/json')\n return delegate.RawText(json.dumps(response))\n\n\nclass subject_search(delegate.page):\n path = '/search/subjects'\n\n def GET(self):\n return render_template('search/subjects.tmpl', self.get_results)\n\n def get_results(self, q, offset=0, limit=100):\n valid_fields = ['key', 'name', 'subject_type', 'work_count']\n q = escape_colon(escape_bracket(q), valid_fields)\n\n results = run_solr_search(\n solr_select_url,\n {\n \"fq\": \"type:subject\",\n \"q.op\": \"AND\",\n \"q\": q,\n \"start\": offset,\n \"rows\": limit,\n \"fl\": \",\".join(valid_fields),\n \"qt\": \"standard\",\n \"wt\": \"json\",\n \"sort\": \"work_count desc\",\n },\n )\n response = results['response']\n\n for doc in response['docs']:\n doc['type'] = doc.get('subject_type', 'subject')\n doc['count'] = doc.get('work_count', 0)\n\n return results\n\n\nclass subject_search_json(subject_search):\n path = '/search/subjects'\n encoding = 'json'\n\n def GET(self):\n i = web.input(q='', offset=0, limit=100)\n offset = safeint(i.offset, 0)\n limit = safeint(i.limit, 100)\n limit = min(1000, limit) # limit limit to 1000.\n\n response = self.get_results(i.q, offset=offset, limit=limit)['response']\n web.header('Content-Type', 'application/json')\n return delegate.RawText(json.dumps(response))\n\n\nclass author_search(delegate.page):\n path = '/search/authors'\n\n def GET(self):\n return render_template('search/authors.tmpl', self.get_results)\n\n def get_results(self, q, offset=0, limit=100):\n valid_fields = [\n 'key',\n 'name',\n 'alternate_names',\n 'birth_date',\n 'death_date',\n 'date',\n 'work_count',\n ]\n q = escape_colon(escape_bracket(q), valid_fields)\n q_has_fields = ':' in q.replace(r'\\:', '')\n\n d = run_solr_search(\n solr_select_url,\n {\n 'fq': 'type:author',\n 'q.op': 'AND',\n 'q': q,\n 'start': offset,\n 'rows': limit,\n 'fl': '*',\n 'qt': 'standard',\n 'sort': 'work_count desc',\n 'wt': 'json',\n **(\n {}\n if q_has_fields\n else {'defType': 'dismax', 'qf': 'name alternate_names'}\n ),\n },\n )\n\n docs = d.get('response', {}).get('docs', [])\n for doc in docs:\n # replace /authors/OL1A with OL1A\n # The template still expects the key to be in the old format\n doc['key'] = doc['key'].split(\"/\")[-1]\n return d\n\n\nclass author_search_json(author_search):\n path = '/search/authors'\n encoding = 'json'\n\n def GET(self):\n i = web.input(q='', offset=0, limit=100)\n offset = safeint(i.offset, 0)\n limit = safeint(i.limit, 100)\n limit = min(1000, limit) # limit limit to 1000.\n\n response = self.get_results(i.q, offset=offset, limit=limit)['response']\n web.header('Content-Type', 'application/json')\n return delegate.RawText(json.dumps(response))\n\n\n@public\ndef random_author_search(limit=10):\n \"\"\"\n Returns a dict that contains a random list of authors. 
Amount of authors\n returned is set be the given limit.\n \"\"\"\n letters_and_digits = string.ascii_letters + string.digits\n seed = ''.join(random.choice(letters_and_digits) for _ in range(10))\n\n search_results = run_solr_search(\n solr_select_url,\n {\n 'q': 'type:author',\n 'rows': limit,\n 'sort': f'random_{seed} desc',\n 'wt': 'json',\n },\n )\n\n docs = search_results.get('response', {}).get('docs', [])\n\n assert docs, f\"random_author_search({limit}) returned no docs\"\n assert (\n len(docs) == limit\n ), f\"random_author_search({limit}) returned {len(docs)} docs\"\n\n for doc in docs:\n # replace /authors/OL1A with OL1A\n # The template still expects the key to be in the old format\n doc['key'] = doc['key'].split(\"/\")[-1]\n\n return search_results['response']\n\n\ndef rewrite_list_editions_query(q, page, offset, limit):\n \"\"\"Takes a solr query. If it doesn't contain a /lists/ key, then\n return the query, unchanged, exactly as it entered the\n function. If it does contain a lists key, then use the pagination\n information to fetch the right block of keys from the\n lists_editions API and then feed these editions resulting work\n keys into solr with the form key:(OL123W, OL234W). This way, we\n can use the solr API to fetch list works and render them in\n carousels in the right format.\n \"\"\"\n if '/lists/' in q:\n editions = get_list_editions(q, offset=offset, limit=limit)\n work_ids = [ed.get('works')[0]['key'] for ed in editions]\n q = 'key:(' + ' OR '.join(work_ids) + ')'\n # We've applied the offset to fetching get_list_editions to\n # produce the right set of discrete work IDs. We don't want\n # it applied to paginate our resulting solr query.\n offset = 0\n page = 1\n return q, page, offset, limit\n\n\n@public\ndef work_search(\n query,\n sort=None,\n page=1,\n offset=0,\n limit=100,\n fields='*',\n facet=True,\n spellcheck_count=None,\n):\n \"\"\"\n params:\n query: dict\n sort: str editions|old|new|scans\n \"\"\"\n # Ensure we don't mutate the `query` passed in by reference\n query = copy.deepcopy(query)\n query['wt'] = 'json'\n if sort:\n sort = process_sort(sort)\n\n # deal with special /lists/ key queries\n query['q'], page, offset, limit = rewrite_list_editions_query(\n query['q'], page, offset, limit\n )\n try:\n (reply, solr_select, q_list) = run_solr_query(\n query,\n rows=limit,\n page=page,\n sort=sort,\n offset=offset,\n fields=fields,\n facet=facet,\n spellcheck_count=spellcheck_count,\n )\n response = json.loads(reply)['response'] or ''\n except (ValueError, OSError) as e:\n logger.error(\"Error in processing search API.\")\n response = dict(start=0, numFound=0, docs=[], error=str(e))\n\n # backward compatibility\n response['num_found'] = response['numFound']\n if fields == '*' or 'availability' in fields:\n response['docs'] = add_availability(response['docs'])\n return response\n\n\nclass search_json(delegate.page):\n path = \"/search\"\n encoding = \"json\"\n\n def GET(self):\n i = web.input(\n author_key=[],\n subject_facet=[],\n person_facet=[],\n place_facet=[],\n time_facet=[],\n first_publish_year=[],\n publisher_facet=[],\n language=[],\n public_scan_b=[],\n )\n if 'query' in i:\n query = json.loads(i.query)\n else:\n query = i\n\n sort = query.get('sort', None)\n\n limit = safeint(query.pop(\"limit\", \"100\"), default=100)\n if \"offset\" in query:\n offset = safeint(query.pop(\"offset\", 0), default=0)\n page = None\n else:\n offset = None\n page = safeint(query.pop(\"page\", \"1\"), default=1)\n\n fields = query.pop('fields', 
'*').split(',')\n facet = query.pop('_facet', 'true').lower() in ['true']\n spellcheck_count = safeint(\n query.pop(\"_spellcheck_count\", default_spellcheck_count),\n default=default_spellcheck_count,\n )\n\n # If the query is a /list/ key, create custom list_editions_query\n q = query.get('q', '')\n query['q'], page, offset, limit = rewrite_list_editions_query(\n q, page, offset, limit\n )\n response = work_search(\n query,\n sort=sort,\n page=page,\n offset=offset,\n limit=limit,\n fields=fields,\n facet=facet,\n spellcheck_count=spellcheck_count,\n )\n response['q'] = q\n response['offset'] = offset\n response['docs'] = response['docs']\n web.header('Content-Type', 'application/json')\n return delegate.RawText(json.dumps(response, indent=4))\n\n\ndef setup():\n from openlibrary.plugins.worksearch import subjects\n\n # subjects module needs read_author_facet and solr_select_url.\n # Importing this module to access them will result in circular import.\n # Setting them like this to avoid circular-import.\n subjects.read_author_facet = read_author_facet\n if hasattr(config, 'plugin_worksearch'):\n subjects.solr_select_url = solr_select_url\n\n subjects.setup()\n\n from openlibrary.plugins.worksearch import languages, publishers\n\n publishers.setup()\n languages.setup()\n\n\nsetup()\n", "path": "openlibrary/plugins/worksearch/code.py" } ]
[ { "content": "from datetime import datetime\nimport copy\nimport json\nimport logging\nimport random\nimport re\nimport string\nfrom typing import List, Tuple, Any, Union, Optional, Iterable, Dict\nfrom unicodedata import normalize\nfrom json import JSONDecodeError\nimport requests\nimport web\nfrom lxml.etree import XML, XMLSyntaxError\nfrom requests import Response\nfrom six.moves import urllib\n\nfrom infogami import config\nfrom infogami.utils import delegate, stats\nfrom infogami.utils.view import public, render, render_template, safeint\nfrom openlibrary.core.lending import add_availability, get_availability_of_ocaids\nfrom openlibrary.core.models import Edition # noqa: E402\nfrom openlibrary.plugins.inside.code import fulltext_search\nfrom openlibrary.plugins.openlibrary.lists import get_list_editions\nfrom openlibrary.plugins.openlibrary.processors import urlsafe\nfrom openlibrary.plugins.upstream.utils import urlencode\nfrom openlibrary.utils import escape_bracket\nfrom openlibrary.utils.ddc import (\n normalize_ddc,\n normalize_ddc_prefix,\n normalize_ddc_range,\n)\nfrom openlibrary.utils.isbn import normalize_isbn\nfrom openlibrary.utils.lcc import (\n normalize_lcc_prefix,\n normalize_lcc_range,\n short_lcc_to_sortable_lcc,\n)\n\nlogger = logging.getLogger(\"openlibrary.worksearch\")\n\nif hasattr(config, 'plugin_worksearch'):\n solr_select_url = (\n config.plugin_worksearch.get('solr_base_url', 'localhost') + '/select'\n )\n\n default_spellcheck_count = config.plugin_worksearch.get('spellcheck_count', 10)\n\n\nALL_FIELDS = [\n \"key\",\n \"redirects\",\n \"title\",\n \"subtitle\",\n \"alternative_title\",\n \"alternative_subtitle\",\n \"edition_key\",\n \"by_statement\",\n \"publish_date\",\n \"lccn\",\n \"ia\",\n \"oclc\",\n \"isbn\",\n \"contributor\",\n \"publish_place\",\n \"publisher\",\n \"first_sentence\",\n \"author_key\",\n \"author_name\",\n \"author_alternative_name\",\n \"subject\",\n \"person\",\n \"place\",\n \"time\",\n \"has_fulltext\",\n \"title_suggest\",\n \"edition_count\",\n \"publish_year\",\n \"language\",\n \"number_of_pages\",\n \"ia_count\",\n \"publisher_facet\",\n \"author_facet\",\n \"first_publish_year\",\n # Subjects\n \"subject_key\",\n \"person_key\",\n \"place_key\",\n \"time_key\",\n # Classifications\n \"lcc\",\n \"ddc\",\n \"lcc_sort\",\n \"ddc_sort\",\n]\nFACET_FIELDS = [\n \"has_fulltext\",\n \"author_facet\",\n \"language\",\n \"first_publish_year\",\n \"publisher_facet\",\n \"subject_facet\",\n \"person_facet\",\n \"place_facet\",\n \"time_facet\",\n \"public_scan_b\",\n]\nFIELD_NAME_MAP = {\n 'author': 'author_name',\n 'authors': 'author_name',\n 'by': 'author_name',\n 'publishers': 'publisher',\n # \"Private\" fields\n # This is private because we'll change it to a multi-valued field instead of a\n # plain string at the next opportunity, which will make it much more usable.\n '_ia_collection': 'ia_collection_s',\n}\nSORTS = {\n 'editions': 'edition_count desc',\n 'old': 'first_publish_year asc',\n 'new': 'first_publish_year desc',\n 'scans': 'ia_count desc',\n # Classifications\n 'lcc_sort': 'lcc_sort asc',\n 'lcc_sort asc': 'lcc_sort asc',\n 'lcc_sort desc': 'lcc_sort desc',\n 'ddc_sort': 'ddc_sort asc',\n 'ddc_sort asc': 'ddc_sort asc',\n 'ddc_sort desc': 'ddc_sort desc',\n # Random\n 'random': 'random_1 asc',\n 'random asc': 'random_1 asc',\n 'random desc': 'random_1 desc',\n 'random.hourly': lambda: f'random_{datetime.now():%Y%m%dT%H} asc',\n 'random.daily': lambda: f'random_{datetime.now():%Y%m%d} 
asc',\n}\nDEFAULT_SEARCH_FIELDS = {\n 'key',\n 'author_name',\n 'author_key',\n 'title',\n 'subtitle',\n 'edition_count',\n 'ia',\n 'has_fulltext',\n 'first_publish_year',\n 'cover_i',\n 'cover_edition_key',\n 'public_scan_b',\n 'lending_edition_s',\n 'lending_identifier_s',\n 'language',\n 'ia_collection_s',\n # FIXME: These should be fetched from book_providers, but can't cause circular dep\n 'id_project_gutenberg',\n 'id_librivox',\n 'id_standard_ebooks',\n}\nOLID_URLS = {'A': 'authors', 'M': 'books', 'W': 'works'}\n\nre_to_esc = re.compile(r'[\\[\\]:/]')\nre_isbn_field = re.compile(r'^\\s*(?:isbn[:\\s]*)?([-0-9X]{9,})\\s*$', re.I)\nre_author_key = re.compile(r'(OL\\d+A)')\nre_fields = re.compile(r'(-?%s):' % '|'.join(ALL_FIELDS + list(FIELD_NAME_MAP)), re.I)\nre_op = re.compile(' +(OR|AND)$')\nre_range = re.compile(r'\\[(?P<start>.*) TO (?P<end>.*)\\]')\nre_author_facet = re.compile(r'^(OL\\d+A) (.*)$')\nre_pre = re.compile(r'<pre>(.*)</pre>', re.S)\nre_subject_types = re.compile('^(places|times|people)/(.*)')\nre_olid = re.compile(r'^OL\\d+([AMW])$')\n\nplurals = {f + 's': f for f in ('publisher', 'author')}\n\n\n@public\ndef get_solr_works(work_key: Iterable[str]) -> dict[str, dict]:\n from openlibrary.plugins.worksearch.search import get_solr\n\n return {\n doc['key']: doc\n for doc in get_solr().get_many(set(work_key), fields=DEFAULT_SEARCH_FIELDS)\n }\n\n\ndef process_sort(raw_sort):\n \"\"\"\n :param str raw_sort:\n :rtype: str\n\n >>> process_sort('editions')\n 'edition_count desc'\n >>> process_sort('editions, new')\n 'edition_count desc,first_publish_year desc'\n >>> process_sort('random')\n 'random_1 asc'\n >>> process_sort('random_custom_seed')\n 'random_custom_seed asc'\n >>> process_sort('random_custom_seed desc')\n 'random_custom_seed desc'\n >>> process_sort('random_custom_seed asc')\n 'random_custom_seed asc'\n \"\"\"\n\n def process_individual_sort(sort):\n if sort.startswith('random_'):\n return sort if ' ' in sort else sort + ' asc'\n else:\n solr_sort = SORTS[sort]\n return solr_sort() if callable(solr_sort) else solr_sort\n\n return ','.join(process_individual_sort(s.strip()) for s in raw_sort.split(','))\n\n\ndef read_author_facet(af):\n # example input: \"OL26783A Leo Tolstoy\"\n return re_author_facet.match(af).groups()\n\n\ndef get_language_name(code):\n lang = web.ctx.site.get('/languages/' + code)\n return lang.name if lang else \"'%s' unknown\" % code\n\n\ndef read_facets(root):\n e_facet_counts = root.find(\"lst[@name='facet_counts']\")\n e_facet_fields = e_facet_counts.find(\"lst[@name='facet_fields']\")\n facets = {}\n for e_lst in e_facet_fields:\n assert e_lst.tag == 'lst'\n name = e_lst.attrib['name']\n if name == 'author_facet':\n name = 'author_key'\n if name == 'has_fulltext': # boolean facets\n e_true = e_lst.find(\"int[@name='true']\")\n true_count = e_true.text if e_true is not None else 0\n e_false = e_lst.find(\"int[@name='false']\")\n false_count = e_false.text if e_false is not None else 0\n facets[name] = [\n ('true', 'yes', true_count),\n ('false', 'no', false_count),\n ]\n continue\n facets[name] = []\n for e in e_lst:\n if e.text == '0':\n continue\n k = e.attrib['name']\n if name == 'author_key':\n k, display = read_author_facet(k)\n elif name == 'language':\n display = get_language_name(k)\n else:\n display = k\n facets[name].append((k, display, e.text))\n return facets\n\n\ndef lcc_transform(raw):\n \"\"\"\n Transform the lcc search field value\n :param str raw:\n :rtype: str\n \"\"\"\n # e.g. 
lcc:[NC1 TO NC1000] to lcc:[NC-0001.00000000 TO NC-1000.00000000]\n # for proper range search\n m = re_range.match(raw)\n if m:\n lcc_range = [m.group('start').strip(), m.group('end').strip()]\n normed = normalize_lcc_range(*lcc_range)\n return f'[{normed[0] or lcc_range[0]} TO {normed[1] or lcc_range[1]}]'\n elif '*' in raw and not raw.startswith('*'):\n # Marshals human repr into solr repr\n # lcc:A720* should become A--0720*\n parts = raw.split('*', 1)\n lcc_prefix = normalize_lcc_prefix(parts[0])\n return (lcc_prefix or parts[0]) + '*' + parts[1]\n else:\n normed = short_lcc_to_sortable_lcc(raw.strip('\"'))\n if normed:\n use_quotes = ' ' in normed or raw.startswith('\"')\n return ('\"%s\"' if use_quotes else '%s*') % normed\n\n # If none of the transforms took\n return raw\n\n\ndef ddc_transform(raw):\n \"\"\"\n Transform the ddc search field value\n :param str raw:\n :rtype: str\n \"\"\"\n m = re_range.match(raw)\n if m:\n raw = [m.group('start').strip(), m.group('end').strip()]\n normed = normalize_ddc_range(*raw)\n return f'[{normed[0] or raw[0]} TO {normed[1] or raw[1]}]'\n elif raw.endswith('*'):\n return normalize_ddc_prefix(raw[:-1]) + '*'\n else:\n normed = normalize_ddc(raw.strip('\"'))\n if normed:\n return normed[0]\n\n # if none of the transforms took\n return raw\n\n\ndef ia_collection_s_transform(raw):\n \"\"\"\n Because this field is not a multi-valued field in solr, but a simple ;-separate\n string, we have to do searches like this for now.\n \"\"\"\n result = raw\n if not result.startswith('*'):\n result = '*' + result\n if not result.endswith('*'):\n result += '*'\n return result\n\n\ndef parse_query_fields(q):\n found = [(m.start(), m.end()) for m in re_fields.finditer(q)]\n first = q[: found[0][0]].strip() if found else q.strip()\n if first:\n yield {'field': 'text', 'value': first.replace(':', r'\\:')}\n for field_num in range(len(found)):\n op_found = None\n f = found[field_num]\n field_name = q[f[0] : f[1] - 1].lower()\n if field_name in FIELD_NAME_MAP:\n field_name = FIELD_NAME_MAP[field_name]\n if field_num == len(found) - 1:\n v = q[f[1] :].strip()\n else:\n v = q[f[1] : found[field_num + 1][0]].strip()\n m = re_op.search(v)\n if m:\n v = v[: -len(m.group(0))]\n op_found = m.group(1)\n if field_name == 'isbn':\n isbn = normalize_isbn(v)\n if isbn:\n v = isbn\n if field_name in ('lcc', 'lcc_sort'):\n v = lcc_transform(v)\n if field_name == ('ddc', 'ddc_sort'):\n v = ddc_transform(v)\n if field_name == 'ia_collection_s':\n v = ia_collection_s_transform(v)\n\n yield {'field': field_name, 'value': v.replace(':', r'\\:')}\n if op_found:\n yield {'op': op_found}\n\n\ndef build_q_list(param):\n q_list = []\n if 'q' in param:\n # Solr 4+ has support for regexes (eg `key:/foo.*/`)! But for now, let's not\n # expose that and escape all '/'. 
Otherwise `key:/works/OL1W` is interpreted as\n # a regex.\n q_param = param['q'].strip().replace('/', '\\\\/')\n else:\n q_param = None\n use_dismax = False\n if q_param:\n if q_param == '*:*':\n q_list.append(q_param)\n elif 'NOT ' in q_param: # this is a hack\n q_list.append(q_param.strip())\n elif re_fields.search(q_param):\n q_list.extend(\n i['op'] if 'op' in i else '{}:({})'.format(i['field'], i['value'])\n for i in parse_query_fields(q_param)\n )\n else:\n isbn = normalize_isbn(q_param)\n if isbn and len(isbn) in (10, 13):\n q_list.append('isbn:(%s)' % isbn)\n else:\n q_list.append(q_param.strip().replace(':', r'\\:'))\n use_dismax = True\n else:\n if 'author' in param:\n v = param['author'].strip()\n m = re_author_key.search(v)\n if m:\n q_list.append(\"author_key:(%s)\" % m.group(1))\n else:\n v = re_to_esc.sub(r'\\\\\\g<0>', v)\n # Somehow v can be empty at this point,\n # passing the following with empty strings causes a severe error in SOLR\n if v:\n q_list.append(\n \"(author_name:({name}) OR author_alternative_name:({name}))\".format(\n name=v\n )\n )\n\n check_params = [\n 'title',\n 'publisher',\n 'oclc',\n 'lccn',\n 'contributor',\n 'subject',\n 'place',\n 'person',\n 'time',\n ]\n q_list += [\n '{}:({})'.format(k, re_to_esc.sub(r'\\\\\\g<0>', param[k]))\n for k in check_params\n if k in param\n ]\n if param.get('isbn'):\n q_list.append(\n 'isbn:(%s)' % (normalize_isbn(param['isbn']) or param['isbn'])\n )\n return (q_list, use_dismax)\n\n\ndef execute_solr_query(\n solr_path: str, params: Union[dict, list[tuple[str, Any]]]\n) -> Optional[Response]:\n stats.begin(\"solr\", url=f'{solr_path}?{urlencode(params)}')\n try:\n response = requests.get(solr_path, params=params, timeout=10)\n response.raise_for_status()\n except requests.HTTPError:\n logger.exception(\"Failed solr query\")\n return None\n finally:\n stats.end()\n return response\n\n\ndef parse_json_from_solr_query(\n solr_path: str, params: Union[dict, list[tuple[str, Any]]]\n) -> Optional[dict]:\n \"\"\"\n Returns a json.loaded Python object or None\n \"\"\"\n response = execute_solr_query(solr_path, params)\n if not response:\n logger.error(\"Error parsing empty search engine response\")\n return None\n try:\n return response.json()\n except JSONDecodeError:\n logger.exception(\"Error parsing search engine response\")\n return None\n\n\ndef run_solr_query(\n param=None,\n rows=100,\n page=1,\n sort=None,\n spellcheck_count=None,\n offset=None,\n fields=None,\n facet=True,\n):\n param = param or {}\n\n # use page when offset is not specified\n if offset is None:\n offset = rows * (page - 1)\n\n (q_list, use_dismax) = build_q_list(param)\n params = [\n ('fl', ','.join(fields or DEFAULT_SEARCH_FIELDS)),\n ('fq', 'type:work'),\n ('q.op', 'AND'),\n ('start', offset),\n ('rows', rows),\n ]\n\n if spellcheck_count is None:\n spellcheck_count = default_spellcheck_count\n\n if spellcheck_count:\n params.append(('spellcheck', 'true'))\n params.append(('spellcheck.count', spellcheck_count))\n\n if facet:\n params.append(('facet', 'true'))\n for facet in FACET_FIELDS:\n params.append(('facet.field', facet))\n\n if q_list:\n if use_dismax:\n params.append(('q', ' '.join(q_list)))\n params.append(('defType', 'dismax'))\n params.append(('qf', 'text title^20 author_name^20'))\n params.append(('bf', 'min(100,edition_count)'))\n else:\n params.append(('q', ' '.join(q_list + ['_val_:\"sqrt(edition_count)\"^10'])))\n\n if 'public_scan' in param:\n v = param.pop('public_scan').lower()\n if v in ('true', 'false'):\n if v == 'false':\n 
# also constrain on print disabled since the index may not be in sync\n param.setdefault('print_disabled', 'false')\n params.append(('fq', 'public_scan_b:%s' % v))\n\n if 'print_disabled' in param:\n v = param.pop('print_disabled').lower()\n if v in ('true', 'false'):\n minus = '-' if v == 'false' else ''\n params.append(('fq', '%ssubject_key:protected_daisy' % minus))\n\n if 'has_fulltext' in param:\n v = param['has_fulltext'].lower()\n if v not in ('true', 'false'):\n del param['has_fulltext']\n params.append(('fq', 'has_fulltext:%s' % v))\n\n for field in FACET_FIELDS:\n if field == 'has_fulltext':\n continue\n if field == 'author_facet':\n field = 'author_key'\n if field not in param:\n continue\n values = param[field]\n params += [('fq', f'{field}:\"{val}\"') for val in values if val]\n\n if sort:\n params.append(('sort', sort))\n\n if 'wt' in param:\n params.append(('wt', param.get('wt')))\n url = f'{solr_select_url}?{urlencode(params)}'\n\n response = execute_solr_query(solr_select_url, params)\n solr_result = response.content if response else None # bytes or None\n return (solr_result, url, q_list)\n\n\ndef do_search(param, sort, page=1, rows=100, spellcheck_count=None):\n if sort:\n sort = process_sort(sort)\n (solr_result, solr_select, q_list) = run_solr_query(\n param, rows, page, sort, spellcheck_count\n )\n is_bad = False\n if not solr_result or solr_result.startswith(b'<html'):\n is_bad = True\n if not is_bad:\n try:\n root = XML(solr_result)\n except XMLSyntaxError:\n is_bad = True\n if is_bad:\n m = re_pre.search(solr_result)\n return web.storage(\n facet_counts=None,\n docs=[],\n is_advanced=bool(param.get('q')),\n num_found=None,\n solr_select=solr_select,\n q_list=q_list,\n error=(web.htmlunquote(m.group(1)) if m else solr_result),\n )\n\n spellcheck = root.find(\"lst[@name='spellcheck']\")\n spell_map = {}\n if spellcheck is not None and len(spellcheck):\n for e in spellcheck.find(\"lst[@name='suggestions']\"):\n assert e.tag == 'lst'\n a = e.attrib['name']\n if a in spell_map or a in ('sqrt', 'edition_count'):\n continue\n spell_map[a] = [i.text for i in e.find(\"arr[@name='suggestion']\")]\n\n docs = root.find('result')\n return web.storage(\n facet_counts=read_facets(root),\n docs=docs,\n is_advanced=bool(param.get('q')),\n num_found=(int(docs.attrib['numFound']) if docs is not None else None),\n solr_select=solr_select,\n q_list=q_list,\n error=None,\n spellcheck=spell_map,\n )\n\n\ndef get_doc(doc): # called from work_search template\n e_ia = doc.find(\"arr[@name='ia']\")\n e_id_project_gutenberg = doc.find(\"arr[@name='id_project_gutenberg']\") or []\n e_id_librivox = doc.find(\"arr[@name='id_librivox']\") or []\n e_id_standard_ebooks = doc.find(\"arr[@name='id_standard_ebooks']\") or []\n\n first_pub = None\n e_first_pub = doc.find(\"int[@name='first_publish_year']\")\n if e_first_pub is not None:\n first_pub = e_first_pub.text\n e_first_edition = doc.find(\"str[@name='first_edition']\")\n first_edition = None\n if e_first_edition is not None:\n first_edition = e_first_edition.text\n\n work_subtitle = None\n e_subtitle = doc.find(\"str[@name='subtitle']\")\n if e_subtitle is not None:\n work_subtitle = e_subtitle.text\n\n if doc.find(\"arr[@name='author_key']\") is None:\n assert doc.find(\"arr[@name='author_name']\") is None\n authors = []\n else:\n ak = [e.text for e in doc.find(\"arr[@name='author_key']\")]\n an = [e.text for e in doc.find(\"arr[@name='author_name']\")]\n authors = [\n web.storage(\n key=key,\n name=name,\n url=\"/authors/{}/{}\".format(\n 
key, (urlsafe(name) if name is not None else 'noname')\n ),\n )\n for key, name in zip(ak, an)\n ]\n cover = doc.find(\"str[@name='cover_edition_key']\")\n languages = doc.find(\"arr[@name='language']\")\n e_public_scan = doc.find(\"bool[@name='public_scan_b']\")\n e_lending_edition = doc.find(\"str[@name='lending_edition_s']\")\n e_lending_identifier = doc.find(\"str[@name='lending_identifier_s']\")\n e_collection = doc.find(\"str[@name='ia_collection_s']\")\n collections = set()\n if e_collection is not None:\n collections = set(e_collection.text.split(';'))\n\n doc = web.storage(\n key=doc.find(\"str[@name='key']\").text,\n title=doc.find(\"str[@name='title']\").text,\n edition_count=int(doc.find(\"int[@name='edition_count']\").text),\n ia=[e.text for e in (e_ia if e_ia is not None else [])],\n has_fulltext=(doc.find(\"bool[@name='has_fulltext']\").text == 'true'),\n public_scan=(\n (e_public_scan.text == 'true')\n if e_public_scan is not None\n else (e_ia is not None)\n ),\n lending_edition=(\n e_lending_edition.text if e_lending_edition is not None else None\n ),\n lending_identifier=(\n e_lending_identifier.text if e_lending_identifier is not None else None\n ),\n collections=collections,\n authors=authors,\n first_publish_year=first_pub,\n first_edition=first_edition,\n subtitle=work_subtitle,\n cover_edition_key=(cover.text if cover is not None else None),\n languages=languages and [lang.text for lang in languages],\n id_project_gutenberg=[e.text for e in e_id_project_gutenberg],\n id_librivox=[e.text for e in e_id_librivox],\n id_standard_ebooks=[e.text for e in e_id_standard_ebooks],\n )\n\n doc.url = doc.key + '/' + urlsafe(doc.title)\n return doc\n\n\ndef work_object(w): # called by works_by_author\n ia = w.get('ia', [])\n obj = dict(\n authors=[\n web.storage(key='/authors/' + k, name=n)\n for k, n in zip(w['author_key'], w['author_name'])\n ],\n edition_count=w['edition_count'],\n key=w['key'],\n title=w['title'],\n public_scan=w.get('public_scan_b', bool(ia)),\n lending_edition=w.get('lending_edition_s', ''),\n lending_identifier=w.get('lending_identifier_s', ''),\n collections=set(\n w['ia_collection_s'].split(';') if 'ia_collection_s' in w else []\n ),\n url=w['key'] + '/' + urlsafe(w['title']),\n cover_edition_key=w.get('cover_edition_key'),\n first_publish_year=(\n w['first_publish_year'] if 'first_publish_year' in w else None\n ),\n ia=w.get('ia', []),\n cover_i=w.get('cover_i'),\n id_project_gutenberg=w.get('id_project_gutenberg'),\n id_librivox=w.get('id_librivox'),\n id_standard_ebooks=w.get('id_standard_ebooks'),\n )\n\n for f in 'has_fulltext', 'subtitle':\n if w.get(f):\n obj[f] = w[f]\n return web.storage(obj)\n\n\nclass scan(delegate.page):\n \"\"\"\n Experimental EAN barcode scanner page to scan and add/view books by their barcodes.\n \"\"\"\n\n path = \"/barcodescanner\"\n\n def GET(self):\n return render.barcodescanner()\n\n\nclass search(delegate.page):\n def redirect_if_needed(self, i):\n params = {}\n need_redirect = False\n for k, v in i.items():\n if k in plurals:\n params[k] = None\n k = plurals[k]\n need_redirect = True\n if isinstance(v, list):\n if v == []:\n continue\n clean = [normalize('NFC', b.strip()) for b in v]\n if clean != v:\n need_redirect = True\n if len(clean) == 1 and clean[0] == '':\n clean = None\n else:\n clean = normalize('NFC', v.strip())\n if clean == '':\n need_redirect = True\n clean = None\n if clean != v:\n need_redirect = True\n params[k] = clean\n if need_redirect:\n raise web.seeother(web.changequery(**params))\n\n def 
isbn_redirect(self, isbn_param):\n isbn = normalize_isbn(isbn_param)\n if not isbn:\n return\n\n ed = Edition.from_isbn(isbn)\n if ed:\n web.seeother(ed.key)\n\n def GET(self):\n # Enable patrons to search for query q2 within collection q\n # q2 param gets removed and prepended to q via a redirect\n _i = web.input(q='', q2='')\n if _i.q.strip() and _i.q2.strip():\n _i.q = _i.q2.strip() + ' ' + _i.q.strip()\n _i.pop('q2')\n raise web.seeother('/search?' + urllib.parse.urlencode(_i))\n\n i = web.input(\n author_key=[],\n language=[],\n first_publish_year=[],\n publisher_facet=[],\n subject_facet=[],\n person_facet=[],\n place_facet=[],\n time_facet=[],\n public_scan_b=[],\n )\n\n # Send to full-text Search Inside if checkbox checked\n if i.get('search-fulltext'):\n raise web.seeother(\n '/search/inside?' + urllib.parse.urlencode({'q': i.get('q', '')})\n )\n\n if i.get('wisbn'):\n i.isbn = i.wisbn\n\n self.redirect_if_needed(i)\n\n if 'isbn' in i:\n self.isbn_redirect(i.isbn)\n\n q_list = []\n q = i.get('q', '').strip()\n if q:\n m = re_olid.match(q)\n if m:\n raise web.seeother(f'/{OLID_URLS[m.group(1)]}/{q}')\n m = re_isbn_field.match(q)\n if m:\n self.isbn_redirect(m.group(1))\n q_list.append(q)\n for k in ('title', 'author', 'isbn', 'subject', 'place', 'person', 'publisher'):\n if k in i:\n v = re_to_esc.sub(r'\\\\\\g<0>', i[k].strip())\n q_list.append(k + ':' + v)\n return render.work_search(\n i,\n ' '.join(q_list),\n do_search,\n get_doc,\n get_availability_of_ocaids,\n fulltext_search,\n FACET_FIELDS,\n )\n\n\ndef works_by_author(\n akey, sort='editions', page=1, rows=100, has_fulltext=False, query=None\n):\n # called by merge_author_works\n q = 'author_key:' + akey\n if query:\n q = query\n\n offset = rows * (page - 1)\n params = [\n ('fq', 'author_key:' + akey),\n ('fq', 'type:work'),\n ('q', q),\n ('start', offset),\n ('rows', rows),\n (\n 'fl',\n ','.join(\n [\n 'key',\n 'author_name',\n 'author_key',\n 'title',\n 'subtitle',\n 'edition_count',\n 'ia',\n 'cover_edition_key',\n 'has_fulltext',\n 'language',\n 'first_publish_year',\n 'public_scan_b',\n 'lending_edition_s',\n 'lending_identifier_s',\n 'ia_collection_s',\n 'id_project_gutenberg',\n 'id_librivox',\n 'id_standard_ebooks',\n 'cover_i',\n ]\n ),\n ),\n ('wt', 'json'),\n ('q.op', 'AND'),\n ('facet', 'true'),\n ('facet.mincount', 1),\n ('f.author_facet.facet.sort', 'count'),\n ('f.publish_year.facet.limit', -1),\n ('facet.limit', 25),\n ]\n\n if has_fulltext:\n params.append(('fq', 'has_fulltext:true'))\n\n if sort == \"editions\":\n params.append(('sort', 'edition_count desc'))\n elif sort.startswith('old'):\n params.append(('sort', 'first_publish_year asc'))\n elif sort.startswith('new'):\n params.append(('sort', 'first_publish_year desc'))\n elif sort.startswith('title'):\n params.append(('sort', 'title asc'))\n\n facet_fields = [\n \"author_facet\",\n \"language\",\n \"publish_year\",\n \"publisher_facet\",\n \"subject_facet\",\n \"person_facet\",\n \"place_facet\",\n \"time_facet\",\n ]\n for f in facet_fields:\n params.append((\"facet.field\", f))\n\n reply = parse_json_from_solr_query(solr_select_url, params)\n if reply is None:\n return web.storage(\n num_found=0,\n works=[],\n years=[],\n get_facet=[],\n sort=sort,\n )\n # TODO: Deep JSON structure defense - for now, let it blow up so easier to detect\n facets = reply['facet_counts']['facet_fields']\n works = [work_object(w) for w in reply['response']['docs']]\n\n def get_facet(f, limit=None):\n return list(web.group(facets[f][: limit * 2] if limit else facets[f], 
2))\n\n return web.storage(\n num_found=int(reply['response']['numFound']),\n works=add_availability(works),\n years=[(int(k), v) for k, v in get_facet('publish_year')],\n get_facet=get_facet,\n sort=sort,\n )\n\n\ndef sorted_work_editions(wkey, json_data=None):\n \"\"\"Setting json_data to a real value simulates getting SOLR data back, i.e. for testing (but ick!)\"\"\"\n q = 'key:' + wkey\n if json_data:\n reply = json.loads(json_data)\n else:\n reply = parse_json_from_solr_query(\n solr_select_url,\n {\n 'q.op': 'AND',\n 'q': q,\n 'rows': 10,\n 'fl': 'edition_key',\n 'qt': 'standard',\n 'wt': 'json',\n },\n )\n if reply is None or reply.get('response', {}).get('numFound', 0) == 0:\n return []\n # TODO: Deep JSON structure defense - for now, let it blow up so easier to detect\n return reply[\"response\"]['docs'][0].get('edition_key', [])\n\n\ndef top_books_from_author(akey, rows=5, offset=0):\n q = 'author_key:(' + akey + ')'\n json_result = parse_json_from_solr_query(\n solr_select_url,\n {\n 'q': q,\n 'start': offset,\n 'rows': rows,\n 'fl': 'key,title,edition_count,first_publish_year',\n 'sort': 'edition_count desc',\n 'wt': 'json',\n },\n )\n if json_result is None:\n return {'books': [], 'total': 0}\n # TODO: Deep JSON structure defense - for now, let it blow up so easier to detect\n response = json_result['response']\n return {\n 'books': [web.storage(doc) for doc in response['docs']],\n 'total': response['numFound'],\n }\n\n\nclass advancedsearch(delegate.page):\n path = \"/advancedsearch\"\n\n def GET(self):\n return render_template(\"search/advancedsearch.html\")\n\n\ndef escape_colon(q, vf):\n if ':' not in q:\n return q\n parts = q.split(':')\n result = parts.pop(0)\n while parts:\n if not any(result.endswith(f) for f in vf):\n result += '\\\\'\n result += ':' + parts.pop(0)\n return result\n\n\ndef run_solr_search(solr_select: str, params: dict):\n response = execute_solr_query(solr_select, params)\n json_data = response.content if response else None # bytes or None\n return parse_search_response(json_data)\n\n\ndef parse_search_response(json_data):\n \"\"\"Construct response for any input\"\"\"\n if json_data is None:\n return {'error': 'Error parsing empty search engine response'}\n try:\n return json.loads(json_data)\n except json.JSONDecodeError:\n logger.exception(\"Error parsing search engine response\")\n m = re_pre.search(json_data)\n if m is None:\n return {'error': 'Error parsing search engine response'}\n error = web.htmlunquote(m.group(1))\n solr_error = 'org.apache.lucene.queryParser.ParseException: '\n if error.startswith(solr_error):\n error = error[len(solr_error) :]\n return {'error': error}\n\n\nclass list_search(delegate.page):\n path = '/search/lists'\n\n def GET(self):\n i = web.input(q='', offset='0', limit='10')\n\n lists = self.get_results(i.q, i.offset, i.limit)\n\n return render_template('search/lists.tmpl', q=i.q, lists=lists)\n\n def get_results(self, q, offset=0, limit=100):\n if 'env' not in web.ctx:\n delegate.fakeload()\n\n keys = web.ctx.site.things(\n {\n \"type\": \"/type/list\",\n \"name~\": q,\n \"limit\": int(limit),\n \"offset\": int(offset),\n }\n )\n\n return web.ctx.site.get_many(keys)\n\n\nclass list_search_json(list_search):\n path = '/search/lists'\n encoding = 'json'\n\n def GET(self):\n i = web.input(q='', offset=0, limit=10)\n offset = safeint(i.offset, 0)\n limit = safeint(i.limit, 10)\n limit = min(100, limit)\n\n docs = self.get_results(i.q, offset=offset, limit=limit)\n\n response = {'start': offset, 'docs': [doc.preview() 
for doc in docs]}\n\n web.header('Content-Type', 'application/json')\n return delegate.RawText(json.dumps(response))\n\n\nclass subject_search(delegate.page):\n path = '/search/subjects'\n\n def GET(self):\n return render_template('search/subjects.tmpl', self.get_results)\n\n def get_results(self, q, offset=0, limit=100):\n valid_fields = ['key', 'name', 'subject_type', 'work_count']\n q = escape_colon(escape_bracket(q), valid_fields)\n\n results = run_solr_search(\n solr_select_url,\n {\n \"fq\": \"type:subject\",\n \"q.op\": \"AND\",\n \"q\": q,\n \"start\": offset,\n \"rows\": limit,\n \"fl\": \",\".join(valid_fields),\n \"qt\": \"standard\",\n \"wt\": \"json\",\n \"sort\": \"work_count desc\",\n },\n )\n response = results['response']\n\n for doc in response['docs']:\n doc['type'] = doc.get('subject_type', 'subject')\n doc['count'] = doc.get('work_count', 0)\n\n return results\n\n\nclass subject_search_json(subject_search):\n path = '/search/subjects'\n encoding = 'json'\n\n def GET(self):\n i = web.input(q='', offset=0, limit=100)\n offset = safeint(i.offset, 0)\n limit = safeint(i.limit, 100)\n limit = min(1000, limit) # limit limit to 1000.\n\n response = self.get_results(i.q, offset=offset, limit=limit)['response']\n web.header('Content-Type', 'application/json')\n return delegate.RawText(json.dumps(response))\n\n\nclass author_search(delegate.page):\n path = '/search/authors'\n\n def GET(self):\n return render_template('search/authors.tmpl', self.get_results)\n\n def get_results(self, q, offset=0, limit=100):\n valid_fields = [\n 'key',\n 'name',\n 'alternate_names',\n 'birth_date',\n 'death_date',\n 'date',\n 'work_count',\n ]\n q = escape_colon(escape_bracket(q), valid_fields)\n q_has_fields = ':' in q.replace(r'\\:', '') or '*' in q\n\n d = run_solr_search(\n solr_select_url,\n {\n 'fq': 'type:author',\n 'q.op': 'AND',\n 'q': q,\n 'start': offset,\n 'rows': limit,\n 'fl': '*',\n 'qt': 'standard',\n 'sort': 'work_count desc',\n 'wt': 'json',\n **(\n {}\n if q_has_fields\n else {'defType': 'dismax', 'qf': 'name alternate_names'}\n ),\n },\n )\n\n docs = d.get('response', {}).get('docs', [])\n for doc in docs:\n # replace /authors/OL1A with OL1A\n # The template still expects the key to be in the old format\n doc['key'] = doc['key'].split(\"/\")[-1]\n return d\n\n\nclass author_search_json(author_search):\n path = '/search/authors'\n encoding = 'json'\n\n def GET(self):\n i = web.input(q='', offset=0, limit=100)\n offset = safeint(i.offset, 0)\n limit = safeint(i.limit, 100)\n limit = min(1000, limit) # limit limit to 1000.\n\n response = self.get_results(i.q, offset=offset, limit=limit)['response']\n web.header('Content-Type', 'application/json')\n return delegate.RawText(json.dumps(response))\n\n\n@public\ndef random_author_search(limit=10):\n \"\"\"\n Returns a dict that contains a random list of authors. 
Amount of authors\n returned is set be the given limit.\n \"\"\"\n letters_and_digits = string.ascii_letters + string.digits\n seed = ''.join(random.choice(letters_and_digits) for _ in range(10))\n\n search_results = run_solr_search(\n solr_select_url,\n {\n 'q': 'type:author',\n 'rows': limit,\n 'sort': f'random_{seed} desc',\n 'wt': 'json',\n },\n )\n\n docs = search_results.get('response', {}).get('docs', [])\n\n assert docs, f\"random_author_search({limit}) returned no docs\"\n assert (\n len(docs) == limit\n ), f\"random_author_search({limit}) returned {len(docs)} docs\"\n\n for doc in docs:\n # replace /authors/OL1A with OL1A\n # The template still expects the key to be in the old format\n doc['key'] = doc['key'].split(\"/\")[-1]\n\n return search_results['response']\n\n\ndef rewrite_list_editions_query(q, page, offset, limit):\n \"\"\"Takes a solr query. If it doesn't contain a /lists/ key, then\n return the query, unchanged, exactly as it entered the\n function. If it does contain a lists key, then use the pagination\n information to fetch the right block of keys from the\n lists_editions API and then feed these editions resulting work\n keys into solr with the form key:(OL123W, OL234W). This way, we\n can use the solr API to fetch list works and render them in\n carousels in the right format.\n \"\"\"\n if '/lists/' in q:\n editions = get_list_editions(q, offset=offset, limit=limit)\n work_ids = [ed.get('works')[0]['key'] for ed in editions]\n q = 'key:(' + ' OR '.join(work_ids) + ')'\n # We've applied the offset to fetching get_list_editions to\n # produce the right set of discrete work IDs. We don't want\n # it applied to paginate our resulting solr query.\n offset = 0\n page = 1\n return q, page, offset, limit\n\n\n@public\ndef work_search(\n query,\n sort=None,\n page=1,\n offset=0,\n limit=100,\n fields='*',\n facet=True,\n spellcheck_count=None,\n):\n \"\"\"\n params:\n query: dict\n sort: str editions|old|new|scans\n \"\"\"\n # Ensure we don't mutate the `query` passed in by reference\n query = copy.deepcopy(query)\n query['wt'] = 'json'\n if sort:\n sort = process_sort(sort)\n\n # deal with special /lists/ key queries\n query['q'], page, offset, limit = rewrite_list_editions_query(\n query['q'], page, offset, limit\n )\n try:\n (reply, solr_select, q_list) = run_solr_query(\n query,\n rows=limit,\n page=page,\n sort=sort,\n offset=offset,\n fields=fields,\n facet=facet,\n spellcheck_count=spellcheck_count,\n )\n response = json.loads(reply)['response'] or ''\n except (ValueError, OSError) as e:\n logger.error(\"Error in processing search API.\")\n response = dict(start=0, numFound=0, docs=[], error=str(e))\n\n # backward compatibility\n response['num_found'] = response['numFound']\n if fields == '*' or 'availability' in fields:\n response['docs'] = add_availability(response['docs'])\n return response\n\n\nclass search_json(delegate.page):\n path = \"/search\"\n encoding = \"json\"\n\n def GET(self):\n i = web.input(\n author_key=[],\n subject_facet=[],\n person_facet=[],\n place_facet=[],\n time_facet=[],\n first_publish_year=[],\n publisher_facet=[],\n language=[],\n public_scan_b=[],\n )\n if 'query' in i:\n query = json.loads(i.query)\n else:\n query = i\n\n sort = query.get('sort', None)\n\n limit = safeint(query.pop(\"limit\", \"100\"), default=100)\n if \"offset\" in query:\n offset = safeint(query.pop(\"offset\", 0), default=0)\n page = None\n else:\n offset = None\n page = safeint(query.pop(\"page\", \"1\"), default=1)\n\n fields = query.pop('fields', 
'*').split(',')\n facet = query.pop('_facet', 'true').lower() in ['true']\n spellcheck_count = safeint(\n query.pop(\"_spellcheck_count\", default_spellcheck_count),\n default=default_spellcheck_count,\n )\n\n # If the query is a /list/ key, create custom list_editions_query\n q = query.get('q', '')\n query['q'], page, offset, limit = rewrite_list_editions_query(\n q, page, offset, limit\n )\n response = work_search(\n query,\n sort=sort,\n page=page,\n offset=offset,\n limit=limit,\n fields=fields,\n facet=facet,\n spellcheck_count=spellcheck_count,\n )\n response['q'] = q\n response['offset'] = offset\n response['docs'] = response['docs']\n web.header('Content-Type', 'application/json')\n return delegate.RawText(json.dumps(response, indent=4))\n\n\ndef setup():\n from openlibrary.plugins.worksearch import subjects\n\n # subjects module needs read_author_facet and solr_select_url.\n # Importing this module to access them will result in circular import.\n # Setting them like this to avoid circular-import.\n subjects.read_author_facet = read_author_facet\n if hasattr(config, 'plugin_worksearch'):\n subjects.solr_select_url = solr_select_url\n\n subjects.setup()\n\n from openlibrary.plugins.worksearch import languages, publishers\n\n publishers.setup()\n languages.setup()\n\n\nsetup()\n", "path": "openlibrary/plugins/worksearch/code.py" } ]
diff --git a/openlibrary/plugins/worksearch/code.py b/openlibrary/plugins/worksearch/code.py index 7f1991f46bf..50c876c70d2 100644 --- a/openlibrary/plugins/worksearch/code.py +++ b/openlibrary/plugins/worksearch/code.py @@ -1108,7 +1108,7 @@ def get_results(self, q, offset=0, limit=100): 'work_count', ] q = escape_colon(escape_bracket(q), valid_fields) - q_has_fields = ':' in q.replace(r'\:', '') + q_has_fields = ':' in q.replace(r'\:', '') or '*' in q d = run_solr_search( solr_select_url,
getpelican__pelican-1426
DOCUTILS_SETTINGS is neither documented nor initialized
DOCUTILS_SETTINGS was introduced in #864, but it has not been documented, nor is it initialized with a default in settings.py.
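For orientation, a minimal sketch of what initialising the setting could look like: an empty-dict default in `DEFAULT_CONFIG` (mirroring the fix shown in the diff further down this entry), plus a hypothetical user override in `pelicanconf.py`. The override key shown is purely illustrative and not taken from the report.

```python
# pelican/settings.py (sketch): default to an empty dict so readers can look
# up settings['DOCUTILS_SETTINGS'] without special-casing a missing key.
DEFAULT_CONFIG = {
    # ... existing defaults ...
    'DOCUTILS_SETTINGS': {},
}

# pelicanconf.py (hypothetical user override): extra options handed to the
# docutils publisher when rendering reStructuredText content.
DOCUTILS_SETTINGS = {
    'strip_comments': True,
}
```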
[ { "content": "# -*- coding: utf-8 -*-\nfrom __future__ import unicode_literals, print_function\nimport six\n\nimport copy\nimport inspect\nimport os\nimport locale\nimport logging\n\ntry:\n # SourceFileLoader is the recommended way in 3.3+\n from importlib.machinery import SourceFileLoader\n load_source = lambda name, path: SourceFileLoader(name, path).load_module()\nexcept ImportError:\n # but it does not exist in 3.2-, so fall back to imp\n import imp\n load_source = imp.load_source\n\nfrom os.path import isabs\n\nfrom pelican.log import LimitFilter\n\n\nlogger = logging.getLogger(__name__)\n\n\nDEFAULT_THEME = os.path.join(os.path.dirname(os.path.abspath(__file__)),\n 'themes', 'notmyidea')\nDEFAULT_CONFIG = {\n 'PATH': os.curdir,\n 'ARTICLE_PATHS': [''],\n 'ARTICLE_EXCLUDES': [],\n 'PAGE_PATHS': ['pages'],\n 'PAGE_EXCLUDES': [],\n 'THEME': DEFAULT_THEME,\n 'OUTPUT_PATH': 'output',\n 'READERS': {},\n 'STATIC_PATHS': ['images', ],\n 'THEME_STATIC_DIR': 'theme',\n 'THEME_STATIC_PATHS': ['static', ],\n 'FEED_ALL_ATOM': os.path.join('feeds', 'all.atom.xml'),\n 'CATEGORY_FEED_ATOM': os.path.join('feeds', '%s.atom.xml'),\n 'AUTHOR_FEED_ATOM': os.path.join('feeds', '%s.atom.xml'),\n 'AUTHOR_FEED_RSS': os.path.join('feeds', '%s.rss.xml'),\n 'TRANSLATION_FEED_ATOM': os.path.join('feeds', 'all-%s.atom.xml'),\n 'FEED_MAX_ITEMS': '',\n 'SITEURL': '',\n 'SITENAME': 'A Pelican Blog',\n 'DISPLAY_PAGES_ON_MENU': True,\n 'DISPLAY_CATEGORIES_ON_MENU': True,\n 'OUTPUT_SOURCES': False,\n 'OUTPUT_SOURCES_EXTENSION': '.text',\n 'USE_FOLDER_AS_CATEGORY': True,\n 'DEFAULT_CATEGORY': 'misc',\n 'WITH_FUTURE_DATES': True,\n 'CSS_FILE': 'main.css',\n 'NEWEST_FIRST_ARCHIVES': True,\n 'REVERSE_CATEGORY_ORDER': False,\n 'DELETE_OUTPUT_DIRECTORY': False,\n 'OUTPUT_RETENTION': (),\n 'ARTICLE_URL': '{slug}.html',\n 'ARTICLE_SAVE_AS': '{slug}.html',\n 'ARTICLE_LANG_URL': '{slug}-{lang}.html',\n 'ARTICLE_LANG_SAVE_AS': '{slug}-{lang}.html',\n 'DRAFT_URL': 'drafts/{slug}.html',\n 'DRAFT_SAVE_AS': os.path.join('drafts', '{slug}.html'),\n 'DRAFT_LANG_URL': 'drafts/{slug}-{lang}.html',\n 'DRAFT_LANG_SAVE_AS': os.path.join('drafts', '{slug}-{lang}.html'),\n 'PAGE_URL': 'pages/{slug}.html',\n 'PAGE_SAVE_AS': os.path.join('pages', '{slug}.html'),\n 'PAGE_LANG_URL': 'pages/{slug}-{lang}.html',\n 'PAGE_LANG_SAVE_AS': os.path.join('pages', '{slug}-{lang}.html'),\n 'STATIC_URL': '{path}',\n 'STATIC_SAVE_AS': '{path}',\n 'PDF_GENERATOR': False,\n 'PDF_STYLE_PATH': '',\n 'PDF_STYLE': 'twelvepoint',\n 'CATEGORY_URL': 'category/{slug}.html',\n 'CATEGORY_SAVE_AS': os.path.join('category', '{slug}.html'),\n 'TAG_URL': 'tag/{slug}.html',\n 'TAG_SAVE_AS': os.path.join('tag', '{slug}.html'),\n 'AUTHOR_URL': 'author/{slug}.html',\n 'AUTHOR_SAVE_AS': os.path.join('author', '{slug}.html'),\n 'PAGINATION_PATTERNS': [\n (0, '{name}{number}{extension}', '{name}{number}{extension}'),\n ],\n 'YEAR_ARCHIVE_SAVE_AS': '',\n 'MONTH_ARCHIVE_SAVE_AS': '',\n 'DAY_ARCHIVE_SAVE_AS': '',\n 'RELATIVE_URLS': False,\n 'DEFAULT_LANG': 'en',\n 'TAG_CLOUD_STEPS': 4,\n 'TAG_CLOUD_MAX_ITEMS': 100,\n 'DIRECT_TEMPLATES': ('index', 'tags', 'categories', 'authors', 'archives'),\n 'EXTRA_TEMPLATES_PATHS': [],\n 'PAGINATED_DIRECT_TEMPLATES': ('index', ),\n 'PELICAN_CLASS': 'pelican.Pelican',\n 'DEFAULT_DATE_FORMAT': '%a %d %B %Y',\n 'DATE_FORMATS': {},\n 'MD_EXTENSIONS': ['codehilite(css_class=highlight)', 'extra'],\n 'JINJA_EXTENSIONS': [],\n 'JINJA_FILTERS': {},\n 'LOG_FILTER': [],\n 'LOCALE': [''], # defaults to user locale\n 'DEFAULT_PAGINATION': False,\n 
'DEFAULT_ORPHANS': 0,\n 'DEFAULT_METADATA': (),\n 'FILENAME_METADATA': '(?P<date>\\d{4}-\\d{2}-\\d{2}).*',\n 'PATH_METADATA': '',\n 'EXTRA_PATH_METADATA': {},\n 'DEFAULT_STATUS': 'published',\n 'ARTICLE_PERMALINK_STRUCTURE': '',\n 'TYPOGRIFY': False,\n 'SUMMARY_MAX_LENGTH': 50,\n 'PLUGIN_PATHS': [],\n 'PLUGINS': [],\n 'PYGMENTS_RST_OPTIONS': {},\n 'TEMPLATE_PAGES': {},\n 'IGNORE_FILES': ['.#*'],\n 'SLUG_SUBSTITUTIONS': (),\n 'INTRASITE_LINK_REGEX': '[{|](?P<what>.*?)[|}]',\n 'SLUGIFY_SOURCE': 'title',\n 'CACHE_CONTENT': True,\n 'CONTENT_CACHING_LAYER': 'reader',\n 'CACHE_PATH': 'cache',\n 'GZIP_CACHE': True,\n 'CHECK_MODIFIED_METHOD': 'mtime',\n 'LOAD_CONTENT_CACHE': True,\n 'AUTORELOAD_IGNORE_CACHE': False,\n 'WRITE_SELECTED': [],\n }\n\nPYGMENTS_RST_OPTIONS = None\n\n\ndef read_settings(path=None, override=None):\n if path:\n local_settings = get_settings_from_file(path)\n # Make the paths relative to the settings file\n for p in ['PATH', 'OUTPUT_PATH', 'THEME', 'CACHE_PATH']:\n if p in local_settings and local_settings[p] is not None \\\n and not isabs(local_settings[p]):\n absp = os.path.abspath(os.path.normpath(os.path.join(\n os.path.dirname(path), local_settings[p])))\n if p not in ('THEME') or os.path.exists(absp):\n local_settings[p] = absp\n\n if 'PLUGIN_PATH' in local_settings:\n logger.warning('PLUGIN_PATH setting has been replaced by '\n 'PLUGIN_PATHS, moving it to the new setting name.')\n local_settings['PLUGIN_PATHS'] = local_settings['PLUGIN_PATH']\n del local_settings['PLUGIN_PATH']\n if isinstance(local_settings['PLUGIN_PATHS'], six.string_types):\n logger.warning(\"Defining %s setting as string has been deprecated (should be a list)\" % 'PLUGIN_PATHS')\n local_settings['PLUGIN_PATHS'] = [local_settings['PLUGIN_PATHS']]\n elif local_settings['PLUGIN_PATHS'] is not None:\n local_settings['PLUGIN_PATHS'] = [os.path.abspath(os.path.normpath(os.path.join(os.path.dirname(path), pluginpath)))\n if not isabs(pluginpath) else pluginpath for pluginpath in local_settings['PLUGIN_PATHS']]\n else:\n local_settings = copy.deepcopy(DEFAULT_CONFIG)\n\n if override:\n local_settings.update(override)\n\n parsed_settings = configure_settings(local_settings)\n # This is because there doesn't seem to be a way to pass extra\n # parameters to docutils directive handlers, so we have to have a\n # variable here that we'll import from within Pygments.run (see\n # rstdirectives.py) to see what the user defaults were.\n global PYGMENTS_RST_OPTIONS\n PYGMENTS_RST_OPTIONS = parsed_settings.get('PYGMENTS_RST_OPTIONS', None)\n return parsed_settings\n\n\ndef get_settings_from_module(module=None, default_settings=DEFAULT_CONFIG):\n \"\"\"Loads settings from a module, returns a dictionary.\"\"\"\n\n context = copy.deepcopy(default_settings)\n if module is not None:\n context.update(\n (k, v) for k, v in inspect.getmembers(module) if k.isupper())\n return context\n\n\ndef get_settings_from_file(path, default_settings=DEFAULT_CONFIG):\n \"\"\"Loads settings from a file path, returning a dict.\"\"\"\n\n name, ext = os.path.splitext(os.path.basename(path))\n module = load_source(name, path)\n return get_settings_from_module(module, default_settings=default_settings)\n\n\ndef configure_settings(settings):\n \"\"\"Provide optimizations, error checking, and warnings for the given\n settings.\n Also, specify the log messages to be ignored.\n \"\"\"\n if not 'PATH' in settings or not os.path.isdir(settings['PATH']):\n raise Exception('You need to specify a path containing the content'\n ' (see pelican --help for 
more information)')\n\n # specify the log messages to be ignored\n LimitFilter.ignore.update(set(settings.get('LOG_FILTER',\n DEFAULT_CONFIG['LOG_FILTER'])))\n\n # lookup the theme in \"pelican/themes\" if the given one doesn't exist\n if not os.path.isdir(settings['THEME']):\n theme_path = os.path.join(\n os.path.dirname(os.path.abspath(__file__)),\n 'themes',\n settings['THEME'])\n if os.path.exists(theme_path):\n settings['THEME'] = theme_path\n else:\n raise Exception(\"Could not find the theme %s\"\n % settings['THEME'])\n\n # make paths selected for writing absolute if necessary\n settings['WRITE_SELECTED'] = [\n os.path.abspath(path) for path in\n settings.get('WRITE_SELECTED', DEFAULT_CONFIG['WRITE_SELECTED'])\n ]\n\n # standardize strings to lowercase strings\n for key in [\n 'DEFAULT_LANG',\n ]:\n if key in settings:\n settings[key] = settings[key].lower()\n\n # standardize strings to lists\n for key in [\n 'LOCALE',\n ]:\n if key in settings and isinstance(settings[key], six.string_types):\n settings[key] = [settings[key]]\n\n # check settings that must be a particular type\n for key, types in [\n ('OUTPUT_SOURCES_EXTENSION', six.string_types),\n ('FILENAME_METADATA', six.string_types),\n ]:\n if key in settings and not isinstance(settings[key], types):\n value = settings.pop(key)\n logger.warn(\n 'Detected misconfigured {} ({}), '\n 'falling back to the default ({})'.format(\n key, value, DEFAULT_CONFIG[key]))\n\n # try to set the different locales, fallback on the default.\n locales = settings.get('LOCALE', DEFAULT_CONFIG['LOCALE'])\n\n for locale_ in locales:\n try:\n locale.setlocale(locale.LC_ALL, str(locale_))\n break # break if it is successful\n except locale.Error:\n pass\n else:\n logger.warning(\"LOCALE option doesn't contain a correct value\")\n\n if ('SITEURL' in settings):\n # If SITEURL has a trailing slash, remove it and provide a warning\n siteurl = settings['SITEURL']\n if (siteurl.endswith('/')):\n settings['SITEURL'] = siteurl[:-1]\n logger.warning(\"Removed extraneous trailing slash from SITEURL.\")\n # If SITEURL is defined but FEED_DOMAIN isn't,\n # set FEED_DOMAIN to SITEURL\n if not 'FEED_DOMAIN' in settings:\n settings['FEED_DOMAIN'] = settings['SITEURL']\n\n # check content caching layer and warn of incompatibilities\n if (settings.get('CACHE_CONTENT', False) and\n settings.get('CONTENT_CACHING_LAYER', '') == 'generator' and\n settings.get('WITH_FUTURE_DATES', DEFAULT_CONFIG['WITH_FUTURE_DATES'])):\n logger.warning('WITH_FUTURE_DATES conflicts with '\n \"CONTENT_CACHING_LAYER set to 'generator', \"\n \"use 'reader' layer instead\")\n\n # Warn if feeds are generated with both SITEURL & FEED_DOMAIN undefined\n feed_keys = [\n 'FEED_ATOM', 'FEED_RSS',\n 'FEED_ALL_ATOM', 'FEED_ALL_RSS',\n 'CATEGORY_FEED_ATOM', 'CATEGORY_FEED_RSS',\n 'AUTHOR_FEED_ATOM', 'AUTHOR_FEED_RSS',\n 'TAG_FEED_ATOM', 'TAG_FEED_RSS',\n 'TRANSLATION_FEED_ATOM', 'TRANSLATION_FEED_RSS',\n ]\n\n if any(settings.get(k) for k in feed_keys):\n if not settings.get('SITEURL'):\n logger.warning('Feeds generated without SITEURL set properly may'\n ' not be valid')\n\n if not 'TIMEZONE' in settings:\n logger.warning(\n 'No timezone information specified in the settings. Assuming'\n ' your timezone is UTC for feed generation. 
Check '\n 'http://docs.getpelican.com/en/latest/settings.html#timezone '\n 'for more information')\n\n # fix up pagination rules\n from pelican.paginator import PaginationRule\n pagination_rules = [\n PaginationRule(*r) for r in settings.get(\n 'PAGINATION_PATTERNS',\n DEFAULT_CONFIG['PAGINATION_PATTERNS'],\n )\n ]\n settings['PAGINATION_PATTERNS'] = sorted(\n pagination_rules,\n key=lambda r: r[0],\n )\n\n # move {ARTICLE,PAGE}_DIR -> {ARTICLE,PAGE}_PATHS\n for key in ['ARTICLE', 'PAGE']:\n old_key = key + '_DIR'\n new_key = key + '_PATHS'\n if old_key in settings:\n logger.warning('Deprecated setting {}, moving it to {} list'.format(\n old_key, new_key))\n settings[new_key] = [settings[old_key]] # also make a list\n del settings[old_key]\n\n # Save people from accidentally setting a string rather than a list\n path_keys = (\n 'ARTICLE_EXCLUDES',\n 'DEFAULT_METADATA',\n 'DIRECT_TEMPLATES',\n 'EXTRA_TEMPLATES_PATHS',\n 'FILES_TO_COPY',\n 'IGNORE_FILES',\n 'JINJA_EXTENSIONS',\n 'PAGINATED_DIRECT_TEMPLATES',\n 'PLUGINS',\n 'STATIC_PATHS',\n 'THEME_STATIC_PATHS',\n 'ARTICLE_PATHS',\n 'PAGE_PATHS',\n )\n for PATH_KEY in filter(lambda k: k in settings, path_keys):\n if isinstance(settings[PATH_KEY], six.string_types):\n logger.warning(\"Detected misconfiguration with %s setting \"\n \"(must be a list), falling back to the default\"\n % PATH_KEY)\n settings[PATH_KEY] = DEFAULT_CONFIG[PATH_KEY]\n\n # Add {PAGE,ARTICLE}_PATHS to {ARTICLE,PAGE}_EXCLUDES\n mutually_exclusive = ('ARTICLE', 'PAGE')\n for type_1, type_2 in [mutually_exclusive, mutually_exclusive[::-1]]:\n try:\n includes = settings[type_1 + '_PATHS']\n excludes = settings[type_2 + '_EXCLUDES']\n for path in includes:\n if path not in excludes:\n excludes.append(path)\n except KeyError:\n continue # setting not specified, nothing to do\n\n for old, new, doc in [\n ('LESS_GENERATOR', 'the Webassets plugin', None),\n ('FILES_TO_COPY', 'STATIC_PATHS and EXTRA_PATH_METADATA',\n 'https://github.com/getpelican/pelican/blob/master/docs/settings.rst#path-metadata'),\n ]:\n if old in settings:\n message = 'The {} setting has been removed in favor of {}'.format(\n old, new)\n if doc:\n message += ', see {} for details'.format(doc)\n logger.warning(message)\n\n return settings\n", "path": "pelican/settings.py" } ]
[ { "content": "# -*- coding: utf-8 -*-\nfrom __future__ import unicode_literals, print_function\nimport six\n\nimport copy\nimport inspect\nimport os\nimport locale\nimport logging\n\ntry:\n # SourceFileLoader is the recommended way in 3.3+\n from importlib.machinery import SourceFileLoader\n load_source = lambda name, path: SourceFileLoader(name, path).load_module()\nexcept ImportError:\n # but it does not exist in 3.2-, so fall back to imp\n import imp\n load_source = imp.load_source\n\nfrom os.path import isabs\n\nfrom pelican.log import LimitFilter\n\n\nlogger = logging.getLogger(__name__)\n\n\nDEFAULT_THEME = os.path.join(os.path.dirname(os.path.abspath(__file__)),\n 'themes', 'notmyidea')\nDEFAULT_CONFIG = {\n 'PATH': os.curdir,\n 'ARTICLE_PATHS': [''],\n 'ARTICLE_EXCLUDES': [],\n 'PAGE_PATHS': ['pages'],\n 'PAGE_EXCLUDES': [],\n 'THEME': DEFAULT_THEME,\n 'OUTPUT_PATH': 'output',\n 'READERS': {},\n 'STATIC_PATHS': ['images', ],\n 'THEME_STATIC_DIR': 'theme',\n 'THEME_STATIC_PATHS': ['static', ],\n 'FEED_ALL_ATOM': os.path.join('feeds', 'all.atom.xml'),\n 'CATEGORY_FEED_ATOM': os.path.join('feeds', '%s.atom.xml'),\n 'AUTHOR_FEED_ATOM': os.path.join('feeds', '%s.atom.xml'),\n 'AUTHOR_FEED_RSS': os.path.join('feeds', '%s.rss.xml'),\n 'TRANSLATION_FEED_ATOM': os.path.join('feeds', 'all-%s.atom.xml'),\n 'FEED_MAX_ITEMS': '',\n 'SITEURL': '',\n 'SITENAME': 'A Pelican Blog',\n 'DISPLAY_PAGES_ON_MENU': True,\n 'DISPLAY_CATEGORIES_ON_MENU': True,\n 'DOCUTILS_SETTINGS': {},\n 'OUTPUT_SOURCES': False,\n 'OUTPUT_SOURCES_EXTENSION': '.text',\n 'USE_FOLDER_AS_CATEGORY': True,\n 'DEFAULT_CATEGORY': 'misc',\n 'WITH_FUTURE_DATES': True,\n 'CSS_FILE': 'main.css',\n 'NEWEST_FIRST_ARCHIVES': True,\n 'REVERSE_CATEGORY_ORDER': False,\n 'DELETE_OUTPUT_DIRECTORY': False,\n 'OUTPUT_RETENTION': (),\n 'ARTICLE_URL': '{slug}.html',\n 'ARTICLE_SAVE_AS': '{slug}.html',\n 'ARTICLE_LANG_URL': '{slug}-{lang}.html',\n 'ARTICLE_LANG_SAVE_AS': '{slug}-{lang}.html',\n 'DRAFT_URL': 'drafts/{slug}.html',\n 'DRAFT_SAVE_AS': os.path.join('drafts', '{slug}.html'),\n 'DRAFT_LANG_URL': 'drafts/{slug}-{lang}.html',\n 'DRAFT_LANG_SAVE_AS': os.path.join('drafts', '{slug}-{lang}.html'),\n 'PAGE_URL': 'pages/{slug}.html',\n 'PAGE_SAVE_AS': os.path.join('pages', '{slug}.html'),\n 'PAGE_LANG_URL': 'pages/{slug}-{lang}.html',\n 'PAGE_LANG_SAVE_AS': os.path.join('pages', '{slug}-{lang}.html'),\n 'STATIC_URL': '{path}',\n 'STATIC_SAVE_AS': '{path}',\n 'PDF_GENERATOR': False,\n 'PDF_STYLE_PATH': '',\n 'PDF_STYLE': 'twelvepoint',\n 'CATEGORY_URL': 'category/{slug}.html',\n 'CATEGORY_SAVE_AS': os.path.join('category', '{slug}.html'),\n 'TAG_URL': 'tag/{slug}.html',\n 'TAG_SAVE_AS': os.path.join('tag', '{slug}.html'),\n 'AUTHOR_URL': 'author/{slug}.html',\n 'AUTHOR_SAVE_AS': os.path.join('author', '{slug}.html'),\n 'PAGINATION_PATTERNS': [\n (0, '{name}{number}{extension}', '{name}{number}{extension}'),\n ],\n 'YEAR_ARCHIVE_SAVE_AS': '',\n 'MONTH_ARCHIVE_SAVE_AS': '',\n 'DAY_ARCHIVE_SAVE_AS': '',\n 'RELATIVE_URLS': False,\n 'DEFAULT_LANG': 'en',\n 'TAG_CLOUD_STEPS': 4,\n 'TAG_CLOUD_MAX_ITEMS': 100,\n 'DIRECT_TEMPLATES': ('index', 'tags', 'categories', 'authors', 'archives'),\n 'EXTRA_TEMPLATES_PATHS': [],\n 'PAGINATED_DIRECT_TEMPLATES': ('index', ),\n 'PELICAN_CLASS': 'pelican.Pelican',\n 'DEFAULT_DATE_FORMAT': '%a %d %B %Y',\n 'DATE_FORMATS': {},\n 'MD_EXTENSIONS': ['codehilite(css_class=highlight)', 'extra'],\n 'JINJA_EXTENSIONS': [],\n 'JINJA_FILTERS': {},\n 'LOG_FILTER': [],\n 'LOCALE': [''], # defaults to user locale\n 
'DEFAULT_PAGINATION': False,\n 'DEFAULT_ORPHANS': 0,\n 'DEFAULT_METADATA': (),\n 'FILENAME_METADATA': '(?P<date>\\d{4}-\\d{2}-\\d{2}).*',\n 'PATH_METADATA': '',\n 'EXTRA_PATH_METADATA': {},\n 'DEFAULT_STATUS': 'published',\n 'ARTICLE_PERMALINK_STRUCTURE': '',\n 'TYPOGRIFY': False,\n 'SUMMARY_MAX_LENGTH': 50,\n 'PLUGIN_PATHS': [],\n 'PLUGINS': [],\n 'PYGMENTS_RST_OPTIONS': {},\n 'TEMPLATE_PAGES': {},\n 'IGNORE_FILES': ['.#*'],\n 'SLUG_SUBSTITUTIONS': (),\n 'INTRASITE_LINK_REGEX': '[{|](?P<what>.*?)[|}]',\n 'SLUGIFY_SOURCE': 'title',\n 'CACHE_CONTENT': True,\n 'CONTENT_CACHING_LAYER': 'reader',\n 'CACHE_PATH': 'cache',\n 'GZIP_CACHE': True,\n 'CHECK_MODIFIED_METHOD': 'mtime',\n 'LOAD_CONTENT_CACHE': True,\n 'AUTORELOAD_IGNORE_CACHE': False,\n 'WRITE_SELECTED': [],\n }\n\nPYGMENTS_RST_OPTIONS = None\n\n\ndef read_settings(path=None, override=None):\n if path:\n local_settings = get_settings_from_file(path)\n # Make the paths relative to the settings file\n for p in ['PATH', 'OUTPUT_PATH', 'THEME', 'CACHE_PATH']:\n if p in local_settings and local_settings[p] is not None \\\n and not isabs(local_settings[p]):\n absp = os.path.abspath(os.path.normpath(os.path.join(\n os.path.dirname(path), local_settings[p])))\n if p not in ('THEME') or os.path.exists(absp):\n local_settings[p] = absp\n\n if 'PLUGIN_PATH' in local_settings:\n logger.warning('PLUGIN_PATH setting has been replaced by '\n 'PLUGIN_PATHS, moving it to the new setting name.')\n local_settings['PLUGIN_PATHS'] = local_settings['PLUGIN_PATH']\n del local_settings['PLUGIN_PATH']\n if isinstance(local_settings['PLUGIN_PATHS'], six.string_types):\n logger.warning(\"Defining %s setting as string has been deprecated (should be a list)\" % 'PLUGIN_PATHS')\n local_settings['PLUGIN_PATHS'] = [local_settings['PLUGIN_PATHS']]\n elif local_settings['PLUGIN_PATHS'] is not None:\n local_settings['PLUGIN_PATHS'] = [os.path.abspath(os.path.normpath(os.path.join(os.path.dirname(path), pluginpath)))\n if not isabs(pluginpath) else pluginpath for pluginpath in local_settings['PLUGIN_PATHS']]\n else:\n local_settings = copy.deepcopy(DEFAULT_CONFIG)\n\n if override:\n local_settings.update(override)\n\n parsed_settings = configure_settings(local_settings)\n # This is because there doesn't seem to be a way to pass extra\n # parameters to docutils directive handlers, so we have to have a\n # variable here that we'll import from within Pygments.run (see\n # rstdirectives.py) to see what the user defaults were.\n global PYGMENTS_RST_OPTIONS\n PYGMENTS_RST_OPTIONS = parsed_settings.get('PYGMENTS_RST_OPTIONS', None)\n return parsed_settings\n\n\ndef get_settings_from_module(module=None, default_settings=DEFAULT_CONFIG):\n \"\"\"Loads settings from a module, returns a dictionary.\"\"\"\n\n context = copy.deepcopy(default_settings)\n if module is not None:\n context.update(\n (k, v) for k, v in inspect.getmembers(module) if k.isupper())\n return context\n\n\ndef get_settings_from_file(path, default_settings=DEFAULT_CONFIG):\n \"\"\"Loads settings from a file path, returning a dict.\"\"\"\n\n name, ext = os.path.splitext(os.path.basename(path))\n module = load_source(name, path)\n return get_settings_from_module(module, default_settings=default_settings)\n\n\ndef configure_settings(settings):\n \"\"\"Provide optimizations, error checking, and warnings for the given\n settings.\n Also, specify the log messages to be ignored.\n \"\"\"\n if not 'PATH' in settings or not os.path.isdir(settings['PATH']):\n raise Exception('You need to specify a path containing the 
content'\n ' (see pelican --help for more information)')\n\n # specify the log messages to be ignored\n LimitFilter.ignore.update(set(settings.get('LOG_FILTER',\n DEFAULT_CONFIG['LOG_FILTER'])))\n\n # lookup the theme in \"pelican/themes\" if the given one doesn't exist\n if not os.path.isdir(settings['THEME']):\n theme_path = os.path.join(\n os.path.dirname(os.path.abspath(__file__)),\n 'themes',\n settings['THEME'])\n if os.path.exists(theme_path):\n settings['THEME'] = theme_path\n else:\n raise Exception(\"Could not find the theme %s\"\n % settings['THEME'])\n\n # make paths selected for writing absolute if necessary\n settings['WRITE_SELECTED'] = [\n os.path.abspath(path) for path in\n settings.get('WRITE_SELECTED', DEFAULT_CONFIG['WRITE_SELECTED'])\n ]\n\n # standardize strings to lowercase strings\n for key in [\n 'DEFAULT_LANG',\n ]:\n if key in settings:\n settings[key] = settings[key].lower()\n\n # standardize strings to lists\n for key in [\n 'LOCALE',\n ]:\n if key in settings and isinstance(settings[key], six.string_types):\n settings[key] = [settings[key]]\n\n # check settings that must be a particular type\n for key, types in [\n ('OUTPUT_SOURCES_EXTENSION', six.string_types),\n ('FILENAME_METADATA', six.string_types),\n ]:\n if key in settings and not isinstance(settings[key], types):\n value = settings.pop(key)\n logger.warn(\n 'Detected misconfigured {} ({}), '\n 'falling back to the default ({})'.format(\n key, value, DEFAULT_CONFIG[key]))\n\n # try to set the different locales, fallback on the default.\n locales = settings.get('LOCALE', DEFAULT_CONFIG['LOCALE'])\n\n for locale_ in locales:\n try:\n locale.setlocale(locale.LC_ALL, str(locale_))\n break # break if it is successful\n except locale.Error:\n pass\n else:\n logger.warning(\"LOCALE option doesn't contain a correct value\")\n\n if ('SITEURL' in settings):\n # If SITEURL has a trailing slash, remove it and provide a warning\n siteurl = settings['SITEURL']\n if (siteurl.endswith('/')):\n settings['SITEURL'] = siteurl[:-1]\n logger.warning(\"Removed extraneous trailing slash from SITEURL.\")\n # If SITEURL is defined but FEED_DOMAIN isn't,\n # set FEED_DOMAIN to SITEURL\n if not 'FEED_DOMAIN' in settings:\n settings['FEED_DOMAIN'] = settings['SITEURL']\n\n # check content caching layer and warn of incompatibilities\n if (settings.get('CACHE_CONTENT', False) and\n settings.get('CONTENT_CACHING_LAYER', '') == 'generator' and\n settings.get('WITH_FUTURE_DATES', DEFAULT_CONFIG['WITH_FUTURE_DATES'])):\n logger.warning('WITH_FUTURE_DATES conflicts with '\n \"CONTENT_CACHING_LAYER set to 'generator', \"\n \"use 'reader' layer instead\")\n\n # Warn if feeds are generated with both SITEURL & FEED_DOMAIN undefined\n feed_keys = [\n 'FEED_ATOM', 'FEED_RSS',\n 'FEED_ALL_ATOM', 'FEED_ALL_RSS',\n 'CATEGORY_FEED_ATOM', 'CATEGORY_FEED_RSS',\n 'AUTHOR_FEED_ATOM', 'AUTHOR_FEED_RSS',\n 'TAG_FEED_ATOM', 'TAG_FEED_RSS',\n 'TRANSLATION_FEED_ATOM', 'TRANSLATION_FEED_RSS',\n ]\n\n if any(settings.get(k) for k in feed_keys):\n if not settings.get('SITEURL'):\n logger.warning('Feeds generated without SITEURL set properly may'\n ' not be valid')\n\n if not 'TIMEZONE' in settings:\n logger.warning(\n 'No timezone information specified in the settings. Assuming'\n ' your timezone is UTC for feed generation. 
Check '\n 'http://docs.getpelican.com/en/latest/settings.html#timezone '\n 'for more information')\n\n # fix up pagination rules\n from pelican.paginator import PaginationRule\n pagination_rules = [\n PaginationRule(*r) for r in settings.get(\n 'PAGINATION_PATTERNS',\n DEFAULT_CONFIG['PAGINATION_PATTERNS'],\n )\n ]\n settings['PAGINATION_PATTERNS'] = sorted(\n pagination_rules,\n key=lambda r: r[0],\n )\n\n # move {ARTICLE,PAGE}_DIR -> {ARTICLE,PAGE}_PATHS\n for key in ['ARTICLE', 'PAGE']:\n old_key = key + '_DIR'\n new_key = key + '_PATHS'\n if old_key in settings:\n logger.warning('Deprecated setting {}, moving it to {} list'.format(\n old_key, new_key))\n settings[new_key] = [settings[old_key]] # also make a list\n del settings[old_key]\n\n # Save people from accidentally setting a string rather than a list\n path_keys = (\n 'ARTICLE_EXCLUDES',\n 'DEFAULT_METADATA',\n 'DIRECT_TEMPLATES',\n 'EXTRA_TEMPLATES_PATHS',\n 'FILES_TO_COPY',\n 'IGNORE_FILES',\n 'JINJA_EXTENSIONS',\n 'PAGINATED_DIRECT_TEMPLATES',\n 'PLUGINS',\n 'STATIC_PATHS',\n 'THEME_STATIC_PATHS',\n 'ARTICLE_PATHS',\n 'PAGE_PATHS',\n )\n for PATH_KEY in filter(lambda k: k in settings, path_keys):\n if isinstance(settings[PATH_KEY], six.string_types):\n logger.warning(\"Detected misconfiguration with %s setting \"\n \"(must be a list), falling back to the default\"\n % PATH_KEY)\n settings[PATH_KEY] = DEFAULT_CONFIG[PATH_KEY]\n\n # Add {PAGE,ARTICLE}_PATHS to {ARTICLE,PAGE}_EXCLUDES\n mutually_exclusive = ('ARTICLE', 'PAGE')\n for type_1, type_2 in [mutually_exclusive, mutually_exclusive[::-1]]:\n try:\n includes = settings[type_1 + '_PATHS']\n excludes = settings[type_2 + '_EXCLUDES']\n for path in includes:\n if path not in excludes:\n excludes.append(path)\n except KeyError:\n continue # setting not specified, nothing to do\n\n for old, new, doc in [\n ('LESS_GENERATOR', 'the Webassets plugin', None),\n ('FILES_TO_COPY', 'STATIC_PATHS and EXTRA_PATH_METADATA',\n 'https://github.com/getpelican/pelican/blob/master/docs/settings.rst#path-metadata'),\n ]:\n if old in settings:\n message = 'The {} setting has been removed in favor of {}'.format(\n old, new)\n if doc:\n message += ', see {} for details'.format(doc)\n logger.warning(message)\n\n return settings\n", "path": "pelican/settings.py" } ]
diff --git a/docs/settings.rst b/docs/settings.rst index df2fa722a..3f4f21471 100644 --- a/docs/settings.rst +++ b/docs/settings.rst @@ -58,6 +58,10 @@ Setting name (followed by default value, if any) ``datetime.datetime`` constructor. ``DEFAULT_METADATA = ()`` The default metadata you want to use for all articles and pages. +``DOCUTILS_SETTINGS = {}`` Extra configuration settings for the docutils publisher + (applicable only to reStructuredText). See `Docutils + Configuration`_ settings for more details. + ``FILENAME_METADATA =`` ``'(?P<date>\d{4}-\d{2}-\d{2}).*'`` The regexp that will be used to extract any metadata from the filename. All named groups that are matched will be set in the metadata object. @@ -819,3 +823,4 @@ Example settings .. _Jinja custom filters documentation: http://jinja.pocoo.org/docs/api/#custom-filters +.. _Docutils Configuration: http://docutils.sourceforge.net/docs/user/config.html diff --git a/pelican/settings.py b/pelican/settings.py index c04cc5d04..a283b2bed 100644 --- a/pelican/settings.py +++ b/pelican/settings.py @@ -49,6 +49,7 @@ 'SITENAME': 'A Pelican Blog', 'DISPLAY_PAGES_ON_MENU': True, 'DISPLAY_CATEGORIES_ON_MENU': True, + 'DOCUTILS_SETTINGS': {}, 'OUTPUT_SOURCES': False, 'OUTPUT_SOURCES_EXTENSION': '.text', 'USE_FOLDER_AS_CATEGORY': True,
ethereum__web3.py-475
web3.auto raises unclear exception if no client is live * Version: 4.0.0-beta.1 * OS: linux ### What was wrong? If no client is live, I expect w3 to return as `None` in this case, but instead I get an exception. ``` from web3.auto import w3 ``` cc @Sebohe > ~/code/ethtoken/ethtoken/main.py in eip20_token(address, w3, **kwargs) > 23 ''' > 24 if w3 is None: > ---> 25 from web3.auto import w3 > 26 if w3 is None: > 27 raise RuntimeError("Could not auto-detect web3 connection, please supply it as arg w3") > > ~/code/ethtoken/venv/lib/python3.5/site-packages/web3/auto/__init__.py in <module>() > 2 > 3 for connector in ('ipc', 'http'): > ----> 4 connection = importlib.import_module('web3.auto.' + connector) > 5 if connection.w3: > 6 w3 = connection.w3 > > /usr/lib/python3.5/importlib/__init__.py in import_module(name, package) > 124 break > 125 level += 1 > --> 126 return _bootstrap._gcd_import(name[level:], package, level) > 127 > 128 > > ~/code/ethtoken/venv/lib/python3.5/site-packages/web3/auto/ipc.py in <module>() > 14 > 15 > ---> 16 w3 = connect() > > ~/code/ethtoken/venv/lib/python3.5/site-packages/web3/auto/ipc.py in connect() > 8 def connect(): > 9 w3 = Web3(IPCProvider(get_default_ipc_path())) > ---> 10 if w3.isConnected(): > 11 return w3 > 12 > > ~/code/ethtoken/venv/lib/python3.5/site-packages/web3/main.py in isConnected(self) > 155 def isConnected(self): > 156 for provider in self.providers: > --> 157 if provider.isConnected(): > 158 return True > 159 else: > > ~/code/ethtoken/venv/lib/python3.5/site-packages/web3/providers/base.py in isConnected(self) > 73 def isConnected(self): > 74 try: > ---> 75 response = self.make_request('web3_clientVersion', []) > 76 except IOError: > 77 return False > > ~/code/ethtoken/venv/lib/python3.5/site-packages/web3/providers/ipc.py in make_request(self, method, params) > 139 request = self.encode_rpc_request(method, params) > 140 > --> 141 with self._lock, self._socket as sock: > 142 sock.sendall(request) > 143 raw_response = b"" > > ~/code/ethtoken/venv/lib/python3.5/site-packages/web3/providers/ipc.py in __enter__(self) > 37 def __enter__(self): > 38 if not self.sock: > ---> 39 self.sock = get_ipc_socket(self.ipc_path) > 40 return self.sock > 41 > > ~/code/ethtoken/venv/lib/python3.5/site-packages/web3/providers/ipc.py in get_ipc_socket(ipc_path, timeout) > 24 else: > 25 sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) > ---> 26 sock.connect(ipc_path) > 27 sock.settimeout(timeout) > 28 return sock > > TypeError: a bytes-like object is required, not 'NoneType' ### How can it be fixed? * Add a new test to verify the situation, and prevent regressions * `isConnected` should short-circuit with something like: `if self.ipc_path is None: return False`
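A minimal sketch of the short-circuit suggested above, written as a hypothetical subclass so it stays self-contained (the class name is made up for illustration). The patch shown later in this entry takes a slightly different route: it raises `FileNotFoundError` from the socket context manager, which the base provider's `except IOError` handler in `isConnected()` then turns into `False`.

```python
from web3.providers.ipc import IPCProvider


class SafeIPCProvider(IPCProvider):
    """Sketch only: report 'not connected' when no IPC path was found."""

    def isConnected(self):
        # With no IPC path there is nothing to connect to, so bail out here
        # instead of letting socket.connect() fail on a None path deep
        # inside make_request().
        if self.ipc_path is None:
            return False
        return super(SafeIPCProvider, self).isConnected()
```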
[ { "content": "import os\nimport socket\nimport sys\nimport threading\n\ntry:\n from json import JSONDecodeError\nexcept ImportError:\n JSONDecodeError = ValueError\n\nfrom web3.utils.threads import (\n Timeout,\n)\n\nfrom .base import JSONBaseProvider\n\n\ndef get_ipc_socket(ipc_path, timeout=0.1):\n if sys.platform == 'win32':\n # On Windows named pipe is used. Simulate socket with it.\n from web3.utils.windows import NamedPipe\n\n return NamedPipe(ipc_path)\n else:\n sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)\n sock.connect(ipc_path)\n sock.settimeout(timeout)\n return sock\n\n\nclass PersistantSocket(object):\n sock = None\n\n def __init__(self, ipc_path):\n self.ipc_path = ipc_path\n\n def __enter__(self):\n if not self.sock:\n self.sock = get_ipc_socket(self.ipc_path)\n return self.sock\n\n def __exit__(self, exc_type, exc_value, traceback):\n # only close the socket if there was an error\n if exc_value is not None:\n try:\n self.sock.close()\n except Exception:\n pass\n self.sock = None\n\n\ndef get_default_ipc_path(testnet=False):\n if testnet:\n testnet = \"testnet\"\n else:\n testnet = \"\"\n\n if sys.platform == 'darwin':\n ipc_path = os.path.expanduser(os.path.join(\n \"~\",\n \"Library\",\n \"Ethereum\",\n testnet,\n \"geth.ipc\"\n ))\n if os.path.exists(ipc_path):\n return ipc_path\n\n ipc_path = os.path.expanduser(os.path.join(\n \"~\",\n \"Library\",\n \"Application Support\",\n \"io.parity.ethereum\",\n \"jsonrpc.ipc\"\n ))\n if os.path.exists(ipc_path):\n return ipc_path\n\n elif sys.platform.startswith('linux'):\n ipc_path = os.path.expanduser(os.path.join(\n \"~\",\n \".ethereum\",\n testnet,\n \"geth.ipc\"\n ))\n if os.path.exists(ipc_path):\n return ipc_path\n\n ipc_path = os.path.expanduser(os.path.join(\n \"~\",\n \".local\",\n \"share\",\n \"io.parity.ethereum\",\n \"jsonrpc.ipc\"\n ))\n if os.path.exists(ipc_path):\n return ipc_path\n\n elif sys.platform == 'win32':\n ipc_path = os.path.join(\n \"\\\\\\\\\",\n \".\",\n \"pipe\",\n \"geth.ipc\"\n )\n if os.path.exists(ipc_path):\n return ipc_path\n\n ipc_path = os.path.join(\n \"\\\\\\\\\",\n \".\",\n \"pipe\",\n \"jsonrpc.ipc\"\n )\n if os.path.exists(ipc_path):\n return ipc_path\n\n else:\n raise ValueError(\n \"Unsupported platform '{0}'. Only darwin/linux2/win32 are \"\n \"supported. You must specify the ipc_path\".format(sys.platform)\n )\n\n\nclass IPCProvider(JSONBaseProvider):\n _socket = None\n\n def __init__(self, ipc_path=None, testnet=False, *args, **kwargs):\n if ipc_path is None:\n self.ipc_path = get_default_ipc_path(testnet)\n else:\n self.ipc_path = ipc_path\n\n self._lock = threading.Lock()\n self._socket = PersistantSocket(self.ipc_path)\n super(IPCProvider, self).__init__(*args, **kwargs)\n\n def make_request(self, method, params):\n request = self.encode_rpc_request(method, params)\n\n with self._lock, self._socket as sock:\n sock.sendall(request)\n raw_response = b\"\"\n with Timeout(10) as timeout:\n while True:\n try:\n raw_response += sock.recv(4096)\n except socket.timeout:\n timeout.sleep(0)\n continue\n if raw_response == b\"\":\n timeout.sleep(0)\n else:\n try:\n response = self.decode_rpc_response(raw_response)\n except JSONDecodeError:\n timeout.sleep(0)\n continue\n else:\n return response\n", "path": "web3/providers/ipc.py" } ]
[ { "content": "import os\nimport socket\nimport sys\nimport threading\n\ntry:\n from json import JSONDecodeError\nexcept ImportError:\n JSONDecodeError = ValueError\n\nfrom web3.utils.threads import (\n Timeout,\n)\n\nfrom .base import JSONBaseProvider\n\n\ndef get_ipc_socket(ipc_path, timeout=0.1):\n if sys.platform == 'win32':\n # On Windows named pipe is used. Simulate socket with it.\n from web3.utils.windows import NamedPipe\n\n return NamedPipe(ipc_path)\n else:\n sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)\n sock.connect(ipc_path)\n sock.settimeout(timeout)\n return sock\n\n\nclass PersistantSocket(object):\n sock = None\n\n def __init__(self, ipc_path):\n self.ipc_path = ipc_path\n\n def __enter__(self):\n if not self.ipc_path:\n raise FileNotFoundError(\"cannot connect to IPC socket at path: %r\" % self.ipc_path)\n\n if not self.sock:\n self.sock = get_ipc_socket(self.ipc_path)\n return self.sock\n\n def __exit__(self, exc_type, exc_value, traceback):\n # only close the socket if there was an error\n if exc_value is not None:\n try:\n self.sock.close()\n except Exception:\n pass\n self.sock = None\n\n\ndef get_default_ipc_path(testnet=False):\n if testnet:\n testnet = \"testnet\"\n else:\n testnet = \"\"\n\n if sys.platform == 'darwin':\n ipc_path = os.path.expanduser(os.path.join(\n \"~\",\n \"Library\",\n \"Ethereum\",\n testnet,\n \"geth.ipc\"\n ))\n if os.path.exists(ipc_path):\n return ipc_path\n\n ipc_path = os.path.expanduser(os.path.join(\n \"~\",\n \"Library\",\n \"Application Support\",\n \"io.parity.ethereum\",\n \"jsonrpc.ipc\"\n ))\n if os.path.exists(ipc_path):\n return ipc_path\n\n elif sys.platform.startswith('linux'):\n ipc_path = os.path.expanduser(os.path.join(\n \"~\",\n \".ethereum\",\n testnet,\n \"geth.ipc\"\n ))\n if os.path.exists(ipc_path):\n return ipc_path\n\n ipc_path = os.path.expanduser(os.path.join(\n \"~\",\n \".local\",\n \"share\",\n \"io.parity.ethereum\",\n \"jsonrpc.ipc\"\n ))\n if os.path.exists(ipc_path):\n return ipc_path\n\n elif sys.platform == 'win32':\n ipc_path = os.path.join(\n \"\\\\\\\\\",\n \".\",\n \"pipe\",\n \"geth.ipc\"\n )\n if os.path.exists(ipc_path):\n return ipc_path\n\n ipc_path = os.path.join(\n \"\\\\\\\\\",\n \".\",\n \"pipe\",\n \"jsonrpc.ipc\"\n )\n if os.path.exists(ipc_path):\n return ipc_path\n\n else:\n raise ValueError(\n \"Unsupported platform '{0}'. Only darwin/linux2/win32 are \"\n \"supported. You must specify the ipc_path\".format(sys.platform)\n )\n\n\nclass IPCProvider(JSONBaseProvider):\n _socket = None\n\n def __init__(self, ipc_path=None, testnet=False, *args, **kwargs):\n if ipc_path is None:\n self.ipc_path = get_default_ipc_path(testnet)\n else:\n self.ipc_path = ipc_path\n\n self._lock = threading.Lock()\n self._socket = PersistantSocket(self.ipc_path)\n super(IPCProvider, self).__init__(*args, **kwargs)\n\n def make_request(self, method, params):\n request = self.encode_rpc_request(method, params)\n\n with self._lock, self._socket as sock:\n sock.sendall(request)\n raw_response = b\"\"\n with Timeout(10) as timeout:\n while True:\n try:\n raw_response += sock.recv(4096)\n except socket.timeout:\n timeout.sleep(0)\n continue\n if raw_response == b\"\":\n timeout.sleep(0)\n else:\n try:\n response = self.decode_rpc_response(raw_response)\n except JSONDecodeError:\n timeout.sleep(0)\n continue\n else:\n return response\n", "path": "web3/providers/ipc.py" } ]
diff --git a/tests/core/providers/test_ipc_provider.py b/tests/core/providers/test_ipc_provider.py new file mode 100644 index 0000000000..9a445a1031 --- /dev/null +++ b/tests/core/providers/test_ipc_provider.py @@ -0,0 +1,11 @@ +from web3.providers.ipc import ( + IPCProvider, +) + + +def test_ipc_no_path(): + """ + IPCProvider.isConnected() returns False when no path is supplied + """ + ipc = IPCProvider(None) + assert ipc.isConnected() is False diff --git a/web3/providers/ipc.py b/web3/providers/ipc.py index 60173b7c5f..5dcbb1406d 100644 --- a/web3/providers/ipc.py +++ b/web3/providers/ipc.py @@ -35,6 +35,9 @@ def __init__(self, ipc_path): self.ipc_path = ipc_path def __enter__(self): + if not self.ipc_path: + raise FileNotFoundError("cannot connect to IPC socket at path: %r" % self.ipc_path) + if not self.sock: self.sock = get_ipc_socket(self.ipc_path) return self.sock
cocotb__cocotb-208
Red Hat 6.5 can no longer raise a TestError
Regressions report a pass, but the number of tests has gone down on some simulators. Icarus, for instance, shows this.
```
0.00ns INFO cocotb.gpi gpi_embed.c:213 in embed_sim_init Running on Icarus Verilog version 0.10.0 (devel)
0.00ns INFO cocotb.gpi gpi_embed.c:214 in embed_sim_init Python interpreter initialised and cocotb loaded!
0.00ns INFO cocotb.gpi __init__.py:96 in _initialise_testbench Seeding Python random module with 1421853826
0.00ns INFO cocotb.gpi __init__.py:110 in _initialise_testbench Running tests with Cocotb v0.5a from /var/lib/jenkins/workspace/cocotb_icarus_x86_64
0.00ns ERROR cocotb.coroutine.fail decorators.py:99 in __init__ test_duplicate_yield isn't a value coroutine! Did you use the yield keyword?
Traceback (most recent call last):
  File "/var/lib/jenkins/workspace/cocotb_icarus_x86_64/cocotb/__init__.py", line 128, in _initialise_testbench
    regression.initialise()
  File "/var/lib/jenkins/workspace/cocotb_icarus_x86_64/cocotb/regression.py", line 123, in initialise
    test = thing(self._dut)
  File "/var/lib/jenkins/workspace/cocotb_icarus_x86_64/cocotb/decorators.py", line 356, in _wrapped_test
    raise_error(self, str(e))
  File "/var/lib/jenkins/workspace/cocotb_icarus_x86_64/cocotb/result.py", line 42, in raise_error
    if sys.version_info.major >= 3:
AttributeError: 'tuple' object has no attribute 'major'
```
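The root cause is the version check itself: on Python 2.6 (the interpreter shipped with RHEL 6.x) `sys.version_info` is a plain tuple without named fields, so the `.major` attribute access raises the `AttributeError` seen above. A small self-contained sketch of the portable check follows; the helper name is made up for illustration, while the real change lands in `raise_error` in `cocotb/result.py` (see the diff below).

```python
import sys
import traceback
from io import BytesIO, StringIO


def format_current_traceback():
    """Render the active exception's traceback portably (sketch)."""
    _, _, exc_traceback = sys.exc_info()
    # Indexed access works on every interpreter; the named .major field
    # only exists from Python 2.7 onwards.
    if sys.version_info[0] >= 3:
        buff = StringIO()
        traceback.print_tb(exc_traceback, file=buff)
    else:
        # On Python 2, print_tb writes byte strings, so collect them in a
        # BytesIO and decode afterwards.
        buff_bytes = BytesIO()
        traceback.print_tb(exc_traceback, file=buff_bytes)
        buff = StringIO(buff_bytes.getvalue().decode("UTF-8"))
    return buff.getvalue()
```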
[ { "content": "''' Copyright (c) 2013 Potential Ventures Ltd\nCopyright (c) 2013 SolarFlare Communications Inc\nAll rights reserved.\n\nRedistribution and use in source and binary forms, with or without\nmodification, are permitted provided that the following conditions are met:\n * Redistributions of source code must retain the above copyright\n notice, this list of conditions and the following disclaimer.\n * Redistributions in binary form must reproduce the above copyright\n notice, this list of conditions and the following disclaimer in the\n documentation and/or other materials provided with the distribution.\n * Neither the name of Potential Ventures Ltd,\n SolarFlare Communications Inc nor the\n names of its contributors may be used to endorse or promote products\n derived from this software without specific prior written permission.\n\nTHIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\" AND\nANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED\nWARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE\nDISCLAIMED. IN NO EVENT SHALL POTENTIAL VENTURES LTD BE LIABLE FOR ANY\nDIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES\n(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;\nLOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND\nON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT\n(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS\nSOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. '''\n\n# TODO: Coule use cStringIO?\nimport traceback\nimport sys\n#from StringIO import StringIO\nfrom io import StringIO, BytesIO\n\ndef raise_error(obj, msg):\n \"\"\"\n Creates a TestError exception and raises it after printing a traceback\n\n obj has a log method\n msg is a string\n \"\"\"\n exc_type, exc_value, exc_traceback = sys.exc_info()\n if sys.version_info.major >= 3:\n buff = StringIO()\n traceback.print_tb(exc_traceback, file=buff)\n else:\n buff_bytes = BytesIO()\n traceback.print_tb(exc_traceback, file=buff_bytes)\n buff = StringIO(buff_bytes.getvalue().decode(\"UTF-8\"))\n obj.log.error(\"%s\\n%s\" % (msg, buff.getvalue()))\n exception = TestError(msg)\n exception.stderr.write(buff.getvalue())\n raise exception\n\ndef create_error(obj, msg):\n \"\"\"\n As above, but return the exception rather than raise it, simply to avoid\n too many levels of nested try/except blocks\n \"\"\"\n try:\n raise_error(obj, msg)\n except TestError as error:\n return error\n return TestError(\"Creating error traceback failed\")\n\n\nclass ReturnValue(StopIteration):\n def __init__(self, retval):\n self.retval = retval\n\nclass TestComplete(StopIteration):\n \"\"\"\n Exceptions are used to pass test results around.\n \"\"\"\n def __init__(self, *args, **kwargs):\n super(TestComplete, self).__init__(*args, **kwargs)\n self.stdout = StringIO()\n self.stderr = StringIO()\n\nclass TestError(TestComplete): pass\n\nclass TestFailure(TestComplete): pass\n\nclass TestSuccess(TestComplete): pass\n\nclass SimFailure(TestComplete): pass\n", "path": "cocotb/result.py" } ]
[ { "content": "''' Copyright (c) 2013 Potential Ventures Ltd\nCopyright (c) 2013 SolarFlare Communications Inc\nAll rights reserved.\n\nRedistribution and use in source and binary forms, with or without\nmodification, are permitted provided that the following conditions are met:\n * Redistributions of source code must retain the above copyright\n notice, this list of conditions and the following disclaimer.\n * Redistributions in binary form must reproduce the above copyright\n notice, this list of conditions and the following disclaimer in the\n documentation and/or other materials provided with the distribution.\n * Neither the name of Potential Ventures Ltd,\n SolarFlare Communications Inc nor the\n names of its contributors may be used to endorse or promote products\n derived from this software without specific prior written permission.\n\nTHIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\" AND\nANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED\nWARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE\nDISCLAIMED. IN NO EVENT SHALL POTENTIAL VENTURES LTD BE LIABLE FOR ANY\nDIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES\n(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;\nLOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND\nON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT\n(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS\nSOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. '''\n\n# TODO: Coule use cStringIO?\nimport traceback\nimport sys\n#from StringIO import StringIO\nfrom io import StringIO, BytesIO\n\ndef raise_error(obj, msg):\n \"\"\"\n Creates a TestError exception and raises it after printing a traceback\n\n obj has a log method\n msg is a string\n \"\"\"\n exc_type, exc_value, exc_traceback = sys.exc_info()\n # 2.6 cannot use named access\n if sys.version_info[0] >= 3:\n buff = StringIO()\n traceback.print_tb(exc_traceback, file=buff)\n else:\n buff_bytes = BytesIO()\n traceback.print_tb(exc_traceback, file=buff_bytes)\n buff = StringIO(buff_bytes.getvalue().decode(\"UTF-8\"))\n obj.log.error(\"%s\\n%s\" % (msg, buff.getvalue()))\n exception = TestError(msg)\n exception.stderr.write(buff.getvalue())\n raise exception\n\ndef create_error(obj, msg):\n \"\"\"\n As above, but return the exception rather than raise it, simply to avoid\n too many levels of nested try/except blocks\n \"\"\"\n try:\n raise_error(obj, msg)\n except TestError as error:\n return error\n return TestError(\"Creating error traceback failed\")\n\n\nclass ReturnValue(StopIteration):\n def __init__(self, retval):\n self.retval = retval\n\nclass TestComplete(StopIteration):\n \"\"\"\n Exceptions are used to pass test results around.\n \"\"\"\n def __init__(self, *args, **kwargs):\n super(TestComplete, self).__init__(*args, **kwargs)\n self.stdout = StringIO()\n self.stderr = StringIO()\n\nclass TestError(TestComplete): pass\n\nclass TestFailure(TestComplete): pass\n\nclass TestSuccess(TestComplete): pass\n\nclass SimFailure(TestComplete): pass\n", "path": "cocotb/result.py" } ]
diff --git a/cocotb/result.py b/cocotb/result.py index 8ff8d5f6b0..fe5b935e36 100644 --- a/cocotb/result.py +++ b/cocotb/result.py @@ -39,7 +39,8 @@ def raise_error(obj, msg): msg is a string """ exc_type, exc_value, exc_traceback = sys.exc_info() - if sys.version_info.major >= 3: + # 2.6 cannot use named access + if sys.version_info[0] >= 3: buff = StringIO() traceback.print_tb(exc_traceback, file=buff) else:
readthedocs__readthedocs.org-10572

Most recent available `mambaforge=4.10` is simply too old

Hello guys, just wanted to ask if it's possible to have a more modern version of `mambaforge` available. The best and latest version that can be sourced on RTD via the configuration file is 4.10, which is simply too old (at most conda 4.10 and mamba 0.19). Updating to a modern mamba doesn't work, as you can see from me changing the conf file in https://github.com/ESMValGroup/ESMValTool/pull/3310/files with output in https://readthedocs.org/projects/esmvaltool/builds/21390633/ - mamba is stuck at 0.19.0, which, in turn, slows down the environment creation process to around 10 minutes. (With more recent condas, updating mamba to something like >=1.4.8 works very well, and updates conda to 23.3 or 23.4 too, but here the base versions are too old.)

If you need any help whatsoever, I'm happy to pitch in, and once more, many thanks for your great work on RTD :beer:
[ { "content": "\"\"\"\nDefine constants here to allow import them without any external dependency.\n\nThere are situations where we want to have access to these values without Django installed\n(e.g. common/dockerfiles/tasks.py)\n\nNote these constants where previously defined as Django settings in ``readthedocs/settings/base.py``.\n\"\"\"\n\nDOCKER_DEFAULT_IMAGE = \"readthedocs/build\"\n\n# Adding a new tool/version to this setting requires:\n#\n# - a mapping between the expected version in the config file, to the full\n# version installed via asdf (found via ``asdf list all <tool>``)\n#\n# - running the script ``./scripts/compile_version_upload.sh`` in\n# development and production environments to compile and cache the new\n# tool/version\n#\n# Note that when updating this options, you should also update the file:\n# readthedocs/rtd_tests/fixtures/spec/v2/schema.json\nRTD_DOCKER_BUILD_SETTINGS = {\n # Mapping of build.os options to docker image.\n \"os\": {\n \"ubuntu-20.04\": f\"{DOCKER_DEFAULT_IMAGE}:ubuntu-20.04\",\n \"ubuntu-22.04\": f\"{DOCKER_DEFAULT_IMAGE}:ubuntu-22.04\",\n },\n # Mapping of build.tools options to specific versions.\n \"tools\": {\n \"python\": {\n \"2.7\": \"2.7.18\",\n \"3.6\": \"3.6.15\",\n \"3.7\": \"3.7.17\",\n \"3.8\": \"3.8.17\",\n \"3.9\": \"3.9.17\",\n \"3.10\": \"3.10.12\",\n \"3.11\": \"3.11.4\",\n # Always point to the latest stable release.\n \"3\": \"3.11.4\",\n \"miniconda3-4.7\": \"miniconda3-4.7.12\",\n \"mambaforge-4.10\": \"mambaforge-4.10.3-10\",\n },\n \"nodejs\": {\n \"14\": \"14.20.1\",\n \"16\": \"16.18.1\",\n \"18\": \"18.16.1\", # LTS\n \"19\": \"19.0.1\",\n \"20\": \"20.3.1\",\n },\n \"rust\": {\n \"1.55\": \"1.55.0\",\n \"1.61\": \"1.61.0\",\n \"1.64\": \"1.64.0\",\n \"1.70\": \"1.70.0\",\n },\n \"golang\": {\n \"1.17\": \"1.17.13\",\n \"1.18\": \"1.18.10\",\n \"1.19\": \"1.19.10\",\n \"1.20\": \"1.20.5\",\n },\n },\n}\n", "path": "readthedocs/builds/constants_docker.py" } ]
[ { "content": "\"\"\"\nDefine constants here to allow import them without any external dependency.\n\nThere are situations where we want to have access to these values without Django installed\n(e.g. common/dockerfiles/tasks.py)\n\nNote these constants where previously defined as Django settings in ``readthedocs/settings/base.py``.\n\"\"\"\n\nDOCKER_DEFAULT_IMAGE = \"readthedocs/build\"\n\n# Adding a new tool/version to this setting requires:\n#\n# - a mapping between the expected version in the config file, to the full\n# version installed via asdf (found via ``asdf list all <tool>``)\n#\n# - running the script ``./scripts/compile_version_upload.sh`` in\n# development and production environments to compile and cache the new\n# tool/version\n#\n# Note that when updating this options, you should also update the file:\n# readthedocs/rtd_tests/fixtures/spec/v2/schema.json\nRTD_DOCKER_BUILD_SETTINGS = {\n # Mapping of build.os options to docker image.\n \"os\": {\n \"ubuntu-20.04\": f\"{DOCKER_DEFAULT_IMAGE}:ubuntu-20.04\",\n \"ubuntu-22.04\": f\"{DOCKER_DEFAULT_IMAGE}:ubuntu-22.04\",\n },\n # Mapping of build.tools options to specific versions.\n \"tools\": {\n \"python\": {\n \"2.7\": \"2.7.18\",\n \"3.6\": \"3.6.15\",\n \"3.7\": \"3.7.17\",\n \"3.8\": \"3.8.17\",\n \"3.9\": \"3.9.17\",\n \"3.10\": \"3.10.12\",\n \"3.11\": \"3.11.4\",\n # Always point to the latest stable release.\n \"3\": \"3.11.4\",\n \"miniconda3-4.7\": \"miniconda3-4.7.12\",\n \"mambaforge-4.10\": \"mambaforge-4.10.3-10\",\n \"mambaforge-22.9\": \"mambaforge-22.9.0-3\",\n },\n \"nodejs\": {\n \"14\": \"14.20.1\",\n \"16\": \"16.18.1\",\n \"18\": \"18.16.1\", # LTS\n \"19\": \"19.0.1\",\n \"20\": \"20.3.1\",\n },\n \"rust\": {\n \"1.55\": \"1.55.0\",\n \"1.61\": \"1.61.0\",\n \"1.64\": \"1.64.0\",\n \"1.70\": \"1.70.0\",\n },\n \"golang\": {\n \"1.17\": \"1.17.13\",\n \"1.18\": \"1.18.10\",\n \"1.19\": \"1.19.10\",\n \"1.20\": \"1.20.5\",\n },\n },\n}\n", "path": "readthedocs/builds/constants_docker.py" } ]
diff --git a/docs/user/config-file/v2.rst b/docs/user/config-file/v2.rst index 6984e9298da..2e0a96e580f 100644 --- a/docs/user/config-file/v2.rst +++ b/docs/user/config-file/v2.rst @@ -330,6 +330,7 @@ You can use several interpreters and versions, from CPython, Miniconda, and Mamb - ``3.11`` - ``miniconda3-4.7`` - ``mambaforge-4.10`` + - ``mambaforge-22.9`` build.tools.nodejs `````````````````` diff --git a/docs/user/guides/conda.rst b/docs/user/guides/conda.rst index c1201e401f9..7f5f82c0b71 100644 --- a/docs/user/guides/conda.rst +++ b/docs/user/guides/conda.rst @@ -126,7 +126,7 @@ with these contents: build: os: "ubuntu-20.04" tools: - python: "mambaforge-4.10" + python: "mambaforge-22.9" conda: environment: environment.yml diff --git a/readthedocs/builds/constants_docker.py b/readthedocs/builds/constants_docker.py index c49434e58fd..4612ca02822 100644 --- a/readthedocs/builds/constants_docker.py +++ b/readthedocs/builds/constants_docker.py @@ -40,6 +40,7 @@ "3": "3.11.4", "miniconda3-4.7": "miniconda3-4.7.12", "mambaforge-4.10": "mambaforge-4.10.3-10", + "mambaforge-22.9": "mambaforge-22.9.0-3", }, "nodejs": { "14": "14.20.1", diff --git a/readthedocs/rtd_tests/fixtures/spec/v2/schema.json b/readthedocs/rtd_tests/fixtures/spec/v2/schema.json index 438a050753d..d51e2c2b97f 100644 --- a/readthedocs/rtd_tests/fixtures/spec/v2/schema.json +++ b/readthedocs/rtd_tests/fixtures/spec/v2/schema.json @@ -177,7 +177,8 @@ "3.10", "3.11", "miniconda3-4.7", - "mambaforge-4.10" + "mambaforge-4.10", + "mambaforge-22.9" ] }, "nodejs": {
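As a rough sketch of why the new entry matters (this helper is hypothetical, not part of the Read the Docs code base): the `RTD_DOCKER_BUILD_SETTINGS` mapping shown above is what resolves a user-facing `build.tools` value to a concrete asdf version, so adding `mambaforge-22.9` makes that value selectable from a project's config file.

```python
# Hypothetical lookup mirroring how a build.tools entry maps to a full asdf
# version; the settings dict is abbreviated from constants_docker.py above.
RTD_DOCKER_BUILD_SETTINGS = {
    "tools": {
        "python": {
            "mambaforge-4.10": "mambaforge-4.10.3-10",
            "mambaforge-22.9": "mambaforge-22.9.0-3",  # newly added entry
        },
    },
}

def resolve_tool(tool, version):
    """Return the full asdf version for a user-facing tool/version pair."""
    return RTD_DOCKER_BUILD_SETTINGS["tools"][tool][version]

print(resolve_tool("python", "mambaforge-22.9"))  # -> mambaforge-22.9.0-3
```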
pantsbuild__pants-4714
Overlaid Config Files are applied in reverse order

http://www.pantsbuild.org/options.html#overlaying-config-files documents that one can do:

    $ ./pants --pants-config-files=a.ini --pants-config-files=b.ini options --options-name="level"
    level = info (from CONFIG in a.ini)

    $ cat a.ini
    [GLOBAL]
    level: info

    $ cat b.ini
    [GLOBAL]
    level: debug

According to the docs, the second --pants-config-files should overlay the earlier values, but this is not happening :/
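For illustration, here is a minimal self-contained sketch of the documented precedence (this is not the actual pants `_ChainedConfig`): the config file given last wins for value lookups, while sources are still reported in the order the files were passed.

```python
# Illustrative model of "later files win" chained-config lookup.
class ChainedConfig(object):
    def __init__(self, configs):
        # configs: list of (path, {section: {option: value}}) in command-line order
        self._configs = list(configs)

    def sources(self):
        # Report sources in the order the files were given.
        return [path for path, _ in self._configs]

    def get(self, section, option):
        # Consult the configs last-to-first so later files override earlier ones.
        for _, data in reversed(self._configs):
            if option in data.get(section, {}):
                return data[section][option]
        return None


chained = ChainedConfig([
    ("a.ini", {"GLOBAL": {"level": "info"}}),
    ("b.ini", {"GLOBAL": {"level": "debug"}}),
])
print(chained.get("GLOBAL", "level"))  # -> debug, because b.ini was given last
print(chained.sources())               # -> ['a.ini', 'b.ini']
```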
[ { "content": "# coding=utf-8\n# Copyright 2014 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\nfrom __future__ import (absolute_import, division, generators, nested_scopes, print_function,\n unicode_literals, with_statement)\n\nimport getpass\nimport itertools\nimport os\n\nimport six\nfrom six.moves import configparser\nfrom twitter.common.collections import OrderedSet\n\nfrom pants.base.build_environment import get_buildroot, get_pants_cachedir, get_pants_configdir\nfrom pants.util.eval import parse_expression\nfrom pants.util.meta import AbstractClass\n\n\nclass Config(AbstractClass):\n \"\"\"Encapsulates ini-style config file loading and access.\n\n Supports recursive variable substitution using standard python format strings. E.g.,\n %(var_name)s will be replaced with the value of var_name.\n \"\"\"\n DEFAULT_SECTION = configparser.DEFAULTSECT\n\n class ConfigError(Exception):\n pass\n\n class ConfigValidationError(ConfigError):\n pass\n\n @classmethod\n def load(cls, configpaths, seed_values=None):\n \"\"\"Loads config from the given paths.\n\n A handful of seed values will be set to act as if specified in the loaded config file's DEFAULT\n section, and be available for use in substitutions. The caller may override some of these\n seed values.\n\n :param list configpaths: Load from these paths. Later instances take precedence over earlier\n ones. If empty, returns an empty config.\n :param seed_values: A dict with optional override seed values for buildroot, pants_workdir,\n pants_supportdir and pants_distdir.\n \"\"\"\n if not configpaths:\n return _EmptyConfig()\n\n single_file_configs = []\n for configpath in configpaths:\n parser = cls._create_parser(seed_values)\n with open(configpath, 'r') as ini:\n parser.readfp(ini)\n single_file_configs.append(_SingleFileConfig(configpath, parser))\n return _ChainedConfig(single_file_configs)\n\n @classmethod\n def _create_parser(cls, seed_values=None):\n \"\"\"Creates a config parser that supports %([key-name])s value substitution.\n\n A handful of seed values will be set to act as if specified in the loaded config file's DEFAULT\n section, and be available for use in substitutions. The caller may override some of these\n seed values.\n\n :param seed_values: A dict with optional override seed values for buildroot, pants_workdir,\n pants_supportdir and pants_distdir.\n \"\"\"\n seed_values = seed_values or {}\n buildroot = seed_values.get('buildroot', get_buildroot())\n\n all_seed_values = {\n 'buildroot': buildroot,\n 'homedir': os.path.expanduser('~'),\n 'user': getpass.getuser(),\n 'pants_bootstrapdir': get_pants_cachedir(),\n 'pants_configdir': get_pants_configdir(),\n }\n\n def update_dir_from_seed_values(key, default):\n all_seed_values[key] = seed_values.get(key, os.path.join(buildroot, default))\n update_dir_from_seed_values('pants_workdir', '.pants.d')\n update_dir_from_seed_values('pants_supportdir', 'build-support')\n update_dir_from_seed_values('pants_distdir', 'dist')\n\n return configparser.SafeConfigParser(all_seed_values)\n\n def get(self, section, option, type_=six.string_types, default=None):\n \"\"\"Retrieves option from the specified section (or 'DEFAULT') and attempts to parse it as type.\n\n If the specified section does not exist or is missing a definition for the option, the value is\n looked up in the DEFAULT section. 
If there is still no definition found, the default value\n supplied is returned.\n \"\"\"\n return self._getinstance(section, option, type_, default)\n\n def _getinstance(self, section, option, type_, default=None):\n if not self.has_option(section, option):\n return default\n\n raw_value = self.get_value(section, option)\n # We jump through some hoops here to deal with the fact that `six.string_types` is a tuple of\n # types.\n if (type_ == six.string_types or\n (isinstance(type_, type) and issubclass(type_, six.string_types))):\n return raw_value\n\n key = '{}.{}'.format(section, option)\n return parse_expression(name=key, val=raw_value, acceptable_types=type_,\n raise_type=self.ConfigError)\n\n # Subclasses must implement.\n def configs(self):\n \"\"\"Returns the underlying single-file configs represented by this object.\"\"\"\n raise NotImplementedError()\n\n def sources(self):\n \"\"\"Returns the sources of this config as a list of filenames.\"\"\"\n raise NotImplementedError()\n\n def sections(self):\n \"\"\"Returns the sections in this config (not including DEFAULT).\"\"\"\n raise NotImplementedError()\n\n def has_section(self, section):\n \"\"\"Returns whether this config has the section.\"\"\"\n raise NotImplementedError()\n\n def has_option(self, section, option):\n \"\"\"Returns whether this config specified a value the option.\"\"\"\n raise NotImplementedError()\n\n def get_value(self, section, option):\n \"\"\"Returns the value of the option in this config as a string, or None if no value specified.\"\"\"\n raise NotImplementedError()\n\n def get_source_for_option(self, section, option):\n \"\"\"Returns the path to the source file the given option was defined in.\n\n :param string section: the scope of the option.\n :param string option: the name of the option.\n :returns: the path to the config file, or None if the option was not defined by a config file.\n :rtype: string\n \"\"\"\n raise NotImplementedError\n\n\nclass _EmptyConfig(Config):\n \"\"\"A dummy config with no data at all.\"\"\"\n\n def sources(self):\n return []\n\n def configs(self):\n return []\n\n def sections(self):\n return []\n\n def has_section(self, section):\n return False\n\n def has_option(self, section, option):\n return False\n\n def get_value(self, section, option):\n return None\n\n def get_source_for_option(self, section, option):\n return None\n\n\nclass _SingleFileConfig(Config):\n \"\"\"Config read from a single file.\"\"\"\n\n def __init__(self, configpath, configparser):\n super(_SingleFileConfig, self).__init__()\n self.configpath = configpath\n self.configparser = configparser\n\n def configs(self):\n return [self]\n\n def sources(self):\n return [self.configpath]\n\n def sections(self):\n return self.configparser.sections()\n\n def has_section(self, section):\n return self.configparser.has_section(section)\n\n def has_option(self, section, option):\n return self.configparser.has_option(section, option)\n\n def get_value(self, section, option):\n return self.configparser.get(section, option)\n\n def get_source_for_option(self, section, option):\n if self.has_option(section, option):\n return self.sources()[0]\n return None\n\n\nclass _ChainedConfig(Config):\n \"\"\"Config read from multiple sources.\"\"\"\n\n def __init__(self, configs):\n \"\"\"\n :param configs: A list of Config instances to chain.\n Later instances take precedence over earlier ones.\n \"\"\"\n super(_ChainedConfig, self).__init__()\n self._configs = list(reversed(configs))\n\n def configs(self):\n return self._configs\n\n 
def sources(self):\n return list(itertools.chain.from_iterable(cfg.sources() for cfg in self._configs))\n\n def sections(self):\n ret = OrderedSet()\n for cfg in self._configs:\n ret.update(cfg.sections())\n return ret\n\n def has_section(self, section):\n for cfg in self._configs:\n if cfg.has_section(section):\n return True\n return False\n\n def has_option(self, section, option):\n for cfg in self._configs:\n if cfg.has_option(section, option):\n return True\n return False\n\n def get_value(self, section, option):\n for cfg in self._configs:\n try:\n return cfg.get_value(section, option)\n except (configparser.NoSectionError, configparser.NoOptionError):\n pass\n if not self.has_section(section):\n raise configparser.NoSectionError(section)\n raise configparser.NoOptionError(option, section)\n\n def get_source_for_option(self, section, option):\n for cfg in self._configs:\n if cfg.has_option(section, option):\n return cfg.get_source_for_option(section, option)\n return None\n", "path": "src/python/pants/option/config.py" } ]
[ { "content": "# coding=utf-8\n# Copyright 2014 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\nfrom __future__ import (absolute_import, division, generators, nested_scopes, print_function,\n unicode_literals, with_statement)\n\nimport getpass\nimport itertools\nimport os\n\nimport six\nfrom six.moves import configparser\nfrom twitter.common.collections import OrderedSet\n\nfrom pants.base.build_environment import get_buildroot, get_pants_cachedir, get_pants_configdir\nfrom pants.util.eval import parse_expression\nfrom pants.util.meta import AbstractClass\n\n\nclass Config(AbstractClass):\n \"\"\"Encapsulates ini-style config file loading and access.\n\n Supports recursive variable substitution using standard python format strings. E.g.,\n %(var_name)s will be replaced with the value of var_name.\n \"\"\"\n DEFAULT_SECTION = configparser.DEFAULTSECT\n\n class ConfigError(Exception):\n pass\n\n class ConfigValidationError(ConfigError):\n pass\n\n @classmethod\n def load(cls, configpaths, seed_values=None):\n \"\"\"Loads config from the given paths.\n\n A handful of seed values will be set to act as if specified in the loaded config file's DEFAULT\n section, and be available for use in substitutions. The caller may override some of these\n seed values.\n\n :param list configpaths: Load from these paths. Later instances take precedence over earlier\n ones. If empty, returns an empty config.\n :param seed_values: A dict with optional override seed values for buildroot, pants_workdir,\n pants_supportdir and pants_distdir.\n \"\"\"\n if not configpaths:\n return _EmptyConfig()\n\n single_file_configs = []\n for configpath in configpaths:\n parser = cls._create_parser(seed_values)\n with open(configpath, 'r') as ini:\n parser.readfp(ini)\n single_file_configs.append(_SingleFileConfig(configpath, parser))\n return _ChainedConfig(single_file_configs)\n\n @classmethod\n def _create_parser(cls, seed_values=None):\n \"\"\"Creates a config parser that supports %([key-name])s value substitution.\n\n A handful of seed values will be set to act as if specified in the loaded config file's DEFAULT\n section, and be available for use in substitutions. The caller may override some of these\n seed values.\n\n :param seed_values: A dict with optional override seed values for buildroot, pants_workdir,\n pants_supportdir and pants_distdir.\n \"\"\"\n seed_values = seed_values or {}\n buildroot = seed_values.get('buildroot', get_buildroot())\n\n all_seed_values = {\n 'buildroot': buildroot,\n 'homedir': os.path.expanduser('~'),\n 'user': getpass.getuser(),\n 'pants_bootstrapdir': get_pants_cachedir(),\n 'pants_configdir': get_pants_configdir(),\n }\n\n def update_dir_from_seed_values(key, default):\n all_seed_values[key] = seed_values.get(key, os.path.join(buildroot, default))\n update_dir_from_seed_values('pants_workdir', '.pants.d')\n update_dir_from_seed_values('pants_supportdir', 'build-support')\n update_dir_from_seed_values('pants_distdir', 'dist')\n\n return configparser.SafeConfigParser(all_seed_values)\n\n def get(self, section, option, type_=six.string_types, default=None):\n \"\"\"Retrieves option from the specified section (or 'DEFAULT') and attempts to parse it as type.\n\n If the specified section does not exist or is missing a definition for the option, the value is\n looked up in the DEFAULT section. 
If there is still no definition found, the default value\n supplied is returned.\n \"\"\"\n return self._getinstance(section, option, type_, default)\n\n def _getinstance(self, section, option, type_, default=None):\n if not self.has_option(section, option):\n return default\n\n raw_value = self.get_value(section, option)\n # We jump through some hoops here to deal with the fact that `six.string_types` is a tuple of\n # types.\n if (type_ == six.string_types or\n (isinstance(type_, type) and issubclass(type_, six.string_types))):\n return raw_value\n\n key = '{}.{}'.format(section, option)\n return parse_expression(name=key, val=raw_value, acceptable_types=type_,\n raise_type=self.ConfigError)\n\n # Subclasses must implement.\n def configs(self):\n \"\"\"Returns the underlying single-file configs represented by this object.\"\"\"\n raise NotImplementedError()\n\n def sources(self):\n \"\"\"Returns the sources of this config as a list of filenames.\"\"\"\n raise NotImplementedError()\n\n def sections(self):\n \"\"\"Returns the sections in this config (not including DEFAULT).\"\"\"\n raise NotImplementedError()\n\n def has_section(self, section):\n \"\"\"Returns whether this config has the section.\"\"\"\n raise NotImplementedError()\n\n def has_option(self, section, option):\n \"\"\"Returns whether this config specified a value the option.\"\"\"\n raise NotImplementedError()\n\n def get_value(self, section, option):\n \"\"\"Returns the value of the option in this config as a string, or None if no value specified.\"\"\"\n raise NotImplementedError()\n\n def get_source_for_option(self, section, option):\n \"\"\"Returns the path to the source file the given option was defined in.\n\n :param string section: the scope of the option.\n :param string option: the name of the option.\n :returns: the path to the config file, or None if the option was not defined by a config file.\n :rtype: string\n \"\"\"\n raise NotImplementedError\n\n\nclass _EmptyConfig(Config):\n \"\"\"A dummy config with no data at all.\"\"\"\n\n def sources(self):\n return []\n\n def configs(self):\n return []\n\n def sections(self):\n return []\n\n def has_section(self, section):\n return False\n\n def has_option(self, section, option):\n return False\n\n def get_value(self, section, option):\n return None\n\n def get_source_for_option(self, section, option):\n return None\n\n\nclass _SingleFileConfig(Config):\n \"\"\"Config read from a single file.\"\"\"\n\n def __init__(self, configpath, configparser):\n super(_SingleFileConfig, self).__init__()\n self.configpath = configpath\n self.configparser = configparser\n\n def configs(self):\n return [self]\n\n def sources(self):\n return [self.configpath]\n\n def sections(self):\n return self.configparser.sections()\n\n def has_section(self, section):\n return self.configparser.has_section(section)\n\n def has_option(self, section, option):\n return self.configparser.has_option(section, option)\n\n def get_value(self, section, option):\n return self.configparser.get(section, option)\n\n def get_source_for_option(self, section, option):\n if self.has_option(section, option):\n return self.sources()[0]\n return None\n\n\nclass _ChainedConfig(Config):\n \"\"\"Config read from multiple sources.\"\"\"\n\n def __init__(self, configs):\n \"\"\"\n :param configs: A list of Config instances to chain.\n Later instances take precedence over earlier ones.\n \"\"\"\n super(_ChainedConfig, self).__init__()\n self._configs = list(reversed(configs))\n\n def configs(self):\n return self._configs\n\n 
def sources(self):\n # NB: Present the sources in the order we were given them.\n return list(itertools.chain.from_iterable(cfg.sources() for cfg in reversed(self._configs)))\n\n def sections(self):\n ret = OrderedSet()\n for cfg in self._configs:\n ret.update(cfg.sections())\n return ret\n\n def has_section(self, section):\n for cfg in self._configs:\n if cfg.has_section(section):\n return True\n return False\n\n def has_option(self, section, option):\n for cfg in self._configs:\n if cfg.has_option(section, option):\n return True\n return False\n\n def get_value(self, section, option):\n for cfg in self._configs:\n try:\n return cfg.get_value(section, option)\n except (configparser.NoSectionError, configparser.NoOptionError):\n pass\n if not self.has_section(section):\n raise configparser.NoSectionError(section)\n raise configparser.NoOptionError(option, section)\n\n def get_source_for_option(self, section, option):\n for cfg in self._configs:\n if cfg.has_option(section, option):\n return cfg.get_source_for_option(section, option)\n return None\n", "path": "src/python/pants/option/config.py" } ]
diff --git a/src/python/pants/option/config.py b/src/python/pants/option/config.py index 10e3c3561d0..554e38aa4f3 100644 --- a/src/python/pants/option/config.py +++ b/src/python/pants/option/config.py @@ -218,7 +218,8 @@ def configs(self): return self._configs def sources(self): - return list(itertools.chain.from_iterable(cfg.sources() for cfg in self._configs)) + # NB: Present the sources in the order we were given them. + return list(itertools.chain.from_iterable(cfg.sources() for cfg in reversed(self._configs))) def sections(self): ret = OrderedSet() diff --git a/tests/python/pants_test/option/test_config.py b/tests/python/pants_test/option/test_config.py index 0f4df3cb5f7..5050806c138 100644 --- a/tests/python/pants_test/option/test_config.py +++ b/tests/python/pants_test/option/test_config.py @@ -56,6 +56,7 @@ def setUp(self): """)) ini2.close() self.config = Config.load(configpaths=[ini1.name, ini2.name]) + self.assertEqual([ini1.name, ini2.name], self.config.sources()) def test_getstring(self): self.assertEquals('/a/b/42', self.config.get('a', 'path')) diff --git a/tests/python/pants_test/option/test_options_bootstrapper.py b/tests/python/pants_test/option/test_options_bootstrapper.py index 7effa6e2843..49d003ede47 100644 --- a/tests/python/pants_test/option/test_options_bootstrapper.py +++ b/tests/python/pants_test/option/test_options_bootstrapper.py @@ -50,7 +50,7 @@ def test_bootstrap_option_values(self): def br(path): # Returns the full path of the given path under the buildroot. - return '{}/{}'.format(buildroot, path) + return os.path.join(buildroot, path) self._do_test([br('.pants.d'), br('build-support'), br('dist')], config=None, env={}, args=[]) @@ -134,7 +134,7 @@ def test_create_bootstrapped_options(self): self.assertEquals('/qux/baz', opts.for_scope('foo').bar) self.assertEquals('/pear/banana', opts.for_scope('fruit').apple) - def test_create_bootstrapped_multiple_config_override(self): + def do_test_create_bootstrapped_multiple_config(self, create_options_bootstrapper): # check with multiple config files, the latest values always get taken # in this case worker_count will be overwritten, while fruit stays the same with temporary_file() as fp: @@ -147,16 +147,15 @@ def test_create_bootstrapped_multiple_config_override(self): """)) fp.close() - args = ['--config-override={}'.format(fp.name)] + self._config_path(fp.name) - bootstrapper_single_config = OptionsBootstrapper(args=args) + bootstrapper_single_config = create_options_bootstrapper(fp.name) - opts_single_config = bootstrapper_single_config.get_full_options(known_scope_infos=[ + opts_single_config = bootstrapper_single_config.get_full_options(known_scope_infos=[ ScopeInfo('', ScopeInfo.GLOBAL), ScopeInfo('compile.apt', ScopeInfo.TASK), ScopeInfo('fruit', ScopeInfo.TASK), ]) # So we don't choke on these on the cmd line. 
- opts_single_config.register('', '--pants-config-files') + opts_single_config.register('', '--pants-config-files', type=list) opts_single_config.register('', '--config-override', type=list) opts_single_config.register('compile.apt', '--worker-count') @@ -172,10 +171,7 @@ def test_create_bootstrapped_multiple_config_override(self): """)) fp2.close() - args = ['--config-override={}'.format(fp.name), - '--config-override={}'.format(fp2.name)] + self._config_path(fp.name) - - bootstrapper_double_config = OptionsBootstrapper(args=args) + bootstrapper_double_config = create_options_bootstrapper(fp.name, fp2.name) opts_double_config = bootstrapper_double_config.get_full_options(known_scope_infos=[ ScopeInfo('', ScopeInfo.GLOBAL), @@ -183,7 +179,7 @@ def test_create_bootstrapped_multiple_config_override(self): ScopeInfo('fruit', ScopeInfo.TASK), ]) # So we don't choke on these on the cmd line. - opts_double_config.register('', '--pants-config-files') + opts_double_config.register('', '--pants-config-files', type=list) opts_double_config.register('', '--config-override', type=list) opts_double_config.register('compile.apt', '--worker-count') opts_double_config.register('fruit', '--apple') @@ -191,6 +187,18 @@ def test_create_bootstrapped_multiple_config_override(self): self.assertEquals('2', opts_double_config.for_scope('compile.apt').worker_count) self.assertEquals('red', opts_double_config.for_scope('fruit').apple) + def test_create_bootstrapped_multiple_config_override(self): + def create_options_bootstrapper(*config_paths): + return OptionsBootstrapper(args=['--config-override={}'.format(cp) for cp in config_paths]) + + self.do_test_create_bootstrapped_multiple_config(create_options_bootstrapper) + + def test_create_bootstrapped_multiple_pants_config_files(self): + def create_options_bootstrapper(*config_paths): + return OptionsBootstrapper(args=['--pants-config-files={}'.format(cp) for cp in config_paths]) + + self.do_test_create_bootstrapped_multiple_config(create_options_bootstrapper) + def test_full_options_caching(self): with temporary_file_path() as config: args = self._config_path(config)
numba__numba-5455
bump max llvmlite version to accept 0.32.0

Now that llvmlite 0.32.0rc1 is released, we need to bump the accepted version to `0.33.0.dev0`.
[ { "content": "from setuptools import setup, Extension, find_packages\nfrom distutils.command import build\nfrom distutils.spawn import spawn\nfrom distutils import sysconfig\nimport sys\nimport os\nimport platform\n\nimport versioneer\n\nmin_python_version = \"3.6\"\nmin_numpy_build_version = \"1.11\"\nmin_numpy_run_version = \"1.15\"\nmin_llvmlite_version = \"0.31.0.dev0\"\nmax_llvmlite_version = \"0.32.0.dev0\"\n\nif sys.platform.startswith('linux'):\n # Patch for #2555 to make wheels without libpython\n sysconfig.get_config_vars()['Py_ENABLE_SHARED'] = 0\n\n\nclass build_doc(build.build):\n description = \"build documentation\"\n\n def run(self):\n spawn(['make', '-C', 'docs', 'html'])\n\n\nversioneer.VCS = 'git'\nversioneer.versionfile_source = 'numba/_version.py'\nversioneer.versionfile_build = 'numba/_version.py'\nversioneer.tag_prefix = ''\nversioneer.parentdir_prefix = 'numba-'\n\ncmdclass = versioneer.get_cmdclass()\ncmdclass['build_doc'] = build_doc\n\n\nGCCFLAGS = [\"-std=c89\", \"-Wdeclaration-after-statement\", \"-Werror\"]\n\nif os.environ.get(\"NUMBA_GCC_FLAGS\"):\n CFLAGS = GCCFLAGS\nelse:\n CFLAGS = ['-g']\n\ninstall_name_tool_fixer = []\nif sys.platform == 'darwin':\n install_name_tool_fixer += ['-headerpad_max_install_names']\n\n\ndef is_building():\n \"\"\"\n Parse the setup.py command and return whether a build is requested.\n If False is returned, only an informational command is run.\n If True is returned, information about C extensions will have to\n be passed to the setup() function.\n \"\"\"\n if len(sys.argv) < 2:\n # User forgot to give an argument probably, let setuptools handle that.\n return True\n\n info_commands = ['--help-commands', '--name', '--version', '-V',\n '--fullname', '--author', '--author-email',\n '--maintainer', '--maintainer-email', '--contact',\n '--contact-email', '--url', '--license', '--description',\n '--long-description', '--platforms', '--classifiers',\n '--keywords', '--provides', '--requires', '--obsoletes']\n # Add commands that do more than print info, but also don't need\n # any build step.\n info_commands.extend(['egg_info', 'install_egg_info', 'rotate'])\n\n for command in info_commands:\n if command in sys.argv[1:]:\n return False\n\n return True\n\n\ndef is_building_wheel():\n if len(sys.argv) < 2:\n # No command is given.\n return False\n\n return 'bdist_wheel' in sys.argv[1:]\n\n\ndef get_ext_modules():\n \"\"\"\n Return a list of Extension instances for the setup() call.\n \"\"\"\n # Note we don't import Numpy at the toplevel, since setup.py\n # should be able to run without Numpy for pip to discover the\n # build dependencies\n import numpy.distutils.misc_util as np_misc\n\n # Inject required options for extensions compiled against the Numpy\n # C API (include dirs, library dirs etc.)\n np_compile_args = np_misc.get_info('npymath')\n\n ext_dynfunc = Extension(name='numba._dynfunc',\n sources=['numba/_dynfuncmod.c'],\n extra_compile_args=CFLAGS,\n depends=['numba/_pymodule.h',\n 'numba/_dynfunc.c'])\n\n ext_dispatcher = Extension(name=\"numba._dispatcher\",\n sources=['numba/_dispatcher.c',\n 'numba/_typeof.c',\n 'numba/_hashtable.c',\n 'numba/_dispatcherimpl.cpp',\n 'numba/core/typeconv/typeconv.cpp'],\n depends=[\"numba/_pymodule.h\",\n \"numba/_dispatcher.h\",\n \"numba/_typeof.h\",\n \"numba/_hashtable.h\"],\n **np_compile_args)\n\n ext_helperlib = Extension(name=\"numba._helperlib\",\n sources=[\"numba/_helpermod.c\",\n \"numba/cext/utils.c\",\n \"numba/cext/dictobject.c\",\n \"numba/cext/listobject.c\",\n ],\n 
extra_compile_args=CFLAGS,\n extra_link_args=install_name_tool_fixer,\n depends=[\"numba/_pymodule.h\",\n \"numba/_helperlib.c\",\n \"numba/_lapack.c\",\n \"numba/_npymath_exports.c\",\n \"numba/_random.c\",\n \"numba/mathnames.inc\",\n ],\n **np_compile_args)\n\n ext_typeconv = Extension(name=\"numba.core.typeconv._typeconv\",\n sources=[\"numba/core/typeconv/typeconv.cpp\",\n \"numba/core/typeconv/_typeconv.cpp\"],\n depends=[\"numba/_pymodule.h\"],\n )\n\n ext_np_ufunc = Extension(name=\"numba.np.ufunc._internal\",\n sources=[\"numba/np/ufunc/_internal.c\"],\n depends=[\"numba/np/ufunc/_ufunc.c\",\n \"numba/np/ufunc/_internal.h\",\n \"numba/_pymodule.h\"],\n **np_compile_args)\n\n ext_npyufunc_num_threads = Extension(name=\"numba.np.ufunc._num_threads\",\n sources=[\n \"numba/np/ufunc/_num_threads.c\"],\n depends=[\"numba/_pymodule.h\"],\n )\n\n ext_np_ufunc_backends = []\n\n def check_file_at_path(path2file):\n \"\"\"\n Takes a list as a path, a single glob (*) is permitted as an entry which\n indicates that expansion at this location is required (i.e. version\n might not be known).\n \"\"\"\n found = None\n path2check = [os.path.split(os.path.split(sys.executable)[0])[0]]\n path2check += [os.getenv(n, '') for n in ['CONDA_PREFIX', 'PREFIX']]\n if sys.platform.startswith('win'):\n path2check += [os.path.join(p, 'Library') for p in path2check]\n for p in path2check:\n if p:\n if '*' in path2file:\n globloc = path2file.index('*')\n searchroot = os.path.join(*path2file[:globloc])\n try:\n potential_locs = os.listdir(os.path.join(p, searchroot))\n except BaseException:\n continue\n searchfor = path2file[globloc + 1:]\n for x in potential_locs:\n potpath = os.path.join(p, searchroot, x, *searchfor)\n if os.path.isfile(potpath):\n found = p # the latest is used\n elif os.path.isfile(os.path.join(p, *path2file)):\n found = p # the latest is used\n return found\n\n # Search for Intel TBB, first check env var TBBROOT then conda locations\n tbb_root = os.getenv('TBBROOT')\n if not tbb_root:\n tbb_root = check_file_at_path(['include', 'tbb', 'tbb.h'])\n\n # Set various flags for use in TBB and openmp. 
On OSX, also find OpenMP!\n have_openmp = True\n if sys.platform.startswith('win'):\n cpp11flags = []\n ompcompileflags = ['-openmp']\n omplinkflags = []\n elif sys.platform.startswith('darwin'):\n cpp11flags = ['-std=c++11']\n # This is a bit unusual but necessary...\n # llvm (clang) OpenMP is used for headers etc at compile time\n # Intel OpenMP (libiomp5) provides the link library.\n # They are binary compatible and may not safely coexist in a process, as\n # libiomp5 is more prevalent and often linked in for NumPy it is used\n # here!\n ompcompileflags = ['-fopenmp']\n omplinkflags = ['-fopenmp=libiomp5']\n omppath = ['lib', 'clang', '*', 'include', 'omp.h']\n have_openmp = check_file_at_path(omppath)\n else:\n cpp11flags = ['-std=c++11']\n ompcompileflags = ['-fopenmp']\n if platform.machine() == 'ppc64le':\n omplinkflags = ['-fopenmp']\n else:\n omplinkflags = ['-fopenmp']\n\n if tbb_root:\n print(\"Using Intel TBB from:\", tbb_root)\n ext_np_ufunc_tbb_backend = Extension(\n name='numba.np.ufunc.tbbpool',\n sources=[\n 'numba/np/ufunc/tbbpool.cpp',\n 'numba/np/ufunc/gufunc_scheduler.cpp',\n ],\n depends=['numba/np/ufunc/workqueue.h'],\n include_dirs=[os.path.join(tbb_root, 'include')],\n extra_compile_args=cpp11flags,\n libraries=['tbb'], # TODO: if --debug or -g, use 'tbb_debug'\n library_dirs=[\n # for Linux\n os.path.join(tbb_root, 'lib', 'intel64', 'gcc4.4'),\n # for MacOS\n os.path.join(tbb_root, 'lib'),\n # for Windows\n os.path.join(tbb_root, 'lib', 'intel64', 'vc_mt'),\n ],\n )\n ext_np_ufunc_backends.append(ext_np_ufunc_tbb_backend)\n else:\n print(\"TBB not found\")\n\n # Disable OpenMP if we are building a wheel or\n # forced by user with NUMBA_NO_OPENMP=1\n if is_building_wheel() or os.getenv('NUMBA_NO_OPENMP'):\n print(\"OpenMP disabled\")\n elif have_openmp:\n print(\"Using OpenMP from:\", have_openmp)\n # OpenMP backed work queue\n ext_np_ufunc_omppool_backend = Extension(\n name='numba.np.ufunc.omppool',\n sources=[\n 'numba/np/ufunc/omppool.cpp',\n 'numba/np/ufunc/gufunc_scheduler.cpp',\n ],\n depends=['numba/np/ufunc/workqueue.h'],\n extra_compile_args=ompcompileflags + cpp11flags,\n extra_link_args=omplinkflags,\n )\n\n ext_np_ufunc_backends.append(ext_np_ufunc_omppool_backend)\n else:\n print(\"OpenMP not found\")\n\n # Build the Numba workqueue implementation irrespective of whether the TBB\n # version is built. 
Users can select a backend via env vars.\n ext_np_ufunc_workqueue_backend = Extension(\n name='numba.np.ufunc.workqueue',\n sources=['numba/np/ufunc/workqueue.c',\n 'numba/np/ufunc/gufunc_scheduler.cpp'],\n depends=['numba/np/ufunc/workqueue.h'])\n ext_np_ufunc_backends.append(ext_np_ufunc_workqueue_backend)\n\n ext_mviewbuf = Extension(name='numba.mviewbuf',\n extra_link_args=install_name_tool_fixer,\n sources=['numba/mviewbuf.c'])\n\n ext_nrt_python = Extension(name='numba.core.runtime._nrt_python',\n sources=['numba/core/runtime/_nrt_pythonmod.c',\n 'numba/core/runtime/nrt.c'],\n depends=['numba/core/runtime/nrt.h',\n 'numba/_pymodule.h',\n 'numba/core/runtime/_nrt_python.c'],\n **np_compile_args)\n\n ext_jitclass_box = Extension(name='numba.experimental.jitclass._box',\n sources=['numba/experimental/jitclass/_box.c'],\n depends=['numba/experimental/_pymodule.h'],\n )\n\n ext_cuda_extras = Extension(name='numba.cuda.cudadrv._extras',\n sources=['numba/cuda/cudadrv/_extras.c'],\n depends=['numba/_pymodule.h'],\n include_dirs=[\"numba\"])\n\n ext_modules = [ext_dynfunc, ext_dispatcher, ext_helperlib, ext_typeconv,\n ext_np_ufunc, ext_npyufunc_num_threads, ext_mviewbuf,\n ext_nrt_python, ext_jitclass_box, ext_cuda_extras]\n\n ext_modules += ext_np_ufunc_backends\n\n return ext_modules\n\n\npackages = find_packages(include=[\"numba\", \"numba.*\"])\n\nbuild_requires = [f'numpy >={min_numpy_build_version}']\ninstall_requires = [\n f'llvmlite >={min_llvmlite_version},<={max_llvmlite_version}',\n f'numpy >={min_numpy_run_version}',\n 'setuptools',\n]\n\nmetadata = dict(\n name='numba',\n description=\"compiling Python code using LLVM\",\n version=versioneer.get_version(),\n classifiers=[\n \"Development Status :: 4 - Beta\",\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: BSD License\",\n \"Operating System :: OS Independent\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Topic :: Software Development :: Compilers\",\n ],\n package_data={\n # HTML templates for type annotations\n \"numba.core.annotations\": [\"*.html\"],\n # Various test data\n \"numba.cuda.tests.cudadrv.data\": [\"*.ptx\"],\n \"numba.tests\": [\"pycc_distutils_usecase/*.py\"],\n # Some C files are needed by pycc\n \"numba\": [\"*.c\", \"*.h\"],\n \"numba.pycc\": [\"*.c\", \"*.h\"],\n \"numba.core.runtime\": [\"*.c\", \"*.h\"],\n \"numba.cext\": [\"*.c\", \"*.h\"],\n # numba gdb hook init command language file\n \"numba.misc\": [\"cmdlang.gdb\"],\n },\n scripts=[\"numba/pycc/pycc\", \"bin/numba\"],\n author=\"Anaconda, Inc.\",\n author_email=\"[email protected]\",\n url=\"http://numba.github.com\",\n packages=packages,\n setup_requires=build_requires,\n install_requires=install_requires,\n python_requires=f\">={min_python_version}\",\n license=\"BSD\",\n cmdclass=cmdclass,\n)\n\nwith open('README.rst') as f:\n metadata['long_description'] = f.read()\n\nif is_building():\n metadata['ext_modules'] = get_ext_modules()\n\nsetup(**metadata)\n", "path": "setup.py" } ]
[ { "content": "from setuptools import setup, Extension, find_packages\nfrom distutils.command import build\nfrom distutils.spawn import spawn\nfrom distutils import sysconfig\nimport sys\nimport os\nimport platform\n\nimport versioneer\n\nmin_python_version = \"3.6\"\nmin_numpy_build_version = \"1.11\"\nmin_numpy_run_version = \"1.15\"\nmin_llvmlite_version = \"0.31.0.dev0\"\nmax_llvmlite_version = \"0.33.0.dev0\"\n\nif sys.platform.startswith('linux'):\n # Patch for #2555 to make wheels without libpython\n sysconfig.get_config_vars()['Py_ENABLE_SHARED'] = 0\n\n\nclass build_doc(build.build):\n description = \"build documentation\"\n\n def run(self):\n spawn(['make', '-C', 'docs', 'html'])\n\n\nversioneer.VCS = 'git'\nversioneer.versionfile_source = 'numba/_version.py'\nversioneer.versionfile_build = 'numba/_version.py'\nversioneer.tag_prefix = ''\nversioneer.parentdir_prefix = 'numba-'\n\ncmdclass = versioneer.get_cmdclass()\ncmdclass['build_doc'] = build_doc\n\n\nGCCFLAGS = [\"-std=c89\", \"-Wdeclaration-after-statement\", \"-Werror\"]\n\nif os.environ.get(\"NUMBA_GCC_FLAGS\"):\n CFLAGS = GCCFLAGS\nelse:\n CFLAGS = ['-g']\n\ninstall_name_tool_fixer = []\nif sys.platform == 'darwin':\n install_name_tool_fixer += ['-headerpad_max_install_names']\n\n\ndef is_building():\n \"\"\"\n Parse the setup.py command and return whether a build is requested.\n If False is returned, only an informational command is run.\n If True is returned, information about C extensions will have to\n be passed to the setup() function.\n \"\"\"\n if len(sys.argv) < 2:\n # User forgot to give an argument probably, let setuptools handle that.\n return True\n\n info_commands = ['--help-commands', '--name', '--version', '-V',\n '--fullname', '--author', '--author-email',\n '--maintainer', '--maintainer-email', '--contact',\n '--contact-email', '--url', '--license', '--description',\n '--long-description', '--platforms', '--classifiers',\n '--keywords', '--provides', '--requires', '--obsoletes']\n # Add commands that do more than print info, but also don't need\n # any build step.\n info_commands.extend(['egg_info', 'install_egg_info', 'rotate'])\n\n for command in info_commands:\n if command in sys.argv[1:]:\n return False\n\n return True\n\n\ndef is_building_wheel():\n if len(sys.argv) < 2:\n # No command is given.\n return False\n\n return 'bdist_wheel' in sys.argv[1:]\n\n\ndef get_ext_modules():\n \"\"\"\n Return a list of Extension instances for the setup() call.\n \"\"\"\n # Note we don't import Numpy at the toplevel, since setup.py\n # should be able to run without Numpy for pip to discover the\n # build dependencies\n import numpy.distutils.misc_util as np_misc\n\n # Inject required options for extensions compiled against the Numpy\n # C API (include dirs, library dirs etc.)\n np_compile_args = np_misc.get_info('npymath')\n\n ext_dynfunc = Extension(name='numba._dynfunc',\n sources=['numba/_dynfuncmod.c'],\n extra_compile_args=CFLAGS,\n depends=['numba/_pymodule.h',\n 'numba/_dynfunc.c'])\n\n ext_dispatcher = Extension(name=\"numba._dispatcher\",\n sources=['numba/_dispatcher.c',\n 'numba/_typeof.c',\n 'numba/_hashtable.c',\n 'numba/_dispatcherimpl.cpp',\n 'numba/core/typeconv/typeconv.cpp'],\n depends=[\"numba/_pymodule.h\",\n \"numba/_dispatcher.h\",\n \"numba/_typeof.h\",\n \"numba/_hashtable.h\"],\n **np_compile_args)\n\n ext_helperlib = Extension(name=\"numba._helperlib\",\n sources=[\"numba/_helpermod.c\",\n \"numba/cext/utils.c\",\n \"numba/cext/dictobject.c\",\n \"numba/cext/listobject.c\",\n ],\n 
extra_compile_args=CFLAGS,\n extra_link_args=install_name_tool_fixer,\n depends=[\"numba/_pymodule.h\",\n \"numba/_helperlib.c\",\n \"numba/_lapack.c\",\n \"numba/_npymath_exports.c\",\n \"numba/_random.c\",\n \"numba/mathnames.inc\",\n ],\n **np_compile_args)\n\n ext_typeconv = Extension(name=\"numba.core.typeconv._typeconv\",\n sources=[\"numba/core/typeconv/typeconv.cpp\",\n \"numba/core/typeconv/_typeconv.cpp\"],\n depends=[\"numba/_pymodule.h\"],\n )\n\n ext_np_ufunc = Extension(name=\"numba.np.ufunc._internal\",\n sources=[\"numba/np/ufunc/_internal.c\"],\n depends=[\"numba/np/ufunc/_ufunc.c\",\n \"numba/np/ufunc/_internal.h\",\n \"numba/_pymodule.h\"],\n **np_compile_args)\n\n ext_npyufunc_num_threads = Extension(name=\"numba.np.ufunc._num_threads\",\n sources=[\n \"numba/np/ufunc/_num_threads.c\"],\n depends=[\"numba/_pymodule.h\"],\n )\n\n ext_np_ufunc_backends = []\n\n def check_file_at_path(path2file):\n \"\"\"\n Takes a list as a path, a single glob (*) is permitted as an entry which\n indicates that expansion at this location is required (i.e. version\n might not be known).\n \"\"\"\n found = None\n path2check = [os.path.split(os.path.split(sys.executable)[0])[0]]\n path2check += [os.getenv(n, '') for n in ['CONDA_PREFIX', 'PREFIX']]\n if sys.platform.startswith('win'):\n path2check += [os.path.join(p, 'Library') for p in path2check]\n for p in path2check:\n if p:\n if '*' in path2file:\n globloc = path2file.index('*')\n searchroot = os.path.join(*path2file[:globloc])\n try:\n potential_locs = os.listdir(os.path.join(p, searchroot))\n except BaseException:\n continue\n searchfor = path2file[globloc + 1:]\n for x in potential_locs:\n potpath = os.path.join(p, searchroot, x, *searchfor)\n if os.path.isfile(potpath):\n found = p # the latest is used\n elif os.path.isfile(os.path.join(p, *path2file)):\n found = p # the latest is used\n return found\n\n # Search for Intel TBB, first check env var TBBROOT then conda locations\n tbb_root = os.getenv('TBBROOT')\n if not tbb_root:\n tbb_root = check_file_at_path(['include', 'tbb', 'tbb.h'])\n\n # Set various flags for use in TBB and openmp. 
On OSX, also find OpenMP!\n have_openmp = True\n if sys.platform.startswith('win'):\n cpp11flags = []\n ompcompileflags = ['-openmp']\n omplinkflags = []\n elif sys.platform.startswith('darwin'):\n cpp11flags = ['-std=c++11']\n # This is a bit unusual but necessary...\n # llvm (clang) OpenMP is used for headers etc at compile time\n # Intel OpenMP (libiomp5) provides the link library.\n # They are binary compatible and may not safely coexist in a process, as\n # libiomp5 is more prevalent and often linked in for NumPy it is used\n # here!\n ompcompileflags = ['-fopenmp']\n omplinkflags = ['-fopenmp=libiomp5']\n omppath = ['lib', 'clang', '*', 'include', 'omp.h']\n have_openmp = check_file_at_path(omppath)\n else:\n cpp11flags = ['-std=c++11']\n ompcompileflags = ['-fopenmp']\n if platform.machine() == 'ppc64le':\n omplinkflags = ['-fopenmp']\n else:\n omplinkflags = ['-fopenmp']\n\n if tbb_root:\n print(\"Using Intel TBB from:\", tbb_root)\n ext_np_ufunc_tbb_backend = Extension(\n name='numba.np.ufunc.tbbpool',\n sources=[\n 'numba/np/ufunc/tbbpool.cpp',\n 'numba/np/ufunc/gufunc_scheduler.cpp',\n ],\n depends=['numba/np/ufunc/workqueue.h'],\n include_dirs=[os.path.join(tbb_root, 'include')],\n extra_compile_args=cpp11flags,\n libraries=['tbb'], # TODO: if --debug or -g, use 'tbb_debug'\n library_dirs=[\n # for Linux\n os.path.join(tbb_root, 'lib', 'intel64', 'gcc4.4'),\n # for MacOS\n os.path.join(tbb_root, 'lib'),\n # for Windows\n os.path.join(tbb_root, 'lib', 'intel64', 'vc_mt'),\n ],\n )\n ext_np_ufunc_backends.append(ext_np_ufunc_tbb_backend)\n else:\n print(\"TBB not found\")\n\n # Disable OpenMP if we are building a wheel or\n # forced by user with NUMBA_NO_OPENMP=1\n if is_building_wheel() or os.getenv('NUMBA_NO_OPENMP'):\n print(\"OpenMP disabled\")\n elif have_openmp:\n print(\"Using OpenMP from:\", have_openmp)\n # OpenMP backed work queue\n ext_np_ufunc_omppool_backend = Extension(\n name='numba.np.ufunc.omppool',\n sources=[\n 'numba/np/ufunc/omppool.cpp',\n 'numba/np/ufunc/gufunc_scheduler.cpp',\n ],\n depends=['numba/np/ufunc/workqueue.h'],\n extra_compile_args=ompcompileflags + cpp11flags,\n extra_link_args=omplinkflags,\n )\n\n ext_np_ufunc_backends.append(ext_np_ufunc_omppool_backend)\n else:\n print(\"OpenMP not found\")\n\n # Build the Numba workqueue implementation irrespective of whether the TBB\n # version is built. 
Users can select a backend via env vars.\n ext_np_ufunc_workqueue_backend = Extension(\n name='numba.np.ufunc.workqueue',\n sources=['numba/np/ufunc/workqueue.c',\n 'numba/np/ufunc/gufunc_scheduler.cpp'],\n depends=['numba/np/ufunc/workqueue.h'])\n ext_np_ufunc_backends.append(ext_np_ufunc_workqueue_backend)\n\n ext_mviewbuf = Extension(name='numba.mviewbuf',\n extra_link_args=install_name_tool_fixer,\n sources=['numba/mviewbuf.c'])\n\n ext_nrt_python = Extension(name='numba.core.runtime._nrt_python',\n sources=['numba/core/runtime/_nrt_pythonmod.c',\n 'numba/core/runtime/nrt.c'],\n depends=['numba/core/runtime/nrt.h',\n 'numba/_pymodule.h',\n 'numba/core/runtime/_nrt_python.c'],\n **np_compile_args)\n\n ext_jitclass_box = Extension(name='numba.experimental.jitclass._box',\n sources=['numba/experimental/jitclass/_box.c'],\n depends=['numba/experimental/_pymodule.h'],\n )\n\n ext_cuda_extras = Extension(name='numba.cuda.cudadrv._extras',\n sources=['numba/cuda/cudadrv/_extras.c'],\n depends=['numba/_pymodule.h'],\n include_dirs=[\"numba\"])\n\n ext_modules = [ext_dynfunc, ext_dispatcher, ext_helperlib, ext_typeconv,\n ext_np_ufunc, ext_npyufunc_num_threads, ext_mviewbuf,\n ext_nrt_python, ext_jitclass_box, ext_cuda_extras]\n\n ext_modules += ext_np_ufunc_backends\n\n return ext_modules\n\n\npackages = find_packages(include=[\"numba\", \"numba.*\"])\n\nbuild_requires = [f'numpy >={min_numpy_build_version}']\ninstall_requires = [\n f'llvmlite >={min_llvmlite_version},<={max_llvmlite_version}',\n f'numpy >={min_numpy_run_version}',\n 'setuptools',\n]\n\nmetadata = dict(\n name='numba',\n description=\"compiling Python code using LLVM\",\n version=versioneer.get_version(),\n classifiers=[\n \"Development Status :: 4 - Beta\",\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: BSD License\",\n \"Operating System :: OS Independent\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Topic :: Software Development :: Compilers\",\n ],\n package_data={\n # HTML templates for type annotations\n \"numba.core.annotations\": [\"*.html\"],\n # Various test data\n \"numba.cuda.tests.cudadrv.data\": [\"*.ptx\"],\n \"numba.tests\": [\"pycc_distutils_usecase/*.py\"],\n # Some C files are needed by pycc\n \"numba\": [\"*.c\", \"*.h\"],\n \"numba.pycc\": [\"*.c\", \"*.h\"],\n \"numba.core.runtime\": [\"*.c\", \"*.h\"],\n \"numba.cext\": [\"*.c\", \"*.h\"],\n # numba gdb hook init command language file\n \"numba.misc\": [\"cmdlang.gdb\"],\n },\n scripts=[\"numba/pycc/pycc\", \"bin/numba\"],\n author=\"Anaconda, Inc.\",\n author_email=\"[email protected]\",\n url=\"http://numba.github.com\",\n packages=packages,\n setup_requires=build_requires,\n install_requires=install_requires,\n python_requires=f\">={min_python_version}\",\n license=\"BSD\",\n cmdclass=cmdclass,\n)\n\nwith open('README.rst') as f:\n metadata['long_description'] = f.read()\n\nif is_building():\n metadata['ext_modules'] = get_ext_modules()\n\nsetup(**metadata)\n", "path": "setup.py" } ]
diff --git a/README.rst b/README.rst index e56331f9bda..9be382775d2 100644 --- a/README.rst +++ b/README.rst @@ -41,7 +41,7 @@ Dependencies ============ * Python versions: 3.6-3.8 -* llvmlite 0.31.* +* llvmlite 0.32.* * NumPy >=1.15 (can build with 1.11 for ABI compatibility) Optionally: diff --git a/buildscripts/condarecipe.local/meta.yaml b/buildscripts/condarecipe.local/meta.yaml index 40a411e6341..cad87a5c872 100644 --- a/buildscripts/condarecipe.local/meta.yaml +++ b/buildscripts/condarecipe.local/meta.yaml @@ -32,7 +32,7 @@ requirements: - numpy - setuptools # On channel https://anaconda.org/numba/ - - llvmlite 0.31.* + - llvmlite >=0.31,<0.33 # TBB devel version is to match TBB libs - tbb-devel >=2019.5 # [not (armv6l or armv7l or aarch64 or linux32)] run: @@ -40,7 +40,7 @@ requirements: - numpy >=1.15 - setuptools # On channel https://anaconda.org/numba/ - - llvmlite 0.31.* + - llvmlite >=0.31,<0.33 run_constrained: # If TBB is present it must be at least this version from Anaconda due to # build flag issues triggering UB diff --git a/requirements.txt b/requirements.txt index 00c7dcbd001..6ae43a42963 100644 --- a/requirements.txt +++ b/requirements.txt @@ -1,4 +1,4 @@ setuptools numpy>=1.10 -llvmlite==0.31.* +llvmlite>=0.31,<0.33 argparse diff --git a/setup.py b/setup.py index 72ddf36e368..d9af1861140 100644 --- a/setup.py +++ b/setup.py @@ -12,7 +12,7 @@ min_numpy_build_version = "1.11" min_numpy_run_version = "1.15" min_llvmlite_version = "0.31.0.dev0" -max_llvmlite_version = "0.32.0.dev0" +max_llvmlite_version = "0.33.0.dev0" if sys.platform.startswith('linux'): # Patch for #2555 to make wheels without libpython
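As a small standalone snippet (hypothetical, but mirroring the `install_requires` line in the `setup.py` shown above), the bumped maximum widens the accepted range so llvmlite 0.32.x releases satisfy the pin:

```python
# Reproduce the requirement string that setup.py builds from its constants.
min_llvmlite_version = "0.31.0.dev0"
max_llvmlite_version = "0.33.0.dev0"  # bumped from 0.32.0.dev0

requirement = "llvmlite >={},<={}".format(min_llvmlite_version, max_llvmlite_version)
print(requirement)  # -> llvmlite >=0.31.0.dev0,<=0.33.0.dev0
```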
sopel-irc__sopel-1941
settings: Bot assumes `+` for modes

The `core.modes` setting is assumed to be just a string of letters, and the leading `+` is assumed by `coretasks`: https://github.com/sopel-irc/sopel/blob/a33caf15090d61b90dc831f55cc195e56185dad3/sopel/coretasks.py#L155-L156

@cottongin rightly pointed out on IRC that sometimes it's desirable to _remove_ modes the IRC server sets by default. While some IRCds will happily accept `MODE nickname +-abc` (like freenode's), it's not a universal workaround.

I'm happy to add this to 7.1 because 1) it's a pretty trivial change to implement and 2) there's an obvious backward-compatible way to parse the setting.

Proposal: Add the leading `+` automatically only if the `core.modes` setting doesn't contain a prefix character (`+` or `-`).
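A minimal sketch of that proposal, using a hypothetical helper name (`normalize_modes` is not from the sopel code base): the leading `+` is added only when the configured string has no explicit prefix, so existing configs keep working unchanged.

```python
def normalize_modes(modes):
    """Prepend '+' only when the configured string has no explicit prefix."""
    if modes and not modes.startswith(("+", "-")):
        modes = "+" + modes
    return modes

assert normalize_modes("B") == "+B"       # legacy bare-letters style still works
assert normalize_modes("+B-x") == "+B-x"  # explicit prefixes pass through untouched
assert normalize_modes("-R") == "-R"
assert normalize_modes("") == ""
```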
[ { "content": "# coding=utf-8\n\"\"\"Tasks that allow the bot to run, but aren't user-facing functionality\n\nThis is written as a module to make it easier to extend to support more\nresponses to standard IRC codes without having to shove them all into the\ndispatch function in bot.py and making it easier to maintain.\n\"\"\"\n# Copyright 2008-2011, Sean B. Palmer (inamidst.com) and Michael Yanovich\n# (yanovich.net)\n# Copyright © 2012, Elad Alfassa <[email protected]>\n# Copyright 2012-2015, Elsie Powell embolalia.com\n# Copyright 2019, Florian Strzelecki <[email protected]>\n#\n# Licensed under the Eiffel Forum License 2.\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nimport base64\nimport collections\nimport datetime\nimport functools\nimport logging\nfrom random import randint\nimport re\nimport sys\nimport time\n\nfrom sopel import loader, module, plugin\nfrom sopel.irc import isupport\nfrom sopel.irc.utils import CapReq, MyInfo\nfrom sopel.tools import events, Identifier, iteritems, target, web\n\n\nif sys.version_info.major >= 3:\n unicode = str\n\nLOGGER = logging.getLogger(__name__)\n\nbatched_caps = {}\nwho_reqs = {} # Keeps track of reqs coming from this module, rather than others\n\n\ndef setup(bot):\n bot.memory['join_events_queue'] = collections.deque()\n\n # Manage JOIN flood protection\n if bot.settings.core.throttle_join:\n wait_interval = max(bot.settings.core.throttle_wait, 1)\n\n @module.interval(wait_interval)\n @plugin.label('throttle_join')\n def processing_job(bot):\n _join_event_processing(bot)\n\n loader.clean_callable(processing_job, bot.settings)\n processing_job.plugin_name = 'coretasks'\n\n bot.register_jobs([processing_job])\n\n\ndef shutdown(bot):\n try:\n bot.memory['join_events_queue'].clear()\n except KeyError:\n pass\n\n\ndef _join_event_processing(bot):\n \"\"\"Process a batch of JOIN event from the ``join_events_queue`` queue.\n\n Every time this function is executed, it processes at most\n ``throttle_join`` JOIN events. For each JOIN, it sends a WHO request to\n know more about the channel. 
This will prevent an excess of flood when\n there are too many channels to join at once.\n \"\"\"\n batch_size = max(bot.settings.core.throttle_join, 1)\n for _ in range(batch_size):\n try:\n channel = bot.memory['join_events_queue'].popleft()\n except IndexError:\n break\n LOGGER.debug('Sending WHO after channel JOIN: %s', channel)\n _send_who(bot, channel)\n\n\ndef auth_after_register(bot):\n \"\"\"Do NickServ/AuthServ auth\"\"\"\n if bot.config.core.auth_method:\n auth_method = bot.config.core.auth_method\n auth_username = bot.config.core.auth_username\n auth_password = bot.config.core.auth_password\n auth_target = bot.config.core.auth_target\n elif bot.config.core.nick_auth_method:\n auth_method = bot.config.core.nick_auth_method\n auth_username = (bot.config.core.nick_auth_username or\n bot.config.core.nick)\n auth_password = bot.config.core.nick_auth_password\n auth_target = bot.config.core.nick_auth_target\n else:\n return\n\n if auth_method == 'nickserv':\n bot.say('IDENTIFY %s' % auth_password, auth_target or 'NickServ')\n elif auth_method == 'authserv':\n bot.write(('AUTHSERV auth', auth_username + ' ' + auth_password))\n elif auth_method == 'Q':\n bot.write(('AUTH', auth_username + ' ' + auth_password))\n elif auth_method == 'userserv':\n bot.say(\"LOGIN %s %s\" % (auth_username, auth_password),\n auth_target or 'UserServ')\n\n\ndef _execute_perform(bot):\n \"\"\"Execute commands specified to perform on IRC server connect.\"\"\"\n if not bot.connection_registered:\n # How did you even get this command, bot?\n raise Exception('Bot must be connected to server to perform commands.')\n\n LOGGER.debug('{} commands to execute:'.format(len(bot.config.core.commands_on_connect)))\n for i, command in enumerate(bot.config.core.commands_on_connect):\n command = command.replace('$nickname', bot.config.core.nick)\n LOGGER.debug(command)\n bot.write((command,))\n\n\[email protected]_privmsg(\"This command only works as a private message.\")\[email protected]_admin(\"This command requires admin privileges.\")\[email protected]('execute')\ndef execute_perform(bot, trigger):\n \"\"\"Execute commands specified to perform on IRC server connect.\"\"\"\n _execute_perform(bot)\n\n\[email protected]('high')\[email protected](events.RPL_WELCOME, events.RPL_LUSERCLIENT)\[email protected](False)\[email protected]\ndef startup(bot, trigger):\n \"\"\"Do tasks related to connecting to the network.\n\n 001 RPL_WELCOME is from RFC2812 and is the first message that is sent after\n the connection has been registered on the network.\n\n 251 RPL_LUSERCLIENT is a mandatory message that is sent after client\n connects to the server in rfc1459. RFC2812 does not require it and all\n networks might not send it. 
We support both.\n\n \"\"\"\n if bot.connection_registered:\n return\n\n bot.connection_registered = True\n\n auth_after_register(bot)\n\n modes = bot.config.core.modes\n bot.write(('MODE', '%s +%s' % (bot.nick, modes)))\n\n bot.memory['retry_join'] = dict()\n\n channels = bot.config.core.channels\n if not channels:\n LOGGER.info('No initial channels to JOIN.')\n elif bot.config.core.throttle_join:\n throttle_rate = int(bot.config.core.throttle_join)\n throttle_wait = max(bot.config.core.throttle_wait, 1)\n channels_joined = 0\n\n LOGGER.info(\n 'Joining %d channels (with JOIN throttle ON); '\n 'this may take a moment.',\n len(channels))\n\n for channel in channels:\n channels_joined += 1\n if not channels_joined % throttle_rate:\n LOGGER.debug(\n 'Waiting %ds before next JOIN batch.',\n throttle_wait)\n time.sleep(throttle_wait)\n bot.join(channel)\n else:\n LOGGER.info(\n 'Joining %d channels (with JOIN throttle OFF); '\n 'this may take a moment.',\n len(channels))\n\n for channel in bot.config.core.channels:\n bot.join(channel)\n\n if (not bot.config.core.owner_account and\n 'account-tag' in bot.enabled_capabilities and\n '@' not in bot.config.core.owner):\n msg = (\n \"This network supports using network services to identify you as \"\n \"my owner, rather than just matching your nickname. This is much \"\n \"more secure. If you'd like to do this, make sure you're logged in \"\n \"and reply with \\\"{}useserviceauth\\\"\"\n ).format(bot.config.core.help_prefix)\n bot.say(msg, bot.config.core.owner)\n\n _execute_perform(bot)\n\n\[email protected]('high')\[email protected](events.RPL_ISUPPORT)\[email protected](False)\[email protected]\[email protected]('are supported by this server')\ndef handle_isupport(bot, trigger):\n \"\"\"Handle ``RPL_ISUPPORT`` events.\"\"\"\n parameters = {}\n for arg in trigger.args:\n try:\n key, value = isupport.parse_parameter(arg)\n parameters[key] = value\n except ValueError:\n # ignore malformed parameter: log a warning and continue\n LOGGER.warning('Unable to parse ISUPPORT parameter: %r', arg)\n\n bot._isupport = bot._isupport.apply(**parameters)\n\n\[email protected]('high')\[email protected](events.RPL_MYINFO)\[email protected](False)\[email protected]\ndef parse_reply_myinfo(bot, trigger):\n \"\"\"Handle ``RPL_MYINFO`` events.\"\"\"\n # keep <client> <servername> <version> only\n # the trailing parameters (mode types) should be read from ISUPPORT\n bot._myinfo = MyInfo(*trigger.args[0:3])\n\n\[email protected]_privmsg()\[email protected]_owner()\[email protected]('useserviceauth')\ndef enable_service_auth(bot, trigger):\n if bot.config.core.owner_account:\n return\n if 'account-tag' not in bot.enabled_capabilities:\n bot.say('This server does not fully support services auth, so this '\n 'command is not available.')\n return\n if not trigger.account:\n bot.say('You must be logged in to network services before using this '\n 'command.')\n return\n bot.config.core.owner_account = trigger.account\n bot.config.save()\n bot.say('Success! I will now use network services to identify you as my '\n 'owner.')\n\n\[email protected](events.ERR_NOCHANMODES)\[email protected]('high')\ndef retry_join(bot, trigger):\n \"\"\"Give NickServ enough time to identify on a +R channel.\n\n Give NickServ enough time to identify, and retry rejoining an\n identified-only (+R) channel. 
Maximum of ten rejoin attempts.\n \"\"\"\n channel = trigger.args[1]\n if channel in bot.memory['retry_join'].keys():\n bot.memory['retry_join'][channel] += 1\n if bot.memory['retry_join'][channel] > 10:\n LOGGER.warning('Failed to join %s after 10 attempts.', channel)\n return\n else:\n bot.memory['retry_join'][channel] = 0\n bot.join(channel)\n return\n\n time.sleep(6)\n bot.join(channel)\n\n\[email protected]('(.*)')\[email protected](events.RPL_NAMREPLY)\[email protected]('high')\[email protected](False)\[email protected]\ndef handle_names(bot, trigger):\n \"\"\"Handle NAMES response, happens when joining to channels.\"\"\"\n names = trigger.split()\n\n # TODO specific to one channel type. See issue 281.\n channels = re.search(r'(#\\S*)', trigger.raw)\n if not channels:\n return\n channel = Identifier(channels.group(1))\n if channel not in bot.privileges:\n bot.privileges[channel] = dict()\n if channel not in bot.channels:\n bot.channels[channel] = target.Channel(channel)\n\n # This could probably be made flexible in the future, but I don't think\n # it'd be worth it.\n # If this ever needs to be updated, remember to change the mode handling in\n # the WHO-handler functions below, too.\n mapping = {\n \"+\": module.VOICE,\n \"%\": module.HALFOP,\n \"@\": module.OP,\n \"&\": module.ADMIN,\n \"~\": module.OWNER,\n \"!\": module.OPER,\n }\n\n for name in names:\n priv = 0\n for prefix, value in iteritems(mapping):\n if prefix in name:\n priv = priv | value\n nick = Identifier(name.lstrip(''.join(mapping.keys())))\n bot.privileges[channel][nick] = priv\n user = bot.users.get(nick)\n if user is None:\n # It's not possible to set the username/hostname from info received\n # in a NAMES reply, unfortunately.\n # Fortunately, the user should already exist in bot.users by the\n # time this code runs, so this is 99.9% ass-covering.\n user = target.User(nick, None, None)\n bot.users[nick] = user\n bot.channels[channel].add_user(user, privs=priv)\n\n\[email protected]('(.*)')\[email protected]('MODE')\[email protected]('high')\[email protected](False)\[email protected]\ndef track_modes(bot, trigger):\n \"\"\"Track usermode changes and keep our lists of ops up to date.\"\"\"\n # Mode message format: <channel> *( ( \"-\" / \"+\" ) *<modes> *<modeparams> )\n if len(trigger.args) < 3:\n # We need at least [channel, mode, nickname] to do anything useful\n # MODE messages with fewer args won't help us\n LOGGER.debug(\"Received an apparently useless MODE message: {}\"\n .format(trigger.raw))\n return\n # Our old MODE parsing code checked if any of the args was empty.\n # Somewhere around here would be a good place to re-implement that if it's\n # actually necessary to guard against some non-compliant IRCd. 
But for now\n # let's just log malformed lines to the debug log.\n if not all(trigger.args):\n LOGGER.debug(\"The server sent a possibly malformed MODE message: {}\"\n .format(trigger.raw))\n\n # From here on, we will make a (possibly dangerous) assumption that the\n # received MODE message is more-or-less compliant\n channel = Identifier(trigger.args[0])\n # If the first character of where the mode is being set isn't a #\n # then it's a user mode, not a channel mode, so we'll ignore it.\n # TODO: Handle CHANTYPES from ISUPPORT numeric (005)\n # (Actually, most of this function should be rewritten again when we parse\n # ISUPPORT...)\n if channel.is_nick():\n return\n\n modestring = trigger.args[1]\n nicks = [Identifier(nick) for nick in trigger.args[2:]]\n\n mapping = {\n \"v\": module.VOICE,\n \"h\": module.HALFOP,\n \"o\": module.OP,\n \"a\": module.ADMIN,\n \"q\": module.OWNER,\n \"y\": module.OPER,\n \"Y\": module.OPER,\n }\n\n # Parse modes before doing anything else\n modes = []\n sign = ''\n for char in modestring:\n # There was a comment claiming IRC allows e.g. MODE +aB-c foo, but it\n # doesn't seem to appear in any RFCs. But modern.ircdocs.horse shows\n # it, so we'll leave in the extra parsing for now.\n if char in '+-':\n sign = char\n elif char in mapping:\n # Filter out unexpected modes and hope they don't have parameters\n modes.append(sign + char)\n\n # Try to map modes to arguments, after sanity-checking\n if len(modes) != len(nicks) or not all([nick.is_nick() for nick in nicks]):\n # Something fucky happening, like unusual batching of non-privilege\n # modes together with the ones we expect. Way easier to just re-WHO\n # than try to account for non-standard parameter-taking modes.\n LOGGER.debug('Sending WHO for channel: %s', channel)\n _send_who(bot, channel)\n return\n\n for (mode, nick) in zip(modes, nicks):\n priv = bot.channels[channel].privileges.get(nick, 0)\n # Log a warning if the two privilege-tracking data structures\n # get out of sync. That should never happen.\n # This is a good place to verify that bot.channels is doing\n # what it's supposed to do before ultimately removing the old,\n # deprecated bot.privileges structure completely.\n ppriv = bot.privileges[channel].get(nick, 0)\n if priv != ppriv:\n LOGGER.warning(\"Privilege data error! Please share Sopel's\"\n \"raw log with the developers, if enabled. \"\n \"(Expected {} == {} for {} in {}.)\"\n .format(priv, ppriv, nick, channel))\n value = mapping.get(mode[1])\n if value is not None:\n if mode[0] == '+':\n priv = priv | value\n else:\n priv = priv & ~value\n bot.privileges[channel][nick] = priv\n bot.channels[channel].privileges[nick] = priv\n\n\[email protected]('NICK')\[email protected]('high')\[email protected](False)\[email protected]\ndef track_nicks(bot, trigger):\n \"\"\"Track nickname changes and maintain our chanops list accordingly.\"\"\"\n old = trigger.nick\n new = Identifier(trigger)\n\n # Give debug mssage, and PM the owner, if the bot's own nick changes.\n if old == bot.nick and new != bot.nick:\n privmsg = (\n \"Hi, I'm your bot, %s. Something has made my nick change. This \"\n \"can cause some problems for me, and make me do weird things. \"\n \"You'll probably want to restart me, and figure out what made \"\n \"that happen so you can stop it happening again. (Usually, it \"\n \"means you tried to give me a nick that's protected by NickServ.)\"\n ) % bot.nick\n debug_msg = (\n \"Nick changed by server. This can cause unexpected behavior. 
\"\n \"Please restart the bot.\"\n )\n LOGGER.critical(debug_msg)\n bot.say(privmsg, bot.config.core.owner)\n return\n\n for channel in bot.privileges:\n channel = Identifier(channel)\n if old in bot.privileges[channel]:\n value = bot.privileges[channel].pop(old)\n bot.privileges[channel][new] = value\n\n for channel in bot.channels.values():\n channel.rename_user(old, new)\n if old in bot.users:\n bot.users[new] = bot.users.pop(old)\n\n\[email protected]('(.*)')\[email protected]('PART')\[email protected]('high')\[email protected](False)\[email protected]\ndef track_part(bot, trigger):\n nick = trigger.nick\n channel = trigger.sender\n _remove_from_channel(bot, nick, channel)\n\n\[email protected]('KICK')\[email protected]('high')\[email protected](False)\[email protected]\ndef track_kick(bot, trigger):\n nick = Identifier(trigger.args[1])\n channel = trigger.sender\n _remove_from_channel(bot, nick, channel)\n\n\ndef _remove_from_channel(bot, nick, channel):\n if nick == bot.nick:\n bot.privileges.pop(channel, None)\n bot.channels.pop(channel, None)\n\n lost_users = []\n for nick_, user in bot.users.items():\n user.channels.pop(channel, None)\n if not user.channels:\n lost_users.append(nick_)\n for nick_ in lost_users:\n bot.users.pop(nick_, None)\n else:\n bot.privileges[channel].pop(nick, None)\n\n user = bot.users.get(nick)\n if user and channel in user.channels:\n bot.channels[channel].clear_user(nick)\n if not user.channels:\n bot.users.pop(nick, None)\n\n\ndef _whox_enabled(bot):\n # Either privilege tracking or away notification. For simplicity, both\n # account notify and extended join must be there for account tracking.\n return (('account-notify' in bot.enabled_capabilities and\n 'extended-join' in bot.enabled_capabilities) or\n 'away-notify' in bot.enabled_capabilities)\n\n\ndef _send_who(bot, channel):\n if _whox_enabled(bot):\n # WHOX syntax, see http://faerion.sourceforge.net/doc/irc/whox.var\n # Needed for accounts in who replies. 
The random integer is a param\n # to identify the reply as one from this command, because if someone\n # else sent it, we have no fucking way to know what the format is.\n rand = str(randint(0, 999))\n while rand in who_reqs:\n rand = str(randint(0, 999))\n who_reqs[rand] = channel\n bot.write(['WHO', channel, 'a%nuachtf,' + rand])\n else:\n # We might be on an old network, but we still care about keeping our\n # user list updated\n bot.write(['WHO', channel])\n bot.channels[Identifier(channel)].last_who = datetime.datetime.utcnow()\n\n\[email protected](30)\ndef _periodic_send_who(bot):\n \"\"\"Periodically send a WHO request to keep user information up-to-date.\"\"\"\n if 'away-notify' in bot.enabled_capabilities:\n # WHO not needed to update 'away' status\n return\n\n # Loops through the channels to find the one that has the longest time since the last WHO\n # request, and issues a WHO request only if the last request for the channel was more than\n # 120 seconds ago.\n who_trigger_time = datetime.datetime.utcnow() - datetime.timedelta(seconds=120)\n selected_channel = None\n for channel_name, channel in bot.channels.items():\n if channel.last_who is None:\n # WHO was never sent yet to this channel: stop here\n selected_channel = channel_name\n break\n if channel.last_who < who_trigger_time:\n # this channel's last who request is the most outdated one at the moment\n selected_channel = channel_name\n who_trigger_time = channel.last_who\n\n if selected_channel is not None:\n # selected_channel's last who is either none or the oldest valid\n LOGGER.debug('Sending WHO for channel: %s', selected_channel)\n _send_who(bot, selected_channel)\n\n\[email protected]('JOIN')\[email protected]('high')\[email protected](False)\[email protected]\ndef track_join(bot, trigger):\n channel = trigger.sender\n\n # is it a new channel?\n if channel not in bot.channels:\n LOGGER.info('Channel joined: %s', channel)\n bot.privileges[channel] = dict()\n bot.channels[channel] = target.Channel(channel)\n\n # did *we* just join?\n if trigger.nick == bot.nick:\n if bot.settings.core.throttle_join:\n LOGGER.debug('JOIN event added to queue for channel: %s', channel)\n bot.memory['join_events_queue'].append(channel)\n else:\n LOGGER.debug('Send direct WHO for channel: %s', channel)\n _send_who(bot, channel)\n\n # set initial values\n bot.privileges[channel][trigger.nick] = 0\n\n user = bot.users.get(trigger.nick)\n if user is None:\n user = target.User(trigger.nick, trigger.user, trigger.host)\n bot.users[trigger.nick] = user\n bot.channels[channel].add_user(user)\n\n if len(trigger.args) > 1 and trigger.args[1] != '*' and (\n 'account-notify' in bot.enabled_capabilities and\n 'extended-join' in bot.enabled_capabilities):\n user.account = trigger.args[1]\n\n\[email protected]('QUIT')\[email protected]('high')\[email protected](False)\[email protected]\ndef track_quit(bot, trigger):\n for chanprivs in bot.privileges.values():\n chanprivs.pop(trigger.nick, None)\n for channel in bot.channels.values():\n channel.clear_user(trigger.nick)\n bot.users.pop(trigger.nick, None)\n\n\[email protected]('CAP')\[email protected](False)\[email protected]('high')\[email protected]\ndef receive_cap_list(bot, trigger):\n cap = trigger.strip('-=~')\n # Server is listing capabilities\n if trigger.args[1] == 'LS':\n receive_cap_ls_reply(bot, trigger)\n # Server denied CAP REQ\n elif trigger.args[1] == 'NAK':\n entry = bot._cap_reqs.get(cap, None)\n # If it was requested with bot.cap_req\n if entry:\n for req in entry:\n # And that 
request was mandatory/prohibit, and a callback was\n # provided\n if req.prefix and req.failure:\n # Call it.\n req.failure(bot, req.prefix + cap)\n # Server is removing a capability\n elif trigger.args[1] == 'DEL':\n entry = bot._cap_reqs.get(cap, None)\n # If it was requested with bot.cap_req\n if entry:\n for req in entry:\n # And that request wasn't prohibit, and a callback was\n # provided\n if req.prefix != '-' and req.failure:\n # Call it.\n req.failure(bot, req.prefix + cap)\n # Server is adding new capability\n elif trigger.args[1] == 'NEW':\n entry = bot._cap_reqs.get(cap, None)\n # If it was requested with bot.cap_req\n if entry:\n for req in entry:\n # And that request wasn't prohibit\n if req.prefix != '-':\n # Request it\n bot.write(('CAP', 'REQ', req.prefix + cap))\n # Server is acknowledging a capability\n elif trigger.args[1] == 'ACK':\n caps = trigger.args[2].split()\n for cap in caps:\n cap.strip('-~= ')\n bot.enabled_capabilities.add(cap)\n entry = bot._cap_reqs.get(cap, [])\n for req in entry:\n if req.success:\n req.success(bot, req.prefix + trigger)\n if cap == 'sasl': # TODO why is this not done with bot.cap_req?\n receive_cap_ack_sasl(bot)\n\n\ndef receive_cap_ls_reply(bot, trigger):\n if bot.server_capabilities:\n # We've already seen the results, so someone sent CAP LS from a plugin.\n # We're too late to do SASL, and we don't want to send CAP END before\n # the plugin has done what it needs to, so just return\n return\n\n for cap in trigger.split():\n c = cap.split('=')\n if len(c) == 2:\n batched_caps[c[0]] = c[1]\n else:\n batched_caps[c[0]] = None\n\n # Not the last in a multi-line reply. First two args are * and LS.\n if trigger.args[2] == '*':\n return\n\n bot.server_capabilities = batched_caps\n\n # If some other plugin requests it, we don't need to add another request.\n # If some other plugin prohibits it, we shouldn't request it.\n core_caps = [\n 'echo-message',\n 'multi-prefix',\n 'away-notify',\n 'cap-notify',\n 'server-time',\n ]\n for cap in core_caps:\n if cap not in bot._cap_reqs:\n bot._cap_reqs[cap] = [CapReq('', 'coretasks')]\n\n def acct_warn(bot, cap):\n LOGGER.info('Server does not support %s, or it conflicts with a custom '\n 'plugin. User account validation unavailable or limited.',\n cap[1:])\n if bot.config.core.owner_account or bot.config.core.admin_accounts:\n LOGGER.warning(\n 'Owner or admin accounts are configured, but %s is not '\n 'supported by the server. This may cause unexpected behavior.',\n cap[1:])\n auth_caps = ['account-notify', 'extended-join', 'account-tag']\n for cap in auth_caps:\n if cap not in bot._cap_reqs:\n bot._cap_reqs[cap] = [CapReq('', 'coretasks', acct_warn)]\n\n for cap, reqs in iteritems(bot._cap_reqs):\n # At this point, we know mandatory and prohibited don't co-exist, but\n # we need to call back for optionals if they're also prohibited\n prefix = ''\n for entry in reqs:\n if prefix == '-' and entry.prefix != '-':\n entry.failure(bot, entry.prefix + cap)\n continue\n if entry.prefix:\n prefix = entry.prefix\n\n # It's not required, or it's supported, so we can request it\n if prefix != '=' or cap in bot.server_capabilities:\n # REQs fail as a whole, so we send them one capability at a time\n bot.write(('CAP', 'REQ', entry.prefix + cap))\n # If it's required but not in server caps, we need to call all the\n # callbacks\n else:\n for entry in reqs:\n if entry.failure and entry.prefix == '=':\n entry.failure(bot, entry.prefix + cap)\n\n # If we want to do SASL, we have to wait before we can send CAP END. 
So if\n # we are, wait on 903 (SASL successful) to send it.\n if bot.config.core.auth_method == 'sasl' or bot.config.core.server_auth_method == 'sasl':\n bot.write(('CAP', 'REQ', 'sasl'))\n else:\n bot.write(('CAP', 'END'))\n\n\ndef receive_cap_ack_sasl(bot):\n # Presumably we're only here if we said we actually *want* sasl, but still\n # check anyway.\n password = None\n mech = None\n if bot.config.core.auth_method == 'sasl':\n password = bot.config.core.auth_password\n mech = bot.config.core.auth_target\n elif bot.config.core.server_auth_method == 'sasl':\n password = bot.config.core.server_auth_password\n mech = bot.config.core.server_auth_sasl_mech\n if not password:\n return\n mech = mech or 'PLAIN'\n bot.write(('AUTHENTICATE', mech))\n\n\ndef send_authenticate(bot, token):\n \"\"\"Send ``AUTHENTICATE`` command to server with the given ``token``.\n\n :param bot: instance of IRC bot that must authenticate\n :param str token: authentication token\n\n In case the ``token`` is more than 400 bytes, we need to split it and send\n as many ``AUTHENTICATE`` commands as needed. If the last chunk is 400 bytes\n long, we must also send a last empty command (`AUTHENTICATE +` is for empty\n line), so the server knows we are done with ``AUTHENTICATE``.\n\n .. seealso::\n\n https://ircv3.net/specs/extensions/sasl-3.1.html#the-authenticate-command\n\n \"\"\"\n # payload is a base64 encoded token\n payload = base64.b64encode(token.encode('utf-8'))\n\n # split the payload into chunks of at most 400 bytes\n chunk_size = 400\n for i in range(0, len(payload), chunk_size):\n offset = i + chunk_size\n chunk = payload[i:offset]\n bot.write(('AUTHENTICATE', chunk))\n\n # send empty (+) AUTHENTICATE when payload's length is a multiple of 400\n if len(payload) % chunk_size == 0:\n bot.write(('AUTHENTICATE', '+'))\n\n\[email protected]('AUTHENTICATE')\ndef auth_proceed(bot, trigger):\n if trigger.args[0] != '+':\n # How did we get here? I am not good with computer.\n return\n # Is this right?\n if bot.config.core.auth_method == 'sasl':\n sasl_username = bot.config.core.auth_username\n sasl_password = bot.config.core.auth_password\n elif bot.config.core.server_auth_method == 'sasl':\n sasl_username = bot.config.core.server_auth_username\n sasl_password = bot.config.core.server_auth_password\n else:\n return\n sasl_username = sasl_username or bot.nick\n sasl_token = '\\0'.join((sasl_username, sasl_username, sasl_password))\n send_authenticate(bot, sasl_token)\n\n\[email protected](events.RPL_SASLSUCCESS)\ndef sasl_success(bot, trigger):\n bot.write(('CAP', 'END'))\n\n\n# Live blocklist editing\n\n\[email protected]('blocks')\[email protected]('low')\[email protected](False)\[email protected]\[email protected]_admin\ndef blocks(bot, trigger):\n \"\"\"\n Manage Sopel's blocking features.\\\n See [ignore system documentation]({% link _usage/ignoring-people.md %}).\n\n \"\"\"\n STRINGS = {\n \"success_del\": \"Successfully deleted block: %s\",\n \"success_add\": \"Successfully added block: %s\",\n \"no_nick\": \"No matching nick block found for: %s\",\n \"no_host\": \"No matching hostmask block found for: %s\",\n \"invalid\": \"Invalid format for %s a block. 
Try: .blocks add (nick|hostmask) sopel\",\n \"invalid_display\": \"Invalid input for displaying blocks.\",\n \"nonelisted\": \"No %s listed in the blocklist.\",\n 'huh': \"I could not figure out what you wanted to do.\",\n }\n\n masks = set(s for s in bot.config.core.host_blocks if s != '')\n nicks = set(Identifier(nick)\n for nick in bot.config.core.nick_blocks\n if nick != '')\n text = trigger.group().split()\n\n if len(text) == 3 and text[1] == \"list\":\n if text[2] == \"hostmask\":\n if len(masks) > 0:\n blocked = ', '.join(unicode(mask) for mask in masks)\n bot.say(\"Blocked hostmasks: {}\".format(blocked))\n else:\n bot.reply(STRINGS['nonelisted'] % ('hostmasks'))\n elif text[2] == \"nick\":\n if len(nicks) > 0:\n blocked = ', '.join(unicode(nick) for nick in nicks)\n bot.say(\"Blocked nicks: {}\".format(blocked))\n else:\n bot.reply(STRINGS['nonelisted'] % ('nicks'))\n else:\n bot.reply(STRINGS['invalid_display'])\n\n elif len(text) == 4 and text[1] == \"add\":\n if text[2] == \"nick\":\n nicks.add(text[3])\n bot.config.core.nick_blocks = nicks\n bot.config.save()\n elif text[2] == \"hostmask\":\n masks.add(text[3].lower())\n bot.config.core.host_blocks = list(masks)\n else:\n bot.reply(STRINGS['invalid'] % (\"adding\"))\n return\n\n bot.reply(STRINGS['success_add'] % (text[3]))\n\n elif len(text) == 4 and text[1] == \"del\":\n if text[2] == \"nick\":\n if Identifier(text[3]) not in nicks:\n bot.reply(STRINGS['no_nick'] % (text[3]))\n return\n nicks.remove(Identifier(text[3]))\n bot.config.core.nick_blocks = [unicode(n) for n in nicks]\n bot.config.save()\n bot.reply(STRINGS['success_del'] % (text[3]))\n elif text[2] == \"hostmask\":\n mask = text[3].lower()\n if mask not in masks:\n bot.reply(STRINGS['no_host'] % (text[3]))\n return\n masks.remove(mask)\n bot.config.core.host_blocks = [unicode(m) for m in masks]\n bot.config.save()\n bot.reply(STRINGS['success_del'] % (text[3]))\n else:\n bot.reply(STRINGS['invalid'] % (\"deleting\"))\n return\n else:\n bot.reply(STRINGS['huh'])\n\n\[email protected]('ACCOUNT')\ndef account_notify(bot, trigger):\n if trigger.nick not in bot.users:\n bot.users[trigger.nick] = target.User(\n trigger.nick, trigger.user, trigger.host)\n account = trigger.args[0]\n if account == '*':\n account = None\n bot.users[trigger.nick].account = account\n\n\[email protected](events.RPL_WHOSPCRPL)\[email protected]('high')\[email protected]\ndef recv_whox(bot, trigger):\n if len(trigger.args) < 2 or trigger.args[1] not in who_reqs:\n # Ignored, some plugin probably called WHO\n return\n if len(trigger.args) != 8:\n return LOGGER.warning('While populating `bot.accounts` a WHO response was malformed.')\n _, _, channel, user, host, nick, status, account = trigger.args\n away = 'G' in status\n modes = ''.join([c for c in status if c in '~&@%+!'])\n _record_who(bot, channel, user, host, nick, account, away, modes)\n\n\ndef _record_who(bot, channel, user, host, nick, account=None, away=None, modes=None):\n nick = Identifier(nick)\n channel = Identifier(channel)\n if nick not in bot.users:\n usr = target.User(nick, user, host)\n bot.users[nick] = usr\n else:\n usr = bot.users[nick]\n # check for & fill in sparse User added by handle_names()\n if usr.host is None and host:\n usr.host = host\n if usr.user is None and user:\n usr.user = user\n if account == '0':\n usr.account = None\n else:\n usr.account = account\n if away is not None:\n usr.away = away\n priv = 0\n if modes:\n mapping = {\n \"+\": module.VOICE,\n \"%\": module.HALFOP,\n \"@\": module.OP,\n \"&\": 
module.ADMIN,\n \"~\": module.OWNER,\n \"!\": module.OPER,\n }\n for c in modes:\n priv = priv | mapping[c]\n if channel not in bot.channels:\n bot.channels[channel] = target.Channel(channel)\n bot.channels[channel].add_user(usr, privs=priv)\n if channel not in bot.privileges:\n bot.privileges[channel] = dict()\n bot.privileges[channel][nick] = priv\n\n\[email protected](events.RPL_WHOREPLY)\[email protected]('high')\[email protected]\ndef recv_who(bot, trigger):\n channel, user, host, _, nick, status = trigger.args[1:7]\n away = 'G' in status\n modes = ''.join([c for c in status if c in '~&@%+!'])\n _record_who(bot, channel, user, host, nick, away=away, modes=modes)\n\n\[email protected](events.RPL_ENDOFWHO)\[email protected]('high')\[email protected]\ndef end_who(bot, trigger):\n if _whox_enabled(bot):\n who_reqs.pop(trigger.args[1], None)\n\n\[email protected]('AWAY')\[email protected]('high')\[email protected](False)\[email protected]\ndef track_notify(bot, trigger):\n if trigger.nick not in bot.users:\n bot.users[trigger.nick] = target.User(\n trigger.nick, trigger.user, trigger.host)\n user = bot.users[trigger.nick]\n user.away = bool(trigger.args)\n\n\[email protected]('TOPIC')\[email protected](events.RPL_TOPIC)\[email protected]('high')\[email protected](False)\[email protected]\ndef track_topic(bot, trigger):\n if trigger.event != 'TOPIC':\n channel = trigger.args[1]\n else:\n channel = trigger.args[0]\n if channel not in bot.channels:\n return\n bot.channels[channel].topic = trigger.args[-1]\n\n\[email protected](r'(?u).*(.+://\\S+).*')\ndef handle_url_callbacks(bot, trigger):\n \"\"\"Dispatch callbacks on URLs\n\n For each URL found in the trigger, trigger the URL callback registered by\n the ``@url`` decorator.\n \"\"\"\n schemes = bot.config.core.auto_url_schemes\n # find URLs in the trigger\n for url in web.search_urls(trigger, schemes=schemes):\n # find callbacks for said URL\n for function, match in bot.search_url_callbacks(url):\n # trigger callback defined by the `@url` decorator\n if hasattr(function, 'url_regex'):\n # bake the `match` argument in before passing the callback on\n @functools.wraps(function)\n def decorated(bot, trigger):\n return function(bot, trigger, match=match)\n\n bot.call(decorated, bot, trigger)\n", "path": "sopel/coretasks.py" } ]
[ { "content": "# coding=utf-8\n\"\"\"Tasks that allow the bot to run, but aren't user-facing functionality\n\nThis is written as a module to make it easier to extend to support more\nresponses to standard IRC codes without having to shove them all into the\ndispatch function in bot.py and making it easier to maintain.\n\"\"\"\n# Copyright 2008-2011, Sean B. Palmer (inamidst.com) and Michael Yanovich\n# (yanovich.net)\n# Copyright © 2012, Elad Alfassa <[email protected]>\n# Copyright 2012-2015, Elsie Powell embolalia.com\n# Copyright 2019, Florian Strzelecki <[email protected]>\n#\n# Licensed under the Eiffel Forum License 2.\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nimport base64\nimport collections\nimport datetime\nimport functools\nimport logging\nfrom random import randint\nimport re\nimport sys\nimport time\n\nfrom sopel import loader, module, plugin\nfrom sopel.irc import isupport\nfrom sopel.irc.utils import CapReq, MyInfo\nfrom sopel.tools import events, Identifier, iteritems, target, web\n\n\nif sys.version_info.major >= 3:\n unicode = str\n\nLOGGER = logging.getLogger(__name__)\n\nbatched_caps = {}\nwho_reqs = {} # Keeps track of reqs coming from this module, rather than others\n\n\ndef setup(bot):\n bot.memory['join_events_queue'] = collections.deque()\n\n # Manage JOIN flood protection\n if bot.settings.core.throttle_join:\n wait_interval = max(bot.settings.core.throttle_wait, 1)\n\n @module.interval(wait_interval)\n @plugin.label('throttle_join')\n def processing_job(bot):\n _join_event_processing(bot)\n\n loader.clean_callable(processing_job, bot.settings)\n processing_job.plugin_name = 'coretasks'\n\n bot.register_jobs([processing_job])\n\n\ndef shutdown(bot):\n try:\n bot.memory['join_events_queue'].clear()\n except KeyError:\n pass\n\n\ndef _join_event_processing(bot):\n \"\"\"Process a batch of JOIN event from the ``join_events_queue`` queue.\n\n Every time this function is executed, it processes at most\n ``throttle_join`` JOIN events. For each JOIN, it sends a WHO request to\n know more about the channel. 
This will prevent an excess of flood when\n there are too many channels to join at once.\n \"\"\"\n batch_size = max(bot.settings.core.throttle_join, 1)\n for _ in range(batch_size):\n try:\n channel = bot.memory['join_events_queue'].popleft()\n except IndexError:\n break\n LOGGER.debug('Sending WHO after channel JOIN: %s', channel)\n _send_who(bot, channel)\n\n\ndef auth_after_register(bot):\n \"\"\"Do NickServ/AuthServ auth\"\"\"\n if bot.config.core.auth_method:\n auth_method = bot.config.core.auth_method\n auth_username = bot.config.core.auth_username\n auth_password = bot.config.core.auth_password\n auth_target = bot.config.core.auth_target\n elif bot.config.core.nick_auth_method:\n auth_method = bot.config.core.nick_auth_method\n auth_username = (bot.config.core.nick_auth_username or\n bot.config.core.nick)\n auth_password = bot.config.core.nick_auth_password\n auth_target = bot.config.core.nick_auth_target\n else:\n return\n\n if auth_method == 'nickserv':\n bot.say('IDENTIFY %s' % auth_password, auth_target or 'NickServ')\n elif auth_method == 'authserv':\n bot.write(('AUTHSERV auth', auth_username + ' ' + auth_password))\n elif auth_method == 'Q':\n bot.write(('AUTH', auth_username + ' ' + auth_password))\n elif auth_method == 'userserv':\n bot.say(\"LOGIN %s %s\" % (auth_username, auth_password),\n auth_target or 'UserServ')\n\n\ndef _execute_perform(bot):\n \"\"\"Execute commands specified to perform on IRC server connect.\"\"\"\n if not bot.connection_registered:\n # How did you even get this command, bot?\n raise Exception('Bot must be connected to server to perform commands.')\n\n LOGGER.debug('{} commands to execute:'.format(len(bot.config.core.commands_on_connect)))\n for i, command in enumerate(bot.config.core.commands_on_connect):\n command = command.replace('$nickname', bot.config.core.nick)\n LOGGER.debug(command)\n bot.write((command,))\n\n\[email protected]_privmsg(\"This command only works as a private message.\")\[email protected]_admin(\"This command requires admin privileges.\")\[email protected]('execute')\ndef execute_perform(bot, trigger):\n \"\"\"Execute commands specified to perform on IRC server connect.\"\"\"\n _execute_perform(bot)\n\n\[email protected]('high')\[email protected](events.RPL_WELCOME, events.RPL_LUSERCLIENT)\[email protected](False)\[email protected]\ndef startup(bot, trigger):\n \"\"\"Do tasks related to connecting to the network.\n\n 001 RPL_WELCOME is from RFC2812 and is the first message that is sent after\n the connection has been registered on the network.\n\n 251 RPL_LUSERCLIENT is a mandatory message that is sent after client\n connects to the server in rfc1459. RFC2812 does not require it and all\n networks might not send it. 
We support both.\n\n \"\"\"\n if bot.connection_registered:\n return\n\n bot.connection_registered = True\n\n auth_after_register(bot)\n\n modes = bot.config.core.modes\n if modes:\n if not modes.startswith(('+', '-')):\n # Assume \"+\" by default.\n modes = '+' + modes\n bot.write(('MODE', bot.nick, modes))\n\n bot.memory['retry_join'] = dict()\n\n channels = bot.config.core.channels\n if not channels:\n LOGGER.info('No initial channels to JOIN.')\n elif bot.config.core.throttle_join:\n throttle_rate = int(bot.config.core.throttle_join)\n throttle_wait = max(bot.config.core.throttle_wait, 1)\n channels_joined = 0\n\n LOGGER.info(\n 'Joining %d channels (with JOIN throttle ON); '\n 'this may take a moment.',\n len(channels))\n\n for channel in channels:\n channels_joined += 1\n if not channels_joined % throttle_rate:\n LOGGER.debug(\n 'Waiting %ds before next JOIN batch.',\n throttle_wait)\n time.sleep(throttle_wait)\n bot.join(channel)\n else:\n LOGGER.info(\n 'Joining %d channels (with JOIN throttle OFF); '\n 'this may take a moment.',\n len(channels))\n\n for channel in bot.config.core.channels:\n bot.join(channel)\n\n if (not bot.config.core.owner_account and\n 'account-tag' in bot.enabled_capabilities and\n '@' not in bot.config.core.owner):\n msg = (\n \"This network supports using network services to identify you as \"\n \"my owner, rather than just matching your nickname. This is much \"\n \"more secure. If you'd like to do this, make sure you're logged in \"\n \"and reply with \\\"{}useserviceauth\\\"\"\n ).format(bot.config.core.help_prefix)\n bot.say(msg, bot.config.core.owner)\n\n _execute_perform(bot)\n\n\[email protected]('high')\[email protected](events.RPL_ISUPPORT)\[email protected](False)\[email protected]\[email protected]('are supported by this server')\ndef handle_isupport(bot, trigger):\n \"\"\"Handle ``RPL_ISUPPORT`` events.\"\"\"\n parameters = {}\n for arg in trigger.args:\n try:\n key, value = isupport.parse_parameter(arg)\n parameters[key] = value\n except ValueError:\n # ignore malformed parameter: log a warning and continue\n LOGGER.warning('Unable to parse ISUPPORT parameter: %r', arg)\n\n bot._isupport = bot._isupport.apply(**parameters)\n\n\[email protected]('high')\[email protected](events.RPL_MYINFO)\[email protected](False)\[email protected]\ndef parse_reply_myinfo(bot, trigger):\n \"\"\"Handle ``RPL_MYINFO`` events.\"\"\"\n # keep <client> <servername> <version> only\n # the trailing parameters (mode types) should be read from ISUPPORT\n bot._myinfo = MyInfo(*trigger.args[0:3])\n\n\[email protected]_privmsg()\[email protected]_owner()\[email protected]('useserviceauth')\ndef enable_service_auth(bot, trigger):\n if bot.config.core.owner_account:\n return\n if 'account-tag' not in bot.enabled_capabilities:\n bot.say('This server does not fully support services auth, so this '\n 'command is not available.')\n return\n if not trigger.account:\n bot.say('You must be logged in to network services before using this '\n 'command.')\n return\n bot.config.core.owner_account = trigger.account\n bot.config.save()\n bot.say('Success! I will now use network services to identify you as my '\n 'owner.')\n\n\[email protected](events.ERR_NOCHANMODES)\[email protected]('high')\ndef retry_join(bot, trigger):\n \"\"\"Give NickServ enough time to identify on a +R channel.\n\n Give NickServ enough time to identify, and retry rejoining an\n identified-only (+R) channel. 
Maximum of ten rejoin attempts.\n \"\"\"\n channel = trigger.args[1]\n if channel in bot.memory['retry_join'].keys():\n bot.memory['retry_join'][channel] += 1\n if bot.memory['retry_join'][channel] > 10:\n LOGGER.warning('Failed to join %s after 10 attempts.', channel)\n return\n else:\n bot.memory['retry_join'][channel] = 0\n bot.join(channel)\n return\n\n time.sleep(6)\n bot.join(channel)\n\n\[email protected]('(.*)')\[email protected](events.RPL_NAMREPLY)\[email protected]('high')\[email protected](False)\[email protected]\ndef handle_names(bot, trigger):\n \"\"\"Handle NAMES response, happens when joining to channels.\"\"\"\n names = trigger.split()\n\n # TODO specific to one channel type. See issue 281.\n channels = re.search(r'(#\\S*)', trigger.raw)\n if not channels:\n return\n channel = Identifier(channels.group(1))\n if channel not in bot.privileges:\n bot.privileges[channel] = dict()\n if channel not in bot.channels:\n bot.channels[channel] = target.Channel(channel)\n\n # This could probably be made flexible in the future, but I don't think\n # it'd be worth it.\n # If this ever needs to be updated, remember to change the mode handling in\n # the WHO-handler functions below, too.\n mapping = {\n \"+\": module.VOICE,\n \"%\": module.HALFOP,\n \"@\": module.OP,\n \"&\": module.ADMIN,\n \"~\": module.OWNER,\n \"!\": module.OPER,\n }\n\n for name in names:\n priv = 0\n for prefix, value in iteritems(mapping):\n if prefix in name:\n priv = priv | value\n nick = Identifier(name.lstrip(''.join(mapping.keys())))\n bot.privileges[channel][nick] = priv\n user = bot.users.get(nick)\n if user is None:\n # It's not possible to set the username/hostname from info received\n # in a NAMES reply, unfortunately.\n # Fortunately, the user should already exist in bot.users by the\n # time this code runs, so this is 99.9% ass-covering.\n user = target.User(nick, None, None)\n bot.users[nick] = user\n bot.channels[channel].add_user(user, privs=priv)\n\n\[email protected]('(.*)')\[email protected]('MODE')\[email protected]('high')\[email protected](False)\[email protected]\ndef track_modes(bot, trigger):\n \"\"\"Track usermode changes and keep our lists of ops up to date.\"\"\"\n # Mode message format: <channel> *( ( \"-\" / \"+\" ) *<modes> *<modeparams> )\n if len(trigger.args) < 3:\n # We need at least [channel, mode, nickname] to do anything useful\n # MODE messages with fewer args won't help us\n LOGGER.debug(\"Received an apparently useless MODE message: {}\"\n .format(trigger.raw))\n return\n # Our old MODE parsing code checked if any of the args was empty.\n # Somewhere around here would be a good place to re-implement that if it's\n # actually necessary to guard against some non-compliant IRCd. 
But for now\n # let's just log malformed lines to the debug log.\n if not all(trigger.args):\n LOGGER.debug(\"The server sent a possibly malformed MODE message: {}\"\n .format(trigger.raw))\n\n # From here on, we will make a (possibly dangerous) assumption that the\n # received MODE message is more-or-less compliant\n channel = Identifier(trigger.args[0])\n # If the first character of where the mode is being set isn't a #\n # then it's a user mode, not a channel mode, so we'll ignore it.\n # TODO: Handle CHANTYPES from ISUPPORT numeric (005)\n # (Actually, most of this function should be rewritten again when we parse\n # ISUPPORT...)\n if channel.is_nick():\n return\n\n modestring = trigger.args[1]\n nicks = [Identifier(nick) for nick in trigger.args[2:]]\n\n mapping = {\n \"v\": module.VOICE,\n \"h\": module.HALFOP,\n \"o\": module.OP,\n \"a\": module.ADMIN,\n \"q\": module.OWNER,\n \"y\": module.OPER,\n \"Y\": module.OPER,\n }\n\n # Parse modes before doing anything else\n modes = []\n sign = ''\n for char in modestring:\n # There was a comment claiming IRC allows e.g. MODE +aB-c foo, but it\n # doesn't seem to appear in any RFCs. But modern.ircdocs.horse shows\n # it, so we'll leave in the extra parsing for now.\n if char in '+-':\n sign = char\n elif char in mapping:\n # Filter out unexpected modes and hope they don't have parameters\n modes.append(sign + char)\n\n # Try to map modes to arguments, after sanity-checking\n if len(modes) != len(nicks) or not all([nick.is_nick() for nick in nicks]):\n # Something fucky happening, like unusual batching of non-privilege\n # modes together with the ones we expect. Way easier to just re-WHO\n # than try to account for non-standard parameter-taking modes.\n LOGGER.debug('Sending WHO for channel: %s', channel)\n _send_who(bot, channel)\n return\n\n for (mode, nick) in zip(modes, nicks):\n priv = bot.channels[channel].privileges.get(nick, 0)\n # Log a warning if the two privilege-tracking data structures\n # get out of sync. That should never happen.\n # This is a good place to verify that bot.channels is doing\n # what it's supposed to do before ultimately removing the old,\n # deprecated bot.privileges structure completely.\n ppriv = bot.privileges[channel].get(nick, 0)\n if priv != ppriv:\n LOGGER.warning(\"Privilege data error! Please share Sopel's\"\n \"raw log with the developers, if enabled. \"\n \"(Expected {} == {} for {} in {}.)\"\n .format(priv, ppriv, nick, channel))\n value = mapping.get(mode[1])\n if value is not None:\n if mode[0] == '+':\n priv = priv | value\n else:\n priv = priv & ~value\n bot.privileges[channel][nick] = priv\n bot.channels[channel].privileges[nick] = priv\n\n\[email protected]('NICK')\[email protected]('high')\[email protected](False)\[email protected]\ndef track_nicks(bot, trigger):\n \"\"\"Track nickname changes and maintain our chanops list accordingly.\"\"\"\n old = trigger.nick\n new = Identifier(trigger)\n\n # Give debug mssage, and PM the owner, if the bot's own nick changes.\n if old == bot.nick and new != bot.nick:\n privmsg = (\n \"Hi, I'm your bot, %s. Something has made my nick change. This \"\n \"can cause some problems for me, and make me do weird things. \"\n \"You'll probably want to restart me, and figure out what made \"\n \"that happen so you can stop it happening again. (Usually, it \"\n \"means you tried to give me a nick that's protected by NickServ.)\"\n ) % bot.nick\n debug_msg = (\n \"Nick changed by server. This can cause unexpected behavior. 
\"\n \"Please restart the bot.\"\n )\n LOGGER.critical(debug_msg)\n bot.say(privmsg, bot.config.core.owner)\n return\n\n for channel in bot.privileges:\n channel = Identifier(channel)\n if old in bot.privileges[channel]:\n value = bot.privileges[channel].pop(old)\n bot.privileges[channel][new] = value\n\n for channel in bot.channels.values():\n channel.rename_user(old, new)\n if old in bot.users:\n bot.users[new] = bot.users.pop(old)\n\n\[email protected]('(.*)')\[email protected]('PART')\[email protected]('high')\[email protected](False)\[email protected]\ndef track_part(bot, trigger):\n nick = trigger.nick\n channel = trigger.sender\n _remove_from_channel(bot, nick, channel)\n\n\[email protected]('KICK')\[email protected]('high')\[email protected](False)\[email protected]\ndef track_kick(bot, trigger):\n nick = Identifier(trigger.args[1])\n channel = trigger.sender\n _remove_from_channel(bot, nick, channel)\n\n\ndef _remove_from_channel(bot, nick, channel):\n if nick == bot.nick:\n bot.privileges.pop(channel, None)\n bot.channels.pop(channel, None)\n\n lost_users = []\n for nick_, user in bot.users.items():\n user.channels.pop(channel, None)\n if not user.channels:\n lost_users.append(nick_)\n for nick_ in lost_users:\n bot.users.pop(nick_, None)\n else:\n bot.privileges[channel].pop(nick, None)\n\n user = bot.users.get(nick)\n if user and channel in user.channels:\n bot.channels[channel].clear_user(nick)\n if not user.channels:\n bot.users.pop(nick, None)\n\n\ndef _whox_enabled(bot):\n # Either privilege tracking or away notification. For simplicity, both\n # account notify and extended join must be there for account tracking.\n return (('account-notify' in bot.enabled_capabilities and\n 'extended-join' in bot.enabled_capabilities) or\n 'away-notify' in bot.enabled_capabilities)\n\n\ndef _send_who(bot, channel):\n if _whox_enabled(bot):\n # WHOX syntax, see http://faerion.sourceforge.net/doc/irc/whox.var\n # Needed for accounts in who replies. 
The random integer is a param\n # to identify the reply as one from this command, because if someone\n # else sent it, we have no fucking way to know what the format is.\n rand = str(randint(0, 999))\n while rand in who_reqs:\n rand = str(randint(0, 999))\n who_reqs[rand] = channel\n bot.write(['WHO', channel, 'a%nuachtf,' + rand])\n else:\n # We might be on an old network, but we still care about keeping our\n # user list updated\n bot.write(['WHO', channel])\n bot.channels[Identifier(channel)].last_who = datetime.datetime.utcnow()\n\n\[email protected](30)\ndef _periodic_send_who(bot):\n \"\"\"Periodically send a WHO request to keep user information up-to-date.\"\"\"\n if 'away-notify' in bot.enabled_capabilities:\n # WHO not needed to update 'away' status\n return\n\n # Loops through the channels to find the one that has the longest time since the last WHO\n # request, and issues a WHO request only if the last request for the channel was more than\n # 120 seconds ago.\n who_trigger_time = datetime.datetime.utcnow() - datetime.timedelta(seconds=120)\n selected_channel = None\n for channel_name, channel in bot.channels.items():\n if channel.last_who is None:\n # WHO was never sent yet to this channel: stop here\n selected_channel = channel_name\n break\n if channel.last_who < who_trigger_time:\n # this channel's last who request is the most outdated one at the moment\n selected_channel = channel_name\n who_trigger_time = channel.last_who\n\n if selected_channel is not None:\n # selected_channel's last who is either none or the oldest valid\n LOGGER.debug('Sending WHO for channel: %s', selected_channel)\n _send_who(bot, selected_channel)\n\n\[email protected]('JOIN')\[email protected]('high')\[email protected](False)\[email protected]\ndef track_join(bot, trigger):\n channel = trigger.sender\n\n # is it a new channel?\n if channel not in bot.channels:\n LOGGER.info('Channel joined: %s', channel)\n bot.privileges[channel] = dict()\n bot.channels[channel] = target.Channel(channel)\n\n # did *we* just join?\n if trigger.nick == bot.nick:\n if bot.settings.core.throttle_join:\n LOGGER.debug('JOIN event added to queue for channel: %s', channel)\n bot.memory['join_events_queue'].append(channel)\n else:\n LOGGER.debug('Send direct WHO for channel: %s', channel)\n _send_who(bot, channel)\n\n # set initial values\n bot.privileges[channel][trigger.nick] = 0\n\n user = bot.users.get(trigger.nick)\n if user is None:\n user = target.User(trigger.nick, trigger.user, trigger.host)\n bot.users[trigger.nick] = user\n bot.channels[channel].add_user(user)\n\n if len(trigger.args) > 1 and trigger.args[1] != '*' and (\n 'account-notify' in bot.enabled_capabilities and\n 'extended-join' in bot.enabled_capabilities):\n user.account = trigger.args[1]\n\n\[email protected]('QUIT')\[email protected]('high')\[email protected](False)\[email protected]\ndef track_quit(bot, trigger):\n for chanprivs in bot.privileges.values():\n chanprivs.pop(trigger.nick, None)\n for channel in bot.channels.values():\n channel.clear_user(trigger.nick)\n bot.users.pop(trigger.nick, None)\n\n\[email protected]('CAP')\[email protected](False)\[email protected]('high')\[email protected]\ndef receive_cap_list(bot, trigger):\n cap = trigger.strip('-=~')\n # Server is listing capabilities\n if trigger.args[1] == 'LS':\n receive_cap_ls_reply(bot, trigger)\n # Server denied CAP REQ\n elif trigger.args[1] == 'NAK':\n entry = bot._cap_reqs.get(cap, None)\n # If it was requested with bot.cap_req\n if entry:\n for req in entry:\n # And that 
request was mandatory/prohibit, and a callback was\n # provided\n if req.prefix and req.failure:\n # Call it.\n req.failure(bot, req.prefix + cap)\n # Server is removing a capability\n elif trigger.args[1] == 'DEL':\n entry = bot._cap_reqs.get(cap, None)\n # If it was requested with bot.cap_req\n if entry:\n for req in entry:\n # And that request wasn't prohibit, and a callback was\n # provided\n if req.prefix != '-' and req.failure:\n # Call it.\n req.failure(bot, req.prefix + cap)\n # Server is adding new capability\n elif trigger.args[1] == 'NEW':\n entry = bot._cap_reqs.get(cap, None)\n # If it was requested with bot.cap_req\n if entry:\n for req in entry:\n # And that request wasn't prohibit\n if req.prefix != '-':\n # Request it\n bot.write(('CAP', 'REQ', req.prefix + cap))\n # Server is acknowledging a capability\n elif trigger.args[1] == 'ACK':\n caps = trigger.args[2].split()\n for cap in caps:\n cap.strip('-~= ')\n bot.enabled_capabilities.add(cap)\n entry = bot._cap_reqs.get(cap, [])\n for req in entry:\n if req.success:\n req.success(bot, req.prefix + trigger)\n if cap == 'sasl': # TODO why is this not done with bot.cap_req?\n receive_cap_ack_sasl(bot)\n\n\ndef receive_cap_ls_reply(bot, trigger):\n if bot.server_capabilities:\n # We've already seen the results, so someone sent CAP LS from a plugin.\n # We're too late to do SASL, and we don't want to send CAP END before\n # the plugin has done what it needs to, so just return\n return\n\n for cap in trigger.split():\n c = cap.split('=')\n if len(c) == 2:\n batched_caps[c[0]] = c[1]\n else:\n batched_caps[c[0]] = None\n\n # Not the last in a multi-line reply. First two args are * and LS.\n if trigger.args[2] == '*':\n return\n\n bot.server_capabilities = batched_caps\n\n # If some other plugin requests it, we don't need to add another request.\n # If some other plugin prohibits it, we shouldn't request it.\n core_caps = [\n 'echo-message',\n 'multi-prefix',\n 'away-notify',\n 'cap-notify',\n 'server-time',\n ]\n for cap in core_caps:\n if cap not in bot._cap_reqs:\n bot._cap_reqs[cap] = [CapReq('', 'coretasks')]\n\n def acct_warn(bot, cap):\n LOGGER.info('Server does not support %s, or it conflicts with a custom '\n 'plugin. User account validation unavailable or limited.',\n cap[1:])\n if bot.config.core.owner_account or bot.config.core.admin_accounts:\n LOGGER.warning(\n 'Owner or admin accounts are configured, but %s is not '\n 'supported by the server. This may cause unexpected behavior.',\n cap[1:])\n auth_caps = ['account-notify', 'extended-join', 'account-tag']\n for cap in auth_caps:\n if cap not in bot._cap_reqs:\n bot._cap_reqs[cap] = [CapReq('', 'coretasks', acct_warn)]\n\n for cap, reqs in iteritems(bot._cap_reqs):\n # At this point, we know mandatory and prohibited don't co-exist, but\n # we need to call back for optionals if they're also prohibited\n prefix = ''\n for entry in reqs:\n if prefix == '-' and entry.prefix != '-':\n entry.failure(bot, entry.prefix + cap)\n continue\n if entry.prefix:\n prefix = entry.prefix\n\n # It's not required, or it's supported, so we can request it\n if prefix != '=' or cap in bot.server_capabilities:\n # REQs fail as a whole, so we send them one capability at a time\n bot.write(('CAP', 'REQ', entry.prefix + cap))\n # If it's required but not in server caps, we need to call all the\n # callbacks\n else:\n for entry in reqs:\n if entry.failure and entry.prefix == '=':\n entry.failure(bot, entry.prefix + cap)\n\n # If we want to do SASL, we have to wait before we can send CAP END. 
So if\n # we are, wait on 903 (SASL successful) to send it.\n if bot.config.core.auth_method == 'sasl' or bot.config.core.server_auth_method == 'sasl':\n bot.write(('CAP', 'REQ', 'sasl'))\n else:\n bot.write(('CAP', 'END'))\n\n\ndef receive_cap_ack_sasl(bot):\n # Presumably we're only here if we said we actually *want* sasl, but still\n # check anyway.\n password = None\n mech = None\n if bot.config.core.auth_method == 'sasl':\n password = bot.config.core.auth_password\n mech = bot.config.core.auth_target\n elif bot.config.core.server_auth_method == 'sasl':\n password = bot.config.core.server_auth_password\n mech = bot.config.core.server_auth_sasl_mech\n if not password:\n return\n mech = mech or 'PLAIN'\n bot.write(('AUTHENTICATE', mech))\n\n\ndef send_authenticate(bot, token):\n \"\"\"Send ``AUTHENTICATE`` command to server with the given ``token``.\n\n :param bot: instance of IRC bot that must authenticate\n :param str token: authentication token\n\n In case the ``token`` is more than 400 bytes, we need to split it and send\n as many ``AUTHENTICATE`` commands as needed. If the last chunk is 400 bytes\n long, we must also send a last empty command (`AUTHENTICATE +` is for empty\n line), so the server knows we are done with ``AUTHENTICATE``.\n\n .. seealso::\n\n https://ircv3.net/specs/extensions/sasl-3.1.html#the-authenticate-command\n\n \"\"\"\n # payload is a base64 encoded token\n payload = base64.b64encode(token.encode('utf-8'))\n\n # split the payload into chunks of at most 400 bytes\n chunk_size = 400\n for i in range(0, len(payload), chunk_size):\n offset = i + chunk_size\n chunk = payload[i:offset]\n bot.write(('AUTHENTICATE', chunk))\n\n # send empty (+) AUTHENTICATE when payload's length is a multiple of 400\n if len(payload) % chunk_size == 0:\n bot.write(('AUTHENTICATE', '+'))\n\n\[email protected]('AUTHENTICATE')\ndef auth_proceed(bot, trigger):\n if trigger.args[0] != '+':\n # How did we get here? I am not good with computer.\n return\n # Is this right?\n if bot.config.core.auth_method == 'sasl':\n sasl_username = bot.config.core.auth_username\n sasl_password = bot.config.core.auth_password\n elif bot.config.core.server_auth_method == 'sasl':\n sasl_username = bot.config.core.server_auth_username\n sasl_password = bot.config.core.server_auth_password\n else:\n return\n sasl_username = sasl_username or bot.nick\n sasl_token = '\\0'.join((sasl_username, sasl_username, sasl_password))\n send_authenticate(bot, sasl_token)\n\n\[email protected](events.RPL_SASLSUCCESS)\ndef sasl_success(bot, trigger):\n bot.write(('CAP', 'END'))\n\n\n# Live blocklist editing\n\n\[email protected]('blocks')\[email protected]('low')\[email protected](False)\[email protected]\[email protected]_admin\ndef blocks(bot, trigger):\n \"\"\"\n Manage Sopel's blocking features.\\\n See [ignore system documentation]({% link _usage/ignoring-people.md %}).\n\n \"\"\"\n STRINGS = {\n \"success_del\": \"Successfully deleted block: %s\",\n \"success_add\": \"Successfully added block: %s\",\n \"no_nick\": \"No matching nick block found for: %s\",\n \"no_host\": \"No matching hostmask block found for: %s\",\n \"invalid\": \"Invalid format for %s a block. 
Try: .blocks add (nick|hostmask) sopel\",\n \"invalid_display\": \"Invalid input for displaying blocks.\",\n \"nonelisted\": \"No %s listed in the blocklist.\",\n 'huh': \"I could not figure out what you wanted to do.\",\n }\n\n masks = set(s for s in bot.config.core.host_blocks if s != '')\n nicks = set(Identifier(nick)\n for nick in bot.config.core.nick_blocks\n if nick != '')\n text = trigger.group().split()\n\n if len(text) == 3 and text[1] == \"list\":\n if text[2] == \"hostmask\":\n if len(masks) > 0:\n blocked = ', '.join(unicode(mask) for mask in masks)\n bot.say(\"Blocked hostmasks: {}\".format(blocked))\n else:\n bot.reply(STRINGS['nonelisted'] % ('hostmasks'))\n elif text[2] == \"nick\":\n if len(nicks) > 0:\n blocked = ', '.join(unicode(nick) for nick in nicks)\n bot.say(\"Blocked nicks: {}\".format(blocked))\n else:\n bot.reply(STRINGS['nonelisted'] % ('nicks'))\n else:\n bot.reply(STRINGS['invalid_display'])\n\n elif len(text) == 4 and text[1] == \"add\":\n if text[2] == \"nick\":\n nicks.add(text[3])\n bot.config.core.nick_blocks = nicks\n bot.config.save()\n elif text[2] == \"hostmask\":\n masks.add(text[3].lower())\n bot.config.core.host_blocks = list(masks)\n else:\n bot.reply(STRINGS['invalid'] % (\"adding\"))\n return\n\n bot.reply(STRINGS['success_add'] % (text[3]))\n\n elif len(text) == 4 and text[1] == \"del\":\n if text[2] == \"nick\":\n if Identifier(text[3]) not in nicks:\n bot.reply(STRINGS['no_nick'] % (text[3]))\n return\n nicks.remove(Identifier(text[3]))\n bot.config.core.nick_blocks = [unicode(n) for n in nicks]\n bot.config.save()\n bot.reply(STRINGS['success_del'] % (text[3]))\n elif text[2] == \"hostmask\":\n mask = text[3].lower()\n if mask not in masks:\n bot.reply(STRINGS['no_host'] % (text[3]))\n return\n masks.remove(mask)\n bot.config.core.host_blocks = [unicode(m) for m in masks]\n bot.config.save()\n bot.reply(STRINGS['success_del'] % (text[3]))\n else:\n bot.reply(STRINGS['invalid'] % (\"deleting\"))\n return\n else:\n bot.reply(STRINGS['huh'])\n\n\[email protected]('ACCOUNT')\ndef account_notify(bot, trigger):\n if trigger.nick not in bot.users:\n bot.users[trigger.nick] = target.User(\n trigger.nick, trigger.user, trigger.host)\n account = trigger.args[0]\n if account == '*':\n account = None\n bot.users[trigger.nick].account = account\n\n\[email protected](events.RPL_WHOSPCRPL)\[email protected]('high')\[email protected]\ndef recv_whox(bot, trigger):\n if len(trigger.args) < 2 or trigger.args[1] not in who_reqs:\n # Ignored, some plugin probably called WHO\n return\n if len(trigger.args) != 8:\n return LOGGER.warning('While populating `bot.accounts` a WHO response was malformed.')\n _, _, channel, user, host, nick, status, account = trigger.args\n away = 'G' in status\n modes = ''.join([c for c in status if c in '~&@%+!'])\n _record_who(bot, channel, user, host, nick, account, away, modes)\n\n\ndef _record_who(bot, channel, user, host, nick, account=None, away=None, modes=None):\n nick = Identifier(nick)\n channel = Identifier(channel)\n if nick not in bot.users:\n usr = target.User(nick, user, host)\n bot.users[nick] = usr\n else:\n usr = bot.users[nick]\n # check for & fill in sparse User added by handle_names()\n if usr.host is None and host:\n usr.host = host\n if usr.user is None and user:\n usr.user = user\n if account == '0':\n usr.account = None\n else:\n usr.account = account\n if away is not None:\n usr.away = away\n priv = 0\n if modes:\n mapping = {\n \"+\": module.VOICE,\n \"%\": module.HALFOP,\n \"@\": module.OP,\n \"&\": 
module.ADMIN,\n \"~\": module.OWNER,\n \"!\": module.OPER,\n }\n for c in modes:\n priv = priv | mapping[c]\n if channel not in bot.channels:\n bot.channels[channel] = target.Channel(channel)\n bot.channels[channel].add_user(usr, privs=priv)\n if channel not in bot.privileges:\n bot.privileges[channel] = dict()\n bot.privileges[channel][nick] = priv\n\n\[email protected](events.RPL_WHOREPLY)\[email protected]('high')\[email protected]\ndef recv_who(bot, trigger):\n channel, user, host, _, nick, status = trigger.args[1:7]\n away = 'G' in status\n modes = ''.join([c for c in status if c in '~&@%+!'])\n _record_who(bot, channel, user, host, nick, away=away, modes=modes)\n\n\[email protected](events.RPL_ENDOFWHO)\[email protected]('high')\[email protected]\ndef end_who(bot, trigger):\n if _whox_enabled(bot):\n who_reqs.pop(trigger.args[1], None)\n\n\[email protected]('AWAY')\[email protected]('high')\[email protected](False)\[email protected]\ndef track_notify(bot, trigger):\n if trigger.nick not in bot.users:\n bot.users[trigger.nick] = target.User(\n trigger.nick, trigger.user, trigger.host)\n user = bot.users[trigger.nick]\n user.away = bool(trigger.args)\n\n\[email protected]('TOPIC')\[email protected](events.RPL_TOPIC)\[email protected]('high')\[email protected](False)\[email protected]\ndef track_topic(bot, trigger):\n if trigger.event != 'TOPIC':\n channel = trigger.args[1]\n else:\n channel = trigger.args[0]\n if channel not in bot.channels:\n return\n bot.channels[channel].topic = trigger.args[-1]\n\n\[email protected](r'(?u).*(.+://\\S+).*')\ndef handle_url_callbacks(bot, trigger):\n \"\"\"Dispatch callbacks on URLs\n\n For each URL found in the trigger, trigger the URL callback registered by\n the ``@url`` decorator.\n \"\"\"\n schemes = bot.config.core.auto_url_schemes\n # find URLs in the trigger\n for url in web.search_urls(trigger, schemes=schemes):\n # find callbacks for said URL\n for function, match in bot.search_url_callbacks(url):\n # trigger callback defined by the `@url` decorator\n if hasattr(function, 'url_regex'):\n # bake the `match` argument in before passing the callback on\n @functools.wraps(function)\n def decorated(bot, trigger):\n return function(bot, trigger, match=match)\n\n bot.call(decorated, bot, trigger)\n", "path": "sopel/coretasks.py" } ]
diff --git a/sopel/coretasks.py b/sopel/coretasks.py index 38379c6c9e..047c6aef11 100644 --- a/sopel/coretasks.py +++ b/sopel/coretasks.py @@ -153,7 +153,11 @@ def startup(bot, trigger): auth_after_register(bot) modes = bot.config.core.modes - bot.write(('MODE', '%s +%s' % (bot.nick, modes))) + if modes: + if not modes.startswith(('+', '-')): + # Assume "+" by default. + modes = '+' + modes + bot.write(('MODE', bot.nick, modes)) bot.memory['retry_join'] = dict()
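For readers skimming the diff above: the change stops sending a single concatenated `MODE %s +%s` string and instead writes the nick and modes as separate arguments, prefixing `+` only when the configured mode string carries no explicit sign. A minimal, dependency-free sketch of that normalization (the helper name is illustrative, not part of Sopel):

```python
def normalize_umodes(modes: str) -> str:
    # Mirror the diff: assume "+" when the configured string has no leading sign,
    # and leave an explicit "+..."/"-..." value (or an empty string) untouched.
    if modes and not modes.startswith(('+', '-')):
        modes = '+' + modes
    return modes

assert normalize_umodes('B') == '+B'
assert normalize_umodes('-x') == '-x'
assert normalize_umodes('') == ''
```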
svthalia__concrexit-1103
Manually entered usernames not lowercased in registrations

### Describe the bug
Manually entered usernames are not lowercased in registrations.

### How to reproduce
Steps to reproduce the behaviour:
1. Create a registration
2. Enter a manual username that is not completely lowercase
3. Complete the registration
4. Try to log in with the new user
5. It is not possible

### Expected behaviour
The username should have been lowercased, since it is not possible to log in with a username that has capitalisation of any kind.
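Before the project files below, here is a minimal, dependency-free sketch of the failure mode described above and of the one-line remedy applied in the accompanying diff. The names are illustrative only, and it assumes (as the report states) that login effectively requires the stored username to already be lowercase:

```python
def created_username(entered: str) -> str:
    # What the registration service should store: manually entered usernames
    # normalized the same way as generated ones, i.e. lowercased.
    return entered.lower()

stored_as_is = "JDoe"     # what happens without the fix: saved verbatim
login_attempt = "jdoe"    # the only form that can log in, per the report

assert stored_as_is != login_attempt               # exact-match lookup misses -> login fails
assert created_username("JDoe") == login_attempt   # lowercasing at creation fixes it
```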
[ { "content": "\"\"\"The services defined by the registrations package\"\"\"\nimport string\nimport unicodedata\nfrom typing import Union\n\nfrom django.conf import settings\nfrom django.contrib.admin.models import LogEntry, CHANGE\nfrom django.contrib.admin.options import get_content_type_for_model\nfrom django.contrib.auth import get_user_model\nfrom django.db.models import Q, QuerySet\nfrom django.utils import timezone\n\nimport members\nfrom members.models import Membership, Profile, Member\nfrom payments.models import Payment\nfrom registrations import emails\nfrom registrations.models import Entry, Registration, Renewal\nfrom utils.snippets import datetime_to_lectureyear\n\n\ndef _generate_username(registration: Registration) -> str:\n \"\"\"\n Create username from first and lastname\n\n :param registration: Model containing first and last name\n :type registration: Registration\n :return: Created username\n :rtype: str\n \"\"\"\n username = (registration.first_name[0] + registration.last_name).lower()\n username = \"\".join(c for c in username if c.isalpha())\n username = \"\".join(\n c for c in unicodedata.normalize(\"NFKD\", username) if c in string.ascii_letters\n ).lower()\n\n # Limit length to 150 characters since Django doesn't support longer\n if len(username) > 150:\n username = username[:150]\n\n return username\n\n\ndef check_unique_user(entry: Entry) -> bool:\n \"\"\"\n Check that the username and email address of the entry are unique.\n\n :param entry: Registration entry\n :type entry: Entry\n :return: True if unique, False if not unique\n :rtype: boolean\n \"\"\"\n try:\n registration = entry.registration\n username = _generate_username(registration)\n if (\n get_user_model().objects.filter(username=username).exists()\n and registration.username is not None\n ):\n username = registration.username\n\n return not (\n get_user_model()\n .objects.filter(Q(email=registration.email) | Q(username=username))\n .exists()\n )\n except Registration.DoesNotExist:\n pass\n return True\n\n\ndef confirm_entry(queryset: QuerySet) -> int:\n \"\"\"\n Confirm all entries in the queryset\n\n :param queryset: queryset of entries\n :type queryset: Queryset[Entry]\n :return: number of updated rows\n :rtype: integer\n \"\"\"\n queryset = queryset.filter(status=Entry.STATUS_CONFIRM)\n rows_updated = queryset.update(\n status=Entry.STATUS_REVIEW, updated_at=timezone.now()\n )\n return rows_updated\n\n\ndef reject_entries(user_id: int, queryset: QuerySet) -> int:\n \"\"\"\n Reject all entries in the queryset\n\n :param user_id: Id of the user executing this action\n :param queryset: queryset of entries\n :type queryset: Queryset[Entry]\n :return: number of updated rows\n :rtype: integer\n \"\"\"\n queryset = queryset.filter(status=Entry.STATUS_REVIEW)\n entries = list(queryset.all())\n rows_updated = queryset.update(\n status=Entry.STATUS_REJECTED, updated_at=timezone.now()\n )\n\n for entry in entries:\n log_obj = None\n\n try:\n emails.send_registration_rejected_message(entry.registration)\n log_obj = entry.registration\n except Registration.DoesNotExist:\n try:\n emails.send_renewal_rejected_message(entry.renewal)\n log_obj = entry.renewal\n except Renewal.DoesNotExist:\n pass\n\n if log_obj:\n LogEntry.objects.log_action(\n user_id=user_id,\n content_type_id=get_content_type_for_model(log_obj).pk,\n object_id=log_obj.pk,\n object_repr=str(log_obj),\n action_flag=CHANGE,\n change_message=\"Changed status to rejected\",\n )\n\n return rows_updated\n\n\ndef accept_entries(user_id: int, 
queryset: QuerySet) -> int:\n \"\"\"\n Accept all entries in the queryset\n\n :param user_id: Id of the user executing this action\n :param queryset: queryset of entries\n :type queryset: Queryset[Entry]\n :return: number of updated rows\n :rtype: integer\n \"\"\"\n queryset = queryset.filter(status=Entry.STATUS_REVIEW)\n entries = queryset.all()\n updated_entries = []\n\n for entry in entries:\n # Check if the user is unique\n if not check_unique_user(entry):\n # User is not unique, do not proceed\n continue\n\n entry.status = Entry.STATUS_ACCEPTED\n entry.updated_at = timezone.now()\n entry.payment = _create_payment_for_entry(entry)\n\n log_obj = None\n\n try:\n if entry.registration.username is None:\n entry.registration.username = _generate_username(entry.registration)\n entry.registration.save()\n emails.send_registration_accepted_message(entry.registration, entry.payment)\n log_obj = entry.registration\n except Registration.DoesNotExist:\n try:\n emails.send_renewal_accepted_message(entry.renewal, entry.payment)\n log_obj = entry.renewal\n except Renewal.DoesNotExist:\n pass\n\n if log_obj:\n LogEntry.objects.log_action(\n user_id=user_id,\n content_type_id=get_content_type_for_model(log_obj).pk,\n object_id=log_obj.pk,\n object_repr=str(log_obj),\n action_flag=CHANGE,\n change_message=\"Change status to approved\",\n )\n\n entry.save()\n updated_entries.append(entry.pk)\n\n return len(updated_entries)\n\n\ndef revert_entry(user_id: int, entry: Entry) -> None:\n \"\"\"\n Revert status of entry to review so that it can be corrected\n\n :param user_id: Id of the user executing this action\n :param entry: Entry that should be reverted\n \"\"\"\n if not (entry.status in [Entry.STATUS_ACCEPTED, Entry.STATUS_REJECTED]):\n return\n\n payment = entry.payment\n entry.status = Entry.STATUS_REVIEW\n entry.updated_at = timezone.now()\n entry.payment = None\n entry.save()\n if payment is not None:\n payment.delete()\n\n log_obj = None\n\n try:\n log_obj = entry.registration\n except Registration.DoesNotExist:\n try:\n log_obj = entry.renewal\n except Renewal.DoesNotExist:\n pass\n\n if log_obj:\n LogEntry.objects.log_action(\n user_id=user_id,\n content_type_id=get_content_type_for_model(log_obj).pk,\n object_id=log_obj.pk,\n object_repr=str(log_obj),\n action_flag=CHANGE,\n change_message=\"Revert status to review\",\n )\n\n\ndef _create_payment_for_entry(entry: Entry) -> Payment:\n \"\"\"\n Create payment model for entry\n\n :param entry: Registration or Renewal model\n :type entry: Entry\n :return: Payment connected to the entry with the right price\n :rtype: Payment\n \"\"\"\n amount = settings.MEMBERSHIP_PRICES[entry.length]\n if entry.contribution and entry.membership_type == Membership.BENEFACTOR:\n amount = entry.contribution\n notes = f\"Membership registration. {entry.get_membership_type_display()}.\"\n topic = f\"Member registration [{entry.membership_type.upper()}]\"\n\n try:\n renewal = entry.renewal\n membership = renewal.member.latest_membership\n notes = f\"Membership renewal. 
{entry.get_membership_type_display()}.\"\n topic = f\"Member renewal [{entry.membership_type.upper()}]\"\n # Having a latest membership which has an until date implies that this\n # membership lasts/lasted till the end of the lecture year\n # This means it's possible to renew the 'year' membership\n # to a 'study' membership and the price should be adjusted since\n # it is considered an upgrade without paying twice\n # The rules for this behaviour are taken from the HR\n\n # Since it is possible for people to renew their membership\n # but processing to occur _after_ the membership ended\n # we're checking if that is the case so that these members\n # still get the discount price\n if (\n membership is not None\n and membership.until is not None\n and entry.created_at.date() < membership.until\n and renewal.length == Entry.MEMBERSHIP_STUDY\n ):\n amount = (\n settings.MEMBERSHIP_PRICES[Entry.MEMBERSHIP_STUDY]\n - settings.MEMBERSHIP_PRICES[Entry.MEMBERSHIP_YEAR]\n )\n except Renewal.DoesNotExist:\n pass\n\n return Payment.objects.create(amount=amount, notes=notes, topic=topic)\n\n\ndef _create_member_from_registration(registration: Registration) -> Member:\n \"\"\"\n Create User and Member model from Registration\n\n :param registration: Registration model\n :type registration: Registration\n :return: Created member object\n :rtype: Member\n \"\"\"\n\n # Generate random password for user that we can send to the new user\n password = get_user_model().objects.make_random_password(length=15)\n\n # Make sure the username and email are unique\n if not check_unique_user(registration):\n raise ValueError(\n \"Username or email address of the registration \" \"are not unique\"\n )\n\n # Create user\n user = get_user_model().objects.create_user(\n username=registration.username,\n email=registration.email,\n password=password,\n first_name=registration.first_name,\n last_name=registration.last_name,\n )\n\n # Add profile to created user\n Profile.objects.create(\n user=user,\n programme=registration.programme,\n student_number=registration.student_number,\n starting_year=registration.starting_year,\n address_street=registration.address_street,\n address_street2=registration.address_street2,\n address_postal_code=registration.address_postal_code,\n address_city=registration.address_city,\n address_country=registration.address_country,\n phone_number=registration.phone_number,\n birthday=registration.birthday,\n language=registration.language,\n show_birthday=registration.optin_birthday,\n receive_optin=registration.optin_mailinglist,\n )\n\n # Send welcome message to new member\n members.emails.send_welcome_message(user, password, registration.language)\n\n return Member.objects.get(pk=user.pk)\n\n\ndef calculate_membership_since() -> timezone.datetime:\n \"\"\"\n Calculate the start date of a membership\n\n If it's August we act as if it's the next\n lecture year already and we start new memberships in September\n :return:\n \"\"\"\n since = timezone.now().date()\n if timezone.now().month == 8:\n since = since.replace(month=9, day=1)\n return since\n\n\ndef _create_membership_from_entry(\n entry: Entry, member: Member = None\n) -> Union[Membership, None]:\n \"\"\"\n Create or update Membership model based on Entry model information\n\n :param entry: Entry model\n :type entry: Entry\n :return: The created or updated membership\n :rtype: Membership\n \"\"\"\n lecture_year = datetime_to_lectureyear(timezone.now())\n since = calculate_membership_since()\n until = None\n if timezone.now().month == 8:\n 
lecture_year += 1\n\n if entry.length == Entry.MEMBERSHIP_YEAR:\n # If entry is Renewal set since to current membership until + 1 day\n # Unless there is no current membership\n try:\n member = entry.renewal.member\n membership = member.current_membership\n if membership is not None:\n if membership.until is None:\n raise ValueError(\n \"This member already has a never ending \" \"membership\"\n )\n since = membership.until\n except Renewal.DoesNotExist:\n pass\n until = timezone.datetime(year=lecture_year + 1, month=9, day=1).date()\n elif entry.length == Entry.MEMBERSHIP_STUDY:\n try:\n renewal = entry.renewal\n member = renewal.member\n membership = member.latest_membership\n # Having a latest membership which has an until date implies that\n # this membership last(s/ed) till the end of the lecture year\n # This means it's possible to renew the 'year' membership\n # to a 'study' membership thus the until date should now be None\n # and no new membership is needed.\n # The rules for this behaviour are taken from the HR\n if membership is not None:\n if membership.until is None:\n raise ValueError(\n \"This member already has a never ending \" \"membership\"\n )\n if entry.created_at.date() < membership.until:\n membership.until = None\n membership.save()\n return membership\n except Renewal.DoesNotExist:\n pass\n else:\n return None\n\n return Membership.objects.create(\n user=member, since=since, until=until, type=entry.membership_type\n )\n\n\ndef process_payment(payment: Payment) -> None:\n \"\"\"\n Process the payment for the entry and send the right emails\n\n :param payment: The payment that should be processed\n :type payment: Payment\n \"\"\"\n\n if not payment.processed:\n return\n\n try:\n entry = payment.registrations_entry\n except Entry.DoesNotExist:\n return\n\n if entry.status != Entry.STATUS_ACCEPTED:\n return\n\n member = None\n\n try:\n registration = entry.registration\n # Create user and member\n member = _create_member_from_registration(registration)\n except Registration.DoesNotExist:\n try:\n # Get member from renewal\n renewal = entry.renewal\n member = renewal.member\n # Send email of payment confirmation for renewal,\n # not needed for registration since a new member already\n # gets the welcome email\n emails.send_renewal_complete_message(entry.renewal)\n except Renewal.DoesNotExist:\n pass\n\n # If member was retrieved, then create a new membership\n if member is not None:\n Payment.objects.filter(pk=payment.pk).update(paid_by=member)\n membership = _create_membership_from_entry(entry, member)\n entry.membership = membership\n entry.status = Entry.STATUS_COMPLETED\n entry.save()\n\n\ndef execute_data_minimisation(dry_run=False):\n \"\"\"\n Delete completed or rejected registrations that were modified\n at least 31 days ago\n\n :param dry_run: does not really remove data if True\n :return: number of removed registrations\n \"\"\"\n deletion_period = timezone.now() - timezone.timedelta(days=31)\n objects = Entry.objects.filter(\n (Q(status=Entry.STATUS_COMPLETED) | Q(status=Entry.STATUS_REJECTED))\n & Q(updated_at__lt=deletion_period)\n )\n\n if dry_run:\n return objects.count()\n return objects.delete()[0]\n", "path": "website/registrations/services.py" } ]
[ { "content": "\"\"\"The services defined by the registrations package\"\"\"\nimport string\nimport unicodedata\nfrom typing import Union\n\nfrom django.conf import settings\nfrom django.contrib.admin.models import LogEntry, CHANGE\nfrom django.contrib.admin.options import get_content_type_for_model\nfrom django.contrib.auth import get_user_model\nfrom django.db.models import Q, QuerySet\nfrom django.utils import timezone\n\nimport members\nfrom members.models import Membership, Profile, Member\nfrom payments.models import Payment\nfrom registrations import emails\nfrom registrations.models import Entry, Registration, Renewal\nfrom utils.snippets import datetime_to_lectureyear\n\n\ndef _generate_username(registration: Registration) -> str:\n \"\"\"\n Create username from first and lastname\n\n :param registration: Model containing first and last name\n :type registration: Registration\n :return: Created username\n :rtype: str\n \"\"\"\n username = (registration.first_name[0] + registration.last_name).lower()\n username = \"\".join(c for c in username if c.isalpha())\n username = \"\".join(\n c for c in unicodedata.normalize(\"NFKD\", username) if c in string.ascii_letters\n ).lower()\n\n # Limit length to 150 characters since Django doesn't support longer\n if len(username) > 150:\n username = username[:150]\n\n return username\n\n\ndef check_unique_user(entry: Entry) -> bool:\n \"\"\"\n Check that the username and email address of the entry are unique.\n\n :param entry: Registration entry\n :type entry: Entry\n :return: True if unique, False if not unique\n :rtype: boolean\n \"\"\"\n try:\n registration = entry.registration\n username = _generate_username(registration)\n if (\n get_user_model().objects.filter(username=username).exists()\n and registration.username is not None\n ):\n username = registration.username\n\n return not (\n get_user_model()\n .objects.filter(Q(email=registration.email) | Q(username=username))\n .exists()\n )\n except Registration.DoesNotExist:\n pass\n return True\n\n\ndef confirm_entry(queryset: QuerySet) -> int:\n \"\"\"\n Confirm all entries in the queryset\n\n :param queryset: queryset of entries\n :type queryset: Queryset[Entry]\n :return: number of updated rows\n :rtype: integer\n \"\"\"\n queryset = queryset.filter(status=Entry.STATUS_CONFIRM)\n rows_updated = queryset.update(\n status=Entry.STATUS_REVIEW, updated_at=timezone.now()\n )\n return rows_updated\n\n\ndef reject_entries(user_id: int, queryset: QuerySet) -> int:\n \"\"\"\n Reject all entries in the queryset\n\n :param user_id: Id of the user executing this action\n :param queryset: queryset of entries\n :type queryset: Queryset[Entry]\n :return: number of updated rows\n :rtype: integer\n \"\"\"\n queryset = queryset.filter(status=Entry.STATUS_REVIEW)\n entries = list(queryset.all())\n rows_updated = queryset.update(\n status=Entry.STATUS_REJECTED, updated_at=timezone.now()\n )\n\n for entry in entries:\n log_obj = None\n\n try:\n emails.send_registration_rejected_message(entry.registration)\n log_obj = entry.registration\n except Registration.DoesNotExist:\n try:\n emails.send_renewal_rejected_message(entry.renewal)\n log_obj = entry.renewal\n except Renewal.DoesNotExist:\n pass\n\n if log_obj:\n LogEntry.objects.log_action(\n user_id=user_id,\n content_type_id=get_content_type_for_model(log_obj).pk,\n object_id=log_obj.pk,\n object_repr=str(log_obj),\n action_flag=CHANGE,\n change_message=\"Changed status to rejected\",\n )\n\n return rows_updated\n\n\ndef accept_entries(user_id: int, 
queryset: QuerySet) -> int:\n \"\"\"\n Accept all entries in the queryset\n\n :param user_id: Id of the user executing this action\n :param queryset: queryset of entries\n :type queryset: Queryset[Entry]\n :return: number of updated rows\n :rtype: integer\n \"\"\"\n queryset = queryset.filter(status=Entry.STATUS_REVIEW)\n entries = queryset.all()\n updated_entries = []\n\n for entry in entries:\n # Check if the user is unique\n if not check_unique_user(entry):\n # User is not unique, do not proceed\n continue\n\n entry.status = Entry.STATUS_ACCEPTED\n entry.updated_at = timezone.now()\n entry.payment = _create_payment_for_entry(entry)\n\n log_obj = None\n\n try:\n if entry.registration.username is None:\n entry.registration.username = _generate_username(entry.registration)\n entry.registration.save()\n emails.send_registration_accepted_message(entry.registration, entry.payment)\n log_obj = entry.registration\n except Registration.DoesNotExist:\n try:\n emails.send_renewal_accepted_message(entry.renewal, entry.payment)\n log_obj = entry.renewal\n except Renewal.DoesNotExist:\n pass\n\n if log_obj:\n LogEntry.objects.log_action(\n user_id=user_id,\n content_type_id=get_content_type_for_model(log_obj).pk,\n object_id=log_obj.pk,\n object_repr=str(log_obj),\n action_flag=CHANGE,\n change_message=\"Change status to approved\",\n )\n\n entry.save()\n updated_entries.append(entry.pk)\n\n return len(updated_entries)\n\n\ndef revert_entry(user_id: int, entry: Entry) -> None:\n \"\"\"\n Revert status of entry to review so that it can be corrected\n\n :param user_id: Id of the user executing this action\n :param entry: Entry that should be reverted\n \"\"\"\n if not (entry.status in [Entry.STATUS_ACCEPTED, Entry.STATUS_REJECTED]):\n return\n\n payment = entry.payment\n entry.status = Entry.STATUS_REVIEW\n entry.updated_at = timezone.now()\n entry.payment = None\n entry.save()\n if payment is not None:\n payment.delete()\n\n log_obj = None\n\n try:\n log_obj = entry.registration\n except Registration.DoesNotExist:\n try:\n log_obj = entry.renewal\n except Renewal.DoesNotExist:\n pass\n\n if log_obj:\n LogEntry.objects.log_action(\n user_id=user_id,\n content_type_id=get_content_type_for_model(log_obj).pk,\n object_id=log_obj.pk,\n object_repr=str(log_obj),\n action_flag=CHANGE,\n change_message=\"Revert status to review\",\n )\n\n\ndef _create_payment_for_entry(entry: Entry) -> Payment:\n \"\"\"\n Create payment model for entry\n\n :param entry: Registration or Renewal model\n :type entry: Entry\n :return: Payment connected to the entry with the right price\n :rtype: Payment\n \"\"\"\n amount = settings.MEMBERSHIP_PRICES[entry.length]\n if entry.contribution and entry.membership_type == Membership.BENEFACTOR:\n amount = entry.contribution\n notes = f\"Membership registration. {entry.get_membership_type_display()}.\"\n topic = f\"Member registration [{entry.membership_type.upper()}]\"\n\n try:\n renewal = entry.renewal\n membership = renewal.member.latest_membership\n notes = f\"Membership renewal. 
{entry.get_membership_type_display()}.\"\n topic = f\"Member renewal [{entry.membership_type.upper()}]\"\n # Having a latest membership which has an until date implies that this\n # membership lasts/lasted till the end of the lecture year\n # This means it's possible to renew the 'year' membership\n # to a 'study' membership and the price should be adjusted since\n # it is considered an upgrade without paying twice\n # The rules for this behaviour are taken from the HR\n\n # Since it is possible for people to renew their membership\n # but processing to occur _after_ the membership ended\n # we're checking if that is the case so that these members\n # still get the discount price\n if (\n membership is not None\n and membership.until is not None\n and entry.created_at.date() < membership.until\n and renewal.length == Entry.MEMBERSHIP_STUDY\n ):\n amount = (\n settings.MEMBERSHIP_PRICES[Entry.MEMBERSHIP_STUDY]\n - settings.MEMBERSHIP_PRICES[Entry.MEMBERSHIP_YEAR]\n )\n except Renewal.DoesNotExist:\n pass\n\n return Payment.objects.create(amount=amount, notes=notes, topic=topic)\n\n\ndef _create_member_from_registration(registration: Registration) -> Member:\n \"\"\"\n Create User and Member model from Registration\n\n :param registration: Registration model\n :type registration: Registration\n :return: Created member object\n :rtype: Member\n \"\"\"\n\n # Generate random password for user that we can send to the new user\n password = get_user_model().objects.make_random_password(length=15)\n\n # Make sure the username and email are unique\n if not check_unique_user(registration):\n raise ValueError(\n \"Username or email address of the registration \" \"are not unique\"\n )\n\n # Create user\n user = get_user_model().objects.create_user(\n username=registration.username.lower(),\n email=registration.email,\n password=password,\n first_name=registration.first_name,\n last_name=registration.last_name,\n )\n\n # Add profile to created user\n Profile.objects.create(\n user=user,\n programme=registration.programme,\n student_number=registration.student_number,\n starting_year=registration.starting_year,\n address_street=registration.address_street,\n address_street2=registration.address_street2,\n address_postal_code=registration.address_postal_code,\n address_city=registration.address_city,\n address_country=registration.address_country,\n phone_number=registration.phone_number,\n birthday=registration.birthday,\n language=registration.language,\n show_birthday=registration.optin_birthday,\n receive_optin=registration.optin_mailinglist,\n )\n\n # Send welcome message to new member\n members.emails.send_welcome_message(user, password, registration.language)\n\n return Member.objects.get(pk=user.pk)\n\n\ndef calculate_membership_since() -> timezone.datetime:\n \"\"\"\n Calculate the start date of a membership\n\n If it's August we act as if it's the next\n lecture year already and we start new memberships in September\n :return:\n \"\"\"\n since = timezone.now().date()\n if timezone.now().month == 8:\n since = since.replace(month=9, day=1)\n return since\n\n\ndef _create_membership_from_entry(\n entry: Entry, member: Member = None\n) -> Union[Membership, None]:\n \"\"\"\n Create or update Membership model based on Entry model information\n\n :param entry: Entry model\n :type entry: Entry\n :return: The created or updated membership\n :rtype: Membership\n \"\"\"\n lecture_year = datetime_to_lectureyear(timezone.now())\n since = calculate_membership_since()\n until = None\n if timezone.now().month 
== 8:\n lecture_year += 1\n\n if entry.length == Entry.MEMBERSHIP_YEAR:\n # If entry is Renewal set since to current membership until + 1 day\n # Unless there is no current membership\n try:\n member = entry.renewal.member\n membership = member.current_membership\n if membership is not None:\n if membership.until is None:\n raise ValueError(\n \"This member already has a never ending \" \"membership\"\n )\n since = membership.until\n except Renewal.DoesNotExist:\n pass\n until = timezone.datetime(year=lecture_year + 1, month=9, day=1).date()\n elif entry.length == Entry.MEMBERSHIP_STUDY:\n try:\n renewal = entry.renewal\n member = renewal.member\n membership = member.latest_membership\n # Having a latest membership which has an until date implies that\n # this membership last(s/ed) till the end of the lecture year\n # This means it's possible to renew the 'year' membership\n # to a 'study' membership thus the until date should now be None\n # and no new membership is needed.\n # The rules for this behaviour are taken from the HR\n if membership is not None:\n if membership.until is None:\n raise ValueError(\n \"This member already has a never ending \" \"membership\"\n )\n if entry.created_at.date() < membership.until:\n membership.until = None\n membership.save()\n return membership\n except Renewal.DoesNotExist:\n pass\n else:\n return None\n\n return Membership.objects.create(\n user=member, since=since, until=until, type=entry.membership_type\n )\n\n\ndef process_payment(payment: Payment) -> None:\n \"\"\"\n Process the payment for the entry and send the right emails\n\n :param payment: The payment that should be processed\n :type payment: Payment\n \"\"\"\n\n if not payment.processed:\n return\n\n try:\n entry = payment.registrations_entry\n except Entry.DoesNotExist:\n return\n\n if entry.status != Entry.STATUS_ACCEPTED:\n return\n\n member = None\n\n try:\n registration = entry.registration\n # Create user and member\n member = _create_member_from_registration(registration)\n except Registration.DoesNotExist:\n try:\n # Get member from renewal\n renewal = entry.renewal\n member = renewal.member\n # Send email of payment confirmation for renewal,\n # not needed for registration since a new member already\n # gets the welcome email\n emails.send_renewal_complete_message(entry.renewal)\n except Renewal.DoesNotExist:\n pass\n\n # If member was retrieved, then create a new membership\n if member is not None:\n Payment.objects.filter(pk=payment.pk).update(paid_by=member)\n membership = _create_membership_from_entry(entry, member)\n entry.membership = membership\n entry.status = Entry.STATUS_COMPLETED\n entry.save()\n\n\ndef execute_data_minimisation(dry_run=False):\n \"\"\"\n Delete completed or rejected registrations that were modified\n at least 31 days ago\n\n :param dry_run: does not really remove data if True\n :return: number of removed registrations\n \"\"\"\n deletion_period = timezone.now() - timezone.timedelta(days=31)\n objects = Entry.objects.filter(\n (Q(status=Entry.STATUS_COMPLETED) | Q(status=Entry.STATUS_REJECTED))\n & Q(updated_at__lt=deletion_period)\n )\n\n if dry_run:\n return objects.count()\n return objects.delete()[0]\n", "path": "website/registrations/services.py" } ]
diff --git a/website/registrations/services.py b/website/registrations/services.py index b06b2e09b..8eff2ba46 100644 --- a/website/registrations/services.py +++ b/website/registrations/services.py @@ -288,7 +288,7 @@ def _create_member_from_registration(registration: Registration) -> Member: # Create user user = get_user_model().objects.create_user( - username=registration.username, + username=registration.username.lower(), email=registration.email, password=password, first_name=registration.first_name, diff --git a/website/registrations/tests/test_services.py b/website/registrations/tests/test_services.py index 1201bc683..ec3b77dee 100644 --- a/website/registrations/tests/test_services.py +++ b/website/registrations/tests/test_services.py @@ -338,7 +338,9 @@ def test_create_payment_for_entry(self): @mock.patch("registrations.services.check_unique_user") def test_create_member_from_registration(self, check_unique_user): - self.e1.username = "jdoe" + # We use capitalisation here because we want + # to test if the username is lowercased + self.e1.username = "JDoe" self.e1.save() check_unique_user.return_value = False
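As a follow-up to the test tweak in the diff above, here is a hypothetical, dependency-free sketch of the same property. All names are illustrative; the real test lives in `website/registrations/tests/test_services.py` and exercises the full service:

```python
class FakeRegistration:
    """Stand-in for the Registration model, for illustration only."""
    username = "JDoe"

def username_for_new_user(registration) -> str:
    # Mirrors the one-line change in _create_member_from_registration().
    return registration.username.lower()

def test_manual_username_is_lowercased():
    assert username_for_new_user(FakeRegistration()) == "jdoe"

test_manual_username_is_lowercased()
```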
Pycord-Development__pycord-607
@commands.bot_has_permissions always fails for moderate_members

### Summary
The `@commands.bot_has_permissions` decorator (and possibly other similar decorators such as `has_permissions`) always results in `CheckFailure` if evaluating `moderate_members`.

### Reproduction Steps
Use the example code below in a bot to see the issue happen.

### Minimal Reproducible Code
```python
@commands.command()
@commands.bot_has_permissions(moderate_members=True)
@commands.guild_only()
async def timeout_test(self, ctx):
    await ctx.send("haha permissions work as intended!")
```

### Expected Results
If the bot has either `Administrator` or `Moderate Members`, the message is sent.

### Actual Results
The command always results in `CheckFailure`, regardless of the bot's permissions.

![image](https://user-images.githubusercontent.com/46067571/146967608-05032f21-1ab2-484a-b477-b0fe03d974a1.png)

### Intents
```
discord.Intents(guilds = True, members = True, bans = True, emojis = True, messages = True, invites = True, reactions = True)
```

### System Information
```
- Python v3.10.1-final
- py-cord v2.0.0-alpha
- py-cord pkg_resources: v2.0.0a4580+g1d65214e
- aiohttp v3.7.4.post0
- system info: Linux 5.15.10-zen1-1-zen #1 ZEN SMP PREEMPT Fri, 17 Dec 2021 11:17:39 +0000
```

### Checklist
- [X] I have searched the open issues for duplicates.
- [X] I have shown the entire traceback, if possible.
- [X] I have removed my token from display, if visible.

### Additional Context
The timeout functionality works correctly if the bot has the permission to time out members; it is only the check that fails.
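A minimal, pure-Python sketch of the bit arithmetic behind this report, assuming (as the before/after files below suggest) that the failing check ultimately resolves an administrator bot to `Permissions.all()`: the old `all()` mask only covers bits 0–38, while `moderate_members` is bit 40, so the new permission never appears set. The constants are copied from the files; the reading of the check path is an assumption, not something stated in the report:

```python
# Copied from the before_files: Permissions.all() used a literal 39-bit mask.
OLD_ALL_MASK = 0b111111111111111111111111111111111111111   # bits 0..38
MODERATE_MEMBERS = 1 << 40                                  # the new permission bit

assert OLD_ALL_MASK & MODERATE_MEMBERS == 0   # "all permissions" reports the bit as unset

# The after_files switch to cls(-1): in Python, -1 has every bit set,
# so any newly added flag is automatically included.
NEW_ALL_MASK = -1
assert NEW_ALL_MASK & MODERATE_MEMBERS != 0
```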
[ { "content": "\"\"\"\nThe MIT License (MIT)\n\nCopyright (c) 2015-2021 Rapptz\nCopyright (c) 2021-present Pycord Development\n\nPermission is hereby granted, free of charge, to any person obtaining a\ncopy of this software and associated documentation files (the \"Software\"),\nto deal in the Software without restriction, including without limitation\nthe rights to use, copy, modify, merge, publish, distribute, sublicense,\nand/or sell copies of the Software, and to permit persons to whom the\nSoftware is furnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in\nall copies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS\nOR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\nFROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER\nDEALINGS IN THE SOFTWARE.\n\"\"\"\n\nfrom __future__ import annotations\n\nfrom typing import Callable, Any, ClassVar, Dict, Iterator, Set, TYPE_CHECKING, Tuple, Type, TypeVar, Optional\nfrom .flags import BaseFlags, flag_value, fill_with_flags, alias_flag_value\n\n__all__ = (\n 'Permissions',\n 'PermissionOverwrite',\n)\n\n# A permission alias works like a regular flag but is marked\n# So the PermissionOverwrite knows to work with it\nclass permission_alias(alias_flag_value):\n alias: str\n\n\ndef make_permission_alias(alias: str) -> Callable[[Callable[[Any], int]], permission_alias]:\n def decorator(func: Callable[[Any], int]) -> permission_alias:\n ret = permission_alias(func)\n ret.alias = alias\n return ret\n\n return decorator\n\nP = TypeVar('P', bound='Permissions')\n\n@fill_with_flags()\nclass Permissions(BaseFlags):\n \"\"\"Wraps up the Discord permission value.\n\n The properties provided are two way. You can set and retrieve individual\n bits using the properties as if they were regular bools. This allows\n you to edit permissions.\n\n .. versionchanged:: 1.3\n You can now use keyword arguments to initialize :class:`Permissions`\n similar to :meth:`update`.\n\n .. container:: operations\n\n .. describe:: x == y\n\n Checks if two permissions are equal.\n .. describe:: x != y\n\n Checks if two permissions are not equal.\n .. describe:: x <= y\n\n Checks if a permission is a subset of another permission.\n .. describe:: x >= y\n\n Checks if a permission is a superset of another permission.\n .. describe:: x < y\n\n Checks if a permission is a strict subset of another permission.\n .. describe:: x > y\n\n Checks if a permission is a strict superset of another permission.\n .. describe:: hash(x)\n\n Return the permission's hash.\n .. describe:: iter(x)\n\n Returns an iterator of ``(perm, value)`` pairs. This allows it\n to be, for example, constructed as a dict or a list of pairs.\n Note that aliases are not shown.\n\n Attributes\n -----------\n value: :class:`int`\n The raw value. This value is a bit array field of a 53-bit integer\n representing the currently available permissions. 
You should query\n permissions via the properties rather than using this raw value.\n \"\"\"\n\n __slots__ = ()\n\n def __init__(self, permissions: int = 0, **kwargs: bool):\n if not isinstance(permissions, int):\n raise TypeError(f'Expected int parameter, received {permissions.__class__.__name__} instead.')\n\n self.value = permissions\n for key, value in kwargs.items():\n if key not in self.VALID_FLAGS:\n raise TypeError(f'{key!r} is not a valid permission name.')\n setattr(self, key, value)\n\n def is_subset(self, other: Permissions) -> bool:\n \"\"\"Returns ``True`` if self has the same or fewer permissions as other.\"\"\"\n if isinstance(other, Permissions):\n return (self.value & other.value) == self.value\n else:\n raise TypeError(f\"cannot compare {self.__class__.__name__} with {other.__class__.__name__}\")\n\n def is_superset(self, other: Permissions) -> bool:\n \"\"\"Returns ``True`` if self has the same or more permissions as other.\"\"\"\n if isinstance(other, Permissions):\n return (self.value | other.value) == self.value\n else:\n raise TypeError(f\"cannot compare {self.__class__.__name__} with {other.__class__.__name__}\")\n\n def is_strict_subset(self, other: Permissions) -> bool:\n \"\"\"Returns ``True`` if the permissions on other are a strict subset of those on self.\"\"\"\n return self.is_subset(other) and self != other\n\n def is_strict_superset(self, other: Permissions) -> bool:\n \"\"\"Returns ``True`` if the permissions on other are a strict superset of those on self.\"\"\"\n return self.is_superset(other) and self != other\n\n __le__ = is_subset\n __ge__ = is_superset\n __lt__ = is_strict_subset\n __gt__ = is_strict_superset\n\n @classmethod\n def none(cls: Type[P]) -> P:\n \"\"\"A factory method that creates a :class:`Permissions` with all\n permissions set to ``False``.\"\"\"\n return cls(0)\n\n @classmethod\n def all(cls: Type[P]) -> P:\n \"\"\"A factory method that creates a :class:`Permissions` with all\n permissions set to ``True``.\n \"\"\"\n return cls(0b111111111111111111111111111111111111111)\n\n @classmethod\n def all_channel(cls: Type[P]) -> P:\n \"\"\"A :class:`Permissions` with all channel-specific permissions set to\n ``True`` and the guild-specific ones set to ``False``. The guild-specific\n permissions are currently:\n\n - :attr:`manage_emojis`\n - :attr:`view_audit_log`\n - :attr:`view_guild_insights`\n - :attr:`manage_guild`\n - :attr:`change_nickname`\n - :attr:`manage_nicknames`\n - :attr:`kick_members`\n - :attr:`ban_members`\n - :attr:`administrator`\n\n .. versionchanged:: 1.7\n Added :attr:`stream`, :attr:`priority_speaker` and :attr:`use_slash_commands` permissions.\n\n .. versionchanged:: 2.0\n Added :attr:`create_public_threads`, :attr:`create_private_threads`, :attr:`manage_threads`,\n :attr:`use_external_stickers`, :attr:`send_messages_in_threads` and\n :attr:`request_to_speak` permissions.\n \"\"\"\n return cls(0b111110110110011111101111111111101010001)\n\n @classmethod\n def general(cls: Type[P]) -> P:\n \"\"\"A factory method that creates a :class:`Permissions` with all\n \"General\" permissions from the official Discord UI set to ``True``.\n\n .. 
versionchanged:: 1.7\n Permission :attr:`read_messages` is now included in the general permissions, but\n permissions :attr:`administrator`, :attr:`create_instant_invite`, :attr:`kick_members`,\n :attr:`ban_members`, :attr:`change_nickname` and :attr:`manage_nicknames` are\n no longer part of the general permissions.\n \"\"\"\n return cls(0b01110000000010000000010010110000)\n\n @classmethod\n def membership(cls: Type[P]) -> P:\n \"\"\"A factory method that creates a :class:`Permissions` with all\n \"Membership\" permissions from the official Discord UI set to ``True``.\n\n .. versionadded:: 1.7\n \"\"\"\n return cls(0b00001100000000000000000000000111)\n\n @classmethod\n def text(cls: Type[P]) -> P:\n \"\"\"A factory method that creates a :class:`Permissions` with all\n \"Text\" permissions from the official Discord UI set to ``True``.\n\n .. versionchanged:: 1.7\n Permission :attr:`read_messages` is no longer part of the text permissions.\n Added :attr:`use_slash_commands` permission.\n\n .. versionchanged:: 2.0\n Added :attr:`create_public_threads`, :attr:`create_private_threads`, :attr:`manage_threads`,\n :attr:`send_messages_in_threads` and :attr:`use_external_stickers` permissions.\n \"\"\"\n return cls(0b111110010000000000001111111100001000000)\n\n @classmethod\n def voice(cls: Type[P]) -> P:\n \"\"\"A factory method that creates a :class:`Permissions` with all\n \"Voice\" permissions from the official Discord UI set to ``True``.\"\"\"\n return cls(0b00000011111100000000001100000000)\n\n @classmethod\n def stage(cls: Type[P]) -> P:\n \"\"\"A factory method that creates a :class:`Permissions` with all\n \"Stage Channel\" permissions from the official Discord UI set to ``True``.\n\n .. versionadded:: 1.7\n \"\"\"\n return cls(1 << 32)\n\n @classmethod\n def stage_moderator(cls: Type[P]) -> P:\n \"\"\"A factory method that creates a :class:`Permissions` with all\n \"Stage Moderator\" permissions from the official Discord UI set to ``True``.\n\n .. versionadded:: 1.7\n \"\"\"\n return cls(0b100000001010000000000000000000000)\n\n @classmethod\n def advanced(cls: Type[P]) -> P:\n \"\"\"A factory method that creates a :class:`Permissions` with all\n \"Advanced\" permissions from the official Discord UI set to ``True``.\n\n .. versionadded:: 1.7\n \"\"\"\n return cls(1 << 3)\n\n def update(self, **kwargs: bool) -> None:\n r\"\"\"Bulk updates this permission object.\n\n Allows you to set multiple attributes by using keyword\n arguments. The names must be equivalent to the properties\n listed. Extraneous key/value pairs will be silently ignored.\n\n Parameters\n ------------\n \\*\\*kwargs\n A list of key/value pairs to bulk update permissions with.\n \"\"\"\n for key, value in kwargs.items():\n if key in self.VALID_FLAGS:\n setattr(self, key, value)\n\n def handle_overwrite(self, allow: int, deny: int) -> None:\n # Basically this is what's happening here.\n # We have an original bit array, e.g. 1010\n # Then we have another bit array that is 'denied', e.g. 1111\n # And then we have the last one which is 'allowed', e.g. 
0101\n # We want original OP denied to end up resulting in\n # whatever is in denied to be set to 0.\n # So 1010 OP 1111 -> 0000\n # Then we take this value and look at the allowed values.\n # And whatever is allowed is set to 1.\n # So 0000 OP2 0101 -> 0101\n # The OP is base & ~denied.\n # The OP2 is base | allowed.\n self.value = (self.value & ~deny) | allow\n\n @flag_value\n def create_instant_invite(self) -> int:\n \"\"\":class:`bool`: Returns ``True`` if the user can create instant invites.\"\"\"\n return 1 << 0\n\n @flag_value\n def kick_members(self) -> int:\n \"\"\":class:`bool`: Returns ``True`` if the user can kick users from the guild.\"\"\"\n return 1 << 1\n\n @flag_value\n def ban_members(self) -> int:\n \"\"\":class:`bool`: Returns ``True`` if a user can ban users from the guild.\"\"\"\n return 1 << 2\n\n @flag_value\n def administrator(self) -> int:\n \"\"\":class:`bool`: Returns ``True`` if a user is an administrator. This role overrides all other permissions.\n\n This also bypasses all channel-specific overrides.\n \"\"\"\n return 1 << 3\n\n @flag_value\n def manage_channels(self) -> int:\n \"\"\":class:`bool`: Returns ``True`` if a user can edit, delete, or create channels in the guild.\n\n This also corresponds to the \"Manage Channel\" channel-specific override.\"\"\"\n return 1 << 4\n\n @flag_value\n def manage_guild(self) -> int:\n \"\"\":class:`bool`: Returns ``True`` if a user can edit guild properties.\"\"\"\n return 1 << 5\n\n @flag_value\n def add_reactions(self) -> int:\n \"\"\":class:`bool`: Returns ``True`` if a user can add reactions to messages.\"\"\"\n return 1 << 6\n\n @flag_value\n def view_audit_log(self) -> int:\n \"\"\":class:`bool`: Returns ``True`` if a user can view the guild's audit log.\"\"\"\n return 1 << 7\n\n @flag_value\n def priority_speaker(self) -> int:\n \"\"\":class:`bool`: Returns ``True`` if a user can be more easily heard while talking.\"\"\"\n return 1 << 8\n\n @flag_value\n def stream(self) -> int:\n \"\"\":class:`bool`: Returns ``True`` if a user can stream in a voice channel.\"\"\"\n return 1 << 9\n\n @flag_value\n def read_messages(self) -> int:\n \"\"\":class:`bool`: Returns ``True`` if a user can read messages from all or specific text channels.\"\"\"\n return 1 << 10\n\n @make_permission_alias('read_messages')\n def view_channel(self) -> int:\n \"\"\":class:`bool`: An alias for :attr:`read_messages`.\n\n .. versionadded:: 1.3\n \"\"\"\n return 1 << 10\n\n @flag_value\n def send_messages(self) -> int:\n \"\"\":class:`bool`: Returns ``True`` if a user can send messages from all or specific text channels.\"\"\"\n return 1 << 11\n\n @flag_value\n def send_tts_messages(self) -> int:\n \"\"\":class:`bool`: Returns ``True`` if a user can send TTS messages from all or specific text channels.\"\"\"\n return 1 << 12\n\n @flag_value\n def manage_messages(self) -> int:\n \"\"\":class:`bool`: Returns ``True`` if a user can delete or pin messages in a text channel.\n\n .. 
note::\n\n Note that there are currently no ways to edit other people's messages.\n \"\"\"\n return 1 << 13\n\n @flag_value\n def embed_links(self) -> int:\n \"\"\":class:`bool`: Returns ``True`` if a user's messages will automatically be embedded by Discord.\"\"\"\n return 1 << 14\n\n @flag_value\n def attach_files(self) -> int:\n \"\"\":class:`bool`: Returns ``True`` if a user can send files in their messages.\"\"\"\n return 1 << 15\n\n @flag_value\n def read_message_history(self) -> int:\n \"\"\":class:`bool`: Returns ``True`` if a user can read a text channel's previous messages.\"\"\"\n return 1 << 16\n\n @flag_value\n def mention_everyone(self) -> int:\n \"\"\":class:`bool`: Returns ``True`` if a user's @everyone or @here will mention everyone in the text channel.\"\"\"\n return 1 << 17\n\n @flag_value\n def external_emojis(self) -> int:\n \"\"\":class:`bool`: Returns ``True`` if a user can use emojis from other guilds.\"\"\"\n return 1 << 18\n\n @make_permission_alias('external_emojis')\n def use_external_emojis(self) -> int:\n \"\"\":class:`bool`: An alias for :attr:`external_emojis`.\n\n .. versionadded:: 1.3\n \"\"\"\n return 1 << 18\n\n @flag_value\n def view_guild_insights(self) -> int:\n \"\"\":class:`bool`: Returns ``True`` if a user can view the guild's insights.\n\n .. versionadded:: 1.3\n \"\"\"\n return 1 << 19\n\n @flag_value\n def connect(self) -> int:\n \"\"\":class:`bool`: Returns ``True`` if a user can connect to a voice channel.\"\"\"\n return 1 << 20\n\n @flag_value\n def speak(self) -> int:\n \"\"\":class:`bool`: Returns ``True`` if a user can speak in a voice channel.\"\"\"\n return 1 << 21\n\n @flag_value\n def mute_members(self) -> int:\n \"\"\":class:`bool`: Returns ``True`` if a user can mute other users.\"\"\"\n return 1 << 22\n\n @flag_value\n def deafen_members(self) -> int:\n \"\"\":class:`bool`: Returns ``True`` if a user can deafen other users.\"\"\"\n return 1 << 23\n\n @flag_value\n def move_members(self) -> int:\n \"\"\":class:`bool`: Returns ``True`` if a user can move users between other voice channels.\"\"\"\n return 1 << 24\n\n @flag_value\n def use_voice_activation(self) -> int:\n \"\"\":class:`bool`: Returns ``True`` if a user can use voice activation in voice channels.\"\"\"\n return 1 << 25\n\n @flag_value\n def change_nickname(self) -> int:\n \"\"\":class:`bool`: Returns ``True`` if a user can change their nickname in the guild.\"\"\"\n return 1 << 26\n\n @flag_value\n def manage_nicknames(self) -> int:\n \"\"\":class:`bool`: Returns ``True`` if a user can change other user's nickname in the guild.\"\"\"\n return 1 << 27\n\n @flag_value\n def manage_roles(self) -> int:\n \"\"\":class:`bool`: Returns ``True`` if a user can create or edit roles less than their role's position.\n\n This also corresponds to the \"Manage Permissions\" channel-specific override.\n \"\"\"\n return 1 << 28\n\n @make_permission_alias('manage_roles')\n def manage_permissions(self) -> int:\n \"\"\":class:`bool`: An alias for :attr:`manage_roles`.\n\n .. versionadded:: 1.3\n \"\"\"\n return 1 << 28\n\n @flag_value\n def manage_webhooks(self) -> int:\n \"\"\":class:`bool`: Returns ``True`` if a user can create, edit, or delete webhooks.\"\"\"\n return 1 << 29\n\n @flag_value\n def manage_emojis(self) -> int:\n \"\"\":class:`bool`: Returns ``True`` if a user can create, edit, or delete emojis.\"\"\"\n return 1 << 30\n\n @make_permission_alias('manage_emojis')\n def manage_emojis_and_stickers(self) -> int:\n \"\"\":class:`bool`: An alias for :attr:`manage_emojis`.\n\n .. 
versionadded:: 2.0\n \"\"\"\n return 1 << 30\n\n @flag_value\n def use_slash_commands(self) -> int:\n \"\"\":class:`bool`: Returns ``True`` if a user can use slash commands.\n\n .. versionadded:: 1.7\n \"\"\"\n return 1 << 31\n\n @flag_value\n def request_to_speak(self) -> int:\n \"\"\":class:`bool`: Returns ``True`` if a user can request to speak in a stage channel.\n\n .. versionadded:: 1.7\n \"\"\"\n return 1 << 32\n\n @flag_value\n def manage_events(self) -> int:\n \"\"\":class:`bool`: Returns ``True`` if a user can manage guild events.\n\n .. versionadded:: 2.0\n \"\"\"\n return 1 << 33\n\n @flag_value\n def manage_threads(self) -> int:\n \"\"\":class:`bool`: Returns ``True`` if a user can manage threads.\n\n .. versionadded:: 2.0\n \"\"\"\n return 1 << 34\n\n @flag_value\n def create_public_threads(self) -> int:\n \"\"\":class:`bool`: Returns ``True`` if a user can create public threads.\n\n .. versionadded:: 2.0\n \"\"\"\n return 1 << 35\n\n @flag_value\n def create_private_threads(self) -> int:\n \"\"\":class:`bool`: Returns ``True`` if a user can create private threads.\n\n .. versionadded:: 2.0\n \"\"\"\n return 1 << 36\n\n @flag_value\n def external_stickers(self) -> int:\n \"\"\":class:`bool`: Returns ``True`` if a user can use stickers from other guilds.\n\n .. versionadded:: 2.0\n \"\"\"\n return 1 << 37\n\n @make_permission_alias('external_stickers')\n def use_external_stickers(self) -> int:\n \"\"\":class:`bool`: An alias for :attr:`external_stickers`.\n\n .. versionadded:: 2.0\n \"\"\"\n return 1 << 37\n\n @flag_value\n def send_messages_in_threads(self) -> int:\n \"\"\":class:`bool`: Returns ``True`` if a user can send messages in threads.\n\n .. versionadded:: 2.0\n \"\"\"\n return 1 << 38\n \n @flag_value\n def start_embedded_activities(self) -> int:\n \"\"\":class:`bool`: Returns ``True`` if a user can launch an activity flagged 'EMBEDDED' in a voice channel.\n\n .. versionadded:: 2.0\n \"\"\"\n return 1 << 39\n \n @flag_value\n def moderate_members(self) -> int:\n \"\"\":class:`bool`: Returns ``True`` if a user can moderate members (timeout).\n\n .. versionadded:: 2.0\n \"\"\"\n return 1 << 40\n\nPO = TypeVar('PO', bound='PermissionOverwrite')\n\ndef _augment_from_permissions(cls):\n cls.VALID_NAMES = set(Permissions.VALID_FLAGS)\n aliases = set()\n\n # make descriptors for all the valid names and aliases\n for name, value in Permissions.__dict__.items():\n if isinstance(value, permission_alias):\n key = value.alias\n aliases.add(name)\n elif isinstance(value, flag_value):\n key = name\n else:\n continue\n\n # god bless Python\n def getter(self, x=key):\n return self._values.get(x)\n\n def setter(self, value, x=key):\n self._set(x, value)\n\n prop = property(getter, setter)\n setattr(cls, name, prop)\n\n cls.PURE_FLAGS = cls.VALID_NAMES - aliases\n return cls\n\n\n@_augment_from_permissions\nclass PermissionOverwrite:\n r\"\"\"A type that is used to represent a channel specific permission.\n\n Unlike a regular :class:`Permissions`\\, the default value of a\n permission is equivalent to ``None`` and not ``False``. Setting\n a value to ``False`` is **explicitly** denying that permission,\n while setting a value to ``True`` is **explicitly** allowing\n that permission.\n\n The values supported by this are the same as :class:`Permissions`\n with the added possibility of it being set to ``None``.\n\n .. container:: operations\n\n .. describe:: x == y\n\n Checks if two overwrites are equal.\n .. describe:: x != y\n\n Checks if two overwrites are not equal.\n .. 
describe:: iter(x)\n\n Returns an iterator of ``(perm, value)`` pairs. This allows it\n to be, for example, constructed as a dict or a list of pairs.\n Note that aliases are not shown.\n\n Parameters\n -----------\n \\*\\*kwargs\n Set the value of permissions by their name.\n \"\"\"\n\n __slots__ = ('_values',)\n\n if TYPE_CHECKING:\n VALID_NAMES: ClassVar[Set[str]]\n PURE_FLAGS: ClassVar[Set[str]]\n # I wish I didn't have to do this\n create_instant_invite: Optional[bool]\n kick_members: Optional[bool]\n ban_members: Optional[bool]\n administrator: Optional[bool]\n manage_channels: Optional[bool]\n manage_guild: Optional[bool]\n add_reactions: Optional[bool]\n view_audit_log: Optional[bool]\n priority_speaker: Optional[bool]\n stream: Optional[bool]\n read_messages: Optional[bool]\n view_channel: Optional[bool]\n send_messages: Optional[bool]\n send_tts_messages: Optional[bool]\n manage_messages: Optional[bool]\n embed_links: Optional[bool]\n attach_files: Optional[bool]\n read_message_history: Optional[bool]\n mention_everyone: Optional[bool]\n external_emojis: Optional[bool]\n use_external_emojis: Optional[bool]\n view_guild_insights: Optional[bool]\n connect: Optional[bool]\n speak: Optional[bool]\n mute_members: Optional[bool]\n deafen_members: Optional[bool]\n move_members: Optional[bool]\n use_voice_activation: Optional[bool]\n change_nickname: Optional[bool]\n manage_nicknames: Optional[bool]\n manage_roles: Optional[bool]\n manage_permissions: Optional[bool]\n manage_webhooks: Optional[bool]\n manage_emojis: Optional[bool]\n manage_emojis_and_stickers: Optional[bool]\n use_slash_commands: Optional[bool]\n request_to_speak: Optional[bool]\n manage_events: Optional[bool]\n manage_threads: Optional[bool]\n create_public_threads: Optional[bool]\n create_private_threads: Optional[bool]\n send_messages_in_threads: Optional[bool]\n external_stickers: Optional[bool]\n use_external_stickers: Optional[bool]\n start_embedded_activities: Optional[bool]\n moderate_members: Optional[bool]\n\n def __init__(self, **kwargs: Optional[bool]):\n self._values: Dict[str, Optional[bool]] = {}\n\n for key, value in kwargs.items():\n if key not in self.VALID_NAMES:\n raise ValueError(f'no permission called {key}.')\n\n setattr(self, key, value)\n\n def __eq__(self, other: Any) -> bool:\n return isinstance(other, PermissionOverwrite) and self._values == other._values\n\n def _set(self, key: str, value: Optional[bool]) -> None:\n if value not in (True, None, False):\n raise TypeError(f'Expected bool or NoneType, received {value.__class__.__name__}')\n\n if value is None:\n self._values.pop(key, None)\n else:\n self._values[key] = value\n\n def pair(self) -> Tuple[Permissions, Permissions]:\n \"\"\"Tuple[:class:`Permissions`, :class:`Permissions`]: Returns the (allow, deny) pair from this overwrite.\"\"\"\n\n allow = Permissions.none()\n deny = Permissions.none()\n\n for key, value in self._values.items():\n if value is True:\n setattr(allow, key, True)\n elif value is False:\n setattr(deny, key, True)\n\n return allow, deny\n\n @classmethod\n def from_pair(cls: Type[PO], allow: Permissions, deny: Permissions) -> PO:\n \"\"\"Creates an overwrite from an allow/deny pair of :class:`Permissions`.\"\"\"\n ret = cls()\n for key, value in allow:\n if value is True:\n setattr(ret, key, True)\n\n for key, value in deny:\n if value is True:\n setattr(ret, key, False)\n\n return ret\n\n def is_empty(self) -> bool:\n \"\"\"Checks if the permission overwrite is currently empty.\n\n An empty permission overwrite is one 
that has no overwrites set\n to ``True`` or ``False``.\n\n Returns\n -------\n :class:`bool`\n Indicates if the overwrite is empty.\n \"\"\"\n return len(self._values) == 0\n\n def update(self, **kwargs: bool) -> None:\n r\"\"\"Bulk updates this permission overwrite object.\n\n Allows you to set multiple attributes by using keyword\n arguments. The names must be equivalent to the properties\n listed. Extraneous key/value pairs will be silently ignored.\n\n Parameters\n ------------\n \\*\\*kwargs\n A list of key/value pairs to bulk update with.\n \"\"\"\n for key, value in kwargs.items():\n if key not in self.VALID_NAMES:\n continue\n\n setattr(self, key, value)\n\n def __iter__(self) -> Iterator[Tuple[str, Optional[bool]]]:\n for key in self.PURE_FLAGS:\n yield key, self._values.get(key)\n", "path": "discord/permissions.py" } ]
[ { "content": "\"\"\"\nThe MIT License (MIT)\n\nCopyright (c) 2015-2021 Rapptz\nCopyright (c) 2021-present Pycord Development\n\nPermission is hereby granted, free of charge, to any person obtaining a\ncopy of this software and associated documentation files (the \"Software\"),\nto deal in the Software without restriction, including without limitation\nthe rights to use, copy, modify, merge, publish, distribute, sublicense,\nand/or sell copies of the Software, and to permit persons to whom the\nSoftware is furnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in\nall copies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS\nOR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\nFROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER\nDEALINGS IN THE SOFTWARE.\n\"\"\"\n\nfrom __future__ import annotations\n\nfrom typing import Callable, Any, ClassVar, Dict, Iterator, Set, TYPE_CHECKING, Tuple, Type, TypeVar, Optional\nfrom .flags import BaseFlags, flag_value, fill_with_flags, alias_flag_value\n\n__all__ = (\n 'Permissions',\n 'PermissionOverwrite',\n)\n\n# A permission alias works like a regular flag but is marked\n# So the PermissionOverwrite knows to work with it\nclass permission_alias(alias_flag_value):\n alias: str\n\n\ndef make_permission_alias(alias: str) -> Callable[[Callable[[Any], int]], permission_alias]:\n def decorator(func: Callable[[Any], int]) -> permission_alias:\n ret = permission_alias(func)\n ret.alias = alias\n return ret\n\n return decorator\n\nP = TypeVar('P', bound='Permissions')\n\n@fill_with_flags()\nclass Permissions(BaseFlags):\n \"\"\"Wraps up the Discord permission value.\n\n The properties provided are two way. You can set and retrieve individual\n bits using the properties as if they were regular bools. This allows\n you to edit permissions.\n\n .. versionchanged:: 1.3\n You can now use keyword arguments to initialize :class:`Permissions`\n similar to :meth:`update`.\n\n .. container:: operations\n\n .. describe:: x == y\n\n Checks if two permissions are equal.\n .. describe:: x != y\n\n Checks if two permissions are not equal.\n .. describe:: x <= y\n\n Checks if a permission is a subset of another permission.\n .. describe:: x >= y\n\n Checks if a permission is a superset of another permission.\n .. describe:: x < y\n\n Checks if a permission is a strict subset of another permission.\n .. describe:: x > y\n\n Checks if a permission is a strict superset of another permission.\n .. describe:: hash(x)\n\n Return the permission's hash.\n .. describe:: iter(x)\n\n Returns an iterator of ``(perm, value)`` pairs. This allows it\n to be, for example, constructed as a dict or a list of pairs.\n Note that aliases are not shown.\n\n Attributes\n -----------\n value: :class:`int`\n The raw value. This value is a bit array field of a 53-bit integer\n representing the currently available permissions. 
You should query\n permissions via the properties rather than using this raw value.\n \"\"\"\n\n __slots__ = ()\n\n def __init__(self, permissions: int = 0, **kwargs: bool):\n if not isinstance(permissions, int):\n raise TypeError(f'Expected int parameter, received {permissions.__class__.__name__} instead.')\n\n self.value = permissions\n for key, value in kwargs.items():\n if key not in self.VALID_FLAGS:\n raise TypeError(f'{key!r} is not a valid permission name.')\n setattr(self, key, value)\n\n def is_subset(self, other: Permissions) -> bool:\n \"\"\"Returns ``True`` if self has the same or fewer permissions as other.\"\"\"\n if isinstance(other, Permissions):\n return (self.value & other.value) == self.value\n else:\n raise TypeError(f\"cannot compare {self.__class__.__name__} with {other.__class__.__name__}\")\n\n def is_superset(self, other: Permissions) -> bool:\n \"\"\"Returns ``True`` if self has the same or more permissions as other.\"\"\"\n if isinstance(other, Permissions):\n return (self.value | other.value) == self.value\n else:\n raise TypeError(f\"cannot compare {self.__class__.__name__} with {other.__class__.__name__}\")\n\n def is_strict_subset(self, other: Permissions) -> bool:\n \"\"\"Returns ``True`` if the permissions on other are a strict subset of those on self.\"\"\"\n return self.is_subset(other) and self != other\n\n def is_strict_superset(self, other: Permissions) -> bool:\n \"\"\"Returns ``True`` if the permissions on other are a strict superset of those on self.\"\"\"\n return self.is_superset(other) and self != other\n\n __le__ = is_subset\n __ge__ = is_superset\n __lt__ = is_strict_subset\n __gt__ = is_strict_superset\n\n @classmethod\n def none(cls: Type[P]) -> P:\n \"\"\"A factory method that creates a :class:`Permissions` with all\n permissions set to ``False``.\"\"\"\n return cls(0)\n\n @classmethod\n def all(cls: Type[P]) -> P:\n \"\"\"A factory method that creates a :class:`Permissions` with all\n permissions set to ``True``.\n \"\"\"\n return cls(-1)\n\n @classmethod\n def all_channel(cls: Type[P]) -> P:\n \"\"\"A :class:`Permissions` with all channel-specific permissions set to\n ``True`` and the guild-specific ones set to ``False``. The guild-specific\n permissions are currently:\n\n - :attr:`manage_emojis`\n - :attr:`view_audit_log`\n - :attr:`view_guild_insights`\n - :attr:`manage_guild`\n - :attr:`change_nickname`\n - :attr:`manage_nicknames`\n - :attr:`kick_members`\n - :attr:`ban_members`\n - :attr:`administrator`\n\n .. versionchanged:: 1.7\n Added :attr:`stream`, :attr:`priority_speaker` and :attr:`use_slash_commands` permissions.\n\n .. versionchanged:: 2.0\n Added :attr:`create_public_threads`, :attr:`create_private_threads`, :attr:`manage_threads`,\n :attr:`use_external_stickers`, :attr:`send_messages_in_threads` and\n :attr:`request_to_speak` permissions.\n \"\"\"\n return cls(0b111110110110011111101111111111101010001)\n\n @classmethod\n def general(cls: Type[P]) -> P:\n \"\"\"A factory method that creates a :class:`Permissions` with all\n \"General\" permissions from the official Discord UI set to ``True``.\n\n .. 
versionchanged:: 1.7\n Permission :attr:`read_messages` is now included in the general permissions, but\n permissions :attr:`administrator`, :attr:`create_instant_invite`, :attr:`kick_members`,\n :attr:`ban_members`, :attr:`change_nickname` and :attr:`manage_nicknames` are\n no longer part of the general permissions.\n \"\"\"\n return cls(0b01110000000010000000010010110000)\n\n @classmethod\n def membership(cls: Type[P]) -> P:\n \"\"\"A factory method that creates a :class:`Permissions` with all\n \"Membership\" permissions from the official Discord UI set to ``True``.\n\n .. versionadded:: 1.7\n \"\"\"\n return cls(0b00001100000000000000000000000111)\n\n @classmethod\n def text(cls: Type[P]) -> P:\n \"\"\"A factory method that creates a :class:`Permissions` with all\n \"Text\" permissions from the official Discord UI set to ``True``.\n\n .. versionchanged:: 1.7\n Permission :attr:`read_messages` is no longer part of the text permissions.\n Added :attr:`use_slash_commands` permission.\n\n .. versionchanged:: 2.0\n Added :attr:`create_public_threads`, :attr:`create_private_threads`, :attr:`manage_threads`,\n :attr:`send_messages_in_threads` and :attr:`use_external_stickers` permissions.\n \"\"\"\n return cls(0b111110010000000000001111111100001000000)\n\n @classmethod\n def voice(cls: Type[P]) -> P:\n \"\"\"A factory method that creates a :class:`Permissions` with all\n \"Voice\" permissions from the official Discord UI set to ``True``.\"\"\"\n return cls(0b00000011111100000000001100000000)\n\n @classmethod\n def stage(cls: Type[P]) -> P:\n \"\"\"A factory method that creates a :class:`Permissions` with all\n \"Stage Channel\" permissions from the official Discord UI set to ``True``.\n\n .. versionadded:: 1.7\n \"\"\"\n return cls(1 << 32)\n\n @classmethod\n def stage_moderator(cls: Type[P]) -> P:\n \"\"\"A factory method that creates a :class:`Permissions` with all\n \"Stage Moderator\" permissions from the official Discord UI set to ``True``.\n\n .. versionadded:: 1.7\n \"\"\"\n return cls(0b100000001010000000000000000000000)\n\n @classmethod\n def advanced(cls: Type[P]) -> P:\n \"\"\"A factory method that creates a :class:`Permissions` with all\n \"Advanced\" permissions from the official Discord UI set to ``True``.\n\n .. versionadded:: 1.7\n \"\"\"\n return cls(1 << 3)\n\n def update(self, **kwargs: bool) -> None:\n r\"\"\"Bulk updates this permission object.\n\n Allows you to set multiple attributes by using keyword\n arguments. The names must be equivalent to the properties\n listed. Extraneous key/value pairs will be silently ignored.\n\n Parameters\n ------------\n \\*\\*kwargs\n A list of key/value pairs to bulk update permissions with.\n \"\"\"\n for key, value in kwargs.items():\n if key in self.VALID_FLAGS:\n setattr(self, key, value)\n\n def handle_overwrite(self, allow: int, deny: int) -> None:\n # Basically this is what's happening here.\n # We have an original bit array, e.g. 1010\n # Then we have another bit array that is 'denied', e.g. 1111\n # And then we have the last one which is 'allowed', e.g. 
0101\n # We want original OP denied to end up resulting in\n # whatever is in denied to be set to 0.\n # So 1010 OP 1111 -> 0000\n # Then we take this value and look at the allowed values.\n # And whatever is allowed is set to 1.\n # So 0000 OP2 0101 -> 0101\n # The OP is base & ~denied.\n # The OP2 is base | allowed.\n self.value = (self.value & ~deny) | allow\n\n @flag_value\n def create_instant_invite(self) -> int:\n \"\"\":class:`bool`: Returns ``True`` if the user can create instant invites.\"\"\"\n return 1 << 0\n\n @flag_value\n def kick_members(self) -> int:\n \"\"\":class:`bool`: Returns ``True`` if the user can kick users from the guild.\"\"\"\n return 1 << 1\n\n @flag_value\n def ban_members(self) -> int:\n \"\"\":class:`bool`: Returns ``True`` if a user can ban users from the guild.\"\"\"\n return 1 << 2\n\n @flag_value\n def administrator(self) -> int:\n \"\"\":class:`bool`: Returns ``True`` if a user is an administrator. This role overrides all other permissions.\n\n This also bypasses all channel-specific overrides.\n \"\"\"\n return 1 << 3\n\n @flag_value\n def manage_channels(self) -> int:\n \"\"\":class:`bool`: Returns ``True`` if a user can edit, delete, or create channels in the guild.\n\n This also corresponds to the \"Manage Channel\" channel-specific override.\"\"\"\n return 1 << 4\n\n @flag_value\n def manage_guild(self) -> int:\n \"\"\":class:`bool`: Returns ``True`` if a user can edit guild properties.\"\"\"\n return 1 << 5\n\n @flag_value\n def add_reactions(self) -> int:\n \"\"\":class:`bool`: Returns ``True`` if a user can add reactions to messages.\"\"\"\n return 1 << 6\n\n @flag_value\n def view_audit_log(self) -> int:\n \"\"\":class:`bool`: Returns ``True`` if a user can view the guild's audit log.\"\"\"\n return 1 << 7\n\n @flag_value\n def priority_speaker(self) -> int:\n \"\"\":class:`bool`: Returns ``True`` if a user can be more easily heard while talking.\"\"\"\n return 1 << 8\n\n @flag_value\n def stream(self) -> int:\n \"\"\":class:`bool`: Returns ``True`` if a user can stream in a voice channel.\"\"\"\n return 1 << 9\n\n @flag_value\n def read_messages(self) -> int:\n \"\"\":class:`bool`: Returns ``True`` if a user can read messages from all or specific text channels.\"\"\"\n return 1 << 10\n\n @make_permission_alias('read_messages')\n def view_channel(self) -> int:\n \"\"\":class:`bool`: An alias for :attr:`read_messages`.\n\n .. versionadded:: 1.3\n \"\"\"\n return 1 << 10\n\n @flag_value\n def send_messages(self) -> int:\n \"\"\":class:`bool`: Returns ``True`` if a user can send messages from all or specific text channels.\"\"\"\n return 1 << 11\n\n @flag_value\n def send_tts_messages(self) -> int:\n \"\"\":class:`bool`: Returns ``True`` if a user can send TTS messages from all or specific text channels.\"\"\"\n return 1 << 12\n\n @flag_value\n def manage_messages(self) -> int:\n \"\"\":class:`bool`: Returns ``True`` if a user can delete or pin messages in a text channel.\n\n .. 
note::\n\n Note that there are currently no ways to edit other people's messages.\n \"\"\"\n return 1 << 13\n\n @flag_value\n def embed_links(self) -> int:\n \"\"\":class:`bool`: Returns ``True`` if a user's messages will automatically be embedded by Discord.\"\"\"\n return 1 << 14\n\n @flag_value\n def attach_files(self) -> int:\n \"\"\":class:`bool`: Returns ``True`` if a user can send files in their messages.\"\"\"\n return 1 << 15\n\n @flag_value\n def read_message_history(self) -> int:\n \"\"\":class:`bool`: Returns ``True`` if a user can read a text channel's previous messages.\"\"\"\n return 1 << 16\n\n @flag_value\n def mention_everyone(self) -> int:\n \"\"\":class:`bool`: Returns ``True`` if a user's @everyone or @here will mention everyone in the text channel.\"\"\"\n return 1 << 17\n\n @flag_value\n def external_emojis(self) -> int:\n \"\"\":class:`bool`: Returns ``True`` if a user can use emojis from other guilds.\"\"\"\n return 1 << 18\n\n @make_permission_alias('external_emojis')\n def use_external_emojis(self) -> int:\n \"\"\":class:`bool`: An alias for :attr:`external_emojis`.\n\n .. versionadded:: 1.3\n \"\"\"\n return 1 << 18\n\n @flag_value\n def view_guild_insights(self) -> int:\n \"\"\":class:`bool`: Returns ``True`` if a user can view the guild's insights.\n\n .. versionadded:: 1.3\n \"\"\"\n return 1 << 19\n\n @flag_value\n def connect(self) -> int:\n \"\"\":class:`bool`: Returns ``True`` if a user can connect to a voice channel.\"\"\"\n return 1 << 20\n\n @flag_value\n def speak(self) -> int:\n \"\"\":class:`bool`: Returns ``True`` if a user can speak in a voice channel.\"\"\"\n return 1 << 21\n\n @flag_value\n def mute_members(self) -> int:\n \"\"\":class:`bool`: Returns ``True`` if a user can mute other users.\"\"\"\n return 1 << 22\n\n @flag_value\n def deafen_members(self) -> int:\n \"\"\":class:`bool`: Returns ``True`` if a user can deafen other users.\"\"\"\n return 1 << 23\n\n @flag_value\n def move_members(self) -> int:\n \"\"\":class:`bool`: Returns ``True`` if a user can move users between other voice channels.\"\"\"\n return 1 << 24\n\n @flag_value\n def use_voice_activation(self) -> int:\n \"\"\":class:`bool`: Returns ``True`` if a user can use voice activation in voice channels.\"\"\"\n return 1 << 25\n\n @flag_value\n def change_nickname(self) -> int:\n \"\"\":class:`bool`: Returns ``True`` if a user can change their nickname in the guild.\"\"\"\n return 1 << 26\n\n @flag_value\n def manage_nicknames(self) -> int:\n \"\"\":class:`bool`: Returns ``True`` if a user can change other user's nickname in the guild.\"\"\"\n return 1 << 27\n\n @flag_value\n def manage_roles(self) -> int:\n \"\"\":class:`bool`: Returns ``True`` if a user can create or edit roles less than their role's position.\n\n This also corresponds to the \"Manage Permissions\" channel-specific override.\n \"\"\"\n return 1 << 28\n\n @make_permission_alias('manage_roles')\n def manage_permissions(self) -> int:\n \"\"\":class:`bool`: An alias for :attr:`manage_roles`.\n\n .. versionadded:: 1.3\n \"\"\"\n return 1 << 28\n\n @flag_value\n def manage_webhooks(self) -> int:\n \"\"\":class:`bool`: Returns ``True`` if a user can create, edit, or delete webhooks.\"\"\"\n return 1 << 29\n\n @flag_value\n def manage_emojis(self) -> int:\n \"\"\":class:`bool`: Returns ``True`` if a user can create, edit, or delete emojis.\"\"\"\n return 1 << 30\n\n @make_permission_alias('manage_emojis')\n def manage_emojis_and_stickers(self) -> int:\n \"\"\":class:`bool`: An alias for :attr:`manage_emojis`.\n\n .. 
versionadded:: 2.0\n \"\"\"\n return 1 << 30\n\n @flag_value\n def use_slash_commands(self) -> int:\n \"\"\":class:`bool`: Returns ``True`` if a user can use slash commands.\n\n .. versionadded:: 1.7\n \"\"\"\n return 1 << 31\n\n @flag_value\n def request_to_speak(self) -> int:\n \"\"\":class:`bool`: Returns ``True`` if a user can request to speak in a stage channel.\n\n .. versionadded:: 1.7\n \"\"\"\n return 1 << 32\n\n @flag_value\n def manage_events(self) -> int:\n \"\"\":class:`bool`: Returns ``True`` if a user can manage guild events.\n\n .. versionadded:: 2.0\n \"\"\"\n return 1 << 33\n\n @flag_value\n def manage_threads(self) -> int:\n \"\"\":class:`bool`: Returns ``True`` if a user can manage threads.\n\n .. versionadded:: 2.0\n \"\"\"\n return 1 << 34\n\n @flag_value\n def create_public_threads(self) -> int:\n \"\"\":class:`bool`: Returns ``True`` if a user can create public threads.\n\n .. versionadded:: 2.0\n \"\"\"\n return 1 << 35\n\n @flag_value\n def create_private_threads(self) -> int:\n \"\"\":class:`bool`: Returns ``True`` if a user can create private threads.\n\n .. versionadded:: 2.0\n \"\"\"\n return 1 << 36\n\n @flag_value\n def external_stickers(self) -> int:\n \"\"\":class:`bool`: Returns ``True`` if a user can use stickers from other guilds.\n\n .. versionadded:: 2.0\n \"\"\"\n return 1 << 37\n\n @make_permission_alias('external_stickers')\n def use_external_stickers(self) -> int:\n \"\"\":class:`bool`: An alias for :attr:`external_stickers`.\n\n .. versionadded:: 2.0\n \"\"\"\n return 1 << 37\n\n @flag_value\n def send_messages_in_threads(self) -> int:\n \"\"\":class:`bool`: Returns ``True`` if a user can send messages in threads.\n\n .. versionadded:: 2.0\n \"\"\"\n return 1 << 38\n \n @flag_value\n def start_embedded_activities(self) -> int:\n \"\"\":class:`bool`: Returns ``True`` if a user can launch an activity flagged 'EMBEDDED' in a voice channel.\n\n .. versionadded:: 2.0\n \"\"\"\n return 1 << 39\n \n @flag_value\n def moderate_members(self) -> int:\n \"\"\":class:`bool`: Returns ``True`` if a user can moderate members (timeout).\n\n .. versionadded:: 2.0\n \"\"\"\n return 1 << 40\n\nPO = TypeVar('PO', bound='PermissionOverwrite')\n\ndef _augment_from_permissions(cls):\n cls.VALID_NAMES = set(Permissions.VALID_FLAGS)\n aliases = set()\n\n # make descriptors for all the valid names and aliases\n for name, value in Permissions.__dict__.items():\n if isinstance(value, permission_alias):\n key = value.alias\n aliases.add(name)\n elif isinstance(value, flag_value):\n key = name\n else:\n continue\n\n # god bless Python\n def getter(self, x=key):\n return self._values.get(x)\n\n def setter(self, value, x=key):\n self._set(x, value)\n\n prop = property(getter, setter)\n setattr(cls, name, prop)\n\n cls.PURE_FLAGS = cls.VALID_NAMES - aliases\n return cls\n\n\n@_augment_from_permissions\nclass PermissionOverwrite:\n r\"\"\"A type that is used to represent a channel specific permission.\n\n Unlike a regular :class:`Permissions`\\, the default value of a\n permission is equivalent to ``None`` and not ``False``. Setting\n a value to ``False`` is **explicitly** denying that permission,\n while setting a value to ``True`` is **explicitly** allowing\n that permission.\n\n The values supported by this are the same as :class:`Permissions`\n with the added possibility of it being set to ``None``.\n\n .. container:: operations\n\n .. describe:: x == y\n\n Checks if two overwrites are equal.\n .. describe:: x != y\n\n Checks if two overwrites are not equal.\n .. 
describe:: iter(x)\n\n Returns an iterator of ``(perm, value)`` pairs. This allows it\n to be, for example, constructed as a dict or a list of pairs.\n Note that aliases are not shown.\n\n Parameters\n -----------\n \\*\\*kwargs\n Set the value of permissions by their name.\n \"\"\"\n\n __slots__ = ('_values',)\n\n if TYPE_CHECKING:\n VALID_NAMES: ClassVar[Set[str]]\n PURE_FLAGS: ClassVar[Set[str]]\n # I wish I didn't have to do this\n create_instant_invite: Optional[bool]\n kick_members: Optional[bool]\n ban_members: Optional[bool]\n administrator: Optional[bool]\n manage_channels: Optional[bool]\n manage_guild: Optional[bool]\n add_reactions: Optional[bool]\n view_audit_log: Optional[bool]\n priority_speaker: Optional[bool]\n stream: Optional[bool]\n read_messages: Optional[bool]\n view_channel: Optional[bool]\n send_messages: Optional[bool]\n send_tts_messages: Optional[bool]\n manage_messages: Optional[bool]\n embed_links: Optional[bool]\n attach_files: Optional[bool]\n read_message_history: Optional[bool]\n mention_everyone: Optional[bool]\n external_emojis: Optional[bool]\n use_external_emojis: Optional[bool]\n view_guild_insights: Optional[bool]\n connect: Optional[bool]\n speak: Optional[bool]\n mute_members: Optional[bool]\n deafen_members: Optional[bool]\n move_members: Optional[bool]\n use_voice_activation: Optional[bool]\n change_nickname: Optional[bool]\n manage_nicknames: Optional[bool]\n manage_roles: Optional[bool]\n manage_permissions: Optional[bool]\n manage_webhooks: Optional[bool]\n manage_emojis: Optional[bool]\n manage_emojis_and_stickers: Optional[bool]\n use_slash_commands: Optional[bool]\n request_to_speak: Optional[bool]\n manage_events: Optional[bool]\n manage_threads: Optional[bool]\n create_public_threads: Optional[bool]\n create_private_threads: Optional[bool]\n send_messages_in_threads: Optional[bool]\n external_stickers: Optional[bool]\n use_external_stickers: Optional[bool]\n start_embedded_activities: Optional[bool]\n moderate_members: Optional[bool]\n\n def __init__(self, **kwargs: Optional[bool]):\n self._values: Dict[str, Optional[bool]] = {}\n\n for key, value in kwargs.items():\n if key not in self.VALID_NAMES:\n raise ValueError(f'no permission called {key}.')\n\n setattr(self, key, value)\n\n def __eq__(self, other: Any) -> bool:\n return isinstance(other, PermissionOverwrite) and self._values == other._values\n\n def _set(self, key: str, value: Optional[bool]) -> None:\n if value not in (True, None, False):\n raise TypeError(f'Expected bool or NoneType, received {value.__class__.__name__}')\n\n if value is None:\n self._values.pop(key, None)\n else:\n self._values[key] = value\n\n def pair(self) -> Tuple[Permissions, Permissions]:\n \"\"\"Tuple[:class:`Permissions`, :class:`Permissions`]: Returns the (allow, deny) pair from this overwrite.\"\"\"\n\n allow = Permissions.none()\n deny = Permissions.none()\n\n for key, value in self._values.items():\n if value is True:\n setattr(allow, key, True)\n elif value is False:\n setattr(deny, key, True)\n\n return allow, deny\n\n @classmethod\n def from_pair(cls: Type[PO], allow: Permissions, deny: Permissions) -> PO:\n \"\"\"Creates an overwrite from an allow/deny pair of :class:`Permissions`.\"\"\"\n ret = cls()\n for key, value in allow:\n if value is True:\n setattr(ret, key, True)\n\n for key, value in deny:\n if value is True:\n setattr(ret, key, False)\n\n return ret\n\n def is_empty(self) -> bool:\n \"\"\"Checks if the permission overwrite is currently empty.\n\n An empty permission overwrite is one 
that has no overwrites set\n to ``True`` or ``False``.\n\n Returns\n -------\n :class:`bool`\n Indicates if the overwrite is empty.\n \"\"\"\n return len(self._values) == 0\n\n def update(self, **kwargs: bool) -> None:\n r\"\"\"Bulk updates this permission overwrite object.\n\n Allows you to set multiple attributes by using keyword\n arguments. The names must be equivalent to the properties\n listed. Extraneous key/value pairs will be silently ignored.\n\n Parameters\n ------------\n \\*\\*kwargs\n A list of key/value pairs to bulk update with.\n \"\"\"\n for key, value in kwargs.items():\n if key not in self.VALID_NAMES:\n continue\n\n setattr(self, key, value)\n\n def __iter__(self) -> Iterator[Tuple[str, Optional[bool]]]:\n for key in self.PURE_FLAGS:\n yield key, self._values.get(key)\n", "path": "discord/permissions.py" } ]
diff --git a/discord/permissions.py b/discord/permissions.py index 2aa4de1f95..41825b8007 100644 --- a/discord/permissions.py +++ b/discord/permissions.py @@ -148,7 +148,7 @@ def all(cls: Type[P]) -> P: """A factory method that creates a :class:`Permissions` with all permissions set to ``True``. """ - return cls(0b111111111111111111111111111111111111111) + return cls(-1) @classmethod def all_channel(cls: Type[P]) -> P:
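The one-line change above works because Python integers are arbitrary-precision and `-1` behaves as an unbroken run of set bits under bitwise operations, so every flag defined now or added later reads as `True` without the literal having to be widened each time a new permission is introduced. A minimal standalone sketch of that bit arithmetic (the helper and the sample bit positions below are illustrative, not part of the library):

```python
# Illustration only: mirrors the "value & flag == flag" test that flag_value
# properties perform, outside the library.
def has_flag(value: int, flag: int) -> bool:
    return (value & flag) == flag

fixed_literal = (1 << 39) - 1  # stands in for the hard-coded run of 1-bits removed above
all_bits = -1                  # -1 acts as a 1-bit at every position under & and |

future_permission = 1 << 50    # hypothetical bit added after the literal was written
assert not has_flag(fixed_literal, future_permission)  # a fixed-width literal misses it
assert has_flag(all_bits, future_permission)           # -1 keeps covering it
```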
secdev__scapy-2255
tcpdump check error in CentOS

#### Brief description

> I have installed tcpdump in PATH, but it reports: scapy.error.Scapy_Exception: tcpdump is not available. Cannot use filter !

I found the code that checks tcpdump in /opt/rh/rh-python36/root/usr/lib/python3.6/site-packages/scapy/arch/common.py:

```
def _check_tcpdump():
    """
    Return True if the tcpdump command can be started
    """
    with open(os.devnull, 'wb') as devnull:
        try:
            proc = subprocess.Popen([conf.prog.tcpdump, "--version"],
                                    stdout=devnull, stderr=subprocess.STDOUT)
        except OSError:
            return False
    return proc.wait() == 0
```

The error is that tcpdump --version returns 1 instead of 0, e.g.:

```
[root@localhost proxy]# tcpdump --version
tcpdump version 4.1-PRE-CVS_2017_03_21
libpcap version 1.4.0
Usage: tcpdump [-aAdDefhIJKlLnNOpqRStuUvxX] [ -B size ] [ -c count ]
        [ -C file_size ] [ -E algo:secret ] [ -F file ] [ -G seconds ]
        [ -i interface ] [ -j tstamptype ] [ -M secret ]
        [ -Q|-P in|out|inout ]
        [ -r file ] [ -s snaplen ] [ -T type ] [ -w file ]
        [ -W filecount ] [ -y datalinktype ] [ -z command ]
        [ -Z user ] [ expression ]
[root@localhost proxy]# echo $?
1
```

#### Environment

```
[root@localhost proxy]# python3.6 --version
Python 3.6.3
[root@localhost proxy]# pip3.6 freeze
certifi==2018.11.29
chardet==3.0.4
idna==2.8
protobuf==3.6.1
psutil==5.4.8
PyMySQL==0.9.3
redis==3.0.1
requests==2.21.0
s8-protocol==1.0
scapy==2.4.2
six==1.11.0
snakeMQ==1.6
urllib3==1.24.1
virtualenv==15.1.0
xlrd==1.2.0
You are using pip version 9.0.1, however version 19.0.2 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
[root@localhost proxy]# uname -a
Linux localhost.localdomain 2.6.32-431.el6.x86_64 #1 SMP Fri Nov 22 03:15:09 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux
[root@localhost proxy]# cat /etc/issue
CentOS release 6.5 (Final)
Kernel \r on an \m
```
[ { "content": "# This file is part of Scapy\n# See http://www.secdev.org/projects/scapy for more information\n# Copyright (C) Philippe Biondi <[email protected]>\n# This program is published under a GPLv2 license\n\n\"\"\"\nFunctions common to different architectures\n\"\"\"\n\nimport ctypes\nimport os\nimport socket\nimport struct\nimport subprocess\nimport time\nfrom ctypes import POINTER, Structure\nfrom ctypes import c_uint, c_uint32, c_ushort, c_ubyte\nfrom scapy.consts import WINDOWS\nfrom scapy.config import conf\nfrom scapy.data import MTU\nfrom scapy.error import Scapy_Exception\nimport scapy.modules.six as six\n\nif not WINDOWS:\n from fcntl import ioctl\n\n# BOOT\n\n\ndef _check_tcpdump():\n \"\"\"\n Return True if the tcpdump command can be started\n \"\"\"\n try:\n proc = subprocess.Popen(\n [conf.prog.tcpdump, \"--version\"],\n stdout=subprocess.PIPE,\n stderr=subprocess.STDOUT\n )\n output = proc.communicate()[0]\n except OSError:\n return False\n\n # On some systems, --version does not exist on tcpdump\n return proc.returncode == 0 or output.startswith(b'Usage: tcpdump ')\n\n\n# This won't be used on Windows\nTCPDUMP = WINDOWS or _check_tcpdump()\n\n# UTILS\n\n\ndef get_if(iff, cmd):\n \"\"\"Ease SIOCGIF* ioctl calls\"\"\"\n\n sck = socket.socket()\n ifreq = ioctl(sck, cmd, struct.pack(\"16s16x\", iff.encode(\"utf8\")))\n sck.close()\n return ifreq\n\n\ndef get_if_raw_hwaddr(iff):\n \"\"\"Get the raw MAC address of a local interface.\n\n This function uses SIOCGIFHWADDR calls, therefore only works\n on some distros.\n\n :param iff: the network interface name as a string\n :returns: the corresponding raw MAC address\n \"\"\"\n from scapy.arch import SIOCGIFHWADDR\n return struct.unpack(\"16xh6s8x\", get_if(iff, SIOCGIFHWADDR))\n\n# SOCKET UTILS\n\n\ndef _select_nonblock(sockets, remain=None):\n \"\"\"This function is called during sendrecv() routine to select\n the available sockets.\n \"\"\"\n # pcap sockets aren't selectable, so we return all of them\n # and ask the selecting functions to use nonblock_recv instead of recv\n def _sleep_nonblock_recv(self):\n res = self.nonblock_recv()\n if res is None:\n time.sleep(conf.recv_poll_rate)\n return res\n # we enforce remain=None: don't wait.\n return sockets, _sleep_nonblock_recv\n\n# BPF HANDLERS\n\n\nclass bpf_insn(Structure):\n \"\"\"\"The BPF instruction data structure\"\"\"\n _fields_ = [(\"code\", c_ushort),\n (\"jt\", c_ubyte),\n (\"jf\", c_ubyte),\n (\"k\", c_uint32)]\n\n\nclass bpf_program(Structure):\n \"\"\"\"Structure for BIOCSETF\"\"\"\n _fields_ = [(\"bf_len\", c_uint),\n (\"bf_insns\", POINTER(bpf_insn))]\n\n\ndef _legacy_bpf_pointer(tcpdump_lines):\n \"\"\"Get old-format BPF Pointer. Deprecated\"\"\"\n X86_64 = os.uname()[4] in ['x86_64', 'aarch64']\n size = int(tcpdump_lines[0])\n bpf = b\"\"\n for l in tcpdump_lines[1:]:\n if six.PY2:\n int_type = long # noqa: F821\n else:\n int_type = int\n bpf += struct.pack(\"HBBI\", *map(int_type, l.split()))\n\n # Thanks to http://www.netprojects.de/scapy-with-pypy-solved/ for the pypy trick # noqa: E501\n if conf.use_pypy:\n str_buffer = ctypes.create_string_buffer(bpf)\n return struct.pack('HL', size, ctypes.addressof(str_buffer))\n else:\n # XXX. Argl! We need to give the kernel a pointer on the BPF,\n # Python object header seems to be 20 bytes. 36 bytes for x86 64bits arch. 
# noqa: E501\n if X86_64:\n return struct.pack(\"HL\", size, id(bpf) + 36)\n else:\n return struct.pack(\"HI\", size, id(bpf) + 20)\n\n\ndef get_bpf_pointer(tcpdump_lines):\n \"\"\"Create a BPF Pointer for TCPDump filter\"\"\"\n if conf.use_pypy:\n return _legacy_bpf_pointer(tcpdump_lines)\n\n # Allocate BPF instructions\n size = int(tcpdump_lines[0])\n bpf_insn_a = bpf_insn * size\n bip = bpf_insn_a()\n\n # Fill the BPF instruction structures with the byte code\n tcpdump_lines = tcpdump_lines[1:]\n i = 0\n for line in tcpdump_lines:\n values = [int(v) for v in line.split()]\n bip[i].code = c_ushort(values[0])\n bip[i].jt = c_ubyte(values[1])\n bip[i].jf = c_ubyte(values[2])\n bip[i].k = c_uint(values[3])\n i += 1\n\n # Create the BPF program\n return bpf_program(size, bip)\n\n\ndef compile_filter(bpf_filter, iface=None):\n \"\"\"Asks Tcpdump to parse the filter, then build the matching\n BPF bytecode using get_bpf_pointer.\n \"\"\"\n if not TCPDUMP:\n raise Scapy_Exception(\"tcpdump is not available. Cannot use filter !\")\n try:\n process = subprocess.Popen([\n conf.prog.tcpdump,\n \"-p\",\n \"-i\", (conf.iface if iface is None else iface),\n \"-ddd\",\n \"-s\", str(MTU),\n bpf_filter],\n stdout=subprocess.PIPE,\n stderr=subprocess.PIPE\n )\n except OSError as ex:\n raise Scapy_Exception(\"Failed to attach filter: %s\" % ex)\n lines, err = process.communicate()\n ret = process.returncode\n if ret:\n raise Scapy_Exception(\n \"Failed to attach filter: tcpdump returned: %s\" % err\n )\n lines = lines.strip().split(b\"\\n\")\n return get_bpf_pointer(lines)\n", "path": "scapy/arch/common.py" } ]
[ { "content": "# This file is part of Scapy\n# See http://www.secdev.org/projects/scapy for more information\n# Copyright (C) Philippe Biondi <[email protected]>\n# This program is published under a GPLv2 license\n\n\"\"\"\nFunctions common to different architectures\n\"\"\"\n\nimport ctypes\nimport os\nimport socket\nimport struct\nimport subprocess\nimport time\nfrom ctypes import POINTER, Structure\nfrom ctypes import c_uint, c_uint32, c_ushort, c_ubyte\nfrom scapy.consts import WINDOWS\nfrom scapy.config import conf\nfrom scapy.data import MTU\nfrom scapy.error import Scapy_Exception\nimport scapy.modules.six as six\n\nif not WINDOWS:\n from fcntl import ioctl\n\n# BOOT\n\n\ndef _check_tcpdump():\n \"\"\"\n Return True if the tcpdump command can be started\n \"\"\"\n try:\n proc = subprocess.Popen(\n [conf.prog.tcpdump, \"--version\"],\n stdout=subprocess.PIPE,\n stderr=subprocess.STDOUT\n )\n output = proc.communicate()[0]\n except OSError:\n return False\n\n # On some systems, --version does not exist on tcpdump\n return proc.returncode == 0 \\\n or output.startswith(b'Usage: tcpdump ') \\\n or output.startswith(b'tcpdump: unrecognized option')\n\n\n# This won't be used on Windows\nTCPDUMP = WINDOWS or _check_tcpdump()\n\n# UTILS\n\n\ndef get_if(iff, cmd):\n \"\"\"Ease SIOCGIF* ioctl calls\"\"\"\n\n sck = socket.socket()\n ifreq = ioctl(sck, cmd, struct.pack(\"16s16x\", iff.encode(\"utf8\")))\n sck.close()\n return ifreq\n\n\ndef get_if_raw_hwaddr(iff):\n \"\"\"Get the raw MAC address of a local interface.\n\n This function uses SIOCGIFHWADDR calls, therefore only works\n on some distros.\n\n :param iff: the network interface name as a string\n :returns: the corresponding raw MAC address\n \"\"\"\n from scapy.arch import SIOCGIFHWADDR\n return struct.unpack(\"16xh6s8x\", get_if(iff, SIOCGIFHWADDR))\n\n# SOCKET UTILS\n\n\ndef _select_nonblock(sockets, remain=None):\n \"\"\"This function is called during sendrecv() routine to select\n the available sockets.\n \"\"\"\n # pcap sockets aren't selectable, so we return all of them\n # and ask the selecting functions to use nonblock_recv instead of recv\n def _sleep_nonblock_recv(self):\n res = self.nonblock_recv()\n if res is None:\n time.sleep(conf.recv_poll_rate)\n return res\n # we enforce remain=None: don't wait.\n return sockets, _sleep_nonblock_recv\n\n# BPF HANDLERS\n\n\nclass bpf_insn(Structure):\n \"\"\"\"The BPF instruction data structure\"\"\"\n _fields_ = [(\"code\", c_ushort),\n (\"jt\", c_ubyte),\n (\"jf\", c_ubyte),\n (\"k\", c_uint32)]\n\n\nclass bpf_program(Structure):\n \"\"\"\"Structure for BIOCSETF\"\"\"\n _fields_ = [(\"bf_len\", c_uint),\n (\"bf_insns\", POINTER(bpf_insn))]\n\n\ndef _legacy_bpf_pointer(tcpdump_lines):\n \"\"\"Get old-format BPF Pointer. Deprecated\"\"\"\n X86_64 = os.uname()[4] in ['x86_64', 'aarch64']\n size = int(tcpdump_lines[0])\n bpf = b\"\"\n for l in tcpdump_lines[1:]:\n if six.PY2:\n int_type = long # noqa: F821\n else:\n int_type = int\n bpf += struct.pack(\"HBBI\", *map(int_type, l.split()))\n\n # Thanks to http://www.netprojects.de/scapy-with-pypy-solved/ for the pypy trick # noqa: E501\n if conf.use_pypy:\n str_buffer = ctypes.create_string_buffer(bpf)\n return struct.pack('HL', size, ctypes.addressof(str_buffer))\n else:\n # XXX. Argl! We need to give the kernel a pointer on the BPF,\n # Python object header seems to be 20 bytes. 36 bytes for x86 64bits arch. 
# noqa: E501\n if X86_64:\n return struct.pack(\"HL\", size, id(bpf) + 36)\n else:\n return struct.pack(\"HI\", size, id(bpf) + 20)\n\n\ndef get_bpf_pointer(tcpdump_lines):\n \"\"\"Create a BPF Pointer for TCPDump filter\"\"\"\n if conf.use_pypy:\n return _legacy_bpf_pointer(tcpdump_lines)\n\n # Allocate BPF instructions\n size = int(tcpdump_lines[0])\n bpf_insn_a = bpf_insn * size\n bip = bpf_insn_a()\n\n # Fill the BPF instruction structures with the byte code\n tcpdump_lines = tcpdump_lines[1:]\n i = 0\n for line in tcpdump_lines:\n values = [int(v) for v in line.split()]\n bip[i].code = c_ushort(values[0])\n bip[i].jt = c_ubyte(values[1])\n bip[i].jf = c_ubyte(values[2])\n bip[i].k = c_uint(values[3])\n i += 1\n\n # Create the BPF program\n return bpf_program(size, bip)\n\n\ndef compile_filter(bpf_filter, iface=None):\n \"\"\"Asks Tcpdump to parse the filter, then build the matching\n BPF bytecode using get_bpf_pointer.\n \"\"\"\n if not TCPDUMP:\n raise Scapy_Exception(\"tcpdump is not available. Cannot use filter !\")\n try:\n process = subprocess.Popen([\n conf.prog.tcpdump,\n \"-p\",\n \"-i\", (conf.iface if iface is None else iface),\n \"-ddd\",\n \"-s\", str(MTU),\n bpf_filter],\n stdout=subprocess.PIPE,\n stderr=subprocess.PIPE\n )\n except OSError as ex:\n raise Scapy_Exception(\"Failed to attach filter: %s\" % ex)\n lines, err = process.communicate()\n ret = process.returncode\n if ret:\n raise Scapy_Exception(\n \"Failed to attach filter: tcpdump returned: %s\" % err\n )\n lines = lines.strip().split(b\"\\n\")\n return get_bpf_pointer(lines)\n", "path": "scapy/arch/common.py" } ]
diff --git a/scapy/arch/common.py b/scapy/arch/common.py index cf92cd9efa1..35276718062 100644 --- a/scapy/arch/common.py +++ b/scapy/arch/common.py @@ -42,7 +42,9 @@ def _check_tcpdump(): return False # On some systems, --version does not exist on tcpdump - return proc.returncode == 0 or output.startswith(b'Usage: tcpdump ') + return proc.returncode == 0 \ + or output.startswith(b'Usage: tcpdump ') \ + or output.startswith(b'tcpdump: unrecognized option') # This won't be used on Windows
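For readers who want the behaviour in isolation, here is a hedged, standalone sketch of the tolerant availability check the patch introduces: tcpdump is treated as usable when `--version` exits 0 or when the binary prints recognisable usage text instead, as the old CentOS 6 build in the report does. The binary name and the accepted output prefixes are taken from the patch; this is not Scapy's own module.

```python
import subprocess

def tcpdump_available(binary="tcpdump"):
    """Return True if `binary --version` runs, even on builds that exit non-zero."""
    try:
        proc = subprocess.Popen([binary, "--version"],
                                stdout=subprocess.PIPE,
                                stderr=subprocess.STDOUT)
        output = proc.communicate()[0]
    except OSError:
        return False
    # Old tcpdump builds (e.g. 4.1 on CentOS 6) print usage text and exit 1,
    # so recognisable output counts as success too.
    return (proc.returncode == 0
            or output.startswith(b"Usage: tcpdump ")
            or output.startswith(b"tcpdump: unrecognized option"))
```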
Mailu__Mailu-958
Using an external SMTP relay server for outgoing emails

Hi,

I need to use mailchannels.com to relay all outgoing emails from my Mailu install. This doc describes what I need to change in Postfix: https://mailchannels.zendesk.com/hc/en-us/articles/200262640-Setting-up-for-Postfix

Is there any way to do this in Mailu?

Thanks,
[ { "content": "#!/usr/bin/python3\n\nimport os\nimport glob\nimport shutil\nimport multiprocessing\nimport logging as log\nimport sys\nfrom mailustart import resolve, convert\n\nfrom podop import run_server\n\nlog.basicConfig(stream=sys.stderr, level=os.environ.get(\"LOG_LEVEL\", \"WARNING\"))\n\ndef start_podop():\n os.setuid(100)\n url = \"http://\" + os.environ[\"ADMIN_ADDRESS\"] + \"/internal/postfix/\"\n # TODO: Remove verbosity setting from Podop?\n run_server(0, \"postfix\", \"/tmp/podop.socket\", [\n\t\t(\"transport\", \"url\", url + \"transport/§\"),\n\t\t(\"alias\", \"url\", url + \"alias/§\"),\n\t\t(\"domain\", \"url\", url + \"domain/§\"),\n (\"mailbox\", \"url\", url + \"mailbox/§\"),\n (\"senderaccess\", \"url\", url + \"sender/access/§\"),\n (\"senderlogin\", \"url\", url + \"sender/login/§\")\n ])\n\n# Actual startup script\nos.environ[\"FRONT_ADDRESS\"] = resolve(os.environ.get(\"FRONT_ADDRESS\", \"front\"))\nos.environ[\"ADMIN_ADDRESS\"] = resolve(os.environ.get(\"ADMIN_ADDRESS\", \"admin\"))\nos.environ[\"HOST_ANTISPAM\"] = resolve(os.environ.get(\"HOST_ANTISPAM\", \"antispam:11332\"))\nos.environ[\"HOST_LMTP\"] = resolve(os.environ.get(\"HOST_LMTP\", \"imap:2525\"))\n\nfor postfix_file in glob.glob(\"/conf/*.cf\"):\n convert(postfix_file, os.path.join(\"/etc/postfix\", os.path.basename(postfix_file)))\n\nif os.path.exists(\"/overrides/postfix.cf\"):\n for line in open(\"/overrides/postfix.cf\").read().strip().split(\"\\n\"):\n os.system('postconf -e \"{}\"'.format(line))\n\nif os.path.exists(\"/overrides/postfix.master\"):\n for line in open(\"/overrides/postfix.master\").read().strip().split(\"\\n\"):\n os.system('postconf -Me \"{}\"'.format(line))\n\nfor map_file in glob.glob(\"/overrides/*.map\"):\n destination = os.path.join(\"/etc/postfix\", os.path.basename(map_file))\n shutil.copyfile(map_file, destination)\n os.system(\"postmap {}\".format(destination))\n os.remove(destination)\n\nconvert(\"/conf/rsyslog.conf\", \"/etc/rsyslog.conf\")\n\n# Run Podop and Postfix\nmultiprocessing.Process(target=start_podop).start()\nif os.path.exists(\"/var/run/rsyslogd.pid\"):\n os.remove(\"/var/run/rsyslogd.pid\")\nos.system(\"/usr/lib/postfix/post-install meta_directory=/etc/postfix create-missing\")\nos.system(\"/usr/lib/postfix/master &\")\nos.execv(\"/usr/sbin/rsyslogd\", [\"rsyslogd\", \"-n\"])\n", "path": "core/postfix/start.py" } ]
[ { "content": "#!/usr/bin/python3\n\nimport os\nimport glob\nimport shutil\nimport multiprocessing\nimport logging as log\nimport sys\nfrom mailustart import resolve, convert\n\nfrom podop import run_server\n\nlog.basicConfig(stream=sys.stderr, level=os.environ.get(\"LOG_LEVEL\", \"WARNING\"))\n\ndef start_podop():\n os.setuid(100)\n url = \"http://\" + os.environ[\"ADMIN_ADDRESS\"] + \"/internal/postfix/\"\n # TODO: Remove verbosity setting from Podop?\n run_server(0, \"postfix\", \"/tmp/podop.socket\", [\n\t\t(\"transport\", \"url\", url + \"transport/§\"),\n\t\t(\"alias\", \"url\", url + \"alias/§\"),\n\t\t(\"domain\", \"url\", url + \"domain/§\"),\n (\"mailbox\", \"url\", url + \"mailbox/§\"),\n (\"senderaccess\", \"url\", url + \"sender/access/§\"),\n (\"senderlogin\", \"url\", url + \"sender/login/§\")\n ])\n\n# Actual startup script\nos.environ[\"FRONT_ADDRESS\"] = resolve(os.environ.get(\"FRONT_ADDRESS\", \"front\"))\nos.environ[\"ADMIN_ADDRESS\"] = resolve(os.environ.get(\"ADMIN_ADDRESS\", \"admin\"))\nos.environ[\"HOST_ANTISPAM\"] = resolve(os.environ.get(\"HOST_ANTISPAM\", \"antispam:11332\"))\nos.environ[\"HOST_LMTP\"] = resolve(os.environ.get(\"HOST_LMTP\", \"imap:2525\"))\n\nfor postfix_file in glob.glob(\"/conf/*.cf\"):\n convert(postfix_file, os.path.join(\"/etc/postfix\", os.path.basename(postfix_file)))\n\nif os.path.exists(\"/overrides/postfix.cf\"):\n for line in open(\"/overrides/postfix.cf\").read().strip().split(\"\\n\"):\n os.system('postconf -e \"{}\"'.format(line))\n\nif os.path.exists(\"/overrides/postfix.master\"):\n for line in open(\"/overrides/postfix.master\").read().strip().split(\"\\n\"):\n os.system('postconf -Me \"{}\"'.format(line))\n\nfor map_file in glob.glob(\"/overrides/*.map\"):\n destination = os.path.join(\"/etc/postfix\", os.path.basename(map_file))\n shutil.copyfile(map_file, destination)\n os.system(\"postmap {}\".format(destination))\n os.remove(destination)\n\nif \"RELAYUSER\" in os.environ:\n path = \"/etc/postfix/sasl_passwd\"\n convert(\"/conf/sasl_passwd\", path)\n os.system(\"postmap {}\".format(path))\n\nconvert(\"/conf/rsyslog.conf\", \"/etc/rsyslog.conf\")\n\n# Run Podop and Postfix\nmultiprocessing.Process(target=start_podop).start()\nif os.path.exists(\"/var/run/rsyslogd.pid\"):\n os.remove(\"/var/run/rsyslogd.pid\")\nos.system(\"/usr/lib/postfix/post-install meta_directory=/etc/postfix create-missing\")\nos.system(\"/usr/lib/postfix/master &\")\nos.execv(\"/usr/sbin/rsyslogd\", [\"rsyslogd\", \"-n\"])\n", "path": "core/postfix/start.py" } ]
diff --git a/core/postfix/conf/main.cf b/core/postfix/conf/main.cf index 7fb32b678..d7e3dca8f 100644 --- a/core/postfix/conf/main.cf +++ b/core/postfix/conf/main.cf @@ -27,6 +27,11 @@ mydestination = # Relayhost if any is configured relayhost = {{ RELAYHOST }} +{% if RELAYUSER %} +smtp_sasl_auth_enable = yes +smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd +smtp_sasl_security_options = noanonymous +{% endif %} # Recipient delimiter for extended addresses recipient_delimiter = {{ RECIPIENT_DELIMITER }} diff --git a/core/postfix/conf/sasl_passwd b/core/postfix/conf/sasl_passwd new file mode 100644 index 000000000..e19d0657d --- /dev/null +++ b/core/postfix/conf/sasl_passwd @@ -0,0 +1 @@ +{{ RELAYHOST }} {{ RELAYUSER }}:{{ RELAYPASSWORD }} \ No newline at end of file diff --git a/core/postfix/start.py b/core/postfix/start.py index 95c97fded..81849c5b2 100755 --- a/core/postfix/start.py +++ b/core/postfix/start.py @@ -48,6 +48,11 @@ def start_podop(): os.system("postmap {}".format(destination)) os.remove(destination) +if "RELAYUSER" in os.environ: + path = "/etc/postfix/sasl_passwd" + convert("/conf/sasl_passwd", path) + os.system("postmap {}".format(path)) + convert("/conf/rsyslog.conf", "/etc/rsyslog.conf") # Run Podop and Postfix diff --git a/docs/configuration.rst b/docs/configuration.rst index e7dfa2af8..7b84d6fcf 100644 --- a/docs/configuration.rst +++ b/docs/configuration.rst @@ -57,7 +57,8 @@ Docker services' outbound mail to be relayed, you can set this to ``172.16.0.0/1 to include **all** Docker networks. The default is to leave this empty. The ``RELAYHOST`` is an optional address of a mail server relaying all outgoing -mail. +mail in following format: ``[HOST]:PORT``. +``RELAYUSER`` and ``RELAYPASSWORD`` can be used when authentication is needed. The ``FETCHMAIL_DELAY`` is a delay (in seconds) for the fetchmail service to go and fetch new email if available. Do not use too short delays if you do not diff --git a/towncrier/newsfragments/958.feature b/towncrier/newsfragments/958.feature new file mode 100644 index 000000000..ac02dec40 --- /dev/null +++ b/towncrier/newsfragments/958.feature @@ -0,0 +1 @@ +Relays with authentication
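The patch wires authenticated relaying in two places: main.cf gains the smtp_sasl_* settings, and the start script renders and postmaps an /etc/postfix/sasl_passwd lookup table when RELAYUSER is set. A hedged sketch of that credential step, writing the map directly instead of going through the mailustart `convert` template helper (the environment variable names and paths follow the diff above):

```python
import os

# Sketch only: Mailu's actual start.py renders conf/sasl_passwd via mailustart.convert.
if "RELAYUSER" in os.environ:
    path = "/etc/postfix/sasl_passwd"
    with open(path, "w") as handle:
        handle.write("{} {}:{}\n".format(
            os.environ.get("RELAYHOST", ""),
            os.environ["RELAYUSER"],
            os.environ.get("RELAYPASSWORD", ""),
        ))
    # Compile the hash: table that smtp_sasl_password_maps points at.
    os.system("postmap {}".format(path))
```

In a real deployment the plain-text credential map should also be readable by root only; the sketch omits the permission handling for brevity.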
rucio__rucio-4790
Fix setup_webui script

Motivation
----------

The script has a wrong import and needs to be fixed.
[ { "content": "# -*- coding: utf-8 -*-\n# Copyright 2015-2021 CERN\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\n# Authors:\n# - Vincent Garonne <[email protected]>, 2015-2017\n# - Martin Barisits <[email protected]>, 2016-2021\n# - Benedikt Ziemons <[email protected]>, 2021\n\nimport os\nimport sys\n\nfrom setuptools import setup\n\n\nif sys.version_info < (3, 6):\n print('ERROR: Rucio WebUI requires at least Python 3.6 to run.')\n sys.exit(1)\n\ntry:\n from setuputil import get_rucio_version\nexcept ImportError:\n sys.path.append(os.path.abspath(os.path.dirname(__file__)))\n from setuputil import get_rucio_version\n\nname = 'rucio-webui'\npackages = ['rucio', 'rucio.web', 'rucio.web.ui', 'rucio.web.ui.flask', 'rucio.web.flask.common']\ndata_files = []\ndescription = \"Rucio WebUI Package\"\n\nsetup(\n name=name,\n version=get_rucio_version(),\n packages=packages,\n package_dir={'': 'lib'},\n data_files=None,\n include_package_data=True,\n scripts=None,\n author=\"Rucio\",\n author_email=\"[email protected]\",\n description=description,\n license=\"Apache License, Version 2.0\",\n url=\"https://rucio.cern.ch/\",\n python_requires=\">=3.6, <4\",\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n 'License :: OSI Approved :: Apache Software License',\n 'Intended Audience :: Information Technology',\n 'Intended Audience :: System Administrators',\n 'Operating System :: POSIX :: Linux',\n 'Natural Language :: English',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3.8',\n 'Programming Language :: Python :: 3.9',\n 'Environment :: No Input/Output (Daemon)', ],\n install_requires=['rucio>=1.2.5', ],\n)\n", "path": "setup_webui.py" } ]
[ { "content": "# -*- coding: utf-8 -*-\n# Copyright 2015-2021 CERN\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\n# Authors:\n# - Vincent Garonne <[email protected]>, 2015-2017\n# - Martin Barisits <[email protected]>, 2016-2021\n# - Benedikt Ziemons <[email protected]>, 2021\n\nimport os\nimport sys\n\nfrom setuptools import setup\n\n\nif sys.version_info < (3, 6):\n print('ERROR: Rucio WebUI requires at least Python 3.6 to run.')\n sys.exit(1)\n\ntry:\n from setuputil import get_rucio_version\nexcept ImportError:\n sys.path.append(os.path.abspath(os.path.dirname(__file__)))\n from setuputil import get_rucio_version\n\nname = 'rucio-webui'\npackages = ['rucio', 'rucio.web', 'rucio.web.ui', 'rucio.web.ui.flask', 'rucio.web.ui.flask.common']\ndata_files = []\ndescription = \"Rucio WebUI Package\"\n\nsetup(\n name=name,\n version=get_rucio_version(),\n packages=packages,\n package_dir={'': 'lib'},\n data_files=None,\n include_package_data=True,\n scripts=None,\n author=\"Rucio\",\n author_email=\"[email protected]\",\n description=description,\n license=\"Apache License, Version 2.0\",\n url=\"https://rucio.cern.ch/\",\n python_requires=\">=3.6, <4\",\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n 'License :: OSI Approved :: Apache Software License',\n 'Intended Audience :: Information Technology',\n 'Intended Audience :: System Administrators',\n 'Operating System :: POSIX :: Linux',\n 'Natural Language :: English',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3.8',\n 'Programming Language :: Python :: 3.9',\n 'Environment :: No Input/Output (Daemon)', ],\n install_requires=['rucio>=1.2.5', ],\n)\n", "path": "setup_webui.py" } ]
diff --git a/setup_webui.py b/setup_webui.py index ef91603cb5..65afdd4102 100644 --- a/setup_webui.py +++ b/setup_webui.py @@ -35,7 +35,7 @@ from setuputil import get_rucio_version name = 'rucio-webui' -packages = ['rucio', 'rucio.web', 'rucio.web.ui', 'rucio.web.ui.flask', 'rucio.web.flask.common'] +packages = ['rucio', 'rucio.web', 'rucio.web.ui', 'rucio.web.ui.flask', 'rucio.web.ui.flask.common'] data_files = [] description = "Rucio WebUI Package"
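The fix corrects one entry in the packages list ('rucio.web.flask.common' becomes 'rucio.web.ui.flask.common'). A quick sanity check that could catch this class of typo is to verify that every dotted package name in setup_webui.py resolves to a directory under lib/, which is the package_dir declared in the setup file; the helper below is hypothetical and not part of the Rucio repository.

```python
import os

def missing_packages(packages, package_dir="lib"):
    """Return the package names whose directory does not exist under package_dir."""
    return [name for name in packages
            if not os.path.isdir(os.path.join(package_dir, *name.split(".")))]

packages = ['rucio', 'rucio.web', 'rucio.web.ui',
            'rucio.web.ui.flask', 'rucio.web.ui.flask.common']
print(missing_packages(packages))  # expect [] in a checkout once the path is corrected
```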
scikit-image__scikit-image-3901
ransac selects duplicate data points in random sample

## Description

I don't know if this behavior is intentional, but to me it seems a bit odd. When the ransac method selects the random sample to fit the model, it can select the same data point multiple times.

## Way to reproduce

```python
import numpy as np
from skimage.measure import ransac

np.random.seed(seed=0)
data = np.arange(10)

class Model(object):
    """Dummy model"""
    def estimate(self, data):
        if np.unique(data).size != data.size:
            print("Duplicate points: ", data)
        return True

    def residuals(self, data):
        return 1.0

ransac(data, Model, min_samples=3, residual_threshold=0.0, max_trials=10)
```

Which results in

```python
('Duplicate points: ', array([8, 8, 1]))
('Duplicate points: ', array([6, 7, 7]))
('Duplicate points: ', array([9, 8, 9]))
```

## Version information

```python
# Paste the output of the following python commands
2.7.13 (default, Sep 26 2018, 18:42:22)
[GCC 6.3.0 20170516]
Linux-4.9.0-8-amd64-x86_64-with-debian-9.4
scikit-image version: 0.14.2
numpy version: 1.16.0
```
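The eventual library fix may differ, but the standard remedy the report points at is to draw the sample indices without replacement. A minimal illustration of the difference, before the file listings below (this is not scikit-image's implementation):

```python
# Illustration only: sampling indices without replacement avoids the duplicate
# picks shown in the output above.
import numpy as np

rng = np.random.RandomState(0)
num_points, min_samples = 10, 3

with_replacement = rng.randint(0, num_points, min_samples)                # may repeat an index
without_replacement = rng.choice(num_points, min_samples, replace=False)  # never repeats

assert len(set(without_replacement)) == min_samples
```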
[ { "content": "import math\nimport numpy as np\nfrom numpy.linalg import inv, pinv\nfrom scipy import optimize\nfrom .._shared.utils import check_random_state\n\n\ndef _check_data_dim(data, dim):\n if data.ndim != 2 or data.shape[1] != dim:\n raise ValueError('Input data must have shape (N, %d).' % dim)\n\n\ndef _check_data_atleast_2D(data):\n if data.ndim < 2 or data.shape[1] < 2:\n raise ValueError('Input data must be at least 2D.')\n\n\ndef _norm_along_axis(x, axis):\n \"\"\"NumPy < 1.8 does not support the `axis` argument for `np.linalg.norm`.\"\"\"\n return np.sqrt(np.einsum('ij,ij->i', x, x))\n\n\nclass BaseModel(object):\n\n def __init__(self):\n self.params = None\n\n\nclass LineModelND(BaseModel):\n \"\"\"Total least squares estimator for N-dimensional lines.\n\n In contrast to ordinary least squares line estimation, this estimator\n minimizes the orthogonal distances of points to the estimated line.\n\n Lines are defined by a point (origin) and a unit vector (direction)\n according to the following vector equation::\n\n X = origin + lambda * direction\n\n Attributes\n ----------\n params : tuple\n Line model parameters in the following order `origin`, `direction`.\n\n Examples\n --------\n >>> x = np.linspace(1, 2, 25)\n >>> y = 1.5 * x + 3\n >>> lm = LineModelND()\n >>> lm.estimate(np.array([x, y]).T)\n True\n >>> tuple(np.round(lm.params, 5))\n (array([ 1.5 , 5.25]), array([ 0.5547 , 0.83205]))\n >>> res = lm.residuals(np.array([x, y]).T)\n >>> np.abs(np.round(res, 9))\n array([ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,\n 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.])\n >>> np.round(lm.predict_y(x[:5]), 3)\n array([ 4.5 , 4.562, 4.625, 4.688, 4.75 ])\n >>> np.round(lm.predict_x(y[:5]), 3)\n array([ 1. , 1.042, 1.083, 1.125, 1.167])\n\n \"\"\"\n\n def estimate(self, data):\n \"\"\"Estimate line model from data.\n\n This minimizes the sum of shortest (orthogonal) distances\n from the given data points to the estimated line.\n\n Parameters\n ----------\n data : (N, dim) array\n N points in a space of dimensionality dim >= 2.\n\n Returns\n -------\n success : bool\n True, if model estimation succeeds.\n \"\"\"\n _check_data_atleast_2D(data)\n\n origin = data.mean(axis=0)\n data = data - origin\n\n if data.shape[0] == 2: # well determined\n direction = data[1] - data[0]\n norm = np.linalg.norm(direction)\n if norm != 0: # this should not happen to be norm 0\n direction /= norm\n elif data.shape[0] > 2: # over-determined\n # Note: with full_matrices=1 Python dies with joblib parallel_for.\n _, _, v = np.linalg.svd(data, full_matrices=False)\n direction = v[0]\n else: # under-determined\n raise ValueError('At least 2 input points needed.')\n\n self.params = (origin, direction)\n\n return True\n\n def residuals(self, data, params=None):\n \"\"\"Determine residuals of data to model.\n\n For each point, the shortest (orthogonal) distance to the line is\n returned. 
It is obtained by projecting the data onto the line.\n\n Parameters\n ----------\n data : (N, dim) array\n N points in a space of dimension dim.\n params : (2, ) array, optional\n Optional custom parameter set in the form (`origin`, `direction`).\n\n Returns\n -------\n residuals : (N, ) array\n Residual for each data point.\n \"\"\"\n _check_data_atleast_2D(data)\n if params is None:\n params = self.params\n assert params is not None\n if len(params) != 2:\n raise ValueError('Parameters are defined by 2 sets.')\n\n origin, direction = params\n res = (data - origin) - \\\n ((data - origin) @ direction)[..., np.newaxis] * direction\n return _norm_along_axis(res, axis=1)\n\n def predict(self, x, axis=0, params=None):\n \"\"\"Predict intersection of the estimated line model with a hyperplane\n orthogonal to a given axis.\n\n Parameters\n ----------\n x : (n, 1) array\n Coordinates along an axis.\n axis : int\n Axis orthogonal to the hyperplane intersecting the line.\n params : (2, ) array, optional\n Optional custom parameter set in the form (`origin`, `direction`).\n\n Returns\n -------\n data : (n, m) array\n Predicted coordinates.\n\n Raises\n ------\n ValueError\n If the line is parallel to the given axis.\n \"\"\"\n if params is None:\n params = self.params\n assert params is not None\n if len(params) != 2:\n raise ValueError('Parameters are defined by 2 sets.')\n\n origin, direction = params\n\n if direction[axis] == 0:\n # line parallel to axis\n raise ValueError('Line parallel to axis %s' % axis)\n\n l = (x - origin[axis]) / direction[axis]\n data = origin + l[..., np.newaxis] * direction\n return data\n\n def predict_x(self, y, params=None):\n \"\"\"Predict x-coordinates for 2D lines using the estimated model.\n\n Alias for::\n\n predict(y, axis=1)[:, 0]\n\n Parameters\n ----------\n y : array\n y-coordinates.\n params : (2, ) array, optional\n Optional custom parameter set in the form (`origin`, `direction`).\n\n Returns\n -------\n x : array\n Predicted x-coordinates.\n\n \"\"\"\n x = self.predict(y, axis=1, params=params)[:, 0]\n return x\n\n def predict_y(self, x, params=None):\n \"\"\"Predict y-coordinates for 2D lines using the estimated model.\n\n Alias for::\n\n predict(x, axis=0)[:, 1]\n\n Parameters\n ----------\n x : array\n x-coordinates.\n params : (2, ) array, optional\n Optional custom parameter set in the form (`origin`, `direction`).\n\n Returns\n -------\n y : array\n Predicted y-coordinates.\n\n \"\"\"\n y = self.predict(x, axis=0, params=params)[:, 1]\n return y\n\n\nclass CircleModel(BaseModel):\n\n \"\"\"Total least squares estimator for 2D circles.\n\n The functional model of the circle is::\n\n r**2 = (x - xc)**2 + (y - yc)**2\n\n This estimator minimizes the squared distances from all points to the\n circle::\n\n min{ sum((r - sqrt((x_i - xc)**2 + (y_i - yc)**2))**2) }\n\n A minimum number of 3 points is required to solve for the parameters.\n\n Attributes\n ----------\n params : tuple\n Circle model parameters in the following order `xc`, `yc`, `r`.\n\n Examples\n --------\n >>> t = np.linspace(0, 2 * np.pi, 25)\n >>> xy = CircleModel().predict_xy(t, params=(2, 3, 4))\n >>> model = CircleModel()\n >>> model.estimate(xy)\n True\n >>> tuple(np.round(model.params, 5))\n (2.0, 3.0, 4.0)\n >>> res = model.residuals(xy)\n >>> np.abs(np.round(res, 9))\n array([ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,\n 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.])\n\n \"\"\"\n\n def estimate(self, data):\n \"\"\"Estimate circle model from data using total least 
squares.\n\n Parameters\n ----------\n data : (N, 2) array\n N points with ``(x, y)`` coordinates, respectively.\n\n Returns\n -------\n success : bool\n True, if model estimation succeeds.\n\n \"\"\"\n\n _check_data_dim(data, dim=2)\n\n x = data[:, 0]\n y = data[:, 1]\n\n # http://www.had2know.com/academics/best-fit-circle-least-squares.html\n x2y2 = (x ** 2 + y ** 2)\n sum_x = np.sum(x)\n sum_y = np.sum(y)\n sum_xy = np.sum(x * y)\n m1 = np.array([[np.sum(x ** 2), sum_xy, sum_x],\n [sum_xy, np.sum(y ** 2), sum_y],\n [sum_x, sum_y, float(len(x))]])\n m2 = np.array([[np.sum(x * x2y2),\n np.sum(y * x2y2),\n np.sum(x2y2)]]).T\n a, b, c = pinv(m1) @ m2\n a, b, c = a[0], b[0], c[0]\n xc = a / 2\n yc = b / 2\n r = np.sqrt(4 * c + a ** 2 + b ** 2) / 2\n\n self.params = (xc, yc, r)\n\n return True\n\n def residuals(self, data):\n \"\"\"Determine residuals of data to model.\n\n For each point the shortest distance to the circle is returned.\n\n Parameters\n ----------\n data : (N, 2) array\n N points with ``(x, y)`` coordinates, respectively.\n\n Returns\n -------\n residuals : (N, ) array\n Residual for each data point.\n\n \"\"\"\n\n _check_data_dim(data, dim=2)\n\n xc, yc, r = self.params\n\n x = data[:, 0]\n y = data[:, 1]\n\n return r - np.sqrt((x - xc)**2 + (y - yc)**2)\n\n def predict_xy(self, t, params=None):\n \"\"\"Predict x- and y-coordinates using the estimated model.\n\n Parameters\n ----------\n t : array\n Angles in circle in radians. Angles start to count from positive\n x-axis to positive y-axis in a right-handed system.\n params : (3, ) array, optional\n Optional custom parameter set.\n\n Returns\n -------\n xy : (..., 2) array\n Predicted x- and y-coordinates.\n\n \"\"\"\n if params is None:\n params = self.params\n xc, yc, r = params\n\n x = xc + r * np.cos(t)\n y = yc + r * np.sin(t)\n\n return np.concatenate((x[..., None], y[..., None]), axis=t.ndim)\n\n\nclass EllipseModel(BaseModel):\n \"\"\"Total least squares estimator for 2D ellipses.\n\n The functional model of the ellipse is::\n\n xt = xc + a*cos(theta)*cos(t) - b*sin(theta)*sin(t)\n yt = yc + a*sin(theta)*cos(t) + b*cos(theta)*sin(t)\n d = sqrt((x - xt)**2 + (y - yt)**2)\n\n where ``(xt, yt)`` is the closest point on the ellipse to ``(x, y)``. Thus\n d is the shortest distance from the point to the ellipse.\n\n The estimator is based on a least squares minimization. The optimal\n solution is computed directly, no iterations are required. This leads\n to a simple, stable and robust fitting method.\n\n The ``params`` attribute contains the parameters in the following order::\n\n xc, yc, a, b, theta\n\n Attributes\n ----------\n params : tuple\n Ellipse model parameters in the following order `xc`, `yc`, `a`, `b`,\n `theta`.\n\n Examples\n --------\n\n >>> xy = EllipseModel().predict_xy(np.linspace(0, 2 * np.pi, 25),\n ... params=(10, 15, 4, 8, np.deg2rad(30)))\n >>> ellipse = EllipseModel()\n >>> ellipse.estimate(xy)\n True\n >>> np.round(ellipse.params, 2)\n array([ 10. , 15. , 4. , 8. , 0.52])\n >>> np.round(abs(ellipse.residuals(xy)), 5)\n array([ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,\n 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.])\n \"\"\"\n\n def estimate(self, data):\n \"\"\"Estimate circle model from data using total least squares.\n\n Parameters\n ----------\n data : (N, 2) array\n N points with ``(x, y)`` coordinates, respectively.\n\n Returns\n -------\n success : bool\n True, if model estimation succeeds.\n\n\n References\n ----------\n .. [1] Halir, R.; Flusser, J. 
\"Numerically stable direct least squares\n fitting of ellipses\". In Proc. 6th International Conference in\n Central Europe on Computer Graphics and Visualization.\n WSCG (Vol. 98, pp. 125-132).\n\n \"\"\"\n # Original Implementation: Ben Hammel, Nick Sullivan-Molina\n # another REFERENCE: [2] http://mathworld.wolfram.com/Ellipse.html\n _check_data_dim(data, dim=2)\n\n x = data[:, 0]\n y = data[:, 1]\n\n # Quadratic part of design matrix [eqn. 15] from [1]\n D1 = np.vstack([x ** 2, x * y, y ** 2]).T\n # Linear part of design matrix [eqn. 16] from [1]\n D2 = np.vstack([x, y, np.ones(len(x))]).T\n\n # forming scatter matrix [eqn. 17] from [1]\n S1 = D1.T @ D1\n S2 = D1.T @ D2\n S3 = D2.T @ D2\n\n # Constraint matrix [eqn. 18]\n C1 = np.array([[0., 0., 2.], [0., -1., 0.], [2., 0., 0.]])\n\n try:\n # Reduced scatter matrix [eqn. 29]\n M = inv(C1) @ (S1 - S2 @ inv(S3) @ S2.T)\n except np.linalg.LinAlgError: # LinAlgError: Singular matrix\n return False\n\n # M*|a b c >=l|a b c >. Find eigenvalues and eigenvectors\n # from this equation [eqn. 28]\n eig_vals, eig_vecs = np.linalg.eig(M)\n\n # eigenvector must meet constraint 4ac - b^2 to be valid.\n cond = 4 * np.multiply(eig_vecs[0, :], eig_vecs[2, :]) \\\n - np.power(eig_vecs[1, :], 2)\n a1 = eig_vecs[:, (cond > 0)]\n # seeks for empty matrix\n if 0 in a1.shape or len(a1.ravel()) != 3:\n return False\n a, b, c = a1.ravel()\n\n # |d f g> = -S3^(-1)*S2^(T)*|a b c> [eqn. 24]\n a2 = -inv(S3) @ S2.T @ a1\n d, f, g = a2.ravel()\n\n # eigenvectors are the coefficients of an ellipse in general form\n # a*x^2 + 2*b*x*y + c*y^2 + 2*d*x + 2*f*y + g = 0 (eqn. 15) from [2]\n b /= 2.\n d /= 2.\n f /= 2.\n\n # finding center of ellipse [eqn.19 and 20] from [2]\n x0 = (c * d - b * f) / (b ** 2. - a * c)\n y0 = (a * f - b * d) / (b ** 2. - a * c)\n\n # Find the semi-axes lengths [eqn. 21 and 22] from [2]\n numerator = a * f ** 2 + c * d ** 2 + g * b ** 2 \\\n - 2 * b * d * f - a * c * g\n term = np.sqrt((a - c) ** 2 + 4 * b ** 2)\n denominator1 = (b ** 2 - a * c) * (term - (a + c))\n denominator2 = (b ** 2 - a * c) * (- term - (a + c))\n width = np.sqrt(2 * numerator / denominator1)\n height = np.sqrt(2 * numerator / denominator2)\n\n # angle of counterclockwise rotation of major-axis of ellipse\n # to x-axis [eqn. 23] from [2].\n phi = 0.5 * np.arctan((2. 
* b) / (a - c))\n if a > c:\n phi += 0.5 * np.pi\n\n self.params = np.nan_to_num([x0, y0, width, height, phi]).tolist()\n self.params = [float(np.real(x)) for x in self.params]\n return True\n\n def residuals(self, data):\n \"\"\"Determine residuals of data to model.\n\n For each point the shortest distance to the ellipse is returned.\n\n Parameters\n ----------\n data : (N, 2) array\n N points with ``(x, y)`` coordinates, respectively.\n\n Returns\n -------\n residuals : (N, ) array\n Residual for each data point.\n\n \"\"\"\n\n _check_data_dim(data, dim=2)\n\n xc, yc, a, b, theta = self.params\n\n ctheta = math.cos(theta)\n stheta = math.sin(theta)\n\n x = data[:, 0]\n y = data[:, 1]\n\n N = data.shape[0]\n\n def fun(t, xi, yi):\n ct = math.cos(t)\n st = math.sin(t)\n xt = xc + a * ctheta * ct - b * stheta * st\n yt = yc + a * stheta * ct + b * ctheta * st\n return (xi - xt) ** 2 + (yi - yt) ** 2\n\n # def Dfun(t, xi, yi):\n # ct = math.cos(t)\n # st = math.sin(t)\n # xt = xc + a * ctheta * ct - b * stheta * st\n # yt = yc + a * stheta * ct + b * ctheta * st\n # dfx_t = - 2 * (xi - xt) * (- a * ctheta * st\n # - b * stheta * ct)\n # dfy_t = - 2 * (yi - yt) * (- a * stheta * st\n # + b * ctheta * ct)\n # return [dfx_t + dfy_t]\n\n residuals = np.empty((N, ), dtype=np.double)\n\n # initial guess for parameter t of closest point on ellipse\n t0 = np.arctan2(y - yc, x - xc) - theta\n\n # determine shortest distance to ellipse for each point\n for i in range(N):\n xi = x[i]\n yi = y[i]\n # faster without Dfun, because of the python overhead\n t, _ = optimize.leastsq(fun, t0[i], args=(xi, yi))\n residuals[i] = np.sqrt(fun(t, xi, yi))\n\n return residuals\n\n def predict_xy(self, t, params=None):\n \"\"\"Predict x- and y-coordinates using the estimated model.\n\n Parameters\n ----------\n t : array\n Angles in circle in radians. 
Angles start to count from positive\n x-axis to positive y-axis in a right-handed system.\n params : (5, ) array, optional\n Optional custom parameter set.\n\n Returns\n -------\n xy : (..., 2) array\n Predicted x- and y-coordinates.\n\n \"\"\"\n\n if params is None:\n params = self.params\n\n xc, yc, a, b, theta = params\n\n ct = np.cos(t)\n st = np.sin(t)\n ctheta = math.cos(theta)\n stheta = math.sin(theta)\n\n x = xc + a * ctheta * ct - b * stheta * st\n y = yc + a * stheta * ct + b * ctheta * st\n\n return np.concatenate((x[..., None], y[..., None]), axis=t.ndim)\n\n\ndef _dynamic_max_trials(n_inliers, n_samples, min_samples, probability):\n \"\"\"Determine number trials such that at least one outlier-free subset is\n sampled for the given inlier/outlier ratio.\n Parameters\n ----------\n n_inliers : int\n Number of inliers in the data.\n n_samples : int\n Total number of samples in the data.\n min_samples : int\n Minimum number of samples chosen randomly from original data.\n probability : float\n Probability (confidence) that one outlier-free sample is generated.\n Returns\n -------\n trials : int\n Number of trials.\n \"\"\"\n if n_inliers == 0:\n return np.inf\n\n nom = 1 - probability\n if nom == 0:\n return np.inf\n\n inlier_ratio = n_inliers / float(n_samples)\n denom = 1 - inlier_ratio ** min_samples\n if denom == 0:\n return 1\n elif denom == 1:\n return np.inf\n\n nom = np.log(nom)\n denom = np.log(denom)\n if denom == 0:\n return 0\n\n return int(np.ceil(nom / denom))\n\n\ndef ransac(data, model_class, min_samples, residual_threshold,\n is_data_valid=None, is_model_valid=None,\n max_trials=100, stop_sample_num=np.inf, stop_residuals_sum=0,\n stop_probability=1, random_state=None):\n \"\"\"Fit a model to data with the RANSAC (random sample consensus) algorithm.\n\n RANSAC is an iterative algorithm for the robust estimation of parameters\n from a subset of inliers from the complete data set. Each iteration\n performs the following tasks:\n\n 1. Select `min_samples` random samples from the original data and check\n whether the set of data is valid (see `is_data_valid`).\n 2. Estimate a model to the random subset\n (`model_cls.estimate(*data[random_subset]`) and check whether the\n estimated model is valid (see `is_model_valid`).\n 3. Classify all data as inliers or outliers by calculating the residuals\n to the estimated model (`model_cls.residuals(*data)`) - all data samples\n with residuals smaller than the `residual_threshold` are considered as\n inliers.\n 4. Save estimated model as best model if number of inlier samples is\n maximal. In case the current estimated model has the same number of\n inliers, it is only considered as the best model if it has less sum of\n residuals.\n\n These steps are performed either a maximum number of times or until one of\n the special stop criteria are met. The final model is estimated using all\n inlier samples of the previously determined best model.\n\n Parameters\n ----------\n data : [list, tuple of] (N, D) array\n Data set to which the model is fitted, where N is the number of data\n points and D the dimensionality of the data.\n If the model class requires multiple input data arrays (e.g. source and\n destination coordinates of ``skimage.transform.AffineTransform``),\n they can be optionally passed as tuple or list. 
Note, that in this case\n the functions ``estimate(*data)``, ``residuals(*data)``,\n ``is_model_valid(model, *random_data)`` and\n ``is_data_valid(*random_data)`` must all take each data array as\n separate arguments.\n model_class : object\n Object with the following object methods:\n\n * ``success = estimate(*data)``\n * ``residuals(*data)``\n\n where `success` indicates whether the model estimation succeeded\n (`True` or `None` for success, `False` for failure).\n min_samples : int\n The minimum number of data points to fit a model to.\n residual_threshold : float\n Maximum distance for a data point to be classified as an inlier.\n is_data_valid : function, optional\n This function is called with the randomly selected data before the\n model is fitted to it: `is_data_valid(*random_data)`.\n is_model_valid : function, optional\n This function is called with the estimated model and the randomly\n selected data: `is_model_valid(model, *random_data)`, .\n max_trials : int, optional\n Maximum number of iterations for random sample selection.\n stop_sample_num : int, optional\n Stop iteration if at least this number of inliers are found.\n stop_residuals_sum : float, optional\n Stop iteration if sum of residuals is less than or equal to this\n threshold.\n stop_probability : float in range [0, 1], optional\n RANSAC iteration stops if at least one outlier-free set of the\n training data is sampled with ``probability >= stop_probability``,\n depending on the current best model's inlier ratio and the number\n of trials. This requires to generate at least N samples (trials):\n\n N >= log(1 - probability) / log(1 - e**m)\n\n where the probability (confidence) is typically set to a high value\n such as 0.99, and e is the current fraction of inliers w.r.t. the\n total number of samples.\n random_state : int, RandomState instance or None, optional\n If int, random_state is the seed used by the random number generator;\n If RandomState instance, random_state is the random number generator;\n If None, the random number generator is the RandomState instance used\n by `np.random`.\n\n\n Returns\n -------\n model : object\n Best model with largest consensus set.\n inliers : (N, ) array\n Boolean mask of inliers classified as ``True``.\n\n References\n ----------\n .. 
[1] \"RANSAC\", Wikipedia, https://en.wikipedia.org/wiki/RANSAC\n\n Examples\n --------\n\n Generate ellipse data without tilt and add noise:\n\n >>> t = np.linspace(0, 2 * np.pi, 50)\n >>> xc, yc = 20, 30\n >>> a, b = 5, 10\n >>> x = xc + a * np.cos(t)\n >>> y = yc + b * np.sin(t)\n >>> data = np.column_stack([x, y])\n >>> np.random.seed(seed=1234)\n >>> data += np.random.normal(size=data.shape)\n\n Add some faulty data:\n\n >>> data[0] = (100, 100)\n >>> data[1] = (110, 120)\n >>> data[2] = (120, 130)\n >>> data[3] = (140, 130)\n\n Estimate ellipse model using all available data:\n\n >>> model = EllipseModel()\n >>> model.estimate(data)\n True\n >>> np.round(model.params) # doctest: +SKIP\n array([ 72., 75., 77., 14., 1.])\n\n Estimate ellipse model using RANSAC:\n\n >>> ransac_model, inliers = ransac(data, EllipseModel, 20, 3, max_trials=50)\n >>> abs(np.round(ransac_model.params))\n array([ 20., 30., 5., 10., 0.])\n >>> inliers # doctest: +SKIP\n array([False, False, False, False, True, True, True, True, True,\n True, True, True, True, True, True, True, True, True,\n True, True, True, True, True, True, True, True, True,\n True, True, True, True, True, True, True, True, True,\n True, True, True, True, True, True, True, True, True,\n True, True, True, True, True], dtype=bool)\n >>> sum(inliers) > 40\n True\n\n Robustly estimate geometric transformation:\n\n >>> from skimage.transform import SimilarityTransform\n >>> np.random.seed(0)\n >>> src = 100 * np.random.rand(50, 2)\n >>> model0 = SimilarityTransform(scale=0.5, rotation=1,\n ... translation=(10, 20))\n >>> dst = model0(src)\n >>> dst[0] = (10000, 10000)\n >>> dst[1] = (-100, 100)\n >>> dst[2] = (50, 50)\n >>> model, inliers = ransac((src, dst), SimilarityTransform, 2, 10)\n >>> inliers\n array([False, False, False, True, True, True, True, True, True,\n True, True, True, True, True, True, True, True, True,\n True, True, True, True, True, True, True, True, True,\n True, True, True, True, True, True, True, True, True,\n True, True, True, True, True, True, True, True, True,\n True, True, True, True, True], dtype=bool)\n\n \"\"\"\n\n best_model = None\n best_inlier_num = 0\n best_inlier_residuals_sum = np.inf\n best_inliers = None\n\n random_state = check_random_state(random_state)\n\n if min_samples < 0:\n raise ValueError(\"`min_samples` must be greater than zero\")\n\n if max_trials < 0:\n raise ValueError(\"`max_trials` must be greater than zero\")\n\n if stop_probability < 0 or stop_probability > 1:\n raise ValueError(\"`stop_probability` must be in range [0, 1]\")\n\n if not isinstance(data, list) and not isinstance(data, tuple):\n data = [data]\n\n # make sure data is list and not tuple, so it can be modified below\n data = list(data)\n # number of samples\n num_samples = data[0].shape[0]\n\n for num_trials in range(max_trials):\n\n # choose random sample set\n samples = []\n random_idxs = random_state.randint(0, num_samples, min_samples)\n for d in data:\n samples.append(d[random_idxs])\n\n # check if random sample set is valid\n if is_data_valid is not None and not is_data_valid(*samples):\n continue\n\n # estimate model for current random sample set\n sample_model = model_class()\n\n success = sample_model.estimate(*samples)\n\n if success is not None: # backwards compatibility\n if not success:\n continue\n\n # check if estimated model is valid\n if is_model_valid is not None \\\n and not is_model_valid(sample_model, *samples):\n continue\n\n sample_model_residuals = np.abs(sample_model.residuals(*data))\n # consensus set / 
inliers\n sample_model_inliers = sample_model_residuals < residual_threshold\n sample_model_residuals_sum = np.sum(sample_model_residuals**2)\n\n # choose as new best model if number of inliers is maximal\n sample_inlier_num = np.sum(sample_model_inliers)\n if (\n # more inliers\n sample_inlier_num > best_inlier_num\n # same number of inliers but less \"error\" in terms of residuals\n or (sample_inlier_num == best_inlier_num\n and sample_model_residuals_sum < best_inlier_residuals_sum)\n ):\n best_model = sample_model\n best_inlier_num = sample_inlier_num\n best_inlier_residuals_sum = sample_model_residuals_sum\n best_inliers = sample_model_inliers\n if (\n best_inlier_num >= stop_sample_num\n or best_inlier_residuals_sum <= stop_residuals_sum\n or num_trials\n >= _dynamic_max_trials(best_inlier_num, num_samples,\n min_samples, stop_probability)\n ):\n break\n\n # estimate final model using all inliers\n if best_inliers is not None:\n # select inliers for each data array\n for i in range(len(data)):\n data[i] = data[i][best_inliers]\n best_model.estimate(*data)\n\n return best_model, best_inliers\n", "path": "skimage/measure/fit.py" } ]
[ { "content": "import math\nimport numpy as np\nfrom numpy.linalg import inv, pinv\nfrom scipy import optimize\nfrom .._shared.utils import check_random_state\n\n\ndef _check_data_dim(data, dim):\n if data.ndim != 2 or data.shape[1] != dim:\n raise ValueError('Input data must have shape (N, %d).' % dim)\n\n\ndef _check_data_atleast_2D(data):\n if data.ndim < 2 or data.shape[1] < 2:\n raise ValueError('Input data must be at least 2D.')\n\n\ndef _norm_along_axis(x, axis):\n \"\"\"NumPy < 1.8 does not support the `axis` argument for `np.linalg.norm`.\"\"\"\n return np.sqrt(np.einsum('ij,ij->i', x, x))\n\n\nclass BaseModel(object):\n\n def __init__(self):\n self.params = None\n\n\nclass LineModelND(BaseModel):\n \"\"\"Total least squares estimator for N-dimensional lines.\n\n In contrast to ordinary least squares line estimation, this estimator\n minimizes the orthogonal distances of points to the estimated line.\n\n Lines are defined by a point (origin) and a unit vector (direction)\n according to the following vector equation::\n\n X = origin + lambda * direction\n\n Attributes\n ----------\n params : tuple\n Line model parameters in the following order `origin`, `direction`.\n\n Examples\n --------\n >>> x = np.linspace(1, 2, 25)\n >>> y = 1.5 * x + 3\n >>> lm = LineModelND()\n >>> lm.estimate(np.array([x, y]).T)\n True\n >>> tuple(np.round(lm.params, 5))\n (array([ 1.5 , 5.25]), array([ 0.5547 , 0.83205]))\n >>> res = lm.residuals(np.array([x, y]).T)\n >>> np.abs(np.round(res, 9))\n array([ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,\n 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.])\n >>> np.round(lm.predict_y(x[:5]), 3)\n array([ 4.5 , 4.562, 4.625, 4.688, 4.75 ])\n >>> np.round(lm.predict_x(y[:5]), 3)\n array([ 1. , 1.042, 1.083, 1.125, 1.167])\n\n \"\"\"\n\n def estimate(self, data):\n \"\"\"Estimate line model from data.\n\n This minimizes the sum of shortest (orthogonal) distances\n from the given data points to the estimated line.\n\n Parameters\n ----------\n data : (N, dim) array\n N points in a space of dimensionality dim >= 2.\n\n Returns\n -------\n success : bool\n True, if model estimation succeeds.\n \"\"\"\n _check_data_atleast_2D(data)\n\n origin = data.mean(axis=0)\n data = data - origin\n\n if data.shape[0] == 2: # well determined\n direction = data[1] - data[0]\n norm = np.linalg.norm(direction)\n if norm != 0: # this should not happen to be norm 0\n direction /= norm\n elif data.shape[0] > 2: # over-determined\n # Note: with full_matrices=1 Python dies with joblib parallel_for.\n _, _, v = np.linalg.svd(data, full_matrices=False)\n direction = v[0]\n else: # under-determined\n raise ValueError('At least 2 input points needed.')\n\n self.params = (origin, direction)\n\n return True\n\n def residuals(self, data, params=None):\n \"\"\"Determine residuals of data to model.\n\n For each point, the shortest (orthogonal) distance to the line is\n returned. 
It is obtained by projecting the data onto the line.\n\n Parameters\n ----------\n data : (N, dim) array\n N points in a space of dimension dim.\n params : (2, ) array, optional\n Optional custom parameter set in the form (`origin`, `direction`).\n\n Returns\n -------\n residuals : (N, ) array\n Residual for each data point.\n \"\"\"\n _check_data_atleast_2D(data)\n if params is None:\n params = self.params\n assert params is not None\n if len(params) != 2:\n raise ValueError('Parameters are defined by 2 sets.')\n\n origin, direction = params\n res = (data - origin) - \\\n ((data - origin) @ direction)[..., np.newaxis] * direction\n return _norm_along_axis(res, axis=1)\n\n def predict(self, x, axis=0, params=None):\n \"\"\"Predict intersection of the estimated line model with a hyperplane\n orthogonal to a given axis.\n\n Parameters\n ----------\n x : (n, 1) array\n Coordinates along an axis.\n axis : int\n Axis orthogonal to the hyperplane intersecting the line.\n params : (2, ) array, optional\n Optional custom parameter set in the form (`origin`, `direction`).\n\n Returns\n -------\n data : (n, m) array\n Predicted coordinates.\n\n Raises\n ------\n ValueError\n If the line is parallel to the given axis.\n \"\"\"\n if params is None:\n params = self.params\n assert params is not None\n if len(params) != 2:\n raise ValueError('Parameters are defined by 2 sets.')\n\n origin, direction = params\n\n if direction[axis] == 0:\n # line parallel to axis\n raise ValueError('Line parallel to axis %s' % axis)\n\n l = (x - origin[axis]) / direction[axis]\n data = origin + l[..., np.newaxis] * direction\n return data\n\n def predict_x(self, y, params=None):\n \"\"\"Predict x-coordinates for 2D lines using the estimated model.\n\n Alias for::\n\n predict(y, axis=1)[:, 0]\n\n Parameters\n ----------\n y : array\n y-coordinates.\n params : (2, ) array, optional\n Optional custom parameter set in the form (`origin`, `direction`).\n\n Returns\n -------\n x : array\n Predicted x-coordinates.\n\n \"\"\"\n x = self.predict(y, axis=1, params=params)[:, 0]\n return x\n\n def predict_y(self, x, params=None):\n \"\"\"Predict y-coordinates for 2D lines using the estimated model.\n\n Alias for::\n\n predict(x, axis=0)[:, 1]\n\n Parameters\n ----------\n x : array\n x-coordinates.\n params : (2, ) array, optional\n Optional custom parameter set in the form (`origin`, `direction`).\n\n Returns\n -------\n y : array\n Predicted y-coordinates.\n\n \"\"\"\n y = self.predict(x, axis=0, params=params)[:, 1]\n return y\n\n\nclass CircleModel(BaseModel):\n\n \"\"\"Total least squares estimator for 2D circles.\n\n The functional model of the circle is::\n\n r**2 = (x - xc)**2 + (y - yc)**2\n\n This estimator minimizes the squared distances from all points to the\n circle::\n\n min{ sum((r - sqrt((x_i - xc)**2 + (y_i - yc)**2))**2) }\n\n A minimum number of 3 points is required to solve for the parameters.\n\n Attributes\n ----------\n params : tuple\n Circle model parameters in the following order `xc`, `yc`, `r`.\n\n Examples\n --------\n >>> t = np.linspace(0, 2 * np.pi, 25)\n >>> xy = CircleModel().predict_xy(t, params=(2, 3, 4))\n >>> model = CircleModel()\n >>> model.estimate(xy)\n True\n >>> tuple(np.round(model.params, 5))\n (2.0, 3.0, 4.0)\n >>> res = model.residuals(xy)\n >>> np.abs(np.round(res, 9))\n array([ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,\n 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.])\n\n \"\"\"\n\n def estimate(self, data):\n \"\"\"Estimate circle model from data using total least 
squares.\n\n Parameters\n ----------\n data : (N, 2) array\n N points with ``(x, y)`` coordinates, respectively.\n\n Returns\n -------\n success : bool\n True, if model estimation succeeds.\n\n \"\"\"\n\n _check_data_dim(data, dim=2)\n\n x = data[:, 0]\n y = data[:, 1]\n\n # http://www.had2know.com/academics/best-fit-circle-least-squares.html\n x2y2 = (x ** 2 + y ** 2)\n sum_x = np.sum(x)\n sum_y = np.sum(y)\n sum_xy = np.sum(x * y)\n m1 = np.array([[np.sum(x ** 2), sum_xy, sum_x],\n [sum_xy, np.sum(y ** 2), sum_y],\n [sum_x, sum_y, float(len(x))]])\n m2 = np.array([[np.sum(x * x2y2),\n np.sum(y * x2y2),\n np.sum(x2y2)]]).T\n a, b, c = pinv(m1) @ m2\n a, b, c = a[0], b[0], c[0]\n xc = a / 2\n yc = b / 2\n r = np.sqrt(4 * c + a ** 2 + b ** 2) / 2\n\n self.params = (xc, yc, r)\n\n return True\n\n def residuals(self, data):\n \"\"\"Determine residuals of data to model.\n\n For each point the shortest distance to the circle is returned.\n\n Parameters\n ----------\n data : (N, 2) array\n N points with ``(x, y)`` coordinates, respectively.\n\n Returns\n -------\n residuals : (N, ) array\n Residual for each data point.\n\n \"\"\"\n\n _check_data_dim(data, dim=2)\n\n xc, yc, r = self.params\n\n x = data[:, 0]\n y = data[:, 1]\n\n return r - np.sqrt((x - xc)**2 + (y - yc)**2)\n\n def predict_xy(self, t, params=None):\n \"\"\"Predict x- and y-coordinates using the estimated model.\n\n Parameters\n ----------\n t : array\n Angles in circle in radians. Angles start to count from positive\n x-axis to positive y-axis in a right-handed system.\n params : (3, ) array, optional\n Optional custom parameter set.\n\n Returns\n -------\n xy : (..., 2) array\n Predicted x- and y-coordinates.\n\n \"\"\"\n if params is None:\n params = self.params\n xc, yc, r = params\n\n x = xc + r * np.cos(t)\n y = yc + r * np.sin(t)\n\n return np.concatenate((x[..., None], y[..., None]), axis=t.ndim)\n\n\nclass EllipseModel(BaseModel):\n \"\"\"Total least squares estimator for 2D ellipses.\n\n The functional model of the ellipse is::\n\n xt = xc + a*cos(theta)*cos(t) - b*sin(theta)*sin(t)\n yt = yc + a*sin(theta)*cos(t) + b*cos(theta)*sin(t)\n d = sqrt((x - xt)**2 + (y - yt)**2)\n\n where ``(xt, yt)`` is the closest point on the ellipse to ``(x, y)``. Thus\n d is the shortest distance from the point to the ellipse.\n\n The estimator is based on a least squares minimization. The optimal\n solution is computed directly, no iterations are required. This leads\n to a simple, stable and robust fitting method.\n\n The ``params`` attribute contains the parameters in the following order::\n\n xc, yc, a, b, theta\n\n Attributes\n ----------\n params : tuple\n Ellipse model parameters in the following order `xc`, `yc`, `a`, `b`,\n `theta`.\n\n Examples\n --------\n\n >>> xy = EllipseModel().predict_xy(np.linspace(0, 2 * np.pi, 25),\n ... params=(10, 15, 4, 8, np.deg2rad(30)))\n >>> ellipse = EllipseModel()\n >>> ellipse.estimate(xy)\n True\n >>> np.round(ellipse.params, 2)\n array([ 10. , 15. , 4. , 8. , 0.52])\n >>> np.round(abs(ellipse.residuals(xy)), 5)\n array([ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,\n 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.])\n \"\"\"\n\n def estimate(self, data):\n \"\"\"Estimate circle model from data using total least squares.\n\n Parameters\n ----------\n data : (N, 2) array\n N points with ``(x, y)`` coordinates, respectively.\n\n Returns\n -------\n success : bool\n True, if model estimation succeeds.\n\n\n References\n ----------\n .. [1] Halir, R.; Flusser, J. 
\"Numerically stable direct least squares\n fitting of ellipses\". In Proc. 6th International Conference in\n Central Europe on Computer Graphics and Visualization.\n WSCG (Vol. 98, pp. 125-132).\n\n \"\"\"\n # Original Implementation: Ben Hammel, Nick Sullivan-Molina\n # another REFERENCE: [2] http://mathworld.wolfram.com/Ellipse.html\n _check_data_dim(data, dim=2)\n\n x = data[:, 0]\n y = data[:, 1]\n\n # Quadratic part of design matrix [eqn. 15] from [1]\n D1 = np.vstack([x ** 2, x * y, y ** 2]).T\n # Linear part of design matrix [eqn. 16] from [1]\n D2 = np.vstack([x, y, np.ones(len(x))]).T\n\n # forming scatter matrix [eqn. 17] from [1]\n S1 = D1.T @ D1\n S2 = D1.T @ D2\n S3 = D2.T @ D2\n\n # Constraint matrix [eqn. 18]\n C1 = np.array([[0., 0., 2.], [0., -1., 0.], [2., 0., 0.]])\n\n try:\n # Reduced scatter matrix [eqn. 29]\n M = inv(C1) @ (S1 - S2 @ inv(S3) @ S2.T)\n except np.linalg.LinAlgError: # LinAlgError: Singular matrix\n return False\n\n # M*|a b c >=l|a b c >. Find eigenvalues and eigenvectors\n # from this equation [eqn. 28]\n eig_vals, eig_vecs = np.linalg.eig(M)\n\n # eigenvector must meet constraint 4ac - b^2 to be valid.\n cond = 4 * np.multiply(eig_vecs[0, :], eig_vecs[2, :]) \\\n - np.power(eig_vecs[1, :], 2)\n a1 = eig_vecs[:, (cond > 0)]\n # seeks for empty matrix\n if 0 in a1.shape or len(a1.ravel()) != 3:\n return False\n a, b, c = a1.ravel()\n\n # |d f g> = -S3^(-1)*S2^(T)*|a b c> [eqn. 24]\n a2 = -inv(S3) @ S2.T @ a1\n d, f, g = a2.ravel()\n\n # eigenvectors are the coefficients of an ellipse in general form\n # a*x^2 + 2*b*x*y + c*y^2 + 2*d*x + 2*f*y + g = 0 (eqn. 15) from [2]\n b /= 2.\n d /= 2.\n f /= 2.\n\n # finding center of ellipse [eqn.19 and 20] from [2]\n x0 = (c * d - b * f) / (b ** 2. - a * c)\n y0 = (a * f - b * d) / (b ** 2. - a * c)\n\n # Find the semi-axes lengths [eqn. 21 and 22] from [2]\n numerator = a * f ** 2 + c * d ** 2 + g * b ** 2 \\\n - 2 * b * d * f - a * c * g\n term = np.sqrt((a - c) ** 2 + 4 * b ** 2)\n denominator1 = (b ** 2 - a * c) * (term - (a + c))\n denominator2 = (b ** 2 - a * c) * (- term - (a + c))\n width = np.sqrt(2 * numerator / denominator1)\n height = np.sqrt(2 * numerator / denominator2)\n\n # angle of counterclockwise rotation of major-axis of ellipse\n # to x-axis [eqn. 23] from [2].\n phi = 0.5 * np.arctan((2. 
* b) / (a - c))\n if a > c:\n phi += 0.5 * np.pi\n\n self.params = np.nan_to_num([x0, y0, width, height, phi]).tolist()\n self.params = [float(np.real(x)) for x in self.params]\n return True\n\n def residuals(self, data):\n \"\"\"Determine residuals of data to model.\n\n For each point the shortest distance to the ellipse is returned.\n\n Parameters\n ----------\n data : (N, 2) array\n N points with ``(x, y)`` coordinates, respectively.\n\n Returns\n -------\n residuals : (N, ) array\n Residual for each data point.\n\n \"\"\"\n\n _check_data_dim(data, dim=2)\n\n xc, yc, a, b, theta = self.params\n\n ctheta = math.cos(theta)\n stheta = math.sin(theta)\n\n x = data[:, 0]\n y = data[:, 1]\n\n N = data.shape[0]\n\n def fun(t, xi, yi):\n ct = math.cos(t)\n st = math.sin(t)\n xt = xc + a * ctheta * ct - b * stheta * st\n yt = yc + a * stheta * ct + b * ctheta * st\n return (xi - xt) ** 2 + (yi - yt) ** 2\n\n # def Dfun(t, xi, yi):\n # ct = math.cos(t)\n # st = math.sin(t)\n # xt = xc + a * ctheta * ct - b * stheta * st\n # yt = yc + a * stheta * ct + b * ctheta * st\n # dfx_t = - 2 * (xi - xt) * (- a * ctheta * st\n # - b * stheta * ct)\n # dfy_t = - 2 * (yi - yt) * (- a * stheta * st\n # + b * ctheta * ct)\n # return [dfx_t + dfy_t]\n\n residuals = np.empty((N, ), dtype=np.double)\n\n # initial guess for parameter t of closest point on ellipse\n t0 = np.arctan2(y - yc, x - xc) - theta\n\n # determine shortest distance to ellipse for each point\n for i in range(N):\n xi = x[i]\n yi = y[i]\n # faster without Dfun, because of the python overhead\n t, _ = optimize.leastsq(fun, t0[i], args=(xi, yi))\n residuals[i] = np.sqrt(fun(t, xi, yi))\n\n return residuals\n\n def predict_xy(self, t, params=None):\n \"\"\"Predict x- and y-coordinates using the estimated model.\n\n Parameters\n ----------\n t : array\n Angles in circle in radians. 
Angles start to count from positive\n x-axis to positive y-axis in a right-handed system.\n params : (5, ) array, optional\n Optional custom parameter set.\n\n Returns\n -------\n xy : (..., 2) array\n Predicted x- and y-coordinates.\n\n \"\"\"\n\n if params is None:\n params = self.params\n\n xc, yc, a, b, theta = params\n\n ct = np.cos(t)\n st = np.sin(t)\n ctheta = math.cos(theta)\n stheta = math.sin(theta)\n\n x = xc + a * ctheta * ct - b * stheta * st\n y = yc + a * stheta * ct + b * ctheta * st\n\n return np.concatenate((x[..., None], y[..., None]), axis=t.ndim)\n\n\ndef _dynamic_max_trials(n_inliers, n_samples, min_samples, probability):\n \"\"\"Determine number trials such that at least one outlier-free subset is\n sampled for the given inlier/outlier ratio.\n Parameters\n ----------\n n_inliers : int\n Number of inliers in the data.\n n_samples : int\n Total number of samples in the data.\n min_samples : int\n Minimum number of samples chosen randomly from original data.\n probability : float\n Probability (confidence) that one outlier-free sample is generated.\n Returns\n -------\n trials : int\n Number of trials.\n \"\"\"\n if n_inliers == 0:\n return np.inf\n\n nom = 1 - probability\n if nom == 0:\n return np.inf\n\n inlier_ratio = n_inliers / float(n_samples)\n denom = 1 - inlier_ratio ** min_samples\n if denom == 0:\n return 1\n elif denom == 1:\n return np.inf\n\n nom = np.log(nom)\n denom = np.log(denom)\n if denom == 0:\n return 0\n\n return int(np.ceil(nom / denom))\n\n\ndef ransac(data, model_class, min_samples, residual_threshold,\n is_data_valid=None, is_model_valid=None,\n max_trials=100, stop_sample_num=np.inf, stop_residuals_sum=0,\n stop_probability=1, random_state=None):\n \"\"\"Fit a model to data with the RANSAC (random sample consensus) algorithm.\n\n RANSAC is an iterative algorithm for the robust estimation of parameters\n from a subset of inliers from the complete data set. Each iteration\n performs the following tasks:\n\n 1. Select `min_samples` random samples from the original data and check\n whether the set of data is valid (see `is_data_valid`).\n 2. Estimate a model to the random subset\n (`model_cls.estimate(*data[random_subset]`) and check whether the\n estimated model is valid (see `is_model_valid`).\n 3. Classify all data as inliers or outliers by calculating the residuals\n to the estimated model (`model_cls.residuals(*data)`) - all data samples\n with residuals smaller than the `residual_threshold` are considered as\n inliers.\n 4. Save estimated model as best model if number of inlier samples is\n maximal. In case the current estimated model has the same number of\n inliers, it is only considered as the best model if it has less sum of\n residuals.\n\n These steps are performed either a maximum number of times or until one of\n the special stop criteria are met. The final model is estimated using all\n inlier samples of the previously determined best model.\n\n Parameters\n ----------\n data : [list, tuple of] (N, D) array\n Data set to which the model is fitted, where N is the number of data\n points and D the dimensionality of the data.\n If the model class requires multiple input data arrays (e.g. source and\n destination coordinates of ``skimage.transform.AffineTransform``),\n they can be optionally passed as tuple or list. 
Note, that in this case\n the functions ``estimate(*data)``, ``residuals(*data)``,\n ``is_model_valid(model, *random_data)`` and\n ``is_data_valid(*random_data)`` must all take each data array as\n separate arguments.\n model_class : object\n Object with the following object methods:\n\n * ``success = estimate(*data)``\n * ``residuals(*data)``\n\n where `success` indicates whether the model estimation succeeded\n (`True` or `None` for success, `False` for failure).\n min_samples : int\n The minimum number of data points to fit a model to.\n residual_threshold : float\n Maximum distance for a data point to be classified as an inlier.\n is_data_valid : function, optional\n This function is called with the randomly selected data before the\n model is fitted to it: `is_data_valid(*random_data)`.\n is_model_valid : function, optional\n This function is called with the estimated model and the randomly\n selected data: `is_model_valid(model, *random_data)`, .\n max_trials : int, optional\n Maximum number of iterations for random sample selection.\n stop_sample_num : int, optional\n Stop iteration if at least this number of inliers are found.\n stop_residuals_sum : float, optional\n Stop iteration if sum of residuals is less than or equal to this\n threshold.\n stop_probability : float in range [0, 1], optional\n RANSAC iteration stops if at least one outlier-free set of the\n training data is sampled with ``probability >= stop_probability``,\n depending on the current best model's inlier ratio and the number\n of trials. This requires to generate at least N samples (trials):\n\n N >= log(1 - probability) / log(1 - e**m)\n\n where the probability (confidence) is typically set to a high value\n such as 0.99, and e is the current fraction of inliers w.r.t. the\n total number of samples.\n random_state : int, RandomState instance or None, optional\n If int, random_state is the seed used by the random number generator;\n If RandomState instance, random_state is the random number generator;\n If None, the random number generator is the RandomState instance used\n by `np.random`.\n\n\n Returns\n -------\n model : object\n Best model with largest consensus set.\n inliers : (N, ) array\n Boolean mask of inliers classified as ``True``.\n\n References\n ----------\n .. 
[1] \"RANSAC\", Wikipedia, https://en.wikipedia.org/wiki/RANSAC\n\n Examples\n --------\n\n Generate ellipse data without tilt and add noise:\n\n >>> t = np.linspace(0, 2 * np.pi, 50)\n >>> xc, yc = 20, 30\n >>> a, b = 5, 10\n >>> x = xc + a * np.cos(t)\n >>> y = yc + b * np.sin(t)\n >>> data = np.column_stack([x, y])\n >>> np.random.seed(seed=1234)\n >>> data += np.random.normal(size=data.shape)\n\n Add some faulty data:\n\n >>> data[0] = (100, 100)\n >>> data[1] = (110, 120)\n >>> data[2] = (120, 130)\n >>> data[3] = (140, 130)\n\n Estimate ellipse model using all available data:\n\n >>> model = EllipseModel()\n >>> model.estimate(data)\n True\n >>> np.round(model.params) # doctest: +SKIP\n array([ 72., 75., 77., 14., 1.])\n\n Estimate ellipse model using RANSAC:\n\n >>> ransac_model, inliers = ransac(data, EllipseModel, 20, 3, max_trials=50)\n >>> abs(np.round(ransac_model.params))\n array([ 20., 30., 5., 10., 0.])\n >>> inliers # doctest: +SKIP\n array([False, False, False, False, True, True, True, True, True,\n True, True, True, True, True, True, True, True, True,\n True, True, True, True, True, True, True, True, True,\n True, True, True, True, True, True, True, True, True,\n True, True, True, True, True, True, True, True, True,\n True, True, True, True, True], dtype=bool)\n >>> sum(inliers) > 40\n True\n\n Robustly estimate geometric transformation:\n\n >>> from skimage.transform import SimilarityTransform\n >>> np.random.seed(0)\n >>> src = 100 * np.random.rand(50, 2)\n >>> model0 = SimilarityTransform(scale=0.5, rotation=1,\n ... translation=(10, 20))\n >>> dst = model0(src)\n >>> dst[0] = (10000, 10000)\n >>> dst[1] = (-100, 100)\n >>> dst[2] = (50, 50)\n >>> model, inliers = ransac((src, dst), SimilarityTransform, 2, 10)\n >>> inliers\n array([False, False, False, True, True, True, True, True, True,\n True, True, True, True, True, True, True, True, True,\n True, True, True, True, True, True, True, True, True,\n True, True, True, True, True, True, True, True, True,\n True, True, True, True, True, True, True, True, True,\n True, True, True, True, True], dtype=bool)\n\n \"\"\"\n\n best_model = None\n best_inlier_num = 0\n best_inlier_residuals_sum = np.inf\n best_inliers = None\n\n random_state = check_random_state(random_state)\n\n if min_samples < 0:\n raise ValueError(\"`min_samples` must be greater than zero\")\n\n if max_trials < 0:\n raise ValueError(\"`max_trials` must be greater than zero\")\n\n if stop_probability < 0 or stop_probability > 1:\n raise ValueError(\"`stop_probability` must be in range [0, 1]\")\n\n if not isinstance(data, list) and not isinstance(data, tuple):\n data = [data]\n\n # make sure data is list and not tuple, so it can be modified below\n data = list(data)\n # number of samples\n num_samples = data[0].shape[0]\n\n for num_trials in range(max_trials):\n\n # choose random sample set\n samples = []\n random_idxs = random_state.choice(num_samples, min_samples,\n replace=False)\n for d in data:\n samples.append(d[random_idxs])\n\n # check if random sample set is valid\n if is_data_valid is not None and not is_data_valid(*samples):\n continue\n\n # estimate model for current random sample set\n sample_model = model_class()\n\n success = sample_model.estimate(*samples)\n\n if success is not None: # backwards compatibility\n if not success:\n continue\n\n # check if estimated model is valid\n if is_model_valid is not None \\\n and not is_model_valid(sample_model, *samples):\n continue\n\n sample_model_residuals = np.abs(sample_model.residuals(*data))\n # 
consensus set / inliers\n sample_model_inliers = sample_model_residuals < residual_threshold\n sample_model_residuals_sum = np.sum(sample_model_residuals**2)\n\n # choose as new best model if number of inliers is maximal\n sample_inlier_num = np.sum(sample_model_inliers)\n if (\n # more inliers\n sample_inlier_num > best_inlier_num\n # same number of inliers but less \"error\" in terms of residuals\n or (sample_inlier_num == best_inlier_num\n and sample_model_residuals_sum < best_inlier_residuals_sum)\n ):\n best_model = sample_model\n best_inlier_num = sample_inlier_num\n best_inlier_residuals_sum = sample_model_residuals_sum\n best_inliers = sample_model_inliers\n if (\n best_inlier_num >= stop_sample_num\n or best_inlier_residuals_sum <= stop_residuals_sum\n or num_trials\n >= _dynamic_max_trials(best_inlier_num, num_samples,\n min_samples, stop_probability)\n ):\n break\n\n # estimate final model using all inliers\n if best_inliers is not None:\n # select inliers for each data array\n for i in range(len(data)):\n data[i] = data[i][best_inliers]\n best_model.estimate(*data)\n\n return best_model, best_inliers\n", "path": "skimage/measure/fit.py" } ]
diff --git a/skimage/measure/fit.py b/skimage/measure/fit.py index 2cb51426f3b..d88b79c4a0b 100644 --- a/skimage/measure/fit.py +++ b/skimage/measure/fit.py @@ -804,7 +804,8 @@ def ransac(data, model_class, min_samples, residual_threshold, # choose random sample set samples = [] - random_idxs = random_state.randint(0, num_samples, min_samples) + random_idxs = random_state.choice(num_samples, min_samples, + replace=False) for d in data: samples.append(d[random_idxs]) diff --git a/skimage/measure/tests/test_fit.py b/skimage/measure/tests/test_fit.py index 0a2f069e5fe..c58934e155e 100644 --- a/skimage/measure/tests/test_fit.py +++ b/skimage/measure/tests/test_fit.py @@ -348,3 +348,23 @@ def test_ransac_invalid_input(): with testing.raises(ValueError): ransac(np.zeros((10, 2)), None, min_samples=2, residual_threshold=0, stop_probability=1.01) + + +def test_ransac_sample_duplicates(): + class DummyModel(object): + + """Dummy model to check for duplicates.""" + + def estimate(self, data): + # Assert that all data points are unique. + assert_equal(np.unique(data).size, data.size) + return True + + def residuals(self, data): + return 1.0 + + # Create dataset with four unique points. Force 10 iterations + # and check that there are no duplicated data points. + data = np.arange(4) + ransac(data, DummyModel, min_samples=3, residual_threshold=0.0, + max_trials=10)
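Editorial aside on the row above (not part of the dataset fields): the pr_diff switches RANSAC's index sampling from `randint` to `choice(..., replace=False)`. A minimal sketch of the behavioural difference, using only NumPy's public `RandomState` API; the variable names are illustrative:

```python
import numpy as np

rng = np.random.RandomState(0)
num_samples, min_samples = 4, 3

# Old behaviour: draws WITH replacement, so the same index (and therefore the
# same data point) can appear more than once in the minimal sample set.
with_replacement = rng.randint(0, num_samples, min_samples)

# Patched behaviour: draws WITHOUT replacement, so all chosen indices are distinct.
without_replacement = rng.choice(num_samples, min_samples, replace=False)

print(with_replacement)      # can contain repeated indices
print(without_replacement)   # always min_samples distinct indices
```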
mindsdb__mindsdb-1576
Add new method to count number of rows for MySQL datasources :electric_plug: :1234:

When MindsDB creates a new MySQL datasource, we currently get the row count by fetching the whole datasource. The problem here is that if the datasource is big, it takes a lot of time. We need a new `get_row_count` method to return the number of rows per datasource. The PR should include this method inside the MySQL class.

## Steps :male_detective: :female_detective:

- Implement in https://github.com/mindsdb/mindsdb/blob/stable/mindsdb/integrations/mysql/mysql.py#L51
- Example method:

```py
def get_row_count(self, query):
    result = conn.execute(query)
    return len(result)
```

- Push to staging branch

## Additional rewards :1st_place_medal:

Each code PR brings :three: points for entry into the draw for a :computer: Deep Learning Laptop powered by the NVIDIA RTX 3080 Max-Q GPU or other swag :shirt: :bear:. For more info check out https://mindsdb.com/hacktoberfest/
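A sketch of the counting approach the issue is asking for, delegating the count to the server instead of fetching rows. It assumes a `_query` helper like the one in the integration class shown in the files below; the merged implementation appears in this row's after_files and pr_diff:

```py
def get_row_count(self, query):
    # Wrap the caller's query so MySQL does the counting; only one row
    # (the count) travels back instead of the whole result set.
    q = f"SELECT COUNT(*) AS count FROM ({query}) AS sub"
    result = self._query(q)
    return result[0]['count']
```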
[ { "content": "import os\nimport shutil\nimport tempfile\n\nfrom contextlib import closing\nimport mysql.connector\n\nfrom lightwood.api import dtype\nfrom mindsdb.integrations.base import Integration\nfrom mindsdb.utilities.log import log\n\n\nclass MySQLConnectionChecker:\n def __init__(self, **kwargs):\n self.host = kwargs.get('host')\n self.port = kwargs.get('port')\n self.user = kwargs.get('user')\n self.password = kwargs.get('password')\n self.ssl = kwargs.get('ssl')\n self.ssl_ca = kwargs.get('ssl_ca')\n self.ssl_cert = kwargs.get('ssl_cert')\n self.ssl_key = kwargs.get('ssl_key')\n\n def _get_connnection(self):\n config = {\n \"host\": self.host,\n \"port\": self.port,\n \"user\": self.user,\n \"password\": self.password\n }\n if self.ssl is True:\n config['client_flags'] = [mysql.connector.constants.ClientFlag.SSL]\n if self.ssl_ca is not None:\n config[\"ssl_ca\"] = self.ssl_ca\n if self.ssl_cert is not None:\n config[\"ssl_cert\"] = self.ssl_cert\n if self.ssl_key is not None:\n config[\"ssl_key\"] = self.ssl_key\n return mysql.connector.connect(**config)\n\n def check_connection(self):\n try:\n con = self._get_connnection()\n with closing(con) as con:\n connected = con.is_connected()\n except Exception:\n connected = False\n return connected\n\n\nclass MySQL(Integration, MySQLConnectionChecker):\n def __init__(self, config, name, db_info):\n super().__init__(config, name)\n self.user = db_info.get('user')\n self.password = db_info.get('password')\n self.host = db_info.get('host')\n self.port = db_info.get('port')\n self.ssl = db_info.get('ssl')\n self.ssl_ca = db_info.get('ssl_ca')\n self.ssl_cert = db_info.get('ssl_cert')\n self.ssl_key = db_info.get('ssl_key')\n\n def _to_mysql_table(self, dtype_dict, predicted_cols, columns):\n subtype_map = {\n dtype.integer: 'int',\n dtype.float: 'double',\n dtype.binary: 'bool',\n dtype.date: 'Date',\n dtype.datetime: 'Datetime',\n dtype.binary: 'VARCHAR(500)',\n dtype.categorical: 'VARCHAR(500)',\n dtype.tags: 'VARCHAR(500)',\n dtype.image: 'VARCHAR(500)',\n dtype.video: 'VARCHAR(500)',\n dtype.audio: 'VARCHAR(500)',\n dtype.short_text: 'VARCHAR(500)',\n dtype.rich_text: 'VARCHAR(500)',\n dtype.array: 'VARCHAR(500)'\n }\n\n column_declaration = []\n for name in columns:\n try:\n col_subtype = dtype_dict[name]\n new_type = subtype_map[col_subtype]\n column_declaration.append(f' `{name}` {new_type} ')\n if name in predicted_cols:\n column_declaration.append(f' `{name}_original` {new_type} ')\n except Exception as e:\n log.error(f'Error: can not determine mysql data type for column {name}: {e}')\n\n return column_declaration\n\n def _escape_table_name(self, name):\n return '`' + name.replace('`', '``') + '`'\n\n def _query(self, query):\n con = self._get_connnection()\n with closing(con) as con:\n cur = con.cursor(dictionary=True, buffered=True)\n cur.execute(query)\n res = True\n try:\n res = cur.fetchall()\n except Exception:\n pass\n con.commit()\n\n return res\n\n def _get_connect_string(self, table):\n user = f\"{self.config['api']['mysql']['user']}_{self.name}\"\n password = self.config['api']['mysql']['password']\n host = self.config['api']['mysql']['host']\n port = self.config['api']['mysql']['port']\n\n if password is None or password == '':\n connect = f'mysql://{user}@{host}:{port}/mindsdb/{table}'\n else:\n connect = f'mysql://{user}:{password}@{host}:{port}/mindsdb/{table}'\n\n return connect\n\n def setup(self):\n self._query(f'DROP DATABASE IF EXISTS {self.mindsdb_database}')\n self._query(f'CREATE DATABASE IF NOT EXISTS 
{self.mindsdb_database}')\n\n connect = self._get_connect_string('predictors')\n\n q = f\"\"\"\n CREATE TABLE IF NOT EXISTS {self.mindsdb_database}.predictors (\n name VARCHAR(500),\n status VARCHAR(500),\n accuracy VARCHAR(500),\n predict VARCHAR(500),\n select_data_query VARCHAR(500),\n external_datasource VARCHAR(500),\n training_options VARCHAR(500),\n key name_key (name)\n ) ENGINE=FEDERATED CHARSET=utf8 CONNECTION='{connect}';\n \"\"\"\n self._query(q)\n\n connect = self._get_connect_string('commands')\n\n q = f\"\"\"\n CREATE TABLE IF NOT EXISTS {self.mindsdb_database}.commands (\n command VARCHAR(500),\n key command_key (command)\n ) ENGINE=FEDERATED CHARSET=utf8 CONNECTION='{connect}';\n \"\"\"\n self._query(q)\n\n def register_predictors(self, model_data_arr):\n for model_meta in model_data_arr:\n name = model_meta['name']\n predict = model_meta['predict']\n if not isinstance(predict, list):\n predict = [predict]\n columns_sql = ','.join(self._to_mysql_table(\n model_meta['dtype_dict'],\n predict,\n list(model_meta['dtype_dict'].keys())\n ))\n columns_sql += ',`when_data` varchar(500)'\n columns_sql += ',`select_data_query` varchar(500)'\n columns_sql += ',`external_datasource` varchar(500)'\n for col in predict:\n columns_sql += f',`{col}_confidence` double'\n if model_meta['dtype_dict'][col] in (dtype.integer, dtype.float):\n columns_sql += f',`{col}_min` double'\n columns_sql += f',`{col}_max` double'\n columns_sql += f',`{col}_explain` varchar(500)'\n\n connect = self._get_connect_string(name)\n\n self.unregister_predictor(name)\n q = f\"\"\"\n CREATE TABLE {self.mindsdb_database}.{self._escape_table_name(name)} (\n {columns_sql},\n index when_data_index (when_data),\n index select_data_query_index (select_data_query),\n index external_datasource_index (external_datasource)\n ) ENGINE=FEDERATED CHARSET=utf8 CONNECTION='{connect}';\n \"\"\"\n self._query(q)\n\n def unregister_predictor(self, name):\n q = f\"\"\"\n drop table if exists {self.mindsdb_database}.{self._escape_table_name(name)};\n \"\"\"\n self._query(q)\n", "path": "mindsdb/integrations/mysql/mysql.py" } ]
[ { "content": "import os\nimport shutil\nimport tempfile\n\nfrom contextlib import closing\nimport mysql.connector\n\nfrom lightwood.api import dtype\nfrom mindsdb.integrations.base import Integration\nfrom mindsdb.utilities.log import log\n\n\nclass MySQLConnectionChecker:\n def __init__(self, **kwargs):\n self.host = kwargs.get('host')\n self.port = kwargs.get('port')\n self.user = kwargs.get('user')\n self.password = kwargs.get('password')\n self.ssl = kwargs.get('ssl')\n self.ssl_ca = kwargs.get('ssl_ca')\n self.ssl_cert = kwargs.get('ssl_cert')\n self.ssl_key = kwargs.get('ssl_key')\n\n def _get_connnection(self):\n config = {\n \"host\": self.host,\n \"port\": self.port,\n \"user\": self.user,\n \"password\": self.password\n }\n if self.ssl is True:\n config['client_flags'] = [mysql.connector.constants.ClientFlag.SSL]\n if self.ssl_ca is not None:\n config[\"ssl_ca\"] = self.ssl_ca\n if self.ssl_cert is not None:\n config[\"ssl_cert\"] = self.ssl_cert\n if self.ssl_key is not None:\n config[\"ssl_key\"] = self.ssl_key\n return mysql.connector.connect(**config)\n\n def check_connection(self):\n try:\n con = self._get_connnection()\n with closing(con) as con:\n connected = con.is_connected()\n except Exception:\n connected = False\n return connected\n\n\nclass MySQL(Integration, MySQLConnectionChecker):\n def __init__(self, config, name, db_info):\n super().__init__(config, name)\n self.user = db_info.get('user')\n self.password = db_info.get('password')\n self.host = db_info.get('host')\n self.port = db_info.get('port')\n self.ssl = db_info.get('ssl')\n self.ssl_ca = db_info.get('ssl_ca')\n self.ssl_cert = db_info.get('ssl_cert')\n self.ssl_key = db_info.get('ssl_key')\n\n def _to_mysql_table(self, dtype_dict, predicted_cols, columns):\n subtype_map = {\n dtype.integer: 'int',\n dtype.float: 'double',\n dtype.binary: 'bool',\n dtype.date: 'Date',\n dtype.datetime: 'Datetime',\n dtype.binary: 'VARCHAR(500)',\n dtype.categorical: 'VARCHAR(500)',\n dtype.tags: 'VARCHAR(500)',\n dtype.image: 'VARCHAR(500)',\n dtype.video: 'VARCHAR(500)',\n dtype.audio: 'VARCHAR(500)',\n dtype.short_text: 'VARCHAR(500)',\n dtype.rich_text: 'VARCHAR(500)',\n dtype.array: 'VARCHAR(500)'\n }\n\n column_declaration = []\n for name in columns:\n try:\n col_subtype = dtype_dict[name]\n new_type = subtype_map[col_subtype]\n column_declaration.append(f' `{name}` {new_type} ')\n if name in predicted_cols:\n column_declaration.append(f' `{name}_original` {new_type} ')\n except Exception as e:\n log.error(f'Error: can not determine mysql data type for column {name}: {e}')\n\n return column_declaration\n\n def _escape_table_name(self, name):\n return '`' + name.replace('`', '``') + '`'\n\n def _query(self, query):\n con = self._get_connnection()\n with closing(con) as con:\n cur = con.cursor(dictionary=True, buffered=True)\n cur.execute(query)\n res = True\n try:\n res = cur.fetchall()\n except Exception:\n pass\n con.commit()\n\n return res\n\n def _get_connect_string(self, table):\n user = f\"{self.config['api']['mysql']['user']}_{self.name}\"\n password = self.config['api']['mysql']['password']\n host = self.config['api']['mysql']['host']\n port = self.config['api']['mysql']['port']\n\n if password is None or password == '':\n connect = f'mysql://{user}@{host}:{port}/mindsdb/{table}'\n else:\n connect = f'mysql://{user}:{password}@{host}:{port}/mindsdb/{table}'\n\n return connect\n\n def setup(self):\n self._query(f'DROP DATABASE IF EXISTS {self.mindsdb_database}')\n self._query(f'CREATE DATABASE IF NOT EXISTS 
{self.mindsdb_database}')\n\n connect = self._get_connect_string('predictors')\n\n q = f\"\"\"\n CREATE TABLE IF NOT EXISTS {self.mindsdb_database}.predictors (\n name VARCHAR(500),\n status VARCHAR(500),\n accuracy VARCHAR(500),\n predict VARCHAR(500),\n select_data_query VARCHAR(500),\n external_datasource VARCHAR(500),\n training_options VARCHAR(500),\n key name_key (name)\n ) ENGINE=FEDERATED CHARSET=utf8 CONNECTION='{connect}';\n \"\"\"\n self._query(q)\n\n connect = self._get_connect_string('commands')\n\n q = f\"\"\"\n CREATE TABLE IF NOT EXISTS {self.mindsdb_database}.commands (\n command VARCHAR(500),\n key command_key (command)\n ) ENGINE=FEDERATED CHARSET=utf8 CONNECTION='{connect}';\n \"\"\"\n self._query(q)\n\n def register_predictors(self, model_data_arr):\n for model_meta in model_data_arr:\n name = model_meta['name']\n predict = model_meta['predict']\n if not isinstance(predict, list):\n predict = [predict]\n columns_sql = ','.join(self._to_mysql_table(\n model_meta['dtype_dict'],\n predict,\n list(model_meta['dtype_dict'].keys())\n ))\n columns_sql += ',`when_data` varchar(500)'\n columns_sql += ',`select_data_query` varchar(500)'\n columns_sql += ',`external_datasource` varchar(500)'\n for col in predict:\n columns_sql += f',`{col}_confidence` double'\n if model_meta['dtype_dict'][col] in (dtype.integer, dtype.float):\n columns_sql += f',`{col}_min` double'\n columns_sql += f',`{col}_max` double'\n columns_sql += f',`{col}_explain` varchar(500)'\n\n connect = self._get_connect_string(name)\n\n self.unregister_predictor(name)\n q = f\"\"\"\n CREATE TABLE {self.mindsdb_database}.{self._escape_table_name(name)} (\n {columns_sql},\n index when_data_index (when_data),\n index select_data_query_index (select_data_query),\n index external_datasource_index (external_datasource)\n ) ENGINE=FEDERATED CHARSET=utf8 CONNECTION='{connect}';\n \"\"\"\n self._query(q)\n\n def unregister_predictor(self, name):\n q = f\"\"\"\n drop table if exists {self.mindsdb_database}.{self._escape_table_name(name)};\n \"\"\"\n self._query(q)\n\n def get_row_count(self, query):\n q = f\"\"\" \n SELECT COUNT(*) as count\n FROM ({query}) as query;\"\"\"\n result = self._query(q)\n return result[0]['count']\n", "path": "mindsdb/integrations/mysql/mysql.py" } ]
diff --git a/mindsdb/integrations/mysql/mysql.py b/mindsdb/integrations/mysql/mysql.py index 14c6c81d603..2237cf8fae1 100644 --- a/mindsdb/integrations/mysql/mysql.py +++ b/mindsdb/integrations/mysql/mysql.py @@ -190,3 +190,10 @@ def unregister_predictor(self, name): drop table if exists {self.mindsdb_database}.{self._escape_table_name(name)}; """ self._query(q) + + def get_row_count(self, query): + q = f""" + SELECT COUNT(*) as count + FROM ({query}) as query;""" + result = self._query(q) + return result[0]['count']
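With the method added in the pr_diff above, a caller can obtain the size of a datasource without materialising it. A hypothetical usage sketch, where `mysql_ds` stands for a configured `MySQL` integration instance:

```py
row_count = mysql_ds.get_row_count("SELECT * FROM my_big_table")
print(f"The datasource has {row_count} rows")
```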
conan-io__conan-2763
[bug] Linter "Unable to import" warning when importing a shared Python Conan package in the build() step

- [x] I've read the [CONTRIBUTING guide](https://raw.githubusercontent.com/conan-io/conan/develop/.github/CONTRIBUTING.md).
- [x] I've specified the Conan version, operating system version and any tool that can be relevant.
- [x] I've explained the steps to reproduce the error or the motivation/use case of the question/suggestion.

---

I followed the instructions on http://docs.conan.io/en/latest/howtos/python_code_reuse.html. When I get to the "Requiring a python conan package" step, the linter gives me a warning about importing the shared package:

```
$ git clone https://github.com/smokris/conan-test-library
$ cd conan-test-library
$ conan export . me/testing
$ cd ..
$ git clone https://github.com/smokris/conan-test-consumer
$ cd conan-test-consumer
$ conan create . me/testing
HelloPyReuse/1.0@me/testing: Exporting package recipe
Linter warnings
    WARN: Linter. Line 9: Unable to import 'hello'
…
HelloPyReuse/1.0@me/testing: Calling build()
Hello World from Python!
…
```

(The imported package works fine; the problem is just that the linter is emitting a warning. I'd prefer that the linter not show this false-positive warning, to improve the linter's signal-to-noise ratio.)

I'm able to reproduce this using:

- Conan 1.1.1 on my local macOS 10.13.3 system
- Conan 1.1.1 on Travis CI's Mac OS 10.10.5 image
- Conan 1.1.1 on Travis CI's Ubuntu 14.04.5 image
- Conan 1.2.0 on CentOS 7.4
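The consumer recipe that triggers the warning is not reproduced in this row, but the test fixture added in the pr_diff below exercises the same pattern. A rough sketch of such a recipe, with the package reference and module names assumed from the warning and output above:

```py
from conans import ConanFile

class ConsumerConan(ConanFile):
    name = "Consumer"
    version = "0.1"
    requires = "Hello/0.1@me/testing"  # assumed reference to the shared Python package

    def build(self):
        # The module only exists once the requirement has been retrieved, so the
        # import must live inside the method; pylint, analysing the file
        # statically, flags this line with "Unable to import".
        from hello import hello
        hello()
```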
[ { "content": "import json\nimport os\nimport sys\n\nimport platform\n\nfrom conans.client.output import Color\nfrom conans.errors import ConanException\nfrom subprocess import PIPE, Popen\nfrom conans import __path__ as root_path\n\n\ndef conan_linter(conanfile_path, out):\n if getattr(sys, 'frozen', False):\n out.info(\"No linter available. Use a pip installed conan for recipe linting\")\n return\n apply_lint = os.environ.get(\"CONAN_RECIPE_LINTER\", True)\n if not apply_lint or apply_lint == \"False\":\n return\n\n dir_path = os.path.dirname(root_path[0]).replace(\"\\\\\", \"/\")\n dirname = os.path.dirname(conanfile_path).replace(\"\\\\\", \"/\")\n hook = '--init-hook=\"import sys;sys.path.extend([\\'%s\\', \\'%s\\'])\"' % (dirname, dir_path)\n\n try:\n py3_msgs = None\n msgs, py3_msgs = _normal_linter(conanfile_path, hook)\n except Exception as e:\n out.warn(\"Failed pylint: %s\" % e)\n else:\n if py3_msgs:\n out.writeln(\"Python 3 incompatibilities\\n ERROR: %s\"\n % \"\\n ERROR: \".join(py3_msgs),\n front=Color.BRIGHT_MAGENTA)\n if msgs:\n out.writeln(\"Linter warnings\\n WARN: %s\" % \"\\n WARN: \".join(msgs),\n front=Color.MAGENTA)\n pylint_werr = os.environ.get(\"CONAN_PYLINT_WERR\", None)\n if pylint_werr and (py3_msgs or msgs):\n raise ConanException(\"Package recipe has linter errors. Please fix them.\")\n\n\ndef _runner(args):\n command = [\"pylint\", \"--output-format=json\"] + args\n command = \" \".join(command)\n shell = True if platform.system() != \"Windows\" else False\n proc = Popen(command, shell=shell, bufsize=10, stdout=PIPE, stderr=PIPE)\n stdout, _ = proc.communicate()\n return json.loads(stdout.decode(\"utf-8\")) if stdout else {}\n\n\ndef _normal_linter(conanfile_path, hook):\n args = ['--py3k', \"--enable=all\", \"--reports=no\", \"--disable=no-absolute-import\", \"--persistent=no\",\n hook, '\"%s\"' % conanfile_path]\n pylintrc = os.environ.get(\"CONAN_PYLINTRC\", None)\n if pylintrc:\n if not os.path.exists(pylintrc):\n raise ConanException(\"File %s defined by PYLINTRC doesn't exist\" % pylintrc)\n args.append('--rcfile=\"%s\"' % pylintrc)\n\n output_json = _runner(args)\n dynamic_fields = (\"source_folder\", \"build_folder\", \"package_folder\", \"info_build\",\n \"build_requires\", \"info\")\n\n def _accept_message(msg):\n symbol = msg.get(\"symbol\")\n text = msg.get(\"message\")\n\n if symbol == \"no-member\":\n for field in dynamic_fields:\n if field in text:\n return False\n if symbol == \"not-callable\" and \"self.copy is not callable\" == text:\n return False\n if symbol == \"not-callable\" and \"self.copy_deps is not callable\" == text:\n return False\n if symbol in (\"bare-except\", \"broad-except\"): # No exception type(s) specified\n return False\n\n return True\n\n result = []\n py3msgs = []\n for msg in output_json:\n if msg.get(\"type\") in (\"warning\", \"error\"):\n message_id = msg.get(\"symbol\")\n if message_id in (\"print-statement\", \"dict-iter-method\"):\n py3msgs.append(\"Py3 incompatibility. Line %s: %s\"\n % (msg.get(\"line\"), msg.get(\"message\")))\n elif _accept_message(msg):\n result.append(\"Linter. Line %s: %s\" % (msg.get(\"line\"), msg.get(\"message\")))\n\n return result, py3msgs\n", "path": "conans/client/cmd/export_linter.py" } ]
[ { "content": "import json\nimport os\nimport sys\n\nimport platform\n\nfrom conans.client.output import Color\nfrom conans.errors import ConanException\nfrom subprocess import PIPE, Popen\nfrom conans import __path__ as root_path\n\n\ndef conan_linter(conanfile_path, out):\n if getattr(sys, 'frozen', False):\n out.info(\"No linter available. Use a pip installed conan for recipe linting\")\n return\n apply_lint = os.environ.get(\"CONAN_RECIPE_LINTER\", True)\n if not apply_lint or apply_lint == \"False\":\n return\n\n dir_path = os.path.dirname(root_path[0]).replace(\"\\\\\", \"/\")\n dirname = os.path.dirname(conanfile_path).replace(\"\\\\\", \"/\")\n hook = '--init-hook=\"import sys;sys.path.extend([\\'%s\\', \\'%s\\'])\"' % (dirname, dir_path)\n\n try:\n py3_msgs = None\n msgs, py3_msgs = _normal_linter(conanfile_path, hook)\n except Exception as e:\n out.warn(\"Failed pylint: %s\" % e)\n else:\n if py3_msgs:\n out.writeln(\"Python 3 incompatibilities\\n ERROR: %s\"\n % \"\\n ERROR: \".join(py3_msgs),\n front=Color.BRIGHT_MAGENTA)\n if msgs:\n out.writeln(\"Linter warnings\\n WARN: %s\" % \"\\n WARN: \".join(msgs),\n front=Color.MAGENTA)\n pylint_werr = os.environ.get(\"CONAN_PYLINT_WERR\", None)\n if pylint_werr and (py3_msgs or msgs):\n raise ConanException(\"Package recipe has linter errors. Please fix them.\")\n\n\ndef _runner(args):\n command = [\"pylint\", \"--output-format=json\"] + args\n command = \" \".join(command)\n shell = True if platform.system() != \"Windows\" else False\n proc = Popen(command, shell=shell, bufsize=10, stdout=PIPE, stderr=PIPE)\n stdout, _ = proc.communicate()\n return json.loads(stdout.decode(\"utf-8\")) if stdout else {}\n\n\ndef _normal_linter(conanfile_path, hook):\n args = ['--py3k', \"--enable=all\", \"--reports=no\", \"--disable=no-absolute-import\", \"--persistent=no\",\n hook, '\"%s\"' % conanfile_path]\n pylintrc = os.environ.get(\"CONAN_PYLINTRC\", None)\n if pylintrc:\n if not os.path.exists(pylintrc):\n raise ConanException(\"File %s defined by PYLINTRC doesn't exist\" % pylintrc)\n args.append('--rcfile=\"%s\"' % pylintrc)\n\n output_json = _runner(args)\n dynamic_fields = (\"source_folder\", \"build_folder\", \"package_folder\", \"info_build\",\n \"build_requires\", \"info\")\n\n def _accept_message(msg):\n symbol = msg.get(\"symbol\")\n text = msg.get(\"message\")\n\n if symbol == \"no-member\":\n for field in dynamic_fields:\n if field in text:\n return False\n if symbol == \"not-callable\" and \"self.copy is not callable\" == text:\n return False\n if symbol == \"not-callable\" and \"self.copy_deps is not callable\" == text:\n return False\n if symbol in (\"bare-except\", \"broad-except\"): # No exception type(s) specified\n return False\n if symbol == \"import-error\" and msg.get(\"column\") > 3: # Import of a conan python package\n return False\n\n return True\n\n result = []\n py3msgs = []\n for msg in output_json:\n if msg.get(\"type\") in (\"warning\", \"error\"):\n message_id = msg.get(\"symbol\")\n if message_id in (\"print-statement\", \"dict-iter-method\"):\n py3msgs.append(\"Py3 incompatibility. Line %s: %s\"\n % (msg.get(\"line\"), msg.get(\"message\")))\n elif _accept_message(msg):\n result.append(\"Linter. Line %s: %s\" % (msg.get(\"line\"), msg.get(\"message\")))\n\n return result, py3msgs\n", "path": "conans/client/cmd/export_linter.py" } ]
diff --git a/conans/client/cmd/export_linter.py b/conans/client/cmd/export_linter.py index 112b73013a7..4084cdb44a0 100644 --- a/conans/client/cmd/export_linter.py +++ b/conans/client/cmd/export_linter.py @@ -76,6 +76,8 @@ def _accept_message(msg): return False if symbol in ("bare-except", "broad-except"): # No exception type(s) specified return False + if symbol == "import-error" and msg.get("column") > 3: # Import of a conan python package + return False return True diff --git a/conans/test/integration/python_build_test.py b/conans/test/integration/python_build_test.py index c641ad0b5ba..0fc9dec5f3f 100644 --- a/conans/test/integration/python_build_test.py +++ b/conans/test/integration/python_build_test.py @@ -89,7 +89,7 @@ def reuse_build_test(self): client = TestClient() client.save({CONANFILE: conanfile, "__init__.py": "", "mytest.py": test}) client.run("export . lasote/stable") - reuse = """from conans import ConanFile, tools + reuse = """from conans import ConanFile class ToolsTest(ConanFile): name = "Consumer" version = "0.1" @@ -102,13 +102,14 @@ def build(self): client.save({CONANFILE: reuse}, clean_first=True) client.run("create . conan/testing") self.assertIn("Consumer/0.1@conan/testing: Hello Foo", client.out) + self.assertNotIn("WARN: Linter. Line 8: Unable to import 'mytest'", client.out) def reuse_source_test(self): # https://github.com/conan-io/conan/issues/2644 client = TestClient() client.save({CONANFILE: conanfile, "__init__.py": "", "mytest.py": test}) client.run("export . lasote/stable") - reuse = """from conans import ConanFile, tools + reuse = """from conans import ConanFile class ToolsTest(ConanFile): name = "Consumer" version = "0.1" @@ -121,6 +122,7 @@ def source(self): client.save({CONANFILE: reuse}, clean_first=True) client.run("create . conan/testing") self.assertIn("Consumer/0.1@conan/testing: Hello Baz", client.out) + self.assertNotIn("WARN: Linter. Line 8: Unable to import 'mytest'", client.out) def reuse_test(self): client = TestClient() @@ -179,6 +181,7 @@ def basic_install_test(self): client.save({CONANFILE: reuse}, clean_first=True) client.run("export . lasote/stable") + self.assertNotIn("Unable to import 'mytest'", client.out) client.run("install Consumer/0.1@lasote/stable --build") lines = [line.split(":")[1] for line in str(client.user_io.out).splitlines() if line.startswith("Consumer/0.1@lasote/stable: Hello")]
buildbot__buildbot-5912
AttributeError: module 'sqlalchemy.engine.strategies' has no attribute 'PlainEngineStrategy'

On buildbot version 3.0.1, since upgrading SQLAlchemy to version 1.4, `buildbot create-master` now throws an error:

```
Traceback (most recent call last):
  File "/opt/buildbot/master/venv/bin/buildbot", line 8, in <module>
    sys.exit(run())
  File "/opt/buildbot/master/venv/lib/python3.8/site-packages/buildbot/scripts/runner.py", line 772, in run
    subcommandFunction = reflect.namedObject(subconfig.subcommandFunction)
  File "/opt/buildbot/master/venv/lib/python3.8/site-packages/twisted/python/reflect.py", line 170, in namedObject
    module = namedModule(".".join(classSplit[:-1]))
  File "/opt/buildbot/master/venv/lib/python3.8/site-packages/twisted/python/reflect.py", line 157, in namedModule
    topLevel = __import__(name)
  File "/opt/buildbot/master/venv/lib/python3.8/site-packages/buildbot/scripts/create_master.py", line 25, in <module>
    from buildbot.master import BuildMaster
  File "/opt/buildbot/master/venv/lib/python3.8/site-packages/buildbot/master.py", line 35, in <module>
    from buildbot.db import connector as dbconnector
  File "/opt/buildbot/master/venv/lib/python3.8/site-packages/buildbot/db/connector.py", line 30, in <module>
    from buildbot.db import enginestrategy
  File "/opt/buildbot/master/venv/lib/python3.8/site-packages/buildbot/db/enginestrategy.py", line 154, in <module>
    class BuildbotEngineStrategy(strategies.PlainEngineStrategy):
AttributeError: module 'sqlalchemy.engine.strategies' has no attribute 'PlainEngineStrategy'
```

Restricting the sqlalchemy version to 1.3.23 seems to resolve this issue. Seems related to the removal of this interface in SQLAlchemy 1.4: https://github.com/sqlalchemy/sqlalchemy/commit/dfb20f07d8796ec27732df84c40b4ce4857fd83b
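A quick way to confirm the incompatibility is to probe for the removed attribute directly; a minimal check, assuming SQLAlchemy is installed in the same virtualenv as the traceback above:

```python
# Probe for the strategy class that buildbot's enginestrategy.py subclasses.
# On SQLAlchemy 1.3.x this prints True; on 1.4+ the attribute was removed,
# which is exactly where the traceback above fails.
from sqlalchemy.engine import strategies

print(hasattr(strategies, "PlainEngineStrategy"))
```

The change below works around the problem by declaring SQLAlchemy 1.4 and newer as unsupported (`'sqlalchemy >= 1.2.0, < 1.4'`) rather than porting the engine-strategy code.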
[ { "content": "#!/usr/bin/env python\n#\n# This file is part of Buildbot. Buildbot is free software: you can\n# redistribute it and/or modify it under the terms of the GNU General Public\n# License as published by the Free Software Foundation, version 2.\n#\n# This program is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS\n# FOR A PARTICULAR PURPOSE. See the GNU General Public License for more\n# details.\n#\n# You should have received a copy of the GNU General Public License along with\n# this program; if not, write to the Free Software Foundation, Inc., 51\n# Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.\n#\n# Copyright Buildbot Team Members\n\n\"\"\"\nStandard setup script.\n\"\"\"\nfrom setuptools import setup # isort:skip\n\n\nimport glob\nimport inspect\nimport os\nimport pkg_resources\nimport sys\nfrom distutils.command.install_data import install_data\nfrom distutils.command.sdist import sdist\nfrom pkg_resources import parse_version\n\nfrom buildbot import version\n\nBUILDING_WHEEL = bool(\"bdist_wheel\" in sys.argv)\n\n\ndef include(d, e):\n \"\"\"Generate a pair of (directory, file-list) for installation.\n\n 'd' -- A directory\n 'e' -- A glob pattern\"\"\"\n\n return (d, [f for f in glob.glob('{}/{}'.format(d, e)) if os.path.isfile(f)])\n\n\ndef include_statics(d):\n r = []\n for root, ds, fs in os.walk(d):\n r.append((root, [os.path.join(root, f) for f in fs]))\n return r\n\n\nclass install_data_twisted(install_data):\n\n \"\"\"make sure data files are installed in package.\n this is evil.\n copied from Twisted/setup.py.\n \"\"\"\n\n def finalize_options(self):\n self.set_undefined_options('install',\n ('install_lib', 'install_dir'),\n )\n super().finalize_options()\n\n def run(self):\n super().run()\n # ensure there's a buildbot/VERSION file\n fn = os.path.join(self.install_dir, 'buildbot', 'VERSION')\n open(fn, 'w').write(version)\n self.outfiles.append(fn)\n\n\nclass our_sdist(sdist):\n\n def make_release_tree(self, base_dir, files):\n sdist.make_release_tree(self, base_dir, files)\n\n # ensure there's a buildbot/VERSION file\n fn = os.path.join(base_dir, 'buildbot', 'VERSION')\n open(fn, 'w').write(version)\n\n # ensure that NEWS has a copy of the latest release notes, with the\n # proper version substituted\n src_fn = os.path.join('docs', 'relnotes/index.rst')\n with open(src_fn) as f:\n src = f.read()\n src = src.replace('|version|', version)\n dst_fn = os.path.join(base_dir, 'NEWS')\n with open(dst_fn, 'w') as f:\n f.write(src)\n\n\ndef define_plugin_entry(name, module_name):\n \"\"\"\n helper to produce lines suitable for setup.py's entry_points\n \"\"\"\n if isinstance(name, tuple):\n entry, name = name\n else:\n entry = name\n return '{} = {}:{}'.format(entry, module_name, name)\n\n\ndef concat_dicts(*dicts):\n result = dict()\n for d in dicts:\n result.update(d)\n return result\n\n\ndef define_plugin_entries(groups):\n \"\"\"\n helper to all groups for plugins\n \"\"\"\n result = dict()\n\n for group, modules in groups:\n tempo = []\n for module_name, names in modules:\n tempo.extend([define_plugin_entry(name, module_name)\n for name in names])\n result[group] = tempo\n\n return result\n\n\n__file__ = inspect.getframeinfo(inspect.currentframe()).filename\n\nwith open(os.path.join(os.path.dirname(__file__), 'README.rst')) as long_d_f:\n long_description = long_d_f.read()\n\nsetup_args = {\n 'name': \"buildbot\",\n 'version': version,\n 'description': \"The 
Continuous Integration Framework\",\n 'long_description': long_description,\n 'author': \"Brian Warner\",\n 'author_email': \"[email protected]\",\n 'maintainer': \"Dustin J. Mitchell\",\n 'maintainer_email': \"[email protected]\",\n 'url': \"http://buildbot.net/\",\n 'classifiers': [\n 'Development Status :: 5 - Production/Stable',\n 'Environment :: No Input/Output (Daemon)',\n 'Environment :: Web Environment',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: GNU General Public License v2 (GPLv2)',\n 'Topic :: Software Development :: Build Tools',\n 'Topic :: Software Development :: Testing',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3.8',\n ],\n\n 'packages': [\n \"buildbot\",\n \"buildbot.configurators\",\n \"buildbot.worker\",\n \"buildbot.worker.protocols\",\n \"buildbot.changes\",\n \"buildbot.clients\",\n \"buildbot.data\",\n \"buildbot.db\",\n \"buildbot.db.migrate.versions\",\n \"buildbot.db.types\",\n \"buildbot.machine\",\n \"buildbot.monkeypatches\",\n \"buildbot.mq\",\n \"buildbot.plugins\",\n \"buildbot.process\",\n \"buildbot.process.users\",\n \"buildbot.reporters\",\n \"buildbot.reporters.generators\",\n \"buildbot.schedulers\",\n \"buildbot.scripts\",\n \"buildbot.secrets\",\n \"buildbot.secrets.providers\",\n \"buildbot.statistics\",\n \"buildbot.statistics.storage_backends\",\n \"buildbot.steps\",\n \"buildbot.steps.package\",\n \"buildbot.steps.package.deb\",\n \"buildbot.steps.package.rpm\",\n \"buildbot.steps.source\",\n \"buildbot.util\",\n \"buildbot.wamp\",\n \"buildbot.www\",\n \"buildbot.www.hooks\",\n \"buildbot.www.authz\",\n ] + ([] if BUILDING_WHEEL else [ # skip tests for wheels (save 50% of the archive)\n \"buildbot.test\",\n \"buildbot.test.util\",\n \"buildbot.test.fake\",\n \"buildbot.test.fakedb\",\n \"buildbot.test.fuzz\",\n \"buildbot.test.integration\",\n \"buildbot.test.integration.interop\",\n \"buildbot.test.regressions\",\n \"buildbot.test.unit\",\n ]),\n 'data_files': [\n include(\"buildbot/reporters/templates\", \"*.txt\"),\n (\"buildbot/db/migrate\", [\n \"buildbot/db/migrate/migrate.cfg\",\n ]),\n include(\"buildbot/db/migrate/versions\", \"*.py\"),\n (\"buildbot/scripts\", [\n \"buildbot/scripts/sample.cfg\",\n \"buildbot/scripts/buildbot_tac.tmpl\",\n ]),\n include(\"buildbot/spec\", \"*.raml\"),\n include(\"buildbot/spec/types\", \"*.raml\"),\n include(\"buildbot/test/unit/test_templates_dir\", \"*.html\"),\n include(\"buildbot/test/unit/test_templates_dir/plugin\", \"*.*\"),\n include(\"buildbot/test/integration/pki\", \"*.*\"),\n include(\"buildbot/test/integration/pki/ca\", \"*.*\"),\n ] + include_statics(\"buildbot/www/static\"),\n 'cmdclass': {'install_data': install_data_twisted,\n 'sdist': our_sdist},\n 'entry_points': concat_dicts(define_plugin_entries([\n ('buildbot.changes', [\n ('buildbot.changes.mail', [\n 'MaildirSource', 'CVSMaildirSource',\n 'SVNCommitEmailMaildirSource',\n 'BzrLaunchpadEmailMaildirSource']),\n ('buildbot.changes.bitbucket', ['BitbucketPullrequestPoller']),\n ('buildbot.changes.github', ['GitHubPullrequestPoller']),\n ('buildbot.changes.gerritchangesource', [\n 'GerritChangeSource', 'GerritEventLogPoller']),\n ('buildbot.changes.gitpoller', ['GitPoller']),\n ('buildbot.changes.hgpoller', ['HgPoller']),\n ('buildbot.changes.p4poller', ['P4Source']),\n ('buildbot.changes.pb', ['PBChangeSource']),\n ('buildbot.changes.svnpoller', ['SVNPoller'])\n ]),\n 
('buildbot.schedulers', [\n ('buildbot.schedulers.basic', [\n 'SingleBranchScheduler', 'AnyBranchScheduler']),\n ('buildbot.schedulers.dependent', ['Dependent']),\n ('buildbot.schedulers.triggerable', ['Triggerable']),\n ('buildbot.schedulers.forcesched', ['ForceScheduler']),\n ('buildbot.schedulers.timed', [\n 'Periodic', 'Nightly', 'NightlyTriggerable']),\n ('buildbot.schedulers.trysched', [\n 'Try_Jobdir', 'Try_Userpass'])\n ]),\n ('buildbot.secrets', [\n ('buildbot.secrets.providers.file', ['SecretInAFile']),\n ('buildbot.secrets.providers.passwordstore', ['SecretInPass']),\n ('buildbot.secrets.providers.vault', ['HashiCorpVaultSecretProvider'])\n ]),\n ('buildbot.worker', [\n ('buildbot.worker.base', ['Worker']),\n ('buildbot.worker.ec2', ['EC2LatentWorker']),\n ('buildbot.worker.libvirt', ['LibVirtWorker']),\n ('buildbot.worker.openstack', ['OpenStackLatentWorker']),\n ('buildbot.worker.docker', ['DockerLatentWorker']),\n ('buildbot.worker.kubernetes', ['KubeLatentWorker']),\n ('buildbot.worker.local', ['LocalWorker']),\n ]),\n ('buildbot.machine', [\n ('buildbot.machine.base', ['Machine']),\n ]),\n ('buildbot.steps', [\n ('buildbot.process.buildstep', ['BuildStep']),\n ('buildbot.steps.cmake', ['CMake']),\n ('buildbot.steps.cppcheck', ['Cppcheck']),\n ('buildbot.steps.http', [\n 'HTTPStep', 'POST', 'GET', 'PUT', 'DELETE', 'HEAD',\n 'OPTIONS',\n 'HTTPStepNewStyle', 'POSTNewStyle', 'GETNewStyle', 'PUTNewStyle', 'DELETENewStyle',\n 'HEADNewStyle', 'OPTIONSNewStyle']),\n ('buildbot.steps.master', [\n 'MasterShellCommand', 'MasterShellCommandNewStyle',\n 'SetProperty', 'SetProperties', 'LogRenderable', \"Assert\"]),\n ('buildbot.steps.maxq', ['MaxQ']),\n ('buildbot.steps.mswin', ['Robocopy']),\n ('buildbot.steps.package.deb.lintian', ['DebLintian']),\n ('buildbot.steps.package.deb.pbuilder', [\n 'DebPbuilder', 'DebCowbuilder', 'UbuPbuilder',\n 'UbuCowbuilder']),\n ('buildbot.steps.package.rpm.mock', [\n 'Mock', 'MockBuildSRPM', 'MockRebuild']),\n ('buildbot.steps.package.rpm.rpmbuild', ['RpmBuild']),\n ('buildbot.steps.package.rpm.rpmlint', ['RpmLint']),\n ('buildbot.steps.python', [\n 'BuildEPYDoc', 'PyFlakes', 'PyLint', 'Sphinx']),\n ('buildbot.steps.python_twisted', [\n 'HLint', 'Trial', 'RemovePYCs']),\n ('buildbot.steps.shell', [\n 'ShellCommand', 'ShellCommandNewStyle', 'TreeSize',\n 'SetPropertyFromCommand', 'SetPropertyFromCommandNewStyle',\n 'Configure', 'ConfigureNewStyle',\n 'WarningCountingShellCommand', 'WarningCountingShellCommandNewStyle',\n 'Compile', 'CompileNewStyle',\n 'Test', 'TestNewStyle', 'PerlModuleTest']),\n ('buildbot.steps.shellsequence', ['ShellSequence']),\n ('buildbot.steps.source.bzr', ['Bzr']),\n ('buildbot.steps.source.cvs', ['CVS']),\n ('buildbot.steps.source.darcs', ['Darcs']),\n ('buildbot.steps.source.gerrit', ['Gerrit']),\n ('buildbot.steps.source.git', ['Git', 'GitCommit', 'GitPush', 'GitTag']),\n ('buildbot.steps.source.github', ['GitHub']),\n ('buildbot.steps.source.gitlab', ['GitLab']),\n ('buildbot.steps.source.mercurial', ['Mercurial']),\n ('buildbot.steps.source.mtn', ['Monotone']),\n ('buildbot.steps.source.p4', ['P4']),\n ('buildbot.steps.source.repo', ['Repo']),\n ('buildbot.steps.source.svn', ['SVN']),\n ('buildbot.steps.subunit', ['SubunitShellCommand']),\n ('buildbot.steps.transfer', [\n 'FileUpload', 'DirectoryUpload', 'MultipleFileUpload',\n 'FileDownload', 'StringDownload', 'JSONStringDownload',\n 'JSONPropertiesDownload']),\n ('buildbot.steps.trigger', ['Trigger']),\n ('buildbot.steps.vstudio', [\n 'VC6', 'VC7', 'VS2003', 'VC8', 
'VS2005', 'VCExpress9', 'VC9',\n 'VS2008', 'VC10', 'VS2010', 'VC11', 'VS2012', 'VC12', 'VS2013',\n 'VC14', 'VS2015', 'VC141', 'VS2017', 'MsBuild4', 'MsBuild',\n 'MsBuild12', 'MsBuild14', 'MsBuild141']),\n ('buildbot.steps.worker', [\n 'SetPropertiesFromEnv', 'FileExists', 'CopyDirectory',\n 'RemoveDirectory', 'MakeDirectory']),\n ]),\n ('buildbot.reporters', [\n ('buildbot.reporters.generators.build', [\n 'BuildStatusGenerator',\n 'BuildStartEndStatusGenerator',\n ]),\n ('buildbot.reporters.generators.buildset', ['BuildSetStatusGenerator']),\n ('buildbot.reporters.generators.worker', ['WorkerMissingGenerator']),\n ('buildbot.reporters.mail', ['MailNotifier']),\n ('buildbot.reporters.pushjet', ['PushjetNotifier']),\n ('buildbot.reporters.pushover', ['PushoverNotifier']),\n ('buildbot.reporters.message', [\n 'MessageFormatter',\n 'MessageFormatterEmpty',\n 'MessageFormatterFunction',\n 'MessageFormatterMissingWorker',\n 'MessageFormatterRenderable',\n ]),\n ('buildbot.reporters.gerrit', ['GerritStatusPush']),\n ('buildbot.reporters.gerrit_verify_status',\n ['GerritVerifyStatusPush']),\n ('buildbot.reporters.http', ['HttpStatusPush']),\n ('buildbot.reporters.github', ['GitHubStatusPush', 'GitHubCommentPush']),\n ('buildbot.reporters.gitlab', ['GitLabStatusPush']),\n ('buildbot.reporters.bitbucketserver', [\n 'BitbucketServerStatusPush',\n 'BitbucketServerCoreAPIStatusPush',\n 'BitbucketServerPRCommentPush'\n ]),\n ('buildbot.reporters.bitbucket', ['BitbucketStatusPush']),\n ('buildbot.reporters.irc', ['IRC']),\n ('buildbot.reporters.telegram', ['TelegramBot']),\n ('buildbot.reporters.zulip', ['ZulipStatusPush']),\n ]),\n ('buildbot.util', [\n # Connection seems to be a way too generic name, though\n ('buildbot.worker.libvirt', ['Connection']),\n ('buildbot.changes.filter', ['ChangeFilter']),\n ('buildbot.changes.gerritchangesource', ['GerritChangeFilter']),\n ('buildbot.changes.svnpoller', [\n ('svn.split_file_projects_branches',\n 'split_file_projects_branches'),\n ('svn.split_file_branches', 'split_file_branches'),\n ('svn.split_file_alwaystrunk', 'split_file_alwaystrunk')]),\n ('buildbot.configurators.janitor', ['JanitorConfigurator']),\n ('buildbot.config', ['BuilderConfig']),\n ('buildbot.locks', [\n 'MasterLock',\n 'WorkerLock',\n ]),\n ('buildbot.manhole', [\n 'AuthorizedKeysManhole', 'PasswordManhole', 'TelnetManhole']),\n ('buildbot.process.builder', [\n 'enforceChosenWorker',\n ]),\n ('buildbot.process.factory', [\n 'BuildFactory', 'GNUAutoconf', 'CPAN', 'Distutils', 'Trial',\n 'BasicBuildFactory', 'QuickBuildFactory', 'BasicSVN']),\n ('buildbot.process.logobserver', ['LogLineObserver']),\n ('buildbot.process.properties', [\n 'FlattenList', 'Interpolate', 'Property', 'Transform',\n 'WithProperties', 'renderer', 'Secret']),\n ('buildbot.process.users.manual', [\n 'CommandlineUserManager']),\n ('buildbot.revlinks', ['RevlinkMatch']),\n ('buildbot.reporters.utils', ['URLForBuild']),\n ('buildbot.schedulers.forcesched', [\n 'AnyPropertyParameter', 'BooleanParameter',\n 'ChoiceStringParameter',\n 'CodebaseParameter', 'FileParameter', 'FixedParameter', 'InheritBuildParameter',\n 'IntParameter', 'NestedParameter', 'ParameterGroup',\n 'PatchParameter',\n 'StringParameter', 'TextParameter', 'UserNameParameter',\n 'WorkerChoiceParameter',\n ]),\n ('buildbot.process.results', [\n 'Results', 'SUCCESS', 'WARNINGS', 'FAILURE', 'SKIPPED',\n 'EXCEPTION', 'RETRY', 'CANCELLED']),\n ('buildbot.steps.source.repo', [\n ('repo.DownloadsFromChangeSource',\n 'RepoDownloadsFromChangeSource'),\n 
('repo.DownloadsFromProperties',\n 'RepoDownloadsFromProperties')]),\n ('buildbot.steps.shellsequence', ['ShellArg']),\n ('buildbot.util.kubeclientservice', [\n 'KubeHardcodedConfig', 'KubeCtlProxyConfigLoader', 'KubeInClusterConfigLoader'\n ]),\n ('buildbot.www.avatar', ['AvatarGravatar', 'AvatarGitHub']),\n ('buildbot.www.auth', [\n 'UserPasswordAuth', 'HTPasswdAuth', 'RemoteUserAuth', 'CustomAuth']),\n ('buildbot.www.ldapuserinfo', ['LdapUserInfo']),\n ('buildbot.www.oauth2', [\n 'GoogleAuth', 'GitHubAuth', 'GitLabAuth', 'BitbucketAuth']),\n ('buildbot.db.dbconfig', [\n 'DbConfig']),\n ('buildbot.www.authz', [\n 'Authz', 'fnmatchStrMatcher', 'reStrMatcher']),\n ('buildbot.www.authz.roles', [\n 'RolesFromEmails', 'RolesFromGroups', 'RolesFromOwner', 'RolesFromUsername',\n 'RolesFromDomain']),\n ('buildbot.www.authz.endpointmatchers', [\n 'AnyEndpointMatcher', 'StopBuildEndpointMatcher', 'ForceBuildEndpointMatcher',\n 'RebuildBuildEndpointMatcher', 'AnyControlEndpointMatcher',\n 'EnableSchedulerEndpointMatcher'\n ]),\n ]),\n ('buildbot.webhooks', [\n ('buildbot.www.hooks.base', ['base']),\n ('buildbot.www.hooks.bitbucket', ['bitbucket']),\n ('buildbot.www.hooks.github', ['github']),\n ('buildbot.www.hooks.gitlab', ['gitlab']),\n ('buildbot.www.hooks.gitorious', ['gitorious']),\n ('buildbot.www.hooks.poller', ['poller']),\n ('buildbot.www.hooks.bitbucketcloud', ['bitbucketcloud']),\n ('buildbot.www.hooks.bitbucketserver', ['bitbucketserver'])\n ])\n ]), {\n 'console_scripts': [\n 'buildbot=buildbot.scripts.runner:run',\n # this will also be shipped on non windows :-(\n 'buildbot_windows_service=buildbot.scripts.windows_service:HandleCommandLine',\n ]}\n )\n}\n\n# set zip_safe to false to force Windows installs to always unpack eggs\n# into directories, which seems to work better --\n# see http://buildbot.net/trac/ticket/907\nif sys.platform == \"win32\":\n setup_args['zip_safe'] = False\n\npy_36 = sys.version_info[0] > 3 or (\n sys.version_info[0] == 3 and sys.version_info[1] >= 6)\nif not py_36:\n raise RuntimeError(\"Buildbot master requires at least Python-3.6\")\n\n# pip<1.4 doesn't have the --pre flag, and will thus attempt to install alpha\n# and beta versions of Buildbot. 
Prevent that from happening.\nVERSION_MSG = \"\"\"\nThis is a pre-release version of Buildbot, which can only be installed with\npip-1.4 or later Try installing the latest stable version of Buildbot instead:\n pip install buildbot==0.8.12\nSee https://pypi.python.org/pypi/buildbot to verify the current stable version.\n\"\"\"\nif 'a' in version or 'b' in version:\n try:\n pip_dist = pkg_resources.get_distribution('pip')\n except pkg_resources.DistributionNotFound:\n pip_dist = None\n\n if pip_dist:\n if parse_version(pip_dist.version) < parse_version('1.4'):\n raise RuntimeError(VERSION_MSG)\n\ntwisted_ver = \">= 17.9.0\"\nautobahn_ver = \">= 0.16.0\"\ntxaio_ver = \">= 2.2.2\"\n\nbundle_version = version.split(\"-\")[0]\n\n# dependencies\nsetup_args['install_requires'] = [\n 'setuptools >= 8.0',\n 'Twisted ' + twisted_ver,\n 'Jinja2 >= 2.1',\n # required for tests, but Twisted requires this anyway\n 'zope.interface >= 4.1.1',\n 'sqlalchemy>=1.2.0',\n 'sqlalchemy-migrate>=0.13',\n 'python-dateutil>=1.5',\n 'txaio ' + txaio_ver,\n 'autobahn ' + autobahn_ver,\n 'PyJWT',\n 'pyyaml'\n]\n\n# Unit test dependencies.\ntest_deps = [\n # http client libraries\n 'treq',\n 'txrequests',\n # pypugjs required for custom templates tests\n 'pypugjs',\n # boto3 and moto required for running EC2 tests\n 'boto3',\n 'moto',\n 'mock>=2.0.0',\n 'parameterized',\n]\nif sys.platform != 'win32':\n test_deps += [\n # LZ4 fails to build on Windows:\n # https://github.com/steeve/python-lz4/issues/27\n # lz4 required for log compression tests.\n 'lz4',\n ]\n\nsetup_args['tests_require'] = test_deps\n\nsetup_args['extras_require'] = {\n 'test': [\n 'setuptools_trial',\n 'isort',\n # spellcheck introduced in version 1.4.0\n 'pylint<1.7.0',\n 'pyenchant',\n 'flake8~=2.6.0',\n ] + test_deps,\n 'bundle': [\n \"buildbot-www=={0}\".format(bundle_version),\n \"buildbot-worker=={0}\".format(bundle_version),\n \"buildbot-waterfall-view=={0}\".format(bundle_version),\n \"buildbot-console-view=={0}\".format(bundle_version),\n \"buildbot-grid-view=={0}\".format(bundle_version),\n ],\n 'tls': [\n 'Twisted[tls] ' + twisted_ver,\n # There are bugs with extras inside extras:\n # <https://github.com/pypa/pip/issues/3516>\n # so we explicitly include Twisted[tls] dependencies.\n 'pyopenssl >= 16.0.0',\n 'service_identity',\n 'idna >= 0.6',\n ],\n 'docs': [\n 'docutils>=0.16.0',\n 'sphinx>=3.2.0',\n 'sphinx-rtd-theme>=0.5',\n 'sphinxcontrib-blockdiag',\n 'sphinxcontrib-spelling',\n 'sphinxcontrib-websupport',\n 'pyenchant',\n 'sphinx-jinja',\n 'towncrier',\n ],\n}\n\nif '--help-commands' in sys.argv or 'trial' in sys.argv or 'test' in sys.argv:\n setup_args['setup_requires'] = [\n 'setuptools_trial',\n ]\n\nif os.getenv('NO_INSTALL_REQS'):\n setup_args['install_requires'] = None\n setup_args['extras_require'] = None\n\nif __name__ == '__main__':\n setup(**setup_args)\n\n# Local Variables:\n# fill-column: 71\n# End:\n", "path": "master/setup.py" } ]
[ { "content": "#!/usr/bin/env python\n#\n# This file is part of Buildbot. Buildbot is free software: you can\n# redistribute it and/or modify it under the terms of the GNU General Public\n# License as published by the Free Software Foundation, version 2.\n#\n# This program is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS\n# FOR A PARTICULAR PURPOSE. See the GNU General Public License for more\n# details.\n#\n# You should have received a copy of the GNU General Public License along with\n# this program; if not, write to the Free Software Foundation, Inc., 51\n# Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.\n#\n# Copyright Buildbot Team Members\n\n\"\"\"\nStandard setup script.\n\"\"\"\nfrom setuptools import setup # isort:skip\n\n\nimport glob\nimport inspect\nimport os\nimport pkg_resources\nimport sys\nfrom distutils.command.install_data import install_data\nfrom distutils.command.sdist import sdist\nfrom pkg_resources import parse_version\n\nfrom buildbot import version\n\nBUILDING_WHEEL = bool(\"bdist_wheel\" in sys.argv)\n\n\ndef include(d, e):\n \"\"\"Generate a pair of (directory, file-list) for installation.\n\n 'd' -- A directory\n 'e' -- A glob pattern\"\"\"\n\n return (d, [f for f in glob.glob('{}/{}'.format(d, e)) if os.path.isfile(f)])\n\n\ndef include_statics(d):\n r = []\n for root, ds, fs in os.walk(d):\n r.append((root, [os.path.join(root, f) for f in fs]))\n return r\n\n\nclass install_data_twisted(install_data):\n\n \"\"\"make sure data files are installed in package.\n this is evil.\n copied from Twisted/setup.py.\n \"\"\"\n\n def finalize_options(self):\n self.set_undefined_options('install',\n ('install_lib', 'install_dir'),\n )\n super().finalize_options()\n\n def run(self):\n super().run()\n # ensure there's a buildbot/VERSION file\n fn = os.path.join(self.install_dir, 'buildbot', 'VERSION')\n open(fn, 'w').write(version)\n self.outfiles.append(fn)\n\n\nclass our_sdist(sdist):\n\n def make_release_tree(self, base_dir, files):\n sdist.make_release_tree(self, base_dir, files)\n\n # ensure there's a buildbot/VERSION file\n fn = os.path.join(base_dir, 'buildbot', 'VERSION')\n open(fn, 'w').write(version)\n\n # ensure that NEWS has a copy of the latest release notes, with the\n # proper version substituted\n src_fn = os.path.join('docs', 'relnotes/index.rst')\n with open(src_fn) as f:\n src = f.read()\n src = src.replace('|version|', version)\n dst_fn = os.path.join(base_dir, 'NEWS')\n with open(dst_fn, 'w') as f:\n f.write(src)\n\n\ndef define_plugin_entry(name, module_name):\n \"\"\"\n helper to produce lines suitable for setup.py's entry_points\n \"\"\"\n if isinstance(name, tuple):\n entry, name = name\n else:\n entry = name\n return '{} = {}:{}'.format(entry, module_name, name)\n\n\ndef concat_dicts(*dicts):\n result = dict()\n for d in dicts:\n result.update(d)\n return result\n\n\ndef define_plugin_entries(groups):\n \"\"\"\n helper to all groups for plugins\n \"\"\"\n result = dict()\n\n for group, modules in groups:\n tempo = []\n for module_name, names in modules:\n tempo.extend([define_plugin_entry(name, module_name)\n for name in names])\n result[group] = tempo\n\n return result\n\n\n__file__ = inspect.getframeinfo(inspect.currentframe()).filename\n\nwith open(os.path.join(os.path.dirname(__file__), 'README.rst')) as long_d_f:\n long_description = long_d_f.read()\n\nsetup_args = {\n 'name': \"buildbot\",\n 'version': version,\n 'description': \"The 
Continuous Integration Framework\",\n 'long_description': long_description,\n 'author': \"Brian Warner\",\n 'author_email': \"[email protected]\",\n 'maintainer': \"Dustin J. Mitchell\",\n 'maintainer_email': \"[email protected]\",\n 'url': \"http://buildbot.net/\",\n 'classifiers': [\n 'Development Status :: 5 - Production/Stable',\n 'Environment :: No Input/Output (Daemon)',\n 'Environment :: Web Environment',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: GNU General Public License v2 (GPLv2)',\n 'Topic :: Software Development :: Build Tools',\n 'Topic :: Software Development :: Testing',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3.8',\n ],\n\n 'packages': [\n \"buildbot\",\n \"buildbot.configurators\",\n \"buildbot.worker\",\n \"buildbot.worker.protocols\",\n \"buildbot.changes\",\n \"buildbot.clients\",\n \"buildbot.data\",\n \"buildbot.db\",\n \"buildbot.db.migrate.versions\",\n \"buildbot.db.types\",\n \"buildbot.machine\",\n \"buildbot.monkeypatches\",\n \"buildbot.mq\",\n \"buildbot.plugins\",\n \"buildbot.process\",\n \"buildbot.process.users\",\n \"buildbot.reporters\",\n \"buildbot.reporters.generators\",\n \"buildbot.schedulers\",\n \"buildbot.scripts\",\n \"buildbot.secrets\",\n \"buildbot.secrets.providers\",\n \"buildbot.statistics\",\n \"buildbot.statistics.storage_backends\",\n \"buildbot.steps\",\n \"buildbot.steps.package\",\n \"buildbot.steps.package.deb\",\n \"buildbot.steps.package.rpm\",\n \"buildbot.steps.source\",\n \"buildbot.util\",\n \"buildbot.wamp\",\n \"buildbot.www\",\n \"buildbot.www.hooks\",\n \"buildbot.www.authz\",\n ] + ([] if BUILDING_WHEEL else [ # skip tests for wheels (save 50% of the archive)\n \"buildbot.test\",\n \"buildbot.test.util\",\n \"buildbot.test.fake\",\n \"buildbot.test.fakedb\",\n \"buildbot.test.fuzz\",\n \"buildbot.test.integration\",\n \"buildbot.test.integration.interop\",\n \"buildbot.test.regressions\",\n \"buildbot.test.unit\",\n ]),\n 'data_files': [\n include(\"buildbot/reporters/templates\", \"*.txt\"),\n (\"buildbot/db/migrate\", [\n \"buildbot/db/migrate/migrate.cfg\",\n ]),\n include(\"buildbot/db/migrate/versions\", \"*.py\"),\n (\"buildbot/scripts\", [\n \"buildbot/scripts/sample.cfg\",\n \"buildbot/scripts/buildbot_tac.tmpl\",\n ]),\n include(\"buildbot/spec\", \"*.raml\"),\n include(\"buildbot/spec/types\", \"*.raml\"),\n include(\"buildbot/test/unit/test_templates_dir\", \"*.html\"),\n include(\"buildbot/test/unit/test_templates_dir/plugin\", \"*.*\"),\n include(\"buildbot/test/integration/pki\", \"*.*\"),\n include(\"buildbot/test/integration/pki/ca\", \"*.*\"),\n ] + include_statics(\"buildbot/www/static\"),\n 'cmdclass': {'install_data': install_data_twisted,\n 'sdist': our_sdist},\n 'entry_points': concat_dicts(define_plugin_entries([\n ('buildbot.changes', [\n ('buildbot.changes.mail', [\n 'MaildirSource', 'CVSMaildirSource',\n 'SVNCommitEmailMaildirSource',\n 'BzrLaunchpadEmailMaildirSource']),\n ('buildbot.changes.bitbucket', ['BitbucketPullrequestPoller']),\n ('buildbot.changes.github', ['GitHubPullrequestPoller']),\n ('buildbot.changes.gerritchangesource', [\n 'GerritChangeSource', 'GerritEventLogPoller']),\n ('buildbot.changes.gitpoller', ['GitPoller']),\n ('buildbot.changes.hgpoller', ['HgPoller']),\n ('buildbot.changes.p4poller', ['P4Source']),\n ('buildbot.changes.pb', ['PBChangeSource']),\n ('buildbot.changes.svnpoller', ['SVNPoller'])\n ]),\n 
('buildbot.schedulers', [\n ('buildbot.schedulers.basic', [\n 'SingleBranchScheduler', 'AnyBranchScheduler']),\n ('buildbot.schedulers.dependent', ['Dependent']),\n ('buildbot.schedulers.triggerable', ['Triggerable']),\n ('buildbot.schedulers.forcesched', ['ForceScheduler']),\n ('buildbot.schedulers.timed', [\n 'Periodic', 'Nightly', 'NightlyTriggerable']),\n ('buildbot.schedulers.trysched', [\n 'Try_Jobdir', 'Try_Userpass'])\n ]),\n ('buildbot.secrets', [\n ('buildbot.secrets.providers.file', ['SecretInAFile']),\n ('buildbot.secrets.providers.passwordstore', ['SecretInPass']),\n ('buildbot.secrets.providers.vault', ['HashiCorpVaultSecretProvider'])\n ]),\n ('buildbot.worker', [\n ('buildbot.worker.base', ['Worker']),\n ('buildbot.worker.ec2', ['EC2LatentWorker']),\n ('buildbot.worker.libvirt', ['LibVirtWorker']),\n ('buildbot.worker.openstack', ['OpenStackLatentWorker']),\n ('buildbot.worker.docker', ['DockerLatentWorker']),\n ('buildbot.worker.kubernetes', ['KubeLatentWorker']),\n ('buildbot.worker.local', ['LocalWorker']),\n ]),\n ('buildbot.machine', [\n ('buildbot.machine.base', ['Machine']),\n ]),\n ('buildbot.steps', [\n ('buildbot.process.buildstep', ['BuildStep']),\n ('buildbot.steps.cmake', ['CMake']),\n ('buildbot.steps.cppcheck', ['Cppcheck']),\n ('buildbot.steps.http', [\n 'HTTPStep', 'POST', 'GET', 'PUT', 'DELETE', 'HEAD',\n 'OPTIONS',\n 'HTTPStepNewStyle', 'POSTNewStyle', 'GETNewStyle', 'PUTNewStyle', 'DELETENewStyle',\n 'HEADNewStyle', 'OPTIONSNewStyle']),\n ('buildbot.steps.master', [\n 'MasterShellCommand', 'MasterShellCommandNewStyle',\n 'SetProperty', 'SetProperties', 'LogRenderable', \"Assert\"]),\n ('buildbot.steps.maxq', ['MaxQ']),\n ('buildbot.steps.mswin', ['Robocopy']),\n ('buildbot.steps.package.deb.lintian', ['DebLintian']),\n ('buildbot.steps.package.deb.pbuilder', [\n 'DebPbuilder', 'DebCowbuilder', 'UbuPbuilder',\n 'UbuCowbuilder']),\n ('buildbot.steps.package.rpm.mock', [\n 'Mock', 'MockBuildSRPM', 'MockRebuild']),\n ('buildbot.steps.package.rpm.rpmbuild', ['RpmBuild']),\n ('buildbot.steps.package.rpm.rpmlint', ['RpmLint']),\n ('buildbot.steps.python', [\n 'BuildEPYDoc', 'PyFlakes', 'PyLint', 'Sphinx']),\n ('buildbot.steps.python_twisted', [\n 'HLint', 'Trial', 'RemovePYCs']),\n ('buildbot.steps.shell', [\n 'ShellCommand', 'ShellCommandNewStyle', 'TreeSize',\n 'SetPropertyFromCommand', 'SetPropertyFromCommandNewStyle',\n 'Configure', 'ConfigureNewStyle',\n 'WarningCountingShellCommand', 'WarningCountingShellCommandNewStyle',\n 'Compile', 'CompileNewStyle',\n 'Test', 'TestNewStyle', 'PerlModuleTest']),\n ('buildbot.steps.shellsequence', ['ShellSequence']),\n ('buildbot.steps.source.bzr', ['Bzr']),\n ('buildbot.steps.source.cvs', ['CVS']),\n ('buildbot.steps.source.darcs', ['Darcs']),\n ('buildbot.steps.source.gerrit', ['Gerrit']),\n ('buildbot.steps.source.git', ['Git', 'GitCommit', 'GitPush', 'GitTag']),\n ('buildbot.steps.source.github', ['GitHub']),\n ('buildbot.steps.source.gitlab', ['GitLab']),\n ('buildbot.steps.source.mercurial', ['Mercurial']),\n ('buildbot.steps.source.mtn', ['Monotone']),\n ('buildbot.steps.source.p4', ['P4']),\n ('buildbot.steps.source.repo', ['Repo']),\n ('buildbot.steps.source.svn', ['SVN']),\n ('buildbot.steps.subunit', ['SubunitShellCommand']),\n ('buildbot.steps.transfer', [\n 'FileUpload', 'DirectoryUpload', 'MultipleFileUpload',\n 'FileDownload', 'StringDownload', 'JSONStringDownload',\n 'JSONPropertiesDownload']),\n ('buildbot.steps.trigger', ['Trigger']),\n ('buildbot.steps.vstudio', [\n 'VC6', 'VC7', 'VS2003', 'VC8', 
'VS2005', 'VCExpress9', 'VC9',\n 'VS2008', 'VC10', 'VS2010', 'VC11', 'VS2012', 'VC12', 'VS2013',\n 'VC14', 'VS2015', 'VC141', 'VS2017', 'MsBuild4', 'MsBuild',\n 'MsBuild12', 'MsBuild14', 'MsBuild141']),\n ('buildbot.steps.worker', [\n 'SetPropertiesFromEnv', 'FileExists', 'CopyDirectory',\n 'RemoveDirectory', 'MakeDirectory']),\n ]),\n ('buildbot.reporters', [\n ('buildbot.reporters.generators.build', [\n 'BuildStatusGenerator',\n 'BuildStartEndStatusGenerator',\n ]),\n ('buildbot.reporters.generators.buildset', ['BuildSetStatusGenerator']),\n ('buildbot.reporters.generators.worker', ['WorkerMissingGenerator']),\n ('buildbot.reporters.mail', ['MailNotifier']),\n ('buildbot.reporters.pushjet', ['PushjetNotifier']),\n ('buildbot.reporters.pushover', ['PushoverNotifier']),\n ('buildbot.reporters.message', [\n 'MessageFormatter',\n 'MessageFormatterEmpty',\n 'MessageFormatterFunction',\n 'MessageFormatterMissingWorker',\n 'MessageFormatterRenderable',\n ]),\n ('buildbot.reporters.gerrit', ['GerritStatusPush']),\n ('buildbot.reporters.gerrit_verify_status',\n ['GerritVerifyStatusPush']),\n ('buildbot.reporters.http', ['HttpStatusPush']),\n ('buildbot.reporters.github', ['GitHubStatusPush', 'GitHubCommentPush']),\n ('buildbot.reporters.gitlab', ['GitLabStatusPush']),\n ('buildbot.reporters.bitbucketserver', [\n 'BitbucketServerStatusPush',\n 'BitbucketServerCoreAPIStatusPush',\n 'BitbucketServerPRCommentPush'\n ]),\n ('buildbot.reporters.bitbucket', ['BitbucketStatusPush']),\n ('buildbot.reporters.irc', ['IRC']),\n ('buildbot.reporters.telegram', ['TelegramBot']),\n ('buildbot.reporters.zulip', ['ZulipStatusPush']),\n ]),\n ('buildbot.util', [\n # Connection seems to be a way too generic name, though\n ('buildbot.worker.libvirt', ['Connection']),\n ('buildbot.changes.filter', ['ChangeFilter']),\n ('buildbot.changes.gerritchangesource', ['GerritChangeFilter']),\n ('buildbot.changes.svnpoller', [\n ('svn.split_file_projects_branches',\n 'split_file_projects_branches'),\n ('svn.split_file_branches', 'split_file_branches'),\n ('svn.split_file_alwaystrunk', 'split_file_alwaystrunk')]),\n ('buildbot.configurators.janitor', ['JanitorConfigurator']),\n ('buildbot.config', ['BuilderConfig']),\n ('buildbot.locks', [\n 'MasterLock',\n 'WorkerLock',\n ]),\n ('buildbot.manhole', [\n 'AuthorizedKeysManhole', 'PasswordManhole', 'TelnetManhole']),\n ('buildbot.process.builder', [\n 'enforceChosenWorker',\n ]),\n ('buildbot.process.factory', [\n 'BuildFactory', 'GNUAutoconf', 'CPAN', 'Distutils', 'Trial',\n 'BasicBuildFactory', 'QuickBuildFactory', 'BasicSVN']),\n ('buildbot.process.logobserver', ['LogLineObserver']),\n ('buildbot.process.properties', [\n 'FlattenList', 'Interpolate', 'Property', 'Transform',\n 'WithProperties', 'renderer', 'Secret']),\n ('buildbot.process.users.manual', [\n 'CommandlineUserManager']),\n ('buildbot.revlinks', ['RevlinkMatch']),\n ('buildbot.reporters.utils', ['URLForBuild']),\n ('buildbot.schedulers.forcesched', [\n 'AnyPropertyParameter', 'BooleanParameter',\n 'ChoiceStringParameter',\n 'CodebaseParameter', 'FileParameter', 'FixedParameter', 'InheritBuildParameter',\n 'IntParameter', 'NestedParameter', 'ParameterGroup',\n 'PatchParameter',\n 'StringParameter', 'TextParameter', 'UserNameParameter',\n 'WorkerChoiceParameter',\n ]),\n ('buildbot.process.results', [\n 'Results', 'SUCCESS', 'WARNINGS', 'FAILURE', 'SKIPPED',\n 'EXCEPTION', 'RETRY', 'CANCELLED']),\n ('buildbot.steps.source.repo', [\n ('repo.DownloadsFromChangeSource',\n 'RepoDownloadsFromChangeSource'),\n 
('repo.DownloadsFromProperties',\n 'RepoDownloadsFromProperties')]),\n ('buildbot.steps.shellsequence', ['ShellArg']),\n ('buildbot.util.kubeclientservice', [\n 'KubeHardcodedConfig', 'KubeCtlProxyConfigLoader', 'KubeInClusterConfigLoader'\n ]),\n ('buildbot.www.avatar', ['AvatarGravatar', 'AvatarGitHub']),\n ('buildbot.www.auth', [\n 'UserPasswordAuth', 'HTPasswdAuth', 'RemoteUserAuth', 'CustomAuth']),\n ('buildbot.www.ldapuserinfo', ['LdapUserInfo']),\n ('buildbot.www.oauth2', [\n 'GoogleAuth', 'GitHubAuth', 'GitLabAuth', 'BitbucketAuth']),\n ('buildbot.db.dbconfig', [\n 'DbConfig']),\n ('buildbot.www.authz', [\n 'Authz', 'fnmatchStrMatcher', 'reStrMatcher']),\n ('buildbot.www.authz.roles', [\n 'RolesFromEmails', 'RolesFromGroups', 'RolesFromOwner', 'RolesFromUsername',\n 'RolesFromDomain']),\n ('buildbot.www.authz.endpointmatchers', [\n 'AnyEndpointMatcher', 'StopBuildEndpointMatcher', 'ForceBuildEndpointMatcher',\n 'RebuildBuildEndpointMatcher', 'AnyControlEndpointMatcher',\n 'EnableSchedulerEndpointMatcher'\n ]),\n ]),\n ('buildbot.webhooks', [\n ('buildbot.www.hooks.base', ['base']),\n ('buildbot.www.hooks.bitbucket', ['bitbucket']),\n ('buildbot.www.hooks.github', ['github']),\n ('buildbot.www.hooks.gitlab', ['gitlab']),\n ('buildbot.www.hooks.gitorious', ['gitorious']),\n ('buildbot.www.hooks.poller', ['poller']),\n ('buildbot.www.hooks.bitbucketcloud', ['bitbucketcloud']),\n ('buildbot.www.hooks.bitbucketserver', ['bitbucketserver'])\n ])\n ]), {\n 'console_scripts': [\n 'buildbot=buildbot.scripts.runner:run',\n # this will also be shipped on non windows :-(\n 'buildbot_windows_service=buildbot.scripts.windows_service:HandleCommandLine',\n ]}\n )\n}\n\n# set zip_safe to false to force Windows installs to always unpack eggs\n# into directories, which seems to work better --\n# see http://buildbot.net/trac/ticket/907\nif sys.platform == \"win32\":\n setup_args['zip_safe'] = False\n\npy_36 = sys.version_info[0] > 3 or (\n sys.version_info[0] == 3 and sys.version_info[1] >= 6)\nif not py_36:\n raise RuntimeError(\"Buildbot master requires at least Python-3.6\")\n\n# pip<1.4 doesn't have the --pre flag, and will thus attempt to install alpha\n# and beta versions of Buildbot. 
Prevent that from happening.\nVERSION_MSG = \"\"\"\nThis is a pre-release version of Buildbot, which can only be installed with\npip-1.4 or later Try installing the latest stable version of Buildbot instead:\n pip install buildbot==0.8.12\nSee https://pypi.python.org/pypi/buildbot to verify the current stable version.\n\"\"\"\nif 'a' in version or 'b' in version:\n try:\n pip_dist = pkg_resources.get_distribution('pip')\n except pkg_resources.DistributionNotFound:\n pip_dist = None\n\n if pip_dist:\n if parse_version(pip_dist.version) < parse_version('1.4'):\n raise RuntimeError(VERSION_MSG)\n\ntwisted_ver = \">= 17.9.0\"\nautobahn_ver = \">= 0.16.0\"\ntxaio_ver = \">= 2.2.2\"\n\nbundle_version = version.split(\"-\")[0]\n\n# dependencies\nsetup_args['install_requires'] = [\n 'setuptools >= 8.0',\n 'Twisted ' + twisted_ver,\n 'Jinja2 >= 2.1',\n # required for tests, but Twisted requires this anyway\n 'zope.interface >= 4.1.1',\n 'sqlalchemy >= 1.2.0, < 1.4',\n 'sqlalchemy-migrate>=0.13',\n 'python-dateutil>=1.5',\n 'txaio ' + txaio_ver,\n 'autobahn ' + autobahn_ver,\n 'PyJWT',\n 'pyyaml'\n]\n\n# Unit test dependencies.\ntest_deps = [\n # http client libraries\n 'treq',\n 'txrequests',\n # pypugjs required for custom templates tests\n 'pypugjs',\n # boto3 and moto required for running EC2 tests\n 'boto3',\n 'moto',\n 'mock>=2.0.0',\n 'parameterized',\n]\nif sys.platform != 'win32':\n test_deps += [\n # LZ4 fails to build on Windows:\n # https://github.com/steeve/python-lz4/issues/27\n # lz4 required for log compression tests.\n 'lz4',\n ]\n\nsetup_args['tests_require'] = test_deps\n\nsetup_args['extras_require'] = {\n 'test': [\n 'setuptools_trial',\n 'isort',\n # spellcheck introduced in version 1.4.0\n 'pylint<1.7.0',\n 'pyenchant',\n 'flake8~=2.6.0',\n ] + test_deps,\n 'bundle': [\n \"buildbot-www=={0}\".format(bundle_version),\n \"buildbot-worker=={0}\".format(bundle_version),\n \"buildbot-waterfall-view=={0}\".format(bundle_version),\n \"buildbot-console-view=={0}\".format(bundle_version),\n \"buildbot-grid-view=={0}\".format(bundle_version),\n ],\n 'tls': [\n 'Twisted[tls] ' + twisted_ver,\n # There are bugs with extras inside extras:\n # <https://github.com/pypa/pip/issues/3516>\n # so we explicitly include Twisted[tls] dependencies.\n 'pyopenssl >= 16.0.0',\n 'service_identity',\n 'idna >= 0.6',\n ],\n 'docs': [\n 'docutils>=0.16.0',\n 'sphinx>=3.2.0',\n 'sphinx-rtd-theme>=0.5',\n 'sphinxcontrib-blockdiag',\n 'sphinxcontrib-spelling',\n 'sphinxcontrib-websupport',\n 'pyenchant',\n 'sphinx-jinja',\n 'towncrier',\n ],\n}\n\nif '--help-commands' in sys.argv or 'trial' in sys.argv or 'test' in sys.argv:\n setup_args['setup_requires'] = [\n 'setuptools_trial',\n ]\n\nif os.getenv('NO_INSTALL_REQS'):\n setup_args['install_requires'] = None\n setup_args['extras_require'] = None\n\nif __name__ == '__main__':\n setup(**setup_args)\n\n# Local Variables:\n# fill-column: 71\n# End:\n", "path": "master/setup.py" } ]
diff --git a/master/buildbot/newsfragments/sqlalchemy-1-4-compatibility.bugfix b/master/buildbot/newsfragments/sqlalchemy-1-4-compatibility.bugfix new file mode 100644 index 000000000000..d5d482db09cb --- /dev/null +++ b/master/buildbot/newsfragments/sqlalchemy-1-4-compatibility.bugfix @@ -0,0 +1 @@ +Updated Buildbot requirements to specify sqlalchemy 1.4 and newer as not supported yet. diff --git a/master/setup.py b/master/setup.py index 19a31aa844cd..f21f720459a4 100755 --- a/master/setup.py +++ b/master/setup.py @@ -491,7 +491,7 @@ def define_plugin_entries(groups): 'Jinja2 >= 2.1', # required for tests, but Twisted requires this anyway 'zope.interface >= 4.1.1', - 'sqlalchemy>=1.2.0', + 'sqlalchemy >= 1.2.0, < 1.4', 'sqlalchemy-migrate>=0.13', 'python-dateutil>=1.5', 'txaio ' + txaio_ver,
MongoEngine__mongoengine-1454
Rename modifier missing from update

Not sure if this is intentional or not, but it would be useful to have the `$rename` operator (or "modifier") available for the `update` method on QuerySet and Document. I'm currently working around it with `exec_js`, like so:

``` python
Document.objects.exec_js("""
function() {
    db[collection].update({}, {$rename: {foo: 'bar'}});
}""")
```
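With `rename` added to `UPDATE_OPERATORS` (see the diff below), the workaround becomes unnecessary and the standard keyword syntax works. A small usage sketch mirroring the test added in the diff; the `Person` model and database name are placeholders:

```python
from mongoengine import Document, StringField, connect

connect('rename_demo')  # assumes a local MongoDB instance


class Person(Document):
    name = StringField()


doc = Person(name='John').save()
# Issues a $rename, changing the stored key 'name' to 'first_name'.
doc.update(rename__name='first_name')
```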
[ { "content": "from mongoengine.errors import NotRegistered\n\n__all__ = ('UPDATE_OPERATORS', 'get_document', '_document_registry')\n\n\nUPDATE_OPERATORS = set(['set', 'unset', 'inc', 'dec', 'pop', 'push',\n 'push_all', 'pull', 'pull_all', 'add_to_set',\n 'set_on_insert', 'min', 'max'])\n\n\n_document_registry = {}\n\n\ndef get_document(name):\n \"\"\"Get a document class by name.\"\"\"\n doc = _document_registry.get(name, None)\n if not doc:\n # Possible old style name\n single_end = name.split('.')[-1]\n compound_end = '.%s' % single_end\n possible_match = [k for k in _document_registry.keys()\n if k.endswith(compound_end) or k == single_end]\n if len(possible_match) == 1:\n doc = _document_registry.get(possible_match.pop(), None)\n if not doc:\n raise NotRegistered(\"\"\"\n `%s` has not been registered in the document registry.\n Importing the document class automatically registers it, has it\n been imported?\n \"\"\".strip() % name)\n return doc\n", "path": "mongoengine/base/common.py" } ]
[ { "content": "from mongoengine.errors import NotRegistered\n\n__all__ = ('UPDATE_OPERATORS', 'get_document', '_document_registry')\n\n\nUPDATE_OPERATORS = set(['set', 'unset', 'inc', 'dec', 'pop', 'push',\n 'push_all', 'pull', 'pull_all', 'add_to_set',\n 'set_on_insert', 'min', 'max', 'rename'])\n\n\n_document_registry = {}\n\n\ndef get_document(name):\n \"\"\"Get a document class by name.\"\"\"\n doc = _document_registry.get(name, None)\n if not doc:\n # Possible old style name\n single_end = name.split('.')[-1]\n compound_end = '.%s' % single_end\n possible_match = [k for k in _document_registry.keys()\n if k.endswith(compound_end) or k == single_end]\n if len(possible_match) == 1:\n doc = _document_registry.get(possible_match.pop(), None)\n if not doc:\n raise NotRegistered(\"\"\"\n `%s` has not been registered in the document registry.\n Importing the document class automatically registers it, has it\n been imported?\n \"\"\".strip() % name)\n return doc\n", "path": "mongoengine/base/common.py" } ]
diff --git a/mongoengine/base/common.py b/mongoengine/base/common.py index da2b8b68b..b9971ff71 100644 --- a/mongoengine/base/common.py +++ b/mongoengine/base/common.py @@ -5,7 +5,7 @@ UPDATE_OPERATORS = set(['set', 'unset', 'inc', 'dec', 'pop', 'push', 'push_all', 'pull', 'pull_all', 'add_to_set', - 'set_on_insert', 'min', 'max']) + 'set_on_insert', 'min', 'max', 'rename']) _document_registry = {} diff --git a/tests/document/instance.py b/tests/document/instance.py index b187f766b..9b52c809a 100644 --- a/tests/document/instance.py +++ b/tests/document/instance.py @@ -1232,6 +1232,19 @@ def test_update(self): self.assertEqual(person.name, None) self.assertEqual(person.age, None) + def test_update_rename_operator(self): + """Test the $rename operator.""" + coll = self.Person._get_collection() + doc = self.Person(name='John').save() + raw_doc = coll.find_one({'_id': doc.pk}) + self.assertEqual(set(raw_doc.keys()), set(['_id', '_cls', 'name'])) + + doc.update(rename__name='first_name') + raw_doc = coll.find_one({'_id': doc.pk}) + self.assertEqual(set(raw_doc.keys()), + set(['_id', '_cls', 'first_name'])) + self.assertEqual(raw_doc['first_name'], 'John') + def test_inserts_if_you_set_the_pk(self): p1 = self.Person(name='p1', id=bson.ObjectId()).save() p2 = self.Person(name='p2')
weni-ai__bothub-engine-230
After updating the repository settings and removing its sentences, training stays enabled

Reported by @johncordeiro in https://github.com/Ilhasoft/bothub/issues/44
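A rough reproduction sketch against the `Repository` model shown in the file below. Only `use_competing_intents`, `examples()` and `ready_for_train` come from the model itself; the initial trained state and the deletion flow are assumptions made for illustration:

```python
# Hypothetical reproduction of the report. `repository` is an existing,
# already-trained Repository.

# 1. Update a training-related setting.
repository.use_competing_intents = True
repository.save()

# 2. Remove every sentence (assumed to go through the ordinary delete flow;
#    the model excludes deleted examples via the `deleted_in` relation).
for example in repository.examples():
    example.delete()

# 3. Training is still reported as allowed, which is the behaviour reported here.
print(repository.ready_for_train)  # stays True even with no sentences left
```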
[ { "content": "import uuid\nimport base64\nimport requests\n\nfrom functools import reduce\nfrom django.db import models\nfrom django.utils.translation import gettext as _\nfrom django.utils import timezone\nfrom django.conf import settings\nfrom django.core.validators import RegexValidator, _lazy_re_compile\nfrom django.core.mail import send_mail\nfrom django.template.loader import render_to_string\nfrom django.dispatch import receiver\nfrom django.core.exceptions import ValidationError\n\nfrom bothub.authentication.models import User\n\nfrom . import languages\nfrom .exceptions import RepositoryUpdateAlreadyStartedTraining\nfrom .exceptions import RepositoryUpdateAlreadyTrained\nfrom .exceptions import TrainingNotAllowed\nfrom .exceptions import DoesNotHaveTranslation\n\n\nitem_key_regex = _lazy_re_compile(r'^[-a-z0-9_]+\\Z')\nvalidate_item_key = RegexValidator(\n item_key_regex,\n _('Enter a valid value consisting of lowercase letters, numbers, ' +\n 'underscores or hyphens.'),\n 'invalid'\n)\n\n\ndef can_t_be_other(value):\n if value == 'other':\n raise ValidationError(_('The label can\\'t be named as \"other\"'))\n\n\nclass RepositoryCategory(models.Model):\n class Meta:\n verbose_name = _('repository category')\n verbose_name_plural = _('repository categories')\n\n name = models.CharField(\n _('name'),\n max_length=32)\n\n def __str__(self):\n return self.name # pragma: no cover\n\n\nclass RepositoryQuerySet(models.QuerySet):\n def publics(self):\n return self.filter(is_private=False)\n\n def order_by_relevance(self):\n return self \\\n .annotate(votes_summ=models.Sum('votes__vote')) \\\n .annotate(examples_sum=models.Sum('updates__added')) \\\n .order_by('-votes_summ', '-examples_sum', '-created_at')\n\n\nclass RepositoryManager(models.Manager):\n def get_queryset(self):\n return RepositoryQuerySet(self.model, using=self._db)\n\n\nclass Repository(models.Model):\n class Meta:\n verbose_name = _('repository')\n verbose_name_plural = _('repositories')\n unique_together = ['owner', 'slug']\n\n CATEGORIES_HELP_TEXT = _('Categories for approaching repositories with ' +\n 'the same purpose')\n DESCRIPTION_HELP_TEXT = _('Tell what your bot do!')\n\n uuid = models.UUIDField(\n _('UUID'),\n primary_key=True,\n default=uuid.uuid4,\n editable=False)\n owner = models.ForeignKey(\n User,\n models.CASCADE,\n related_name='repositories')\n name = models.CharField(\n _('name'),\n max_length=64,\n help_text=_('Repository display name'))\n slug = models.SlugField(\n _('slug'),\n max_length=32,\n help_text=_('Easy way to found and share repositories'))\n language = models.CharField(\n _('language'),\n max_length=5,\n help_text=_('Repository\\'s examples language. The examples can be ' +\n 'translated to other languages.'),\n validators=[\n languages.validate_language,\n ])\n use_language_model_featurizer = models.BooleanField(\n _('Use language model featurizer'),\n help_text=_('You can use language featurizer to get words ' +\n 'similarity. 
You need less examples to create a great ' +\n 'bot.'),\n default=True)\n use_competing_intents = models.BooleanField(\n _('Use competing intents'),\n help_text=_('When using competing intents the confidence of the ' +\n 'prediction is distributed in all the intents.'),\n default=False)\n categories = models.ManyToManyField(\n RepositoryCategory,\n help_text=CATEGORIES_HELP_TEXT)\n description = models.TextField(\n _('description'),\n blank=True,\n help_text=DESCRIPTION_HELP_TEXT)\n is_private = models.BooleanField(\n _('private'),\n default=False,\n help_text=_('Your repository can be private, only you can see and' +\n ' use, or can be public and all community can see and ' +\n 'use.'))\n created_at = models.DateTimeField(\n _('created at'),\n auto_now_add=True)\n\n objects = RepositoryManager()\n\n nlp_train_url = '{}train/'.format(settings.BOTHUB_NLP_BASE_URL)\n nlp_analyze_url = '{}parse/'.format(settings.BOTHUB_NLP_BASE_URL)\n\n @classmethod\n def request_nlp_train(cls, user_authorization):\n r = requests.post( # pragma: no cover\n cls.nlp_train_url,\n data={},\n headers={'Authorization': 'Bearer {}'.format(\n user_authorization.uuid)})\n return r # pragma: no cover\n\n @classmethod\n def request_nlp_analyze(cls, user_authorization, data):\n r = requests.post( # pragma: no cover\n cls.nlp_analyze_url,\n data={\n 'text': data.get('text'),\n 'language': data.get('language'),\n },\n headers={'Authorization': 'Bearer {}'.format(\n user_authorization.uuid)})\n return r # pragma: no cover\n\n @property\n def available_languages(self):\n examples = self.examples()\n examples_languages = examples.values_list(\n 'repository_update__language',\n flat=True)\n translations_languages = examples.annotate(\n translations_count=models.Count('translations')).filter(\n translations_count__gt=0).values_list(\n 'translations__language',\n flat=True)\n return list(set(\n [self.language] +\n list(examples_languages) +\n list(translations_languages)))\n\n @property\n def languages_status(self):\n return dict(\n map(\n lambda language: (\n language,\n self.language_status(language)),\n settings.SUPPORTED_LANGUAGES.keys(),\n ))\n\n @property\n def current_updates(self):\n return map(\n lambda lang: self.current_update(lang),\n self.available_languages)\n\n @property\n def requirements_to_train(self):\n return dict(filter(\n lambda l: l[1],\n map(\n lambda u: (u.language, u.requirements_to_train,),\n self.current_updates)))\n\n @property\n def languages_ready_for_train(self):\n return dict(map(\n lambda u: (u.language, u.ready_for_train,),\n self.current_updates))\n\n @property\n def ready_for_train(self):\n return reduce(\n lambda current, u: u.ready_for_train or current,\n self.current_updates,\n False)\n\n @property\n def languages_warnings(self):\n return dict(filter(\n lambda w: len(w[1]) > 0,\n map(\n lambda u: (u.language, u.warnings,),\n self.current_updates)))\n\n @property\n def votes_sum(self):\n return self.votes.aggregate(\n votes_sum=models.Sum('vote')).get('votes_sum')\n\n @property\n def intents(self):\n return list(set(self.examples(\n exclude_deleted=True).exclude(\n intent='').values_list(\n 'intent',\n flat=True)))\n\n @property\n def current_entities(self):\n return self.entities.filter(value__in=self.examples(\n exclude_deleted=True).exclude(\n entities__entity__value__isnull=True).values_list(\n 'entities__entity__value',\n flat=True).distinct())\n\n @property\n def entities_list(self):\n return self.current_entities.values_list(\n 'value',\n flat=True).distinct()\n\n @property\n def 
current_labels(self):\n return self.labels.filter(\n entities__value__in=self.entities_list).distinct()\n\n @property\n def labels_list(self):\n return self.current_labels.values_list(\n 'value',\n flat=True).distinct()\n\n @property\n def other_entities(self):\n return self.current_entities.filter(label__isnull=True)\n\n @property\n def admins(self):\n admins = [self.owner] + [\n authorization.user for authorization in\n self.authorizations.filter(role=RepositoryAuthorization.ROLE_ADMIN)\n ]\n return list(set(admins))\n\n def __str__(self):\n return 'Repository {} - {}/{}'.format(\n self.name,\n self.owner.nickname,\n self.slug,\n )\n\n def examples(self, language=None, exclude_deleted=True, queryset=None):\n if queryset is None:\n queryset = RepositoryExample.objects\n query = queryset.filter(\n repository_update__repository=self)\n if language:\n query = query.filter(\n repository_update__language=language)\n if exclude_deleted:\n return query.exclude(deleted_in__isnull=False)\n return query\n\n def language_status(self, language):\n is_base_language = self.language == language\n examples = self.examples(language)\n base_examples = self.examples(self.language)\n base_translations = RepositoryTranslatedExample.objects.filter(\n original_example__in=base_examples,\n language=language)\n\n examples_count = examples.count()\n base_examples_count = base_examples.count()\n base_translations_count = base_translations.count()\n base_translations_percentage = (\n base_translations_count / (\n base_examples_count if base_examples_count > 0 else 1)) * 100\n\n return {\n 'is_base_language': is_base_language,\n 'examples': {\n 'count': examples_count,\n 'entities': list(\n set(\n filter(\n lambda x: x,\n examples.values_list(\n 'entities__entity',\n flat=True).distinct()))),\n },\n 'base_translations': {\n 'count': base_translations_count,\n 'percentage': base_translations_percentage,\n },\n }\n\n def current_update(self, language=None):\n language = language or self.language\n repository_update, created = self.updates.get_or_create(\n language=language,\n training_started_at=None)\n return repository_update\n\n def last_trained_update(self, language=None):\n language = language or self.language\n return self.updates.filter(\n language=language,\n by__isnull=False,\n trained_at__isnull=False).first()\n\n def get_user_authorization(self, user):\n if user.is_anonymous:\n return RepositoryAuthorization(repository=self)\n get, created = RepositoryAuthorization.objects.get_or_create(\n user=user,\n repository=self)\n return get\n\n def get_absolute_url(self):\n return '{}{}/{}/'.format(\n settings.BOTHUB_WEBAPP_BASE_URL,\n self.owner.nickname,\n self.slug)\n\n\nclass RepositoryUpdate(models.Model):\n class Meta:\n verbose_name = _('repository update')\n verbose_name_plural = _('repository updates')\n ordering = ['-created_at']\n\n MIN_EXAMPLES_PER_INTENT = 2\n MIN_EXAMPLES_PER_ENTITY = 2\n RECOMMENDED_INTENTS = 2\n\n repository = models.ForeignKey(\n Repository,\n models.CASCADE,\n related_name='updates')\n language = models.CharField(\n _('language'),\n max_length=5,\n validators=[\n languages.validate_language,\n ])\n use_language_model_featurizer = models.BooleanField(default=True)\n use_competing_intents = models.BooleanField(default=False)\n created_at = models.DateTimeField(\n _('created at'),\n auto_now_add=True)\n bot_data = models.TextField(\n _('bot data'),\n blank=True,\n editable=False)\n by = models.ForeignKey(\n User,\n models.CASCADE,\n blank=True,\n null=True)\n training_started_at = 
models.DateTimeField(\n _('training started at'),\n blank=True,\n null=True)\n trained_at = models.DateTimeField(\n _('trained at'),\n blank=True,\n null=True)\n failed_at = models.DateTimeField(\n _('failed at'),\n blank=True,\n null=True)\n training_log = models.TextField(\n _('training log'),\n blank=True,\n editable=False)\n\n @property\n def examples(self):\n examples = self.repository.examples(exclude_deleted=False).filter(\n models.Q(repository_update__language=self.language) |\n models.Q(translations__language=self.language))\n if self.training_started_at:\n t_started_at = self.training_started_at\n examples = examples.exclude(\n models.Q(repository_update__created_at__gt=t_started_at) |\n models.Q(deleted_in=self) |\n models.Q(deleted_in__training_started_at__lt=t_started_at))\n else:\n examples = examples.exclude(deleted_in__isnull=False)\n return examples\n\n @property\n def requirements_to_train(self):\n try:\n self.validate_init_train()\n except RepositoryUpdateAlreadyTrained as e:\n return [_('This bot version has already been trained.')]\n except RepositoryUpdateAlreadyStartedTraining as e:\n return [_('This bot version is being trained.')]\n\n r = []\n\n intents = self.examples.values_list('intent', flat=True)\n\n if '' in intents:\n r.append(_('All examples need have a intent.'))\n\n weak_intents = self.examples.values('intent').annotate(\n intent_count=models.Count('id')).order_by().exclude(\n intent_count__gte=self.MIN_EXAMPLES_PER_INTENT)\n if weak_intents.exists():\n for i in weak_intents:\n r.append(_('Intent \"{}\" has only {} examples. ' +\n 'Minimum is {}.').format(\n i.get('intent'),\n i.get('intent_count'),\n self.MIN_EXAMPLES_PER_INTENT))\n\n weak_entities = self.examples.annotate(\n es_count=models.Count('entities')).filter(\n es_count__gte=1).values(\n 'entities__entity__value').annotate(\n entities_count=models.Count('id')).order_by().exclude(\n entities_count__gte=self.MIN_EXAMPLES_PER_ENTITY)\n if weak_entities.exists():\n for e in weak_entities:\n r.append(_('Entity \"{}\" has only {} examples. 
' +\n 'Minimum is {}.').format(\n e.get('entities__entity__value'),\n e.get('entities_count'),\n self.MIN_EXAMPLES_PER_ENTITY))\n\n return r\n\n @property\n def ready_for_train(self):\n if self.training_started_at:\n return False\n\n previous_update = self.repository.updates.filter(\n language=self.language,\n by__isnull=False,\n training_started_at__isnull=False,\n created_at__lt=self.created_at).first()\n\n if previous_update:\n if previous_update.use_language_model_featurizer is not \\\n self.repository.use_language_model_featurizer:\n return True\n if previous_update.use_competing_intents is not \\\n self.repository.use_competing_intents:\n return True\n if previous_update.failed_at:\n return True\n\n if not self.added.exists() and \\\n not self.translated_added.exists() and \\\n not self.deleted.exists():\n return False\n\n if self.examples.count() == 0:\n return False\n\n return len(self.requirements_to_train) is 0\n\n @property\n def intents(self):\n return list(set(self.examples.values_list('intent', flat=True)))\n\n @property\n def warnings(self):\n w = []\n if 0 < len(self.intents) < self.RECOMMENDED_INTENTS:\n w.append(_('You need to have at least {} intents for the ' +\n 'algorithm to identify intents.').format(\n self.RECOMMENDED_INTENTS))\n return w\n\n def __str__(self):\n return 'Repository Update #{}'.format(self.id)\n\n def validate_init_train(self, by=None):\n if self.trained_at:\n raise RepositoryUpdateAlreadyTrained()\n if self.training_started_at:\n raise RepositoryUpdateAlreadyStartedTraining()\n if by:\n authorization = self.repository.get_user_authorization(by)\n if not authorization.can_write:\n raise TrainingNotAllowed()\n\n def start_training(self, by):\n self.validate_init_train(by)\n self.by = by\n self.training_started_at = timezone.now()\n self.use_language_model_featurizer = self.repository \\\n .use_language_model_featurizer\n self.use_competing_intents = self.repository.use_competing_intents\n self.save(\n update_fields=[\n 'by',\n 'training_started_at',\n 'use_language_model_featurizer',\n 'use_competing_intents',\n ])\n\n def save_training(self, bot_data):\n if self.trained_at:\n raise RepositoryUpdateAlreadyTrained()\n\n self.trained_at = timezone.now()\n self.bot_data = base64.b64encode(bot_data).decode('utf8')\n self.save(\n update_fields=[\n 'trained_at',\n 'bot_data',\n ])\n\n def get_bot_data(self):\n return base64.b64decode(self.bot_data)\n\n def train_fail(self):\n self.failed_at = timezone.now()\n self.save(\n update_fields=[\n 'failed_at',\n ])\n\n\nclass RepositoryExample(models.Model):\n class Meta:\n verbose_name = _('repository example')\n verbose_name_plural = _('repository examples')\n ordering = ['-created_at']\n\n repository_update = models.ForeignKey(\n RepositoryUpdate,\n models.CASCADE,\n related_name='added',\n editable=False)\n deleted_in = models.ForeignKey(\n RepositoryUpdate,\n models.CASCADE,\n related_name='deleted',\n blank=True,\n null=True)\n text = models.TextField(\n _('text'),\n help_text=_('Example text'))\n intent = models.CharField(\n _('intent'),\n max_length=64,\n default='no_intent',\n help_text=_('Example intent reference'),\n validators=[validate_item_key])\n created_at = models.DateTimeField(\n _('created at'),\n auto_now_add=True)\n\n @property\n def language(self):\n return self.repository_update.language\n\n def has_valid_entities(self, language=None):\n if not language or language == self.repository_update.language:\n return True\n return self.get_translation(language).has_valid_entities\n\n def 
get_translation(self, language):\n try:\n return self.translations.get(language=language)\n except RepositoryTranslatedExample.DoesNotExist:\n raise DoesNotHaveTranslation()\n\n def get_text(self, language=None):\n if not language or language == self.repository_update.language:\n return self.text\n return self.get_translation(language).text\n\n def get_entities(self, language):\n if not language or language == self.repository_update.language:\n return self.entities.all()\n return self.get_translation(language).entities.all()\n\n def delete(self):\n self.deleted_in = self.repository_update.repository.current_update(\n self.repository_update.language)\n self.save(update_fields=['deleted_in'])\n\n\nclass RepositoryTranslatedExampleManager(models.Manager):\n def create(self, *args, original_example=None, language=None, **kwargs):\n repository = original_example.repository_update.repository\n return super().create(\n *args,\n repository_update=repository.current_update(language),\n original_example=original_example,\n language=language,\n **kwargs)\n\n\nclass RepositoryTranslatedExample(models.Model):\n class Meta:\n verbose_name = _('repository translated example')\n verbose_name_plural = _('repository translated examples')\n unique_together = ['original_example', 'language']\n ordering = ['-created_at']\n\n repository_update = models.ForeignKey(\n RepositoryUpdate,\n models.CASCADE,\n related_name='translated_added',\n editable=False)\n original_example = models.ForeignKey(\n RepositoryExample,\n models.CASCADE,\n related_name='translations',\n editable=False,\n help_text=_('Example object'))\n language = models.CharField(\n _('language'),\n max_length=5,\n help_text=_('Translation language'),\n validators=[\n languages.validate_language,\n ])\n text = models.TextField(\n _('text'),\n help_text=_('Translation text'))\n created_at = models.DateTimeField(\n _('created at'),\n auto_now_add=True)\n\n objects = RepositoryTranslatedExampleManager()\n\n def entities_list_lambda_sort(item):\n return item.get('entity')\n\n @classmethod\n def same_entities_validator(cls, a, b):\n a_len = len(a)\n if a_len != len(b):\n return False\n a_sorted = sorted(\n a,\n key=cls.entities_list_lambda_sort)\n b_sorted = sorted(\n b,\n key=cls.entities_list_lambda_sort)\n for i in range(a_len):\n if a_sorted[i].get('entity') != b_sorted[i].get('entity'):\n return False\n return True\n\n @classmethod\n def count_entities(cls, entities_list, to_str=False):\n r = {}\n for e in entities_list:\n r.update({e.get('entity'): r.get('entity', 0) + 1})\n if to_str:\n r = ', '.join(map(\n lambda x: '{} {}'.format(x[1], x[0]),\n r.items())) if entities_list else 'no entities'\n return r\n\n @property\n def has_valid_entities(self):\n original_entities = self.original_example.entities.all()\n my_entities = self.entities.all()\n return RepositoryTranslatedExample.same_entities_validator(\n list(map(lambda x: x.to_dict, original_entities)),\n list(map(lambda x: x.to_dict, my_entities)))\n\n\nclass RepositoryEntityLabelQueryset(models.QuerySet):\n def get(self, repository, value):\n try:\n return super().get(\n repository=repository,\n value=value)\n except self.model.DoesNotExist as e:\n return super().create(\n repository=repository,\n value=value)\n\n\nclass RepositoryEntityLabelManager(models.Manager):\n def get_queryset(self):\n return RepositoryEntityLabelQueryset(self.model, using=self._db)\n\n\nclass RepositoryEntityLabel(models.Model):\n class Meta:\n unique_together = ['repository', 'value']\n\n repository = 
models.ForeignKey(\n Repository,\n on_delete=models.CASCADE,\n related_name='labels')\n value = models.CharField(\n _('label'),\n max_length=64,\n validators=[\n validate_item_key,\n can_t_be_other,\n ],\n blank=True)\n created_at = models.DateTimeField(\n _('created at'),\n auto_now_add=True)\n\n objects = RepositoryEntityLabelManager()\n\n def examples(self, exclude_deleted=True):\n return self.repository.examples(\n exclude_deleted=exclude_deleted).filter(\n entities__entity__label=self)\n\n\nclass RepositoryEntityQueryset(models.QuerySet):\n def get(self, repository, value):\n try:\n return super().get(\n repository=repository,\n value=value)\n except self.model.DoesNotExist as e:\n return super().create(\n repository=repository,\n value=value)\n\n\nclass RepositoryEntityManager(models.Manager):\n def get_queryset(self):\n return RepositoryEntityQueryset(self.model, using=self._db)\n\n\nclass RepositoryEntity(models.Model):\n class Meta:\n unique_together = ['repository', 'value']\n\n repository = models.ForeignKey(\n Repository,\n on_delete=models.CASCADE,\n related_name='entities')\n value = models.CharField(\n _('entity'),\n max_length=64,\n help_text=_('Entity name'),\n validators=[validate_item_key])\n label = models.ForeignKey(\n RepositoryEntityLabel,\n on_delete=models.CASCADE,\n related_name='entities',\n null=True,\n blank=True)\n created_at = models.DateTimeField(\n _('created at'),\n auto_now_add=True)\n\n objects = RepositoryEntityManager()\n\n def set_label(self, value):\n if not value:\n self.label = None\n else:\n self.label = RepositoryEntityLabel.objects.get(\n repository=self.repository,\n value=value)\n\n\nclass EntityBaseQueryset(models.QuerySet):\n def create(self, entity, **kwargs):\n if type(entity) is not RepositoryEntity:\n instance = self.model(**kwargs)\n repository = instance.example.repository_update.repository\n entity = RepositoryEntity.objects.get(\n repository=repository,\n value=entity)\n return super().create(\n entity=entity,\n **kwargs)\n\n\nclass EntityBaseManager(models.Manager):\n def get_queryset(self):\n return EntityBaseQueryset(self.model, using=self._db)\n\n\nclass EntityBase(models.Model):\n class Meta:\n verbose_name = _('repository example entity')\n verbose_name_plural = _('repository example entities')\n abstract = True\n\n start = models.PositiveIntegerField(\n _('start'),\n help_text=_('Start index of entity value in example text'))\n end = models.PositiveIntegerField(\n _('end'),\n help_text=_('End index of entity value in example text'))\n entity = models.ForeignKey(\n RepositoryEntity,\n on_delete=models.CASCADE)\n created_at = models.DateTimeField(\n _('created at'),\n auto_now_add=True)\n\n objects = EntityBaseManager()\n\n @property\n def example(self):\n return self.get_example()\n\n @property\n def value(self):\n return self.example.text[self.start:self.end]\n\n @property\n def rasa_nlu_data(self):\n return {\n 'start': self.start,\n 'end': self.end,\n 'value': self.value,\n 'entity': self.entity.value,\n }\n\n @property\n def to_dict(self):\n return self.get_rasa_nlu_data()\n\n def get_example(self):\n pass # pragma: no cover\n\n def get_rasa_nlu_data(self, label_as_entity=False):\n return {\n 'start': self.start,\n 'end': self.end,\n 'entity': self.entity.label.value\n if label_as_entity else self.entity.value,\n }\n\n\nclass RepositoryExampleEntity(EntityBase):\n repository_example = models.ForeignKey(\n RepositoryExample,\n models.CASCADE,\n related_name='entities',\n editable=False,\n help_text=_('Example object'))\n\n 
def get_example(self):\n return self.repository_example\n\n\nclass RepositoryTranslatedExampleEntity(EntityBase):\n repository_translated_example = models.ForeignKey(\n RepositoryTranslatedExample,\n models.CASCADE,\n related_name='entities',\n editable=False,\n help_text=_('Translated example object'))\n\n def get_example(self):\n return self.repository_translated_example\n\n\nclass RepositoryAuthorization(models.Model):\n class Meta:\n verbose_name = _('repository authorization')\n verbose_name_plural = _('repository authorizations')\n unique_together = ['user', 'repository']\n\n LEVEL_NOTHING = 0\n LEVEL_READER = 1\n LEVEL_CONTRIBUTOR = 2\n LEVEL_ADMIN = 3\n\n ROLE_NOT_SETTED = 0\n ROLE_USER = 1\n ROLE_CONTRIBUTOR = 2\n ROLE_ADMIN = 3\n\n ROLE_CHOICES = [\n (ROLE_NOT_SETTED, _('not set')),\n (ROLE_USER, _('user')),\n (ROLE_CONTRIBUTOR, _('contributor')),\n (ROLE_ADMIN, _('admin')),\n ]\n\n uuid = models.UUIDField(\n _('UUID'),\n primary_key=True,\n default=uuid.uuid4,\n editable=False)\n user = models.ForeignKey(\n User,\n models.CASCADE)\n repository = models.ForeignKey(\n Repository,\n models.CASCADE,\n related_name='authorizations')\n role = models.PositiveIntegerField(\n _('role'),\n choices=ROLE_CHOICES,\n default=ROLE_NOT_SETTED)\n created_at = models.DateTimeField(\n _('created at'),\n auto_now_add=True)\n\n @property\n def level(self):\n try:\n user = self.user\n except User.DoesNotExist:\n user = None\n\n if user and self.repository.owner == user:\n return RepositoryAuthorization.LEVEL_ADMIN\n\n if self.role == RepositoryAuthorization.ROLE_NOT_SETTED:\n if self.repository.is_private:\n return RepositoryAuthorization.LEVEL_NOTHING\n return RepositoryAuthorization.LEVEL_READER\n\n if self.role == RepositoryAuthorization.ROLE_USER:\n return RepositoryAuthorization.LEVEL_READER\n\n if self.role == RepositoryAuthorization.ROLE_CONTRIBUTOR:\n return RepositoryAuthorization.LEVEL_CONTRIBUTOR\n\n if self.role == RepositoryAuthorization.ROLE_ADMIN:\n return RepositoryAuthorization.LEVEL_ADMIN\n\n return RepositoryAuthorization.LEVEL_NOTHING\n\n @property\n def can_read(self):\n return self.level in [\n RepositoryAuthorization.LEVEL_READER,\n RepositoryAuthorization.LEVEL_CONTRIBUTOR,\n RepositoryAuthorization.LEVEL_ADMIN,\n ]\n\n @property\n def can_contribute(self):\n return self.level in [\n RepositoryAuthorization.LEVEL_CONTRIBUTOR,\n RepositoryAuthorization.LEVEL_ADMIN,\n ]\n\n @property\n def can_write(self):\n return self.level in [\n RepositoryAuthorization.LEVEL_ADMIN,\n ]\n\n @property\n def is_admin(self):\n return self.level == RepositoryAuthorization.LEVEL_ADMIN\n\n @property\n def is_owner(self):\n try:\n user = self.user\n except User.DoesNotExist:\n return False\n return self.repository.owner == user\n\n @property\n def role_verbose(self):\n return dict(RepositoryAuthorization.ROLE_CHOICES).get(self.role)\n\n def send_new_role_email(self, responsible=None):\n if not settings.SEND_EMAILS:\n return False\n responsible_name = responsible and responsible.name \\\n or self.repository.owner.name\n context = {\n 'responsible_name': responsible_name,\n 'user_name': self.user.name,\n 'repository_name': self.repository.name,\n 'repository_url': self.repository.get_absolute_url(),\n 'new_role': self.role_verbose,\n }\n send_mail(\n _('New role in {}').format(self.repository.name),\n render_to_string(\n 'common/emails/new_role.txt',\n context),\n None,\n [self.user.email],\n html_message=render_to_string(\n 'common/emails/new_role.html',\n context))\n\n\nclass 
RepositoryVote(models.Model):\n UP_VOTE = 1\n DOWN_VOTE = -1\n NEUTRAL_VOTE = 0\n VOTE_CHOICES = [\n (UP_VOTE, _('Up'),),\n (DOWN_VOTE, _('Down')),\n (NEUTRAL_VOTE, _('Neutral')),\n ]\n\n class Meta:\n verbose_name = _('repository vote')\n verbose_name_plural = _('repository votes')\n unique_together = [\n 'user',\n 'repository',\n ]\n\n user = models.ForeignKey(\n User,\n models.CASCADE,\n related_name='repository_votes')\n repository = models.ForeignKey(\n Repository,\n models.CASCADE,\n related_name='votes')\n vote = models.IntegerField(\n _('vote'),\n choices=VOTE_CHOICES)\n\n\nclass RequestRepositoryAuthorization(models.Model):\n class Meta:\n unique_together = ['user', 'repository']\n\n user = models.ForeignKey(\n User,\n models.CASCADE,\n related_name='requests')\n repository = models.ForeignKey(\n Repository,\n models.CASCADE,\n related_name='requests')\n text = models.CharField(\n _('text'),\n max_length=250)\n approved_by = models.ForeignKey(\n User,\n models.CASCADE,\n blank=True,\n null=True)\n created_at = models.DateTimeField(\n _('created at'),\n auto_now_add=True,\n editable=False)\n\n def send_new_request_email_to_admins(self):\n if not settings.SEND_EMAILS:\n return False\n context = {\n 'user_name': self.user.name,\n 'repository_name': self.repository.name,\n 'text': self.text,\n 'repository_url': self.repository.get_absolute_url(),\n }\n for admin in self.repository.admins:\n send_mail(\n _('New authorization request in {}').format(\n self.repository.name),\n render_to_string(\n 'common/emails/new_request.txt',\n context),\n None,\n [admin.email],\n html_message=render_to_string(\n 'common/emails/new_request.html',\n context))\n\n def send_request_rejected_email(self):\n if not settings.SEND_EMAILS:\n return False\n context = {\n 'repository_name': self.repository.name,\n }\n send_mail(\n _('Access denied to {}').format(\n self.repository.name),\n render_to_string(\n 'common/emails/request_rejected.txt',\n context),\n None,\n [self.user.email],\n html_message=render_to_string(\n 'common/emails/request_rejected.html',\n context))\n\n def send_request_approved_email(self):\n if not settings.SEND_EMAILS:\n return False\n context = {\n 'admin_name': self.approved_by.name,\n 'repository_name': self.repository.name,\n }\n send_mail(\n _('Authorization Request Approved to {}').format(\n self.repository.name),\n render_to_string(\n 'common/emails/request_approved.txt',\n context),\n None,\n [self.user.email],\n html_message=render_to_string(\n 'common/emails/request_approved.html',\n context))\n\n\n@receiver(models.signals.pre_save, sender=RequestRepositoryAuthorization)\ndef set_user_role_on_approved(instance, **kwargs):\n current = None\n try:\n current = RequestRepositoryAuthorization.objects.get(pk=instance.pk)\n except RequestRepositoryAuthorization.DoesNotExist as e:\n pass\n\n if not current:\n return False\n\n if current.approved_by is None and \\\n current.approved_by is not instance.approved_by:\n user_authorization = instance.repository.get_user_authorization(\n instance.user)\n user_authorization.role = RepositoryAuthorization.ROLE_USER\n user_authorization.save(update_fields=['role'])\n instance.send_request_approved_email()\n else:\n raise ValidationError(\n _('You can change approved_by just one time.'))\n\n\n@receiver(models.signals.post_save, sender=RequestRepositoryAuthorization)\ndef send_new_request_email_to_admins_on_created(instance, created, **kwargs):\n if created:\n instance.send_new_request_email_to_admins()\n\n\n@receiver(models.signals.post_delete, 
sender=RequestRepositoryAuthorization)\ndef send_request_rejected_email(instance, **kwargs):\n instance.send_request_rejected_email()\n", "path": "bothub/common/models.py" } ]
[ { "content": "import uuid\nimport base64\nimport requests\n\nfrom functools import reduce\nfrom django.db import models\nfrom django.utils.translation import gettext as _\nfrom django.utils import timezone\nfrom django.conf import settings\nfrom django.core.validators import RegexValidator, _lazy_re_compile\nfrom django.core.mail import send_mail\nfrom django.template.loader import render_to_string\nfrom django.dispatch import receiver\nfrom django.core.exceptions import ValidationError\n\nfrom bothub.authentication.models import User\n\nfrom . import languages\nfrom .exceptions import RepositoryUpdateAlreadyStartedTraining\nfrom .exceptions import RepositoryUpdateAlreadyTrained\nfrom .exceptions import TrainingNotAllowed\nfrom .exceptions import DoesNotHaveTranslation\n\n\nitem_key_regex = _lazy_re_compile(r'^[-a-z0-9_]+\\Z')\nvalidate_item_key = RegexValidator(\n item_key_regex,\n _('Enter a valid value consisting of lowercase letters, numbers, ' +\n 'underscores or hyphens.'),\n 'invalid'\n)\n\n\ndef can_t_be_other(value):\n if value == 'other':\n raise ValidationError(_('The label can\\'t be named as \"other\"'))\n\n\nclass RepositoryCategory(models.Model):\n class Meta:\n verbose_name = _('repository category')\n verbose_name_plural = _('repository categories')\n\n name = models.CharField(\n _('name'),\n max_length=32)\n\n def __str__(self):\n return self.name # pragma: no cover\n\n\nclass RepositoryQuerySet(models.QuerySet):\n def publics(self):\n return self.filter(is_private=False)\n\n def order_by_relevance(self):\n return self \\\n .annotate(votes_summ=models.Sum('votes__vote')) \\\n .annotate(examples_sum=models.Sum('updates__added')) \\\n .order_by('-votes_summ', '-examples_sum', '-created_at')\n\n\nclass RepositoryManager(models.Manager):\n def get_queryset(self):\n return RepositoryQuerySet(self.model, using=self._db)\n\n\nclass Repository(models.Model):\n class Meta:\n verbose_name = _('repository')\n verbose_name_plural = _('repositories')\n unique_together = ['owner', 'slug']\n\n CATEGORIES_HELP_TEXT = _('Categories for approaching repositories with ' +\n 'the same purpose')\n DESCRIPTION_HELP_TEXT = _('Tell what your bot do!')\n\n uuid = models.UUIDField(\n _('UUID'),\n primary_key=True,\n default=uuid.uuid4,\n editable=False)\n owner = models.ForeignKey(\n User,\n models.CASCADE,\n related_name='repositories')\n name = models.CharField(\n _('name'),\n max_length=64,\n help_text=_('Repository display name'))\n slug = models.SlugField(\n _('slug'),\n max_length=32,\n help_text=_('Easy way to found and share repositories'))\n language = models.CharField(\n _('language'),\n max_length=5,\n help_text=_('Repository\\'s examples language. The examples can be ' +\n 'translated to other languages.'),\n validators=[\n languages.validate_language,\n ])\n use_language_model_featurizer = models.BooleanField(\n _('Use language model featurizer'),\n help_text=_('You can use language featurizer to get words ' +\n 'similarity. 
You need less examples to create a great ' +\n 'bot.'),\n default=True)\n use_competing_intents = models.BooleanField(\n _('Use competing intents'),\n help_text=_('When using competing intents the confidence of the ' +\n 'prediction is distributed in all the intents.'),\n default=False)\n categories = models.ManyToManyField(\n RepositoryCategory,\n help_text=CATEGORIES_HELP_TEXT)\n description = models.TextField(\n _('description'),\n blank=True,\n help_text=DESCRIPTION_HELP_TEXT)\n is_private = models.BooleanField(\n _('private'),\n default=False,\n help_text=_('Your repository can be private, only you can see and' +\n ' use, or can be public and all community can see and ' +\n 'use.'))\n created_at = models.DateTimeField(\n _('created at'),\n auto_now_add=True)\n\n objects = RepositoryManager()\n\n nlp_train_url = '{}train/'.format(settings.BOTHUB_NLP_BASE_URL)\n nlp_analyze_url = '{}parse/'.format(settings.BOTHUB_NLP_BASE_URL)\n\n @classmethod\n def request_nlp_train(cls, user_authorization):\n r = requests.post( # pragma: no cover\n cls.nlp_train_url,\n data={},\n headers={'Authorization': 'Bearer {}'.format(\n user_authorization.uuid)})\n return r # pragma: no cover\n\n @classmethod\n def request_nlp_analyze(cls, user_authorization, data):\n r = requests.post( # pragma: no cover\n cls.nlp_analyze_url,\n data={\n 'text': data.get('text'),\n 'language': data.get('language'),\n },\n headers={'Authorization': 'Bearer {}'.format(\n user_authorization.uuid)})\n return r # pragma: no cover\n\n @property\n def available_languages(self):\n examples = self.examples()\n examples_languages = examples.values_list(\n 'repository_update__language',\n flat=True)\n translations_languages = examples.annotate(\n translations_count=models.Count('translations')).filter(\n translations_count__gt=0).values_list(\n 'translations__language',\n flat=True)\n return list(set(\n [self.language] +\n list(examples_languages) +\n list(translations_languages)))\n\n @property\n def languages_status(self):\n return dict(\n map(\n lambda language: (\n language,\n self.language_status(language)),\n settings.SUPPORTED_LANGUAGES.keys(),\n ))\n\n @property\n def current_updates(self):\n return map(\n lambda lang: self.current_update(lang),\n self.available_languages)\n\n @property\n def requirements_to_train(self):\n return dict(filter(\n lambda l: l[1],\n map(\n lambda u: (u.language, u.requirements_to_train,),\n self.current_updates)))\n\n @property\n def languages_ready_for_train(self):\n return dict(map(\n lambda u: (u.language, u.ready_for_train,),\n self.current_updates))\n\n @property\n def ready_for_train(self):\n return reduce(\n lambda current, u: u.ready_for_train or current,\n self.current_updates,\n False)\n\n @property\n def languages_warnings(self):\n return dict(filter(\n lambda w: len(w[1]) > 0,\n map(\n lambda u: (u.language, u.warnings,),\n self.current_updates)))\n\n @property\n def votes_sum(self):\n return self.votes.aggregate(\n votes_sum=models.Sum('vote')).get('votes_sum')\n\n @property\n def intents(self):\n return list(set(self.examples(\n exclude_deleted=True).exclude(\n intent='').values_list(\n 'intent',\n flat=True)))\n\n @property\n def current_entities(self):\n return self.entities.filter(value__in=self.examples(\n exclude_deleted=True).exclude(\n entities__entity__value__isnull=True).values_list(\n 'entities__entity__value',\n flat=True).distinct())\n\n @property\n def entities_list(self):\n return self.current_entities.values_list(\n 'value',\n flat=True).distinct()\n\n @property\n def 
current_labels(self):\n return self.labels.filter(\n entities__value__in=self.entities_list).distinct()\n\n @property\n def labels_list(self):\n return self.current_labels.values_list(\n 'value',\n flat=True).distinct()\n\n @property\n def other_entities(self):\n return self.current_entities.filter(label__isnull=True)\n\n @property\n def admins(self):\n admins = [self.owner] + [\n authorization.user for authorization in\n self.authorizations.filter(role=RepositoryAuthorization.ROLE_ADMIN)\n ]\n return list(set(admins))\n\n def __str__(self):\n return 'Repository {} - {}/{}'.format(\n self.name,\n self.owner.nickname,\n self.slug,\n )\n\n def examples(self, language=None, exclude_deleted=True, queryset=None):\n if queryset is None:\n queryset = RepositoryExample.objects\n query = queryset.filter(\n repository_update__repository=self)\n if language:\n query = query.filter(\n repository_update__language=language)\n if exclude_deleted:\n return query.exclude(deleted_in__isnull=False)\n return query\n\n def language_status(self, language):\n is_base_language = self.language == language\n examples = self.examples(language)\n base_examples = self.examples(self.language)\n base_translations = RepositoryTranslatedExample.objects.filter(\n original_example__in=base_examples,\n language=language)\n\n examples_count = examples.count()\n base_examples_count = base_examples.count()\n base_translations_count = base_translations.count()\n base_translations_percentage = (\n base_translations_count / (\n base_examples_count if base_examples_count > 0 else 1)) * 100\n\n return {\n 'is_base_language': is_base_language,\n 'examples': {\n 'count': examples_count,\n 'entities': list(\n set(\n filter(\n lambda x: x,\n examples.values_list(\n 'entities__entity',\n flat=True).distinct()))),\n },\n 'base_translations': {\n 'count': base_translations_count,\n 'percentage': base_translations_percentage,\n },\n }\n\n def current_update(self, language=None):\n language = language or self.language\n repository_update, created = self.updates.get_or_create(\n language=language,\n training_started_at=None)\n return repository_update\n\n def last_trained_update(self, language=None):\n language = language or self.language\n return self.updates.filter(\n language=language,\n by__isnull=False,\n trained_at__isnull=False).first()\n\n def get_user_authorization(self, user):\n if user.is_anonymous:\n return RepositoryAuthorization(repository=self)\n get, created = RepositoryAuthorization.objects.get_or_create(\n user=user,\n repository=self)\n return get\n\n def get_absolute_url(self):\n return '{}{}/{}/'.format(\n settings.BOTHUB_WEBAPP_BASE_URL,\n self.owner.nickname,\n self.slug)\n\n\nclass RepositoryUpdate(models.Model):\n class Meta:\n verbose_name = _('repository update')\n verbose_name_plural = _('repository updates')\n ordering = ['-created_at']\n\n MIN_EXAMPLES_PER_INTENT = 2\n MIN_EXAMPLES_PER_ENTITY = 2\n RECOMMENDED_INTENTS = 2\n\n repository = models.ForeignKey(\n Repository,\n models.CASCADE,\n related_name='updates')\n language = models.CharField(\n _('language'),\n max_length=5,\n validators=[\n languages.validate_language,\n ])\n use_language_model_featurizer = models.BooleanField(default=True)\n use_competing_intents = models.BooleanField(default=False)\n created_at = models.DateTimeField(\n _('created at'),\n auto_now_add=True)\n bot_data = models.TextField(\n _('bot data'),\n blank=True,\n editable=False)\n by = models.ForeignKey(\n User,\n models.CASCADE,\n blank=True,\n null=True)\n training_started_at = 
models.DateTimeField(\n _('training started at'),\n blank=True,\n null=True)\n trained_at = models.DateTimeField(\n _('trained at'),\n blank=True,\n null=True)\n failed_at = models.DateTimeField(\n _('failed at'),\n blank=True,\n null=True)\n training_log = models.TextField(\n _('training log'),\n blank=True,\n editable=False)\n\n @property\n def examples(self):\n examples = self.repository.examples(exclude_deleted=False).filter(\n models.Q(repository_update__language=self.language) |\n models.Q(translations__language=self.language))\n if self.training_started_at:\n t_started_at = self.training_started_at\n examples = examples.exclude(\n models.Q(repository_update__created_at__gt=t_started_at) |\n models.Q(deleted_in=self) |\n models.Q(deleted_in__training_started_at__lt=t_started_at))\n else:\n examples = examples.exclude(deleted_in__isnull=False)\n return examples\n\n @property\n def requirements_to_train(self):\n try:\n self.validate_init_train()\n except RepositoryUpdateAlreadyTrained as e:\n return [_('This bot version has already been trained.')]\n except RepositoryUpdateAlreadyStartedTraining as e:\n return [_('This bot version is being trained.')]\n\n r = []\n\n intents = self.examples.values_list('intent', flat=True)\n\n if '' in intents:\n r.append(_('All examples need have a intent.'))\n\n weak_intents = self.examples.values('intent').annotate(\n intent_count=models.Count('id')).order_by().exclude(\n intent_count__gte=self.MIN_EXAMPLES_PER_INTENT)\n if weak_intents.exists():\n for i in weak_intents:\n r.append(_('Intent \"{}\" has only {} examples. ' +\n 'Minimum is {}.').format(\n i.get('intent'),\n i.get('intent_count'),\n self.MIN_EXAMPLES_PER_INTENT))\n\n weak_entities = self.examples.annotate(\n es_count=models.Count('entities')).filter(\n es_count__gte=1).values(\n 'entities__entity__value').annotate(\n entities_count=models.Count('id')).order_by().exclude(\n entities_count__gte=self.MIN_EXAMPLES_PER_ENTITY)\n if weak_entities.exists():\n for e in weak_entities:\n r.append(_('Entity \"{}\" has only {} examples. 
' +\n 'Minimum is {}.').format(\n e.get('entities__entity__value'),\n e.get('entities_count'),\n self.MIN_EXAMPLES_PER_ENTITY))\n\n return r\n\n @property\n def ready_for_train(self):\n if self.training_started_at:\n return False\n\n if len(self.requirements_to_train) > 0:\n return False\n\n previous_update = self.repository.updates.filter(\n language=self.language,\n by__isnull=False,\n training_started_at__isnull=False,\n created_at__lt=self.created_at).first()\n\n if previous_update:\n if previous_update.use_language_model_featurizer is not \\\n self.repository.use_language_model_featurizer:\n return True\n if previous_update.use_competing_intents is not \\\n self.repository.use_competing_intents:\n return True\n if previous_update.failed_at:\n return True\n\n if not self.added.exists() and \\\n not self.translated_added.exists() and \\\n not self.deleted.exists():\n return False\n\n if self.examples.count() == 0:\n return False\n\n return len(self.requirements_to_train) is 0\n\n @property\n def intents(self):\n return list(set(self.examples.values_list('intent', flat=True)))\n\n @property\n def warnings(self):\n w = []\n if 0 < len(self.intents) < self.RECOMMENDED_INTENTS:\n w.append(_('You need to have at least {} intents for the ' +\n 'algorithm to identify intents.').format(\n self.RECOMMENDED_INTENTS))\n return w\n\n def __str__(self):\n return 'Repository Update #{}'.format(self.id)\n\n def validate_init_train(self, by=None):\n if self.trained_at:\n raise RepositoryUpdateAlreadyTrained()\n if self.training_started_at:\n raise RepositoryUpdateAlreadyStartedTraining()\n if by:\n authorization = self.repository.get_user_authorization(by)\n if not authorization.can_write:\n raise TrainingNotAllowed()\n\n def start_training(self, by):\n self.validate_init_train(by)\n self.by = by\n self.training_started_at = timezone.now()\n self.use_language_model_featurizer = self.repository \\\n .use_language_model_featurizer\n self.use_competing_intents = self.repository.use_competing_intents\n self.save(\n update_fields=[\n 'by',\n 'training_started_at',\n 'use_language_model_featurizer',\n 'use_competing_intents',\n ])\n\n def save_training(self, bot_data):\n if self.trained_at:\n raise RepositoryUpdateAlreadyTrained()\n\n self.trained_at = timezone.now()\n self.bot_data = base64.b64encode(bot_data).decode('utf8')\n self.save(\n update_fields=[\n 'trained_at',\n 'bot_data',\n ])\n\n def get_bot_data(self):\n return base64.b64decode(self.bot_data)\n\n def train_fail(self):\n self.failed_at = timezone.now()\n self.save(\n update_fields=[\n 'failed_at',\n ])\n\n\nclass RepositoryExample(models.Model):\n class Meta:\n verbose_name = _('repository example')\n verbose_name_plural = _('repository examples')\n ordering = ['-created_at']\n\n repository_update = models.ForeignKey(\n RepositoryUpdate,\n models.CASCADE,\n related_name='added',\n editable=False)\n deleted_in = models.ForeignKey(\n RepositoryUpdate,\n models.CASCADE,\n related_name='deleted',\n blank=True,\n null=True)\n text = models.TextField(\n _('text'),\n help_text=_('Example text'))\n intent = models.CharField(\n _('intent'),\n max_length=64,\n default='no_intent',\n help_text=_('Example intent reference'),\n validators=[validate_item_key])\n created_at = models.DateTimeField(\n _('created at'),\n auto_now_add=True)\n\n @property\n def language(self):\n return self.repository_update.language\n\n def has_valid_entities(self, language=None):\n if not language or language == self.repository_update.language:\n return True\n return 
self.get_translation(language).has_valid_entities\n\n def get_translation(self, language):\n try:\n return self.translations.get(language=language)\n except RepositoryTranslatedExample.DoesNotExist:\n raise DoesNotHaveTranslation()\n\n def get_text(self, language=None):\n if not language or language == self.repository_update.language:\n return self.text\n return self.get_translation(language).text\n\n def get_entities(self, language):\n if not language or language == self.repository_update.language:\n return self.entities.all()\n return self.get_translation(language).entities.all()\n\n def delete(self):\n self.deleted_in = self.repository_update.repository.current_update(\n self.repository_update.language)\n self.save(update_fields=['deleted_in'])\n\n\nclass RepositoryTranslatedExampleManager(models.Manager):\n def create(self, *args, original_example=None, language=None, **kwargs):\n repository = original_example.repository_update.repository\n return super().create(\n *args,\n repository_update=repository.current_update(language),\n original_example=original_example,\n language=language,\n **kwargs)\n\n\nclass RepositoryTranslatedExample(models.Model):\n class Meta:\n verbose_name = _('repository translated example')\n verbose_name_plural = _('repository translated examples')\n unique_together = ['original_example', 'language']\n ordering = ['-created_at']\n\n repository_update = models.ForeignKey(\n RepositoryUpdate,\n models.CASCADE,\n related_name='translated_added',\n editable=False)\n original_example = models.ForeignKey(\n RepositoryExample,\n models.CASCADE,\n related_name='translations',\n editable=False,\n help_text=_('Example object'))\n language = models.CharField(\n _('language'),\n max_length=5,\n help_text=_('Translation language'),\n validators=[\n languages.validate_language,\n ])\n text = models.TextField(\n _('text'),\n help_text=_('Translation text'))\n created_at = models.DateTimeField(\n _('created at'),\n auto_now_add=True)\n\n objects = RepositoryTranslatedExampleManager()\n\n def entities_list_lambda_sort(item):\n return item.get('entity')\n\n @classmethod\n def same_entities_validator(cls, a, b):\n a_len = len(a)\n if a_len != len(b):\n return False\n a_sorted = sorted(\n a,\n key=cls.entities_list_lambda_sort)\n b_sorted = sorted(\n b,\n key=cls.entities_list_lambda_sort)\n for i in range(a_len):\n if a_sorted[i].get('entity') != b_sorted[i].get('entity'):\n return False\n return True\n\n @classmethod\n def count_entities(cls, entities_list, to_str=False):\n r = {}\n for e in entities_list:\n r.update({e.get('entity'): r.get('entity', 0) + 1})\n if to_str:\n r = ', '.join(map(\n lambda x: '{} {}'.format(x[1], x[0]),\n r.items())) if entities_list else 'no entities'\n return r\n\n @property\n def has_valid_entities(self):\n original_entities = self.original_example.entities.all()\n my_entities = self.entities.all()\n return RepositoryTranslatedExample.same_entities_validator(\n list(map(lambda x: x.to_dict, original_entities)),\n list(map(lambda x: x.to_dict, my_entities)))\n\n\nclass RepositoryEntityLabelQueryset(models.QuerySet):\n def get(self, repository, value):\n try:\n return super().get(\n repository=repository,\n value=value)\n except self.model.DoesNotExist as e:\n return super().create(\n repository=repository,\n value=value)\n\n\nclass RepositoryEntityLabelManager(models.Manager):\n def get_queryset(self):\n return RepositoryEntityLabelQueryset(self.model, using=self._db)\n\n\nclass RepositoryEntityLabel(models.Model):\n class Meta:\n unique_together = 
['repository', 'value']\n\n repository = models.ForeignKey(\n Repository,\n on_delete=models.CASCADE,\n related_name='labels')\n value = models.CharField(\n _('label'),\n max_length=64,\n validators=[\n validate_item_key,\n can_t_be_other,\n ],\n blank=True)\n created_at = models.DateTimeField(\n _('created at'),\n auto_now_add=True)\n\n objects = RepositoryEntityLabelManager()\n\n def examples(self, exclude_deleted=True):\n return self.repository.examples(\n exclude_deleted=exclude_deleted).filter(\n entities__entity__label=self)\n\n\nclass RepositoryEntityQueryset(models.QuerySet):\n def get(self, repository, value):\n try:\n return super().get(\n repository=repository,\n value=value)\n except self.model.DoesNotExist as e:\n return super().create(\n repository=repository,\n value=value)\n\n\nclass RepositoryEntityManager(models.Manager):\n def get_queryset(self):\n return RepositoryEntityQueryset(self.model, using=self._db)\n\n\nclass RepositoryEntity(models.Model):\n class Meta:\n unique_together = ['repository', 'value']\n\n repository = models.ForeignKey(\n Repository,\n on_delete=models.CASCADE,\n related_name='entities')\n value = models.CharField(\n _('entity'),\n max_length=64,\n help_text=_('Entity name'),\n validators=[validate_item_key])\n label = models.ForeignKey(\n RepositoryEntityLabel,\n on_delete=models.CASCADE,\n related_name='entities',\n null=True,\n blank=True)\n created_at = models.DateTimeField(\n _('created at'),\n auto_now_add=True)\n\n objects = RepositoryEntityManager()\n\n def set_label(self, value):\n if not value:\n self.label = None\n else:\n self.label = RepositoryEntityLabel.objects.get(\n repository=self.repository,\n value=value)\n\n\nclass EntityBaseQueryset(models.QuerySet):\n def create(self, entity, **kwargs):\n if type(entity) is not RepositoryEntity:\n instance = self.model(**kwargs)\n repository = instance.example.repository_update.repository\n entity = RepositoryEntity.objects.get(\n repository=repository,\n value=entity)\n return super().create(\n entity=entity,\n **kwargs)\n\n\nclass EntityBaseManager(models.Manager):\n def get_queryset(self):\n return EntityBaseQueryset(self.model, using=self._db)\n\n\nclass EntityBase(models.Model):\n class Meta:\n verbose_name = _('repository example entity')\n verbose_name_plural = _('repository example entities')\n abstract = True\n\n start = models.PositiveIntegerField(\n _('start'),\n help_text=_('Start index of entity value in example text'))\n end = models.PositiveIntegerField(\n _('end'),\n help_text=_('End index of entity value in example text'))\n entity = models.ForeignKey(\n RepositoryEntity,\n on_delete=models.CASCADE)\n created_at = models.DateTimeField(\n _('created at'),\n auto_now_add=True)\n\n objects = EntityBaseManager()\n\n @property\n def example(self):\n return self.get_example()\n\n @property\n def value(self):\n return self.example.text[self.start:self.end]\n\n @property\n def rasa_nlu_data(self):\n return {\n 'start': self.start,\n 'end': self.end,\n 'value': self.value,\n 'entity': self.entity.value,\n }\n\n @property\n def to_dict(self):\n return self.get_rasa_nlu_data()\n\n def get_example(self):\n pass # pragma: no cover\n\n def get_rasa_nlu_data(self, label_as_entity=False):\n return {\n 'start': self.start,\n 'end': self.end,\n 'entity': self.entity.label.value\n if label_as_entity else self.entity.value,\n }\n\n\nclass RepositoryExampleEntity(EntityBase):\n repository_example = models.ForeignKey(\n RepositoryExample,\n models.CASCADE,\n related_name='entities',\n 
editable=False,\n help_text=_('Example object'))\n\n def get_example(self):\n return self.repository_example\n\n\nclass RepositoryTranslatedExampleEntity(EntityBase):\n repository_translated_example = models.ForeignKey(\n RepositoryTranslatedExample,\n models.CASCADE,\n related_name='entities',\n editable=False,\n help_text=_('Translated example object'))\n\n def get_example(self):\n return self.repository_translated_example\n\n\nclass RepositoryAuthorization(models.Model):\n class Meta:\n verbose_name = _('repository authorization')\n verbose_name_plural = _('repository authorizations')\n unique_together = ['user', 'repository']\n\n LEVEL_NOTHING = 0\n LEVEL_READER = 1\n LEVEL_CONTRIBUTOR = 2\n LEVEL_ADMIN = 3\n\n ROLE_NOT_SETTED = 0\n ROLE_USER = 1\n ROLE_CONTRIBUTOR = 2\n ROLE_ADMIN = 3\n\n ROLE_CHOICES = [\n (ROLE_NOT_SETTED, _('not set')),\n (ROLE_USER, _('user')),\n (ROLE_CONTRIBUTOR, _('contributor')),\n (ROLE_ADMIN, _('admin')),\n ]\n\n uuid = models.UUIDField(\n _('UUID'),\n primary_key=True,\n default=uuid.uuid4,\n editable=False)\n user = models.ForeignKey(\n User,\n models.CASCADE)\n repository = models.ForeignKey(\n Repository,\n models.CASCADE,\n related_name='authorizations')\n role = models.PositiveIntegerField(\n _('role'),\n choices=ROLE_CHOICES,\n default=ROLE_NOT_SETTED)\n created_at = models.DateTimeField(\n _('created at'),\n auto_now_add=True)\n\n @property\n def level(self):\n try:\n user = self.user\n except User.DoesNotExist:\n user = None\n\n if user and self.repository.owner == user:\n return RepositoryAuthorization.LEVEL_ADMIN\n\n if self.role == RepositoryAuthorization.ROLE_NOT_SETTED:\n if self.repository.is_private:\n return RepositoryAuthorization.LEVEL_NOTHING\n return RepositoryAuthorization.LEVEL_READER\n\n if self.role == RepositoryAuthorization.ROLE_USER:\n return RepositoryAuthorization.LEVEL_READER\n\n if self.role == RepositoryAuthorization.ROLE_CONTRIBUTOR:\n return RepositoryAuthorization.LEVEL_CONTRIBUTOR\n\n if self.role == RepositoryAuthorization.ROLE_ADMIN:\n return RepositoryAuthorization.LEVEL_ADMIN\n\n return RepositoryAuthorization.LEVEL_NOTHING\n\n @property\n def can_read(self):\n return self.level in [\n RepositoryAuthorization.LEVEL_READER,\n RepositoryAuthorization.LEVEL_CONTRIBUTOR,\n RepositoryAuthorization.LEVEL_ADMIN,\n ]\n\n @property\n def can_contribute(self):\n return self.level in [\n RepositoryAuthorization.LEVEL_CONTRIBUTOR,\n RepositoryAuthorization.LEVEL_ADMIN,\n ]\n\n @property\n def can_write(self):\n return self.level in [\n RepositoryAuthorization.LEVEL_ADMIN,\n ]\n\n @property\n def is_admin(self):\n return self.level == RepositoryAuthorization.LEVEL_ADMIN\n\n @property\n def is_owner(self):\n try:\n user = self.user\n except User.DoesNotExist:\n return False\n return self.repository.owner == user\n\n @property\n def role_verbose(self):\n return dict(RepositoryAuthorization.ROLE_CHOICES).get(self.role)\n\n def send_new_role_email(self, responsible=None):\n if not settings.SEND_EMAILS:\n return False\n responsible_name = responsible and responsible.name \\\n or self.repository.owner.name\n context = {\n 'responsible_name': responsible_name,\n 'user_name': self.user.name,\n 'repository_name': self.repository.name,\n 'repository_url': self.repository.get_absolute_url(),\n 'new_role': self.role_verbose,\n }\n send_mail(\n _('New role in {}').format(self.repository.name),\n render_to_string(\n 'common/emails/new_role.txt',\n context),\n None,\n [self.user.email],\n html_message=render_to_string(\n 
'common/emails/new_role.html',\n context))\n\n\nclass RepositoryVote(models.Model):\n UP_VOTE = 1\n DOWN_VOTE = -1\n NEUTRAL_VOTE = 0\n VOTE_CHOICES = [\n (UP_VOTE, _('Up'),),\n (DOWN_VOTE, _('Down')),\n (NEUTRAL_VOTE, _('Neutral')),\n ]\n\n class Meta:\n verbose_name = _('repository vote')\n verbose_name_plural = _('repository votes')\n unique_together = [\n 'user',\n 'repository',\n ]\n\n user = models.ForeignKey(\n User,\n models.CASCADE,\n related_name='repository_votes')\n repository = models.ForeignKey(\n Repository,\n models.CASCADE,\n related_name='votes')\n vote = models.IntegerField(\n _('vote'),\n choices=VOTE_CHOICES)\n\n\nclass RequestRepositoryAuthorization(models.Model):\n class Meta:\n unique_together = ['user', 'repository']\n\n user = models.ForeignKey(\n User,\n models.CASCADE,\n related_name='requests')\n repository = models.ForeignKey(\n Repository,\n models.CASCADE,\n related_name='requests')\n text = models.CharField(\n _('text'),\n max_length=250)\n approved_by = models.ForeignKey(\n User,\n models.CASCADE,\n blank=True,\n null=True)\n created_at = models.DateTimeField(\n _('created at'),\n auto_now_add=True,\n editable=False)\n\n def send_new_request_email_to_admins(self):\n if not settings.SEND_EMAILS:\n return False\n context = {\n 'user_name': self.user.name,\n 'repository_name': self.repository.name,\n 'text': self.text,\n 'repository_url': self.repository.get_absolute_url(),\n }\n for admin in self.repository.admins:\n send_mail(\n _('New authorization request in {}').format(\n self.repository.name),\n render_to_string(\n 'common/emails/new_request.txt',\n context),\n None,\n [admin.email],\n html_message=render_to_string(\n 'common/emails/new_request.html',\n context))\n\n def send_request_rejected_email(self):\n if not settings.SEND_EMAILS:\n return False\n context = {\n 'repository_name': self.repository.name,\n }\n send_mail(\n _('Access denied to {}').format(\n self.repository.name),\n render_to_string(\n 'common/emails/request_rejected.txt',\n context),\n None,\n [self.user.email],\n html_message=render_to_string(\n 'common/emails/request_rejected.html',\n context))\n\n def send_request_approved_email(self):\n if not settings.SEND_EMAILS:\n return False\n context = {\n 'admin_name': self.approved_by.name,\n 'repository_name': self.repository.name,\n }\n send_mail(\n _('Authorization Request Approved to {}').format(\n self.repository.name),\n render_to_string(\n 'common/emails/request_approved.txt',\n context),\n None,\n [self.user.email],\n html_message=render_to_string(\n 'common/emails/request_approved.html',\n context))\n\n\n@receiver(models.signals.pre_save, sender=RequestRepositoryAuthorization)\ndef set_user_role_on_approved(instance, **kwargs):\n current = None\n try:\n current = RequestRepositoryAuthorization.objects.get(pk=instance.pk)\n except RequestRepositoryAuthorization.DoesNotExist as e:\n pass\n\n if not current:\n return False\n\n if current.approved_by is None and \\\n current.approved_by is not instance.approved_by:\n user_authorization = instance.repository.get_user_authorization(\n instance.user)\n user_authorization.role = RepositoryAuthorization.ROLE_USER\n user_authorization.save(update_fields=['role'])\n instance.send_request_approved_email()\n else:\n raise ValidationError(\n _('You can change approved_by just one time.'))\n\n\n@receiver(models.signals.post_save, sender=RequestRepositoryAuthorization)\ndef send_new_request_email_to_admins_on_created(instance, created, **kwargs):\n if created:\n 
instance.send_new_request_email_to_admins()\n\n\n@receiver(models.signals.post_delete, sender=RequestRepositoryAuthorization)\ndef send_request_rejected_email(instance, **kwargs):\n instance.send_request_rejected_email()\n", "path": "bothub/common/models.py" } ]
diff --git a/bothub/common/models.py b/bothub/common/models.py index 74711b74..44872001 100644 --- a/bothub/common/models.py +++ b/bothub/common/models.py @@ -460,6 +460,9 @@ def ready_for_train(self): if self.training_started_at: return False + if len(self.requirements_to_train) > 0: + return False + previous_update = self.repository.updates.filter( language=self.language, by__isnull=False, diff --git a/bothub/common/tests.py b/bothub/common/tests.py index 441e2aa1..d4d05f31 100644 --- a/bothub/common/tests.py +++ b/bothub/common/tests.py @@ -767,7 +767,8 @@ def setUp(self): owner=self.owner, name='Test', slug='test', - language=languages.LANGUAGE_EN) + language=languages.LANGUAGE_EN, + use_language_model_featurizer=False) def test_be_true(self): RepositoryExample.objects.create( @@ -871,6 +872,19 @@ def test_entity_dont_have_min_examples(self): entity='hi') self.assertTrue(self.repository.current_update().ready_for_train) + def test_settings_change_exists_requirements(self): + self.repository.current_update().start_training(self.owner) + self.repository.use_language_model_featurizer = True + self.repository.save() + RepositoryExample.objects.create( + repository_update=self.repository.current_update(), + text='hello', + intent='greet') + self.assertEqual( + len(self.repository.current_update().requirements_to_train), + 1) + self.assertFalse(self.repository.current_update().ready_for_train) + def test_no_examples(self): example = RepositoryExample.objects.create( repository_update=self.repository.current_update(),
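The diff above adds an early-exit guard so that `ready_for_train` reports `False` whenever `requirements_to_train` is non-empty, instead of only checking the requirements at the end. A minimal, self-contained sketch of that guard pattern (a hypothetical, simplified stand-in class, not the actual Django model) could look like this:

```py
class UpdateSketch:
    """Hypothetical, simplified stand-in for RepositoryUpdate to illustrate the guard."""

    def __init__(self, training_started_at=None, intents=None, min_examples_per_intent=2):
        self.training_started_at = training_started_at
        # map of intent name -> example count
        self.intents = intents or {}
        self.min_examples_per_intent = min_examples_per_intent

    @property
    def requirements_to_train(self):
        # Collect human-readable blockers, mirroring the model's property.
        issues = []
        for intent, count in self.intents.items():
            if count < self.min_examples_per_intent:
                issues.append('Intent "{}" has only {} examples. Minimum is {}.'.format(
                    intent, count, self.min_examples_per_intent))
        return issues

    @property
    def ready_for_train(self):
        if self.training_started_at:
            return False
        # The guard introduced by the diff: unmet requirements block training.
        if len(self.requirements_to_train) > 0:
            return False
        return len(self.intents) > 0


update = UpdateSketch(intents={'greet': 1})
assert update.ready_for_train is False  # blocked by the weak "greet" intent
```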
hydroshare__hydroshare-2690
Hyperlink DOIs against preferred resolver Hello :-) The DOI foundation recommends [this new, secure resolver](https://www.doi.org/doi_handbook/3_Resolution.html#3.8). Would a PR that updates all static links, the code that generates new ones, and the test cases be welcome? Cheers!
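For context on the request above: the DOI foundation's preferred resolver is the secure `https://doi.org/` form rather than the legacy `http://dx.doi.org/`. A hedged sketch of how link generation might be normalized to the preferred resolver (the helper name, regex, and example DOI are illustrative assumptions, not taken from the HydroShare source) could be:

```py
import re

# Illustrative helper; the function name and example DOI are assumptions,
# not actual HydroShare code.
DOI_URL_RE = re.compile(r'https?://(dx\.)?doi\.org/')


def preferred_doi_url(doi_or_url):
    """Return a DOI link using the preferred https://doi.org/ resolver."""
    if DOI_URL_RE.match(doi_or_url):
        # Rewrite legacy http://dx.doi.org/ (or plain http://doi.org/) links.
        return DOI_URL_RE.sub('https://doi.org/', doi_or_url)
    # Treat bare identifiers such as "10.1000/example" (placeholder) as DOIs.
    return 'https://doi.org/' + doi_or_url


assert preferred_doi_url('http://dx.doi.org/10.1000/example') == 'https://doi.org/10.1000/example'
assert preferred_doi_url('10.1000/example') == 'https://doi.org/10.1000/example'
```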
[ { "content": "import os\nimport zipfile\nimport shutil\nimport logging\nimport requests\n\nfrom django.conf import settings\nfrom django.core.exceptions import ObjectDoesNotExist\nfrom django.core.files import File\nfrom django.core.files.uploadedfile import UploadedFile\nfrom django.core.exceptions import ValidationError, PermissionDenied\nfrom django.db import transaction\n\nfrom rest_framework import status\n\nfrom hs_core.hydroshare import hs_bagit\nfrom hs_core.models import ResourceFile\nfrom hs_core import signals\nfrom hs_core.hydroshare import utils\nfrom hs_access_control.models import ResourceAccess, UserResourcePrivilege, PrivilegeCodes\nfrom hs_labels.models import ResourceLabels\n\n\nFILE_SIZE_LIMIT = 1*(1024 ** 3)\nFILE_SIZE_LIMIT_FOR_DISPLAY = '1G'\nMETADATA_STATUS_SUFFICIENT = 'Sufficient to publish or make public'\nMETADATA_STATUS_INSUFFICIENT = 'Insufficient to publish or make public'\n\nlogger = logging.getLogger(__name__)\n\n\ndef update_quota_usage(res):\n from hs_core.tasks import update_quota_usage_task\n quser = res.get_quota_holder()\n if quser is None:\n # no quota holder for this resource, this should not happen, but check just in case\n logger.error('no quota holder is found for resource' + res.short_id)\n return\n # update quota usage by a celery task in 1 minute to give iRODS quota usage computation\n # services enough time to finish before reflecting the quota usage in django DB\n update_quota_usage_task.apply_async((quser.username,), countdown=60)\n\n\ndef get_resource(pk):\n \"\"\"\n Retrieve an instance of type Bags associated with the resource identified by **pk**\n\n Parameters: pk - Unique HydroShare identifier for the resource to be retrieved.\n\n Returns: An instance of type Bags.\n\n Raises:\n Exceptions.NotFound - The resource identified by pid does not exist\n \"\"\"\n\n return utils.get_resource_by_shortkey(pk).baseresource.bags.first()\n\n\ndef get_science_metadata(pk):\n \"\"\"\n Describes the resource identified by the pid by returning the associated science metadata\n object (xml+rdf string). If the resource does not exist, Exceptions.NotFound must be raised.\n\n REST URL: GET /scimeta/{pid}\n\n Parameters: pk - Unique HydroShare identifier for the resource whose science metadata is to\n be retrieved.\n\n Returns: Science metadata document describing the resource.\n\n Return Type: xml+rdf string\n\n Raises: Exceptions.NotAuthorized - The user is not authorized\n Exceptions.NotFound - The resource identified by pid does not exist\n Exception.ServiceFailure - The service is unable to process the request\n \"\"\"\n res = utils.get_resource_by_shortkey(pk)\n return res.metadata.get_xml()\n\n\ndef get_capabilities(pk):\n \"\"\"\n Describes API services exposed for a resource. 
If there are extra capabilites for a particular\n resource type over and above the standard Hydroshare API, then this API call will list these\n\n REST URL: GET /capabilites/{pid}\n\n Parameters: Unique HydroShare identifier for the resource whose capabilites are to be retrieved.\n\n Return Type: Capabilites\n\n Raises:\n Exceptions.NotAuthorized - The user is not authorized\n Exceptions.NotFound - The resource identified by pid does not exist\n Exception.ServiceFailure - The service is unable to process the request\n \"\"\"\n res = utils.get_resource_by_shortkey(pk)\n return getattr(res, 'extra_capabilities', lambda: None)()\n\n\ndef get_resource_file(pk, filename):\n \"\"\"\n Called by clients to get an individual file within a HydroShare resource.\n\n REST URL: GET /resource/{pid}/files/{filename}\n\n Parameters:\n pid - Unique HydroShare identifier for the resource from which the file will be extracted.\n filename - The data bytes of the file that will be extracted from the resource identified by pid\n\n Returns: The bytes of the file extracted from the resource\n\n Return Type: pid\n\n Raises:\n Exceptions.NotAuthorized - The user is not authorized\n Exceptions.NotFound - The resource identified does not exist or the file identified by filename\n does not exist\n Exception.ServiceFailure - The service is unable to process the request\n \"\"\"\n resource = utils.get_resource_by_shortkey(pk)\n for f in ResourceFile.objects.filter(object_id=resource.id):\n if os.path.basename(f.resource_file.name) == filename:\n return f.resource_file\n raise ObjectDoesNotExist(filename)\n\n\ndef update_resource_file(pk, filename, f):\n \"\"\"\n Called by clients to update an individual file within a HydroShare resource.\n\n REST URL: PUT /resource/{pid}/files/{filename}\n\n Parameters:\n pid - Unique HydroShare identifier for the resource from which the file will be extracted.\n filename - The data bytes of the file that will be extracted from the resource identified by pid\n file - the data bytes of the file to update\n\n Returns: The bytes of the file extracted from the resource\n\n Return Type: pid\n\n Raises:\n Exceptions.NotAuthorized - The user is not authorized\n Exceptions.NotFound - The resource identified does not exist or the file identified by filename\n does not exist\n Exception.ServiceFailure - The service is unable to process the request\n \"\"\"\n # TODO: does not update metadata; does not check resource state\n resource = utils.get_resource_by_shortkey(pk)\n for rf in ResourceFile.objects.filter(object_id=resource.id):\n if rf.short_path == filename:\n if rf.resource_file:\n # TODO: should use delete_resource_file\n rf.resource_file.delete()\n # TODO: should use add_file_to_resource\n rf.resource_file = File(f) if not isinstance(f, UploadedFile) else f\n rf.save()\n if rf.fed_resource_file:\n # TODO: should use delete_resource_file\n rf.fed_resource_file.delete()\n # TODO: should use add_file_to_resource\n rf.fed_resource_file = File(f) if not isinstance(f, UploadedFile) else f\n rf.save()\n return rf\n raise ObjectDoesNotExist(filename)\n\n\ndef get_related(pk):\n \"\"\"\n Returns a list of pids for resources that are related to the resource identified by the\n specified pid.\n\n REST URL: GET /related/{pid}\n\n Parameters:\n pid - Unique HydroShare identifier for the resource whose related resources are to be retrieved.\n\n Returns: List of pids for resources that are related to the specified resource.\n\n Return Type: List of pids\n\n Raises:\n Exceptions.NotAuthorized - The user is 
not authorized\n Exceptions.NotFound - The resource identified by pid does not exist\n Exception.ServiceFailure - The service is unable to process the request\n\n\n \"\"\"\n raise NotImplemented()\n\n\ndef get_checksum(pk):\n \"\"\"\n Returns a checksum for the specified resource using the MD5 algorithm. The result is used to\n determine if two instances referenced by a pid are identical.\n\n REST URL: GET /checksum/{pid}\n\n Parameters:\n pid - Unique HydroShare identifier for the resource for which the checksum is to be returned.\n\n Returns: Checksum of the resource identified by pid.\n\n Return Type: Checksum\n\n Raises:\n Exceptions.NotAuthorized - The user is not authorized\n Exceptions.NotFound - The resource specified by pid does not exist\n Exception.ServiceFailure - The service is unable to process the request\n \"\"\"\n raise NotImplementedError()\n\n\ndef check_resource_files(files=()):\n \"\"\"\n internally used method to check whether the uploaded files are within\n the supported maximal size limit. Also returns sum size of all files for\n quota check purpose if all files are within allowed size limit\n\n Parameters:\n files - list of Django File or UploadedFile objects to be attached to the resource\n Returns: (status, sum_size) tuple where status is True if files are within FILE_SIZE_LIMIT\n and False if not, and sum_size is the size summation over all files if status is\n True, and -1 if status is False\n \"\"\"\n sum = 0\n for file in files:\n if not isinstance(file, UploadedFile):\n # if file is already on the server, e.g., a file transferred directly from iRODS,\n # the file should not be subject to file size check since the file size check is\n # only prompted by file upload limit\n if hasattr(file, '_size'):\n sum += int(file._size)\n elif hasattr(file, 'size'):\n sum += int(file.size)\n else:\n try:\n size = os.stat(file).st_size\n except (TypeError, OSError):\n size = 0\n sum += size\n continue\n if hasattr(file, '_size') and file._size is not None:\n size = int(file._size)\n elif hasattr(file, 'size') and file.size is not None:\n size = int(file.size)\n else:\n try:\n size = int(os.stat(file.name).st_size)\n except (TypeError, OSError):\n size = 0\n sum += size\n if size > FILE_SIZE_LIMIT:\n # file is greater than FILE_SIZE_LIMIT, which is not allowed\n return False, -1\n\n return True, sum\n\n\ndef check_resource_type(resource_type):\n \"\"\"\n internally used method to check the resource type\n\n Parameters:\n resource_type: the resource type string to check\n Returns: the resource type class matching the resource type string; if no match is found,\n returns None\n \"\"\"\n for tp in utils.get_resource_types():\n if resource_type == tp.__name__:\n res_cls = tp\n break\n else:\n raise NotImplementedError(\"Type {resource_type} does not exist\".format(\n resource_type=resource_type))\n return res_cls\n\n\ndef add_zip_file_contents_to_resource_async(resource, f):\n \"\"\"\n Launch asynchronous celery task to add zip file contents to a resource.\n Note: will copy the zip file into a temporary space accessible to both\n the Django server and the Celery worker.\n :param resource: Resource to which file should be added\n :param f: TemporaryUploadedFile object (or object that implements temporary_file_path())\n representing a zip file whose contents are to be added to a resource.\n \"\"\"\n # Add contents of zipfile asynchronously; wait 30 seconds to be \"sure\" that resource creation\n # has finished.\n uploaded_filepath = f.temporary_file_path()\n tmp_dir = 
getattr(settings, 'HYDROSHARE_SHARED_TEMP', '/shared_tmp')\n logger.debug(\"Copying uploaded file from {0} to {1}\".format(uploaded_filepath,\n tmp_dir))\n shutil.copy(uploaded_filepath, tmp_dir)\n zfile_name = os.path.join(tmp_dir, os.path.basename(uploaded_filepath))\n logger.debug(\"Retained upload as {0}\".format(zfile_name))\n # Import here to avoid circular reference\n from hs_core.tasks import add_zip_file_contents_to_resource\n add_zip_file_contents_to_resource.apply_async((resource.short_id, zfile_name),\n countdown=30)\n resource.file_unpack_status = 'Pending'\n resource.save()\n\n\ndef create_resource(\n resource_type, owner, title,\n edit_users=None, view_users=None, edit_groups=None, view_groups=None,\n keywords=(), metadata=None, extra_metadata=None,\n files=(), source_names=[], fed_res_path='', move=False,\n create_metadata=True,\n create_bag=True, unpack_file=False, **kwargs):\n \"\"\"\n Called by a client to add a new resource to HydroShare. The caller must have authorization to\n write content to HydroShare. The pid for the resource is assigned by HydroShare upon inserting\n the resource. The create method returns the newly-assigned pid.\n\n REST URL: POST /resource\n\n Parameters:\n\n Returns: The newly created resource\n\n Return Type: BaseResource resource object\n\n Note: The calling user will automatically be set as the owner of the created resource.\n\n Implementation notes:\n\n 1. pid is called short_id. This is because pid is a UNIX term for Process ID and could be\n confusing.\n\n 2. return type is an instance of hs_core.models.BaseResource class. This is for efficiency in\n the native API. The native API should return actual instance rather than IDs wherever possible\n to avoid repeated lookups in the database when they are unnecessary.\n\n 3. resource_type is a string: see parameter list\n\n :param resource_type: string. the type of the resource such as GenericResource\n :param owner: email address, username, or User instance. The owner of the resource\n :param title: string. the title of the resource\n :param edit_users: list of email addresses, usernames, or User instances who will be given edit\n permissions\n :param view_users: list of email addresses, usernames, or User instances who will be given view\n permissions\n :param edit_groups: list of group names or Group instances who will be given edit permissions\n :param view_groups: list of group names or Group instances who will be given view permissions\n :param keywords: string list. list of keywords to add to the resource\n :param metadata: list of dicts containing keys (element names) and corresponding values as\n dicts { 'creator': {'name':'John Smith'}}.\n :param extra_metadata: one dict containing keys and corresponding values\n { 'Outlet Point Latitude': '40', 'Outlet Point Longitude': '-110'}.\n :param files: list of Django File or UploadedFile objects to be attached to the resource\n :param source_names: a list of file names from a federated zone to be\n used to create the resource in the federated zone, default is empty list\n :param fed_res_path: the federated zone path in the format of\n /federation_zone/home/localHydroProxy that indicate where the resource\n is stored, default is empty string\n :param move: a value of False or True indicating whether the content files\n should be erased from the source directory. default is False.\n :param create_bag: whether to create a bag for the newly created resource or not.\n By default, the bag is created.\n :param unpack_file: boolean. 
If files contains a single zip file, and unpack_file is True,\n the unpacked contents of the zip file will be added to the resource instead of the zip file.\n :param kwargs: extra arguments to fill in required values in AbstractResource subclasses\n\n :return: a new resource which is an instance of BaseResource with specificed resource_type.\n \"\"\"\n if __debug__:\n assert(isinstance(source_names, list))\n\n with transaction.atomic():\n cls = check_resource_type(resource_type)\n owner = utils.user_from_id(owner)\n\n # create the resource\n resource = cls.objects.create(\n resource_type=resource_type,\n user=owner,\n creator=owner,\n title=title,\n last_changed_by=owner,\n in_menus=[],\n **kwargs\n )\n\n resource.resource_type = resource_type\n\n # by default make resource private\n resource.set_slug('resource{0}{1}'.format('/', resource.short_id))\n resource.save()\n\n if not metadata:\n metadata = []\n\n if extra_metadata is not None:\n resource.extra_metadata = extra_metadata\n resource.save()\n\n if fed_res_path:\n resource.resource_federation_path = fed_res_path\n resource.save()\n\n # TODO: It would be safer to require an explicit zone path rather than harvesting file path\n elif len(source_names) > 0:\n resource.resource_federation_path = utils.get_federated_zone_home_path(source_names[0])\n resource.save()\n\n # by default resource is private\n resource_access = ResourceAccess(resource=resource)\n resource_access.save()\n # use the built-in share routine to set initial provenance.\n UserResourcePrivilege.share(resource=resource, grantor=owner, user=owner,\n privilege=PrivilegeCodes.OWNER)\n\n resource_labels = ResourceLabels(resource=resource)\n resource_labels.save()\n\n if edit_users:\n for user in edit_users:\n user = utils.user_from_id(user)\n owner.uaccess.share_resource_with_user(resource, user, PrivilegeCodes.CHANGE)\n\n if view_users:\n for user in view_users:\n user = utils.user_from_id(user)\n owner.uaccess.share_resource_with_user(resource, user, PrivilegeCodes.VIEW)\n\n if edit_groups:\n for group in edit_groups:\n group = utils.group_from_id(group)\n owner.uaccess.share_resource_with_group(resource, group, PrivilegeCodes.CHANGE)\n\n if view_groups:\n for group in view_groups:\n group = utils.group_from_id(group)\n owner.uaccess.share_resource_with_group(resource, group, PrivilegeCodes.VIEW)\n\n # set quota of this resource to this creator\n # quota holder has to be set before the files are added in order for real time iRODS\n # quota micro-services to work\n resource.set_quota_holder(owner, owner)\n\n if len(files) == 1 and unpack_file and zipfile.is_zipfile(files[0]):\n # Add contents of zipfile as resource files asynchronously\n # Note: this is done asynchronously as unzipping may take\n # a long time (~15 seconds to many minutes).\n add_zip_file_contents_to_resource_async(resource, files[0])\n else:\n # Add resource file(s) now\n # Note: this is done synchronously as it should only take a\n # few seconds. 
We may want to add the option to do this\n # asynchronously if the file size is large and would take\n # more than ~15 seconds to complete.\n add_resource_files(resource.short_id, *files, source_names=source_names,\n move=move)\n\n if create_metadata:\n # prepare default metadata\n utils.prepare_resource_default_metadata(resource=resource, metadata=metadata,\n res_title=title)\n\n for element in metadata:\n # here k is the name of the element\n # v is a dict of all element attributes/field names and field values\n k, v = element.items()[0]\n resource.metadata.create_element(k, **v)\n\n for keyword in keywords:\n resource.metadata.create_element('subject', value=keyword)\n\n resource.title = resource.metadata.title.value\n resource.save()\n\n if create_bag:\n hs_bagit.create_bag(resource)\n\n # set the resource to private\n resource.setAVU('isPublic', resource.raccess.public)\n\n # set the resource type (which is immutable)\n resource.setAVU(\"resourceType\", resource._meta.object_name)\n\n return resource\n\n\n# TODO: this is incredibly misnamed. It should not be used to create empty resources!\ndef create_empty_resource(pk, user, action='version'):\n \"\"\"\n Create a resource with empty content and empty metadata for resource versioning or copying.\n This empty resource object is then used to create metadata and content from its original\n resource. This separate routine is needed to return a new resource object to the calling\n view so that if an exception is raised, this empty resource object can be deleted for clean-up.\n Args:\n pk: the unique HydroShare identifier for the resource that is to be versioned or copied.\n user: the user who requests to create a new version for the resource or copy the resource.\n action: \"version\" or \"copy\" with default action being \"version\"\n Returns:\n the empty new resource that is created as an initial new version or copy for the original\n resource which is then further populated with metadata and content in a subsequent step.\n \"\"\"\n res = utils.get_resource_by_shortkey(pk)\n if action == 'version':\n if not user.uaccess.owns_resource(res):\n raise PermissionDenied('Only resource owners can create new versions')\n elif action == 'copy':\n # import here to avoid circular import\n from hs_core.views.utils import can_user_copy_resource\n if not user.uaccess.can_view_resource(res):\n raise PermissionDenied('You do not have permission to view this resource')\n allow_copy = can_user_copy_resource(res, user)\n if not allow_copy:\n raise PermissionDenied('The license for this resource does not permit copying')\n else:\n raise ValidationError('Input parameter error: action needs to be version or copy')\n\n # create the resource without files and without creating bags first\n new_resource = create_resource(\n resource_type=res.resource_type,\n owner=user,\n title=res.metadata.title.value,\n create_metadata=False,\n fed_res_path=res.resource_federation_path,\n create_bag=False\n )\n return new_resource\n\n\ndef copy_resource(ori_res, new_res):\n \"\"\"\n Populate metadata and contents from ori_res object to new_res object to make new_res object\n as a copy of the ori_res object\n Args:\n ori_res: the original resource that is to be copied.\n new_res: the new_res to be populated with metadata and content from the original resource\n as a copy of the original resource.\n Returns:\n the new resource copied from the original resource\n \"\"\"\n\n # add files directly via irods backend file operation\n utils.copy_resource_files_and_AVUs(ori_res.short_id, 
new_res.short_id)\n\n utils.copy_and_create_metadata(ori_res, new_res)\n\n hs_identifier = ori_res.metadata.identifiers.all().filter(name=\"hydroShareIdentifier\")[0]\n if hs_identifier:\n new_res.metadata.create_element('source', derived_from=hs_identifier.url)\n\n if ori_res.resource_type.lower() == \"collectionresource\":\n # clone contained_res list of original collection and add to new collection\n # note that new collection will not contain \"deleted resources\"\n new_res.resources = ori_res.resources.all()\n\n # create bag for the new resource\n hs_bagit.create_bag(new_res)\n\n return new_res\n\n\ndef create_new_version_resource(ori_res, new_res, user):\n \"\"\"\n Populate metadata and contents from ori_res object to new_res object to make new_res object as\n a new version of the ori_res object\n Args:\n ori_res: the original resource that is to be versioned.\n new_res: the new_res to be populated with metadata and content from the original resource\n to make it a new version\n user: the requesting user\n Returns:\n the new versioned resource for the original resource and thus obsolete the original resource\n\n \"\"\"\n # newly created new resource version is private initially\n # add files directly via irods backend file operation\n utils.copy_resource_files_and_AVUs(ori_res.short_id, new_res.short_id)\n\n # copy metadata from source resource to target new-versioned resource except three elements\n utils.copy_and_create_metadata(ori_res, new_res)\n\n # add or update Relation element to link source and target resources\n hs_identifier = new_res.metadata.identifiers.all().filter(name=\"hydroShareIdentifier\")[0]\n ori_res.metadata.create_element('relation', type='isReplacedBy', value=hs_identifier.url)\n\n if new_res.metadata.relations.all().filter(type='isVersionOf').exists():\n # the original resource is already a versioned resource, and its isVersionOf relation\n # element is copied over to this new version resource, needs to delete this element so\n # it can be created to link to its original resource correctly\n eid = new_res.metadata.relations.all().filter(type='isVersionOf').first().id\n new_res.metadata.delete_element('relation', eid)\n\n hs_identifier = ori_res.metadata.identifiers.all().filter(name=\"hydroShareIdentifier\")[0]\n new_res.metadata.create_element('relation', type='isVersionOf', value=hs_identifier.url)\n\n if ori_res.resource_type.lower() == \"collectionresource\":\n # clone contained_res list of original collection and add to new collection\n # note that new version collection will not contain \"deleted resources\"\n new_res.resources = ori_res.resources.all()\n\n # create bag for the new resource\n hs_bagit.create_bag(new_res)\n\n # since an isReplaceBy relation element is added to original resource, needs to call\n # resource_modified() for original resource\n utils.resource_modified(ori_res, user, overwrite_bag=False)\n # if everything goes well up to this point, set original resource to be immutable so that\n # obsoleted resources cannot be modified from REST API\n ori_res.raccess.immutable = True\n ori_res.raccess.save()\n return new_res\n\n\ndef add_resource_files(pk, *files, **kwargs):\n \"\"\"\n Called by clients to update a resource in HydroShare by adding one or more files.\n\n REST URL: PUT /resource/{pid}/files/{file}\n\n Parameters:\n pk - Unique HydroShare identifier for the resource that is to be updated.\n files - A list of file-like objects representing files that will be added\n to the existing resource identified by pid\n\n Returns: A list 
of ResourceFile objects added to the resource\n\n Return Type: list\n\n Raises:\n Exceptions.NotAuthorized - The user is not authorized\n Exceptions.InvalidContent - The content of the file is invalid\n Exception.ServiceFailure - The service is unable to process the request\n\n Notes:\n This does **not** handle mutability; changes to immutable resources should be denied elsewhere.\n\n \"\"\"\n resource = utils.get_resource_by_shortkey(pk)\n ret = []\n source_names = kwargs.pop('source_names', [])\n\n if __debug__:\n assert(isinstance(source_names, list))\n\n move = kwargs.pop('move', False)\n folder = kwargs.pop('folder', None)\n\n if __debug__: # assure that there are no spurious kwargs left.\n for k in kwargs:\n print(\"kwargs[{}]\".format(k))\n assert len(kwargs) == 0\n\n for f in files:\n ret.append(utils.add_file_to_resource(resource, f, folder=folder))\n\n if len(source_names) > 0:\n for ifname in source_names:\n ret.append(utils.add_file_to_resource(resource, None,\n folder=folder,\n source_name=ifname,\n move=move))\n if not ret:\n # no file has been added, make sure data/contents directory exists if no file is added\n utils.create_empty_contents_directory(resource)\n else:\n # some file(s) added, need to update quota usage\n update_quota_usage(resource)\n return ret\n\n\ndef update_science_metadata(pk, metadata, user):\n \"\"\"\n Updates science metadata for a resource\n\n Args:\n pk: Unique HydroShare identifier for the resource for which science metadata needs to be\n updated.\n metadata: a list of dictionary items containing data for each metadata element that needs to\n be updated\n user: user who is updating metadata\n example metadata format:\n [\n {'title': {'value': 'Updated Resource Title'}},\n {'description': {'abstract': 'Updated Resource Abstract'}},\n {'date': {'type': 'valid', 'start_date': '1/26/2016', 'end_date': '12/31/2016'}},\n {'creator': {'name': 'John Smith', 'email': '[email protected]'}},\n {'creator': {'name': 'Lisa Molley', 'email': '[email protected]'}},\n {'contributor': {'name': 'Kelvin Marshal', 'email': '[email protected]',\n 'organization': 'Utah State University',\n 'profile_links': [{'type': 'yahooProfile', 'url':\n 'http://yahoo.com/LH001'}]}},\n {'coverage': {'type': 'period', 'value': {'name': 'Name for period coverage',\n 'start': '1/1/2000',\n 'end': '12/12/2012'}}},\n {'coverage': {'type': 'point', 'value': {'name': 'Name for point coverage', 'east':\n '56.45678',\n 'north': '12.6789', 'units': 'decimal deg'}}},\n {'identifier': {'name': 'someIdentifier', 'url': \"http://some.org/001\"}},\n {'language': {'code': 'fre'}},\n {'relation': {'type': 'isPartOf', 'value': 'http://hydroshare.org/resource/001'}},\n {'rights': {'statement': 'This is the rights statement for this resource',\n 'url': 'http://rights.ord/001'}},\n {'source': {'derived_from': 'http://hydroshare.org/resource/0001'}},\n {'subject': {'value': 'sub-1'}},\n {'subject': {'value': 'sub-2'}},\n ]\n\n Returns:\n \"\"\"\n resource = utils.get_resource_by_shortkey(pk)\n resource.metadata.update(metadata, user)\n utils.resource_modified(resource, user, overwrite_bag=False)\n\n # set to private if metadata has become non-compliant\n resource.update_public_and_discoverable() # set to False if necessary\n\n\ndef delete_resource(pk):\n \"\"\"\n Deletes a resource managed by HydroShare. The caller must be an owner of the resource or an\n administrator to perform this function. The operation removes the resource from further\n interaction with HydroShare services and interfaces. 
The implementation may delete the resource\n bytes, and should do so since a delete operation may be in response to a problem with the\n resource (e.g., it contains malicious content, is inappropriate, or is subject to a legal\n request). If the resource does not exist, the Exceptions.NotFound exception is raised.\n\n REST URL: DELETE /resource/{pid}\n\n Parameters:\n pid - The unique HydroShare identifier of the resource to be deleted\n\n Returns:\n The pid of the resource that was deleted\n\n Return Type: pid\n\n Raises:\n Exceptions.NotAuthorized - The user is not authorized\n Exceptions.NotFound - The resource identified by pid does not exist\n Exception.ServiceFailure - The service is unable to process the request\n\n Note: Only HydroShare administrators will be able to delete formally published resour\n \"\"\"\n\n res = utils.get_resource_by_shortkey(pk)\n\n if res.metadata.relations.all().filter(type='isReplacedBy').exists():\n raise ValidationError('An obsoleted resource in the middle of the obsolescence chain '\n 'cannot be deleted.')\n\n # when the most recent version of a resource in an obsolescence chain is deleted, the previous\n # version in the chain needs to be set as the \"active\" version by deleting \"isReplacedBy\"\n # relation element\n if res.metadata.relations.all().filter(type='isVersionOf').exists():\n is_version_of_res_link = \\\n res.metadata.relations.all().filter(type='isVersionOf').first().value\n idx = is_version_of_res_link.rindex('/')\n if idx == -1:\n obsolete_res_id = is_version_of_res_link\n else:\n obsolete_res_id = is_version_of_res_link[idx+1:]\n obsolete_res = utils.get_resource_by_shortkey(obsolete_res_id)\n if obsolete_res.metadata.relations.all().filter(type='isReplacedBy').exists():\n eid = obsolete_res.metadata.relations.all().filter(type='isReplacedBy').first().id\n obsolete_res.metadata.delete_element('relation', eid)\n # also make this obsoleted resource editable now that it becomes the latest version\n obsolete_res.raccess.immutable = False\n obsolete_res.raccess.save()\n\n # need to update quota usage when a resource is deleted\n update_quota_usage(res)\n\n res.delete()\n return pk\n\n\ndef get_resource_file_name(f):\n \"\"\"\n get the file name of a specific ResourceFile object f\n Args:\n f: the ResourceFile object to return name for\n Returns:\n the file name of the ResourceFile object f\n \"\"\"\n return f.storage_path\n\n\ndef delete_resource_file_only(resource, f):\n \"\"\"\n Delete the single resource file f from the resource without sending signals and\n without deleting related metadata element. 
This function is called by delete_resource_file()\n function as well as from pre-delete signal handler for specific resource types\n (e.g., netCDF, raster, and feature) where when one resource file is deleted,\n some other resource files needs to be deleted as well.\n Args:\n resource: the resource from which the file f is to be deleted\n f: the ResourceFile object to be deleted\n Returns: unqualified relative path to file that has been deleted\n \"\"\"\n short_path = f.short_path\n f.delete()\n # need to update quota usage when a file is deleted\n update_quota_usage(resource)\n return short_path\n\n\ndef delete_format_metadata_after_delete_file(resource, file_name):\n \"\"\"\n delete format metadata as appropriate after a file is deleted.\n :param resource: BaseResource object representing a HydroShare resource\n :param file_name: the file name to be deleted\n :return:\n \"\"\"\n delete_file_mime_type = utils.get_file_mime_type(file_name)\n delete_file_extension = os.path.splitext(file_name)[1]\n\n # if there is no other resource file with the same extension as the\n # file just deleted then delete the matching format metadata element for the resource\n resource_file_extensions = [os.path.splitext(get_resource_file_name(f))[1] for f in\n resource.files.all()]\n if delete_file_extension not in resource_file_extensions:\n format_element = resource.metadata.formats.filter(value=delete_file_mime_type).first()\n if format_element:\n resource.metadata.delete_element(format_element.term, format_element.id)\n\n\n# TODO: test in-folder delete of short path\ndef filter_condition(filename_or_id, fl):\n \"\"\"\n Converted lambda definition of filter_condition into def to conform to pep8 E731 rule: do not\n assign a lambda expression, use a def\n :param filename_or_id: passed in filename_or id as the filter\n :param fl: the ResourceFile object to filter against\n :return: boolean indicating whether fl conforms to filename_or_id\n \"\"\"\n try:\n file_id = int(filename_or_id)\n return fl.id == file_id\n except ValueError:\n return fl.short_path == filename_or_id\n\n\n# TODO: Remove option for file id, not needed since names are unique.\n# TODO: Test that short_path deletes properly.\ndef delete_resource_file(pk, filename_or_id, user, delete_logical_file=True):\n \"\"\"\n Deletes an individual file from a HydroShare resource. 
If the file does not exist,\n the Exceptions.NotFound exception is raised.\n\n REST URL: DELETE /resource/{pid}/files/{filename}\n\n Parameters:\n :param pk: The unique HydroShare identifier for the resource from which the file will be deleted\n :param filename_or_id: Name of the file or id of the file to be deleted from the resource\n :param user: requesting user\n :param delete_logical_file: If True then if the ResourceFile object to be deleted is part of a\n LogicalFile object then the LogicalFile object will be deleted which deletes all associated\n ResourceFile objects and file type metadata objects.\n\n :returns: The name or id of the file which was deleted\n\n Return Type: string or integer\n\n Raises:\n Exceptions.NotAuthorized - The user is not authorized\n Exceptions.NotFound - The resource identified by pid does not exist or the file identified by\n file does not exist\n Exception.ServiceFailure - The service is unable to process the request\n\n Note: This does not handle immutability as previously intended.\n \"\"\"\n resource = utils.get_resource_by_shortkey(pk)\n res_cls = resource.__class__\n\n for f in ResourceFile.objects.filter(object_id=resource.id):\n if filter_condition(filename_or_id, f):\n if delete_logical_file:\n if f.logical_file is not None:\n # logical_delete() calls this function (delete_resource_file())\n # to delete each of its contained ResourceFile objects\n f.logical_file.logical_delete(user)\n return filename_or_id\n\n signals.pre_delete_file_from_resource.send(sender=res_cls, file=f,\n resource=resource, user=user)\n\n # Pabitra: better to use f.delete() here and get rid of the\n # delete_resource_file_only() util function\n # Hong: now that I am adding update_quota_usage() call in delete_resource_file_only(),\n # there is merit to keep file deletion call in a util function so that some action\n # can be bundled together with a file deletion operation\n file_name = delete_resource_file_only(resource, f)\n\n # This presumes that the file is no longer in django\n delete_format_metadata_after_delete_file(resource, file_name)\n\n signals.post_delete_file_from_resource.send(sender=res_cls, resource=resource)\n\n # set to private if necessary -- AFTER post_delete_file handling\n resource.update_public_and_discoverable() # set to False if necessary\n\n # generate bag\n utils.resource_modified(resource, user, overwrite_bag=False)\n\n return filename_or_id\n\n # if execution gets here, file was not found\n raise ObjectDoesNotExist(str.format(\"resource {}, file {} not found\",\n resource.short_id, filename_or_id))\n\n\ndef get_resource_doi(res_id, flag=''):\n doi_str = \"http://dx.doi.org/10.4211/hs.{shortkey}\".format(shortkey=res_id)\n if flag:\n return \"{doi}{append_flag}\".format(doi=doi_str, append_flag=flag)\n else:\n return doi_str\n\n\ndef get_activated_doi(doi):\n \"\"\"\n Get activated DOI with flags removed. 
The following two flags are appended\n to the DOI string to indicate publication status for internal use:\n 'pending' flag indicates the metadata deposition with CrossRef succeeds, but\n pending activation with CrossRef for DOI to take effect.\n 'failure' flag indicates the metadata deposition failed with CrossRef due to\n network or system issues with CrossRef\n\n Args:\n doi: the DOI string with possible status flags appended\n\n Returns:\n the activated DOI with all flags removed if any\n \"\"\"\n idx1 = doi.find('pending')\n idx2 = doi.find('failure')\n if idx1 >= 0:\n return doi[:idx1]\n elif idx2 >= 0:\n return doi[:idx2]\n else:\n return doi\n\n\ndef get_crossref_url():\n main_url = 'https://test.crossref.org/'\n if not settings.USE_CROSSREF_TEST:\n main_url = 'https://doi.crossref.org/'\n return main_url\n\n\ndef deposit_res_metadata_with_crossref(res):\n \"\"\"\n Deposit resource metadata with CrossRef DOI registration agency.\n Args:\n res: the resource object with its metadata to be deposited for publication\n\n Returns:\n response returned for the metadata deposition request from CrossRef\n\n \"\"\"\n xml_file_name = '{uuid}_deposit_metadata.xml'.format(uuid=res.short_id)\n # using HTTP to POST deposit xml file to crossref\n post_data = {\n 'operation': 'doMDUpload',\n 'login_id': settings.CROSSREF_LOGIN_ID,\n 'login_passwd': settings.CROSSREF_LOGIN_PWD\n }\n files = {'file': (xml_file_name, res.get_crossref_deposit_xml())}\n # exceptions will be raised if POST request fails\n main_url = get_crossref_url()\n post_url = '{MAIN_URL}servlet/deposit'.format(MAIN_URL=main_url)\n response = requests.post(post_url, data=post_data, files=files)\n return response\n\n\ndef publish_resource(user, pk):\n \"\"\"\n Formally publishes a resource in HydroShare. Triggers the creation of a DOI for the resource,\n and triggers the exposure of the resource to the HydroShare DataONE Member Node. The user must\n be an owner of a resource or an administrator to perform this action.\n\n Parameters:\n user - requesting user to publish the resource who must be one of the owners of the resource\n pk - Unique HydroShare identifier for the resource to be formally published.\n\n Returns: The id of the resource that was published\n\n Return Type: string\n\n Raises:\n Exceptions.NotAuthorized - The user is not authorized\n Exceptions.NotFound - The resource identified by pid does not exist\n Exception.ServiceFailure - The service is unable to process the request\n and other general exceptions\n\n Note: This is different than just giving public access to a resource via access control rule\n \"\"\"\n resource = utils.get_resource_by_shortkey(pk)\n\n # TODO: whether a resource can be published is not considered in can_be_published\n # TODO: can_be_published is currently an alias for can_be_public_or_discoverable\n if not resource.can_be_published:\n raise ValidationError(\"This resource cannot be published since it does not have required \"\n \"metadata or content files or this resource type is not allowed \"\n \"for publication.\")\n\n # append pending to the doi field to indicate DOI is not activated yet. 
Upon successful\n # activation, \"pending\" will be removed from DOI field\n resource.doi = get_resource_doi(pk, 'pending')\n resource.save()\n\n response = deposit_res_metadata_with_crossref(resource)\n if not response.status_code == status.HTTP_200_OK:\n # resource metadata deposition failed from CrossRef - set failure flag to be retried in a\n # crontab celery task\n resource.doi = get_resource_doi(pk, 'failure')\n resource.save()\n\n resource.set_public(True) # also sets discoverable to True\n resource.raccess.immutable = True\n resource.raccess.shareable = False\n resource.raccess.published = True\n resource.raccess.save()\n\n # change \"Publisher\" element of science metadata to CUAHSI\n md_args = {'name': 'Consortium of Universities for the Advancement of Hydrologic Science, '\n 'Inc. (CUAHSI)',\n 'url': 'https://www.cuahsi.org'}\n resource.metadata.create_element('Publisher', **md_args)\n\n # create published date\n resource.metadata.create_element('date', type='published', start_date=resource.updated)\n\n # add doi to \"Identifier\" element of science metadata\n md_args = {'name': 'doi',\n 'url': get_activated_doi(resource.doi)}\n resource.metadata.create_element('Identifier', **md_args)\n\n utils.resource_modified(resource, user, overwrite_bag=False)\n\n return pk\n\n\ndef resolve_doi(doi):\n \"\"\"\n Takes as input a DOI and returns the internal HydroShare identifier (pid) for a resource.\n This method will be used to get the HydroShare pid for a resource identified by a doi for\n further operations using the web service API.\n\n REST URL: GET /resolveDOI/{doi}\n\n Parameters: doi - A doi assigned to a resource in HydroShare.\n\n Returns: The pid of the resource that was published\n\n Return Type: pid\n\n Raises:\n Exceptions.NotAuthorized - The user is not authorized\n Exceptions.NotFound - The resource identified by pid does not exist\n Exception.ServiceFailure - The service is unable to process the request\n\n Note: All HydroShare methods (except this one) will use HydroShare internal identifiers\n (pids). 
This method exists so that a program can resolve the pid for a DOI.\n \"\"\"\n return utils.get_resource_by_doi(doi).short_id\n\n\ndef create_metadata_element(resource_short_id, element_model_name, **kwargs):\n \"\"\"\n Creates a specific type of metadata element for a given resource\n\n :param resource_short_id: id of the resource for which a metadata element needs to be created\n :param element_model_name: metadata element name (e.g., creator)\n :param kwargs: metadata element attribute name/value pairs for all those attributes that\n require a value\n :return:\n \"\"\"\n res = utils.get_resource_by_shortkey(resource_short_id)\n res.metadata.create_element(element_model_name, **kwargs)\n\n\ndef update_metadata_element(resource_short_id, element_model_name, element_id, **kwargs):\n \"\"\"\n Updates the data associated with a metadata element for a specified resource\n\n :param resource_short_id: id of the resource for which a metadata element needs to be updated\n :param element_model_name: metadata element name (e.g., creator)\n :param element_id: id of the metadata element to be updated\n :param kwargs: metadata element attribute name/value pairs for all those attributes that need\n update\n :return:\n \"\"\"\n res = utils.get_resource_by_shortkey(resource_short_id)\n res.metadata.update_element(element_model_name, element_id, **kwargs)\n\n\ndef delete_metadata_element(resource_short_id, element_model_name, element_id):\n \"\"\"\n Deletes a specific type of metadata element for a specified resource\n\n :param resource_short_id: id of the resource for which metadata element to be deleted\n :param element_model_name: metadata element name (e.g., creator)\n :param element_id: id of the metadata element to be deleted\n :return:\n \"\"\"\n res = utils.get_resource_by_shortkey(resource_short_id)\n res.metadata.delete_element(element_model_name, element_id)\n", "path": "hs_core/hydroshare/resource.py" } ]
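As a quick illustration of how the DOI helpers defined in the file above fit together (a usage sketch written for this document, not taken from the repository's test suite): a status flag such as `'pending'` is appended to the DOI while CrossRef registration is in flight, and `get_activated_doi()` strips it again.

```py
# Usage sketch assuming the helpers above are importable from their module path.
from hs_core.hydroshare.resource import get_resource_doi, get_activated_doi

doi_pending = get_resource_doi('abc123', flag='pending')
# doi_pending == 'http://dx.doi.org/10.4211/hs.abc123pending'
assert get_activated_doi(doi_pending) == 'http://dx.doi.org/10.4211/hs.abc123'
```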
[ { "content": "import os\nimport zipfile\nimport shutil\nimport logging\nimport requests\n\nfrom django.conf import settings\nfrom django.core.exceptions import ObjectDoesNotExist\nfrom django.core.files import File\nfrom django.core.files.uploadedfile import UploadedFile\nfrom django.core.exceptions import ValidationError, PermissionDenied\nfrom django.db import transaction\n\nfrom rest_framework import status\n\nfrom hs_core.hydroshare import hs_bagit\nfrom hs_core.models import ResourceFile\nfrom hs_core import signals\nfrom hs_core.hydroshare import utils\nfrom hs_access_control.models import ResourceAccess, UserResourcePrivilege, PrivilegeCodes\nfrom hs_labels.models import ResourceLabels\n\n\nFILE_SIZE_LIMIT = 1*(1024 ** 3)\nFILE_SIZE_LIMIT_FOR_DISPLAY = '1G'\nMETADATA_STATUS_SUFFICIENT = 'Sufficient to publish or make public'\nMETADATA_STATUS_INSUFFICIENT = 'Insufficient to publish or make public'\n\nlogger = logging.getLogger(__name__)\n\n\ndef update_quota_usage(res):\n from hs_core.tasks import update_quota_usage_task\n quser = res.get_quota_holder()\n if quser is None:\n # no quota holder for this resource, this should not happen, but check just in case\n logger.error('no quota holder is found for resource' + res.short_id)\n return\n # update quota usage by a celery task in 1 minute to give iRODS quota usage computation\n # services enough time to finish before reflecting the quota usage in django DB\n update_quota_usage_task.apply_async((quser.username,), countdown=60)\n\n\ndef get_resource(pk):\n \"\"\"\n Retrieve an instance of type Bags associated with the resource identified by **pk**\n\n Parameters: pk - Unique HydroShare identifier for the resource to be retrieved.\n\n Returns: An instance of type Bags.\n\n Raises:\n Exceptions.NotFound - The resource identified by pid does not exist\n \"\"\"\n\n return utils.get_resource_by_shortkey(pk).baseresource.bags.first()\n\n\ndef get_science_metadata(pk):\n \"\"\"\n Describes the resource identified by the pid by returning the associated science metadata\n object (xml+rdf string). If the resource does not exist, Exceptions.NotFound must be raised.\n\n REST URL: GET /scimeta/{pid}\n\n Parameters: pk - Unique HydroShare identifier for the resource whose science metadata is to\n be retrieved.\n\n Returns: Science metadata document describing the resource.\n\n Return Type: xml+rdf string\n\n Raises: Exceptions.NotAuthorized - The user is not authorized\n Exceptions.NotFound - The resource identified by pid does not exist\n Exception.ServiceFailure - The service is unable to process the request\n \"\"\"\n res = utils.get_resource_by_shortkey(pk)\n return res.metadata.get_xml()\n\n\ndef get_capabilities(pk):\n \"\"\"\n Describes API services exposed for a resource. 
If there are extra capabilites for a particular\n resource type over and above the standard Hydroshare API, then this API call will list these\n\n REST URL: GET /capabilites/{pid}\n\n Parameters: Unique HydroShare identifier for the resource whose capabilites are to be retrieved.\n\n Return Type: Capabilites\n\n Raises:\n Exceptions.NotAuthorized - The user is not authorized\n Exceptions.NotFound - The resource identified by pid does not exist\n Exception.ServiceFailure - The service is unable to process the request\n \"\"\"\n res = utils.get_resource_by_shortkey(pk)\n return getattr(res, 'extra_capabilities', lambda: None)()\n\n\ndef get_resource_file(pk, filename):\n \"\"\"\n Called by clients to get an individual file within a HydroShare resource.\n\n REST URL: GET /resource/{pid}/files/{filename}\n\n Parameters:\n pid - Unique HydroShare identifier for the resource from which the file will be extracted.\n filename - The data bytes of the file that will be extracted from the resource identified by pid\n\n Returns: The bytes of the file extracted from the resource\n\n Return Type: pid\n\n Raises:\n Exceptions.NotAuthorized - The user is not authorized\n Exceptions.NotFound - The resource identified does not exist or the file identified by filename\n does not exist\n Exception.ServiceFailure - The service is unable to process the request\n \"\"\"\n resource = utils.get_resource_by_shortkey(pk)\n for f in ResourceFile.objects.filter(object_id=resource.id):\n if os.path.basename(f.resource_file.name) == filename:\n return f.resource_file\n raise ObjectDoesNotExist(filename)\n\n\ndef update_resource_file(pk, filename, f):\n \"\"\"\n Called by clients to update an individual file within a HydroShare resource.\n\n REST URL: PUT /resource/{pid}/files/{filename}\n\n Parameters:\n pid - Unique HydroShare identifier for the resource from which the file will be extracted.\n filename - The data bytes of the file that will be extracted from the resource identified by pid\n file - the data bytes of the file to update\n\n Returns: The bytes of the file extracted from the resource\n\n Return Type: pid\n\n Raises:\n Exceptions.NotAuthorized - The user is not authorized\n Exceptions.NotFound - The resource identified does not exist or the file identified by filename\n does not exist\n Exception.ServiceFailure - The service is unable to process the request\n \"\"\"\n # TODO: does not update metadata; does not check resource state\n resource = utils.get_resource_by_shortkey(pk)\n for rf in ResourceFile.objects.filter(object_id=resource.id):\n if rf.short_path == filename:\n if rf.resource_file:\n # TODO: should use delete_resource_file\n rf.resource_file.delete()\n # TODO: should use add_file_to_resource\n rf.resource_file = File(f) if not isinstance(f, UploadedFile) else f\n rf.save()\n if rf.fed_resource_file:\n # TODO: should use delete_resource_file\n rf.fed_resource_file.delete()\n # TODO: should use add_file_to_resource\n rf.fed_resource_file = File(f) if not isinstance(f, UploadedFile) else f\n rf.save()\n return rf\n raise ObjectDoesNotExist(filename)\n\n\ndef get_related(pk):\n \"\"\"\n Returns a list of pids for resources that are related to the resource identified by the\n specified pid.\n\n REST URL: GET /related/{pid}\n\n Parameters:\n pid - Unique HydroShare identifier for the resource whose related resources are to be retrieved.\n\n Returns: List of pids for resources that are related to the specified resource.\n\n Return Type: List of pids\n\n Raises:\n Exceptions.NotAuthorized - The user is 
not authorized\n Exceptions.NotFound - The resource identified by pid does not exist\n Exception.ServiceFailure - The service is unable to process the request\n\n\n \"\"\"\n raise NotImplemented()\n\n\ndef get_checksum(pk):\n \"\"\"\n Returns a checksum for the specified resource using the MD5 algorithm. The result is used to\n determine if two instances referenced by a pid are identical.\n\n REST URL: GET /checksum/{pid}\n\n Parameters:\n pid - Unique HydroShare identifier for the resource for which the checksum is to be returned.\n\n Returns: Checksum of the resource identified by pid.\n\n Return Type: Checksum\n\n Raises:\n Exceptions.NotAuthorized - The user is not authorized\n Exceptions.NotFound - The resource specified by pid does not exist\n Exception.ServiceFailure - The service is unable to process the request\n \"\"\"\n raise NotImplementedError()\n\n\ndef check_resource_files(files=()):\n \"\"\"\n internally used method to check whether the uploaded files are within\n the supported maximal size limit. Also returns sum size of all files for\n quota check purpose if all files are within allowed size limit\n\n Parameters:\n files - list of Django File or UploadedFile objects to be attached to the resource\n Returns: (status, sum_size) tuple where status is True if files are within FILE_SIZE_LIMIT\n and False if not, and sum_size is the size summation over all files if status is\n True, and -1 if status is False\n \"\"\"\n sum = 0\n for file in files:\n if not isinstance(file, UploadedFile):\n # if file is already on the server, e.g., a file transferred directly from iRODS,\n # the file should not be subject to file size check since the file size check is\n # only prompted by file upload limit\n if hasattr(file, '_size'):\n sum += int(file._size)\n elif hasattr(file, 'size'):\n sum += int(file.size)\n else:\n try:\n size = os.stat(file).st_size\n except (TypeError, OSError):\n size = 0\n sum += size\n continue\n if hasattr(file, '_size') and file._size is not None:\n size = int(file._size)\n elif hasattr(file, 'size') and file.size is not None:\n size = int(file.size)\n else:\n try:\n size = int(os.stat(file.name).st_size)\n except (TypeError, OSError):\n size = 0\n sum += size\n if size > FILE_SIZE_LIMIT:\n # file is greater than FILE_SIZE_LIMIT, which is not allowed\n return False, -1\n\n return True, sum\n\n\ndef check_resource_type(resource_type):\n \"\"\"\n internally used method to check the resource type\n\n Parameters:\n resource_type: the resource type string to check\n Returns: the resource type class matching the resource type string; if no match is found,\n returns None\n \"\"\"\n for tp in utils.get_resource_types():\n if resource_type == tp.__name__:\n res_cls = tp\n break\n else:\n raise NotImplementedError(\"Type {resource_type} does not exist\".format(\n resource_type=resource_type))\n return res_cls\n\n\ndef add_zip_file_contents_to_resource_async(resource, f):\n \"\"\"\n Launch asynchronous celery task to add zip file contents to a resource.\n Note: will copy the zip file into a temporary space accessible to both\n the Django server and the Celery worker.\n :param resource: Resource to which file should be added\n :param f: TemporaryUploadedFile object (or object that implements temporary_file_path())\n representing a zip file whose contents are to be added to a resource.\n \"\"\"\n # Add contents of zipfile asynchronously; wait 30 seconds to be \"sure\" that resource creation\n # has finished.\n uploaded_filepath = f.temporary_file_path()\n tmp_dir = 
getattr(settings, 'HYDROSHARE_SHARED_TEMP', '/shared_tmp')\n logger.debug(\"Copying uploaded file from {0} to {1}\".format(uploaded_filepath,\n tmp_dir))\n shutil.copy(uploaded_filepath, tmp_dir)\n zfile_name = os.path.join(tmp_dir, os.path.basename(uploaded_filepath))\n logger.debug(\"Retained upload as {0}\".format(zfile_name))\n # Import here to avoid circular reference\n from hs_core.tasks import add_zip_file_contents_to_resource\n add_zip_file_contents_to_resource.apply_async((resource.short_id, zfile_name),\n countdown=30)\n resource.file_unpack_status = 'Pending'\n resource.save()\n\n\ndef create_resource(\n resource_type, owner, title,\n edit_users=None, view_users=None, edit_groups=None, view_groups=None,\n keywords=(), metadata=None, extra_metadata=None,\n files=(), source_names=[], fed_res_path='', move=False,\n create_metadata=True,\n create_bag=True, unpack_file=False, **kwargs):\n \"\"\"\n Called by a client to add a new resource to HydroShare. The caller must have authorization to\n write content to HydroShare. The pid for the resource is assigned by HydroShare upon inserting\n the resource. The create method returns the newly-assigned pid.\n\n REST URL: POST /resource\n\n Parameters:\n\n Returns: The newly created resource\n\n Return Type: BaseResource resource object\n\n Note: The calling user will automatically be set as the owner of the created resource.\n\n Implementation notes:\n\n 1. pid is called short_id. This is because pid is a UNIX term for Process ID and could be\n confusing.\n\n 2. return type is an instance of hs_core.models.BaseResource class. This is for efficiency in\n the native API. The native API should return actual instance rather than IDs wherever possible\n to avoid repeated lookups in the database when they are unnecessary.\n\n 3. resource_type is a string: see parameter list\n\n :param resource_type: string. the type of the resource such as GenericResource\n :param owner: email address, username, or User instance. The owner of the resource\n :param title: string. the title of the resource\n :param edit_users: list of email addresses, usernames, or User instances who will be given edit\n permissions\n :param view_users: list of email addresses, usernames, or User instances who will be given view\n permissions\n :param edit_groups: list of group names or Group instances who will be given edit permissions\n :param view_groups: list of group names or Group instances who will be given view permissions\n :param keywords: string list. list of keywords to add to the resource\n :param metadata: list of dicts containing keys (element names) and corresponding values as\n dicts { 'creator': {'name':'John Smith'}}.\n :param extra_metadata: one dict containing keys and corresponding values\n { 'Outlet Point Latitude': '40', 'Outlet Point Longitude': '-110'}.\n :param files: list of Django File or UploadedFile objects to be attached to the resource\n :param source_names: a list of file names from a federated zone to be\n used to create the resource in the federated zone, default is empty list\n :param fed_res_path: the federated zone path in the format of\n /federation_zone/home/localHydroProxy that indicate where the resource\n is stored, default is empty string\n :param move: a value of False or True indicating whether the content files\n should be erased from the source directory. default is False.\n :param create_bag: whether to create a bag for the newly created resource or not.\n By default, the bag is created.\n :param unpack_file: boolean. 
If files contains a single zip file, and unpack_file is True,\n the unpacked contents of the zip file will be added to the resource instead of the zip file.\n :param kwargs: extra arguments to fill in required values in AbstractResource subclasses\n\n :return: a new resource which is an instance of BaseResource with specificed resource_type.\n \"\"\"\n if __debug__:\n assert(isinstance(source_names, list))\n\n with transaction.atomic():\n cls = check_resource_type(resource_type)\n owner = utils.user_from_id(owner)\n\n # create the resource\n resource = cls.objects.create(\n resource_type=resource_type,\n user=owner,\n creator=owner,\n title=title,\n last_changed_by=owner,\n in_menus=[],\n **kwargs\n )\n\n resource.resource_type = resource_type\n\n # by default make resource private\n resource.set_slug('resource{0}{1}'.format('/', resource.short_id))\n resource.save()\n\n if not metadata:\n metadata = []\n\n if extra_metadata is not None:\n resource.extra_metadata = extra_metadata\n resource.save()\n\n if fed_res_path:\n resource.resource_federation_path = fed_res_path\n resource.save()\n\n # TODO: It would be safer to require an explicit zone path rather than harvesting file path\n elif len(source_names) > 0:\n resource.resource_federation_path = utils.get_federated_zone_home_path(source_names[0])\n resource.save()\n\n # by default resource is private\n resource_access = ResourceAccess(resource=resource)\n resource_access.save()\n # use the built-in share routine to set initial provenance.\n UserResourcePrivilege.share(resource=resource, grantor=owner, user=owner,\n privilege=PrivilegeCodes.OWNER)\n\n resource_labels = ResourceLabels(resource=resource)\n resource_labels.save()\n\n if edit_users:\n for user in edit_users:\n user = utils.user_from_id(user)\n owner.uaccess.share_resource_with_user(resource, user, PrivilegeCodes.CHANGE)\n\n if view_users:\n for user in view_users:\n user = utils.user_from_id(user)\n owner.uaccess.share_resource_with_user(resource, user, PrivilegeCodes.VIEW)\n\n if edit_groups:\n for group in edit_groups:\n group = utils.group_from_id(group)\n owner.uaccess.share_resource_with_group(resource, group, PrivilegeCodes.CHANGE)\n\n if view_groups:\n for group in view_groups:\n group = utils.group_from_id(group)\n owner.uaccess.share_resource_with_group(resource, group, PrivilegeCodes.VIEW)\n\n # set quota of this resource to this creator\n # quota holder has to be set before the files are added in order for real time iRODS\n # quota micro-services to work\n resource.set_quota_holder(owner, owner)\n\n if len(files) == 1 and unpack_file and zipfile.is_zipfile(files[0]):\n # Add contents of zipfile as resource files asynchronously\n # Note: this is done asynchronously as unzipping may take\n # a long time (~15 seconds to many minutes).\n add_zip_file_contents_to_resource_async(resource, files[0])\n else:\n # Add resource file(s) now\n # Note: this is done synchronously as it should only take a\n # few seconds. 
We may want to add the option to do this\n # asynchronously if the file size is large and would take\n # more than ~15 seconds to complete.\n add_resource_files(resource.short_id, *files, source_names=source_names,\n move=move)\n\n if create_metadata:\n # prepare default metadata\n utils.prepare_resource_default_metadata(resource=resource, metadata=metadata,\n res_title=title)\n\n for element in metadata:\n # here k is the name of the element\n # v is a dict of all element attributes/field names and field values\n k, v = element.items()[0]\n resource.metadata.create_element(k, **v)\n\n for keyword in keywords:\n resource.metadata.create_element('subject', value=keyword)\n\n resource.title = resource.metadata.title.value\n resource.save()\n\n if create_bag:\n hs_bagit.create_bag(resource)\n\n # set the resource to private\n resource.setAVU('isPublic', resource.raccess.public)\n\n # set the resource type (which is immutable)\n resource.setAVU(\"resourceType\", resource._meta.object_name)\n\n return resource\n\n\n# TODO: this is incredibly misnamed. It should not be used to create empty resources!\ndef create_empty_resource(pk, user, action='version'):\n \"\"\"\n Create a resource with empty content and empty metadata for resource versioning or copying.\n This empty resource object is then used to create metadata and content from its original\n resource. This separate routine is needed to return a new resource object to the calling\n view so that if an exception is raised, this empty resource object can be deleted for clean-up.\n Args:\n pk: the unique HydroShare identifier for the resource that is to be versioned or copied.\n user: the user who requests to create a new version for the resource or copy the resource.\n action: \"version\" or \"copy\" with default action being \"version\"\n Returns:\n the empty new resource that is created as an initial new version or copy for the original\n resource which is then further populated with metadata and content in a subsequent step.\n \"\"\"\n res = utils.get_resource_by_shortkey(pk)\n if action == 'version':\n if not user.uaccess.owns_resource(res):\n raise PermissionDenied('Only resource owners can create new versions')\n elif action == 'copy':\n # import here to avoid circular import\n from hs_core.views.utils import can_user_copy_resource\n if not user.uaccess.can_view_resource(res):\n raise PermissionDenied('You do not have permission to view this resource')\n allow_copy = can_user_copy_resource(res, user)\n if not allow_copy:\n raise PermissionDenied('The license for this resource does not permit copying')\n else:\n raise ValidationError('Input parameter error: action needs to be version or copy')\n\n # create the resource without files and without creating bags first\n new_resource = create_resource(\n resource_type=res.resource_type,\n owner=user,\n title=res.metadata.title.value,\n create_metadata=False,\n fed_res_path=res.resource_federation_path,\n create_bag=False\n )\n return new_resource\n\n\ndef copy_resource(ori_res, new_res):\n \"\"\"\n Populate metadata and contents from ori_res object to new_res object to make new_res object\n as a copy of the ori_res object\n Args:\n ori_res: the original resource that is to be copied.\n new_res: the new_res to be populated with metadata and content from the original resource\n as a copy of the original resource.\n Returns:\n the new resource copied from the original resource\n \"\"\"\n\n # add files directly via irods backend file operation\n utils.copy_resource_files_and_AVUs(ori_res.short_id, 
new_res.short_id)\n\n utils.copy_and_create_metadata(ori_res, new_res)\n\n hs_identifier = ori_res.metadata.identifiers.all().filter(name=\"hydroShareIdentifier\")[0]\n if hs_identifier:\n new_res.metadata.create_element('source', derived_from=hs_identifier.url)\n\n if ori_res.resource_type.lower() == \"collectionresource\":\n # clone contained_res list of original collection and add to new collection\n # note that new collection will not contain \"deleted resources\"\n new_res.resources = ori_res.resources.all()\n\n # create bag for the new resource\n hs_bagit.create_bag(new_res)\n\n return new_res\n\n\ndef create_new_version_resource(ori_res, new_res, user):\n \"\"\"\n Populate metadata and contents from ori_res object to new_res object to make new_res object as\n a new version of the ori_res object\n Args:\n ori_res: the original resource that is to be versioned.\n new_res: the new_res to be populated with metadata and content from the original resource\n to make it a new version\n user: the requesting user\n Returns:\n the new versioned resource for the original resource and thus obsolete the original resource\n\n \"\"\"\n # newly created new resource version is private initially\n # add files directly via irods backend file operation\n utils.copy_resource_files_and_AVUs(ori_res.short_id, new_res.short_id)\n\n # copy metadata from source resource to target new-versioned resource except three elements\n utils.copy_and_create_metadata(ori_res, new_res)\n\n # add or update Relation element to link source and target resources\n hs_identifier = new_res.metadata.identifiers.all().filter(name=\"hydroShareIdentifier\")[0]\n ori_res.metadata.create_element('relation', type='isReplacedBy', value=hs_identifier.url)\n\n if new_res.metadata.relations.all().filter(type='isVersionOf').exists():\n # the original resource is already a versioned resource, and its isVersionOf relation\n # element is copied over to this new version resource, needs to delete this element so\n # it can be created to link to its original resource correctly\n eid = new_res.metadata.relations.all().filter(type='isVersionOf').first().id\n new_res.metadata.delete_element('relation', eid)\n\n hs_identifier = ori_res.metadata.identifiers.all().filter(name=\"hydroShareIdentifier\")[0]\n new_res.metadata.create_element('relation', type='isVersionOf', value=hs_identifier.url)\n\n if ori_res.resource_type.lower() == \"collectionresource\":\n # clone contained_res list of original collection and add to new collection\n # note that new version collection will not contain \"deleted resources\"\n new_res.resources = ori_res.resources.all()\n\n # create bag for the new resource\n hs_bagit.create_bag(new_res)\n\n # since an isReplaceBy relation element is added to original resource, needs to call\n # resource_modified() for original resource\n utils.resource_modified(ori_res, user, overwrite_bag=False)\n # if everything goes well up to this point, set original resource to be immutable so that\n # obsoleted resources cannot be modified from REST API\n ori_res.raccess.immutable = True\n ori_res.raccess.save()\n return new_res\n\n\ndef add_resource_files(pk, *files, **kwargs):\n \"\"\"\n Called by clients to update a resource in HydroShare by adding one or more files.\n\n REST URL: PUT /resource/{pid}/files/{file}\n\n Parameters:\n pk - Unique HydroShare identifier for the resource that is to be updated.\n files - A list of file-like objects representing files that will be added\n to the existing resource identified by pid\n\n Returns: A list 
of ResourceFile objects added to the resource\n\n Return Type: list\n\n Raises:\n Exceptions.NotAuthorized - The user is not authorized\n Exceptions.InvalidContent - The content of the file is invalid\n Exception.ServiceFailure - The service is unable to process the request\n\n Notes:\n This does **not** handle mutability; changes to immutable resources should be denied elsewhere.\n\n \"\"\"\n resource = utils.get_resource_by_shortkey(pk)\n ret = []\n source_names = kwargs.pop('source_names', [])\n\n if __debug__:\n assert(isinstance(source_names, list))\n\n move = kwargs.pop('move', False)\n folder = kwargs.pop('folder', None)\n\n if __debug__: # assure that there are no spurious kwargs left.\n for k in kwargs:\n print(\"kwargs[{}]\".format(k))\n assert len(kwargs) == 0\n\n for f in files:\n ret.append(utils.add_file_to_resource(resource, f, folder=folder))\n\n if len(source_names) > 0:\n for ifname in source_names:\n ret.append(utils.add_file_to_resource(resource, None,\n folder=folder,\n source_name=ifname,\n move=move))\n if not ret:\n # no file has been added, make sure data/contents directory exists if no file is added\n utils.create_empty_contents_directory(resource)\n else:\n # some file(s) added, need to update quota usage\n update_quota_usage(resource)\n return ret\n\n\ndef update_science_metadata(pk, metadata, user):\n \"\"\"\n Updates science metadata for a resource\n\n Args:\n pk: Unique HydroShare identifier for the resource for which science metadata needs to be\n updated.\n metadata: a list of dictionary items containing data for each metadata element that needs to\n be updated\n user: user who is updating metadata\n example metadata format:\n [\n {'title': {'value': 'Updated Resource Title'}},\n {'description': {'abstract': 'Updated Resource Abstract'}},\n {'date': {'type': 'valid', 'start_date': '1/26/2016', 'end_date': '12/31/2016'}},\n {'creator': {'name': 'John Smith', 'email': '[email protected]'}},\n {'creator': {'name': 'Lisa Molley', 'email': '[email protected]'}},\n {'contributor': {'name': 'Kelvin Marshal', 'email': '[email protected]',\n 'organization': 'Utah State University',\n 'profile_links': [{'type': 'yahooProfile', 'url':\n 'http://yahoo.com/LH001'}]}},\n {'coverage': {'type': 'period', 'value': {'name': 'Name for period coverage',\n 'start': '1/1/2000',\n 'end': '12/12/2012'}}},\n {'coverage': {'type': 'point', 'value': {'name': 'Name for point coverage', 'east':\n '56.45678',\n 'north': '12.6789', 'units': 'decimal deg'}}},\n {'identifier': {'name': 'someIdentifier', 'url': \"http://some.org/001\"}},\n {'language': {'code': 'fre'}},\n {'relation': {'type': 'isPartOf', 'value': 'http://hydroshare.org/resource/001'}},\n {'rights': {'statement': 'This is the rights statement for this resource',\n 'url': 'http://rights.ord/001'}},\n {'source': {'derived_from': 'http://hydroshare.org/resource/0001'}},\n {'subject': {'value': 'sub-1'}},\n {'subject': {'value': 'sub-2'}},\n ]\n\n Returns:\n \"\"\"\n resource = utils.get_resource_by_shortkey(pk)\n resource.metadata.update(metadata, user)\n utils.resource_modified(resource, user, overwrite_bag=False)\n\n # set to private if metadata has become non-compliant\n resource.update_public_and_discoverable() # set to False if necessary\n\n\ndef delete_resource(pk):\n \"\"\"\n Deletes a resource managed by HydroShare. The caller must be an owner of the resource or an\n administrator to perform this function. The operation removes the resource from further\n interaction with HydroShare services and interfaces. 
The implementation may delete the resource\n bytes, and should do so since a delete operation may be in response to a problem with the\n resource (e.g., it contains malicious content, is inappropriate, or is subject to a legal\n request). If the resource does not exist, the Exceptions.NotFound exception is raised.\n\n REST URL: DELETE /resource/{pid}\n\n Parameters:\n pid - The unique HydroShare identifier of the resource to be deleted\n\n Returns:\n The pid of the resource that was deleted\n\n Return Type: pid\n\n Raises:\n Exceptions.NotAuthorized - The user is not authorized\n Exceptions.NotFound - The resource identified by pid does not exist\n Exception.ServiceFailure - The service is unable to process the request\n\n Note: Only HydroShare administrators will be able to delete formally published resour\n \"\"\"\n\n res = utils.get_resource_by_shortkey(pk)\n\n if res.metadata.relations.all().filter(type='isReplacedBy').exists():\n raise ValidationError('An obsoleted resource in the middle of the obsolescence chain '\n 'cannot be deleted.')\n\n # when the most recent version of a resource in an obsolescence chain is deleted, the previous\n # version in the chain needs to be set as the \"active\" version by deleting \"isReplacedBy\"\n # relation element\n if res.metadata.relations.all().filter(type='isVersionOf').exists():\n is_version_of_res_link = \\\n res.metadata.relations.all().filter(type='isVersionOf').first().value\n idx = is_version_of_res_link.rindex('/')\n if idx == -1:\n obsolete_res_id = is_version_of_res_link\n else:\n obsolete_res_id = is_version_of_res_link[idx+1:]\n obsolete_res = utils.get_resource_by_shortkey(obsolete_res_id)\n if obsolete_res.metadata.relations.all().filter(type='isReplacedBy').exists():\n eid = obsolete_res.metadata.relations.all().filter(type='isReplacedBy').first().id\n obsolete_res.metadata.delete_element('relation', eid)\n # also make this obsoleted resource editable now that it becomes the latest version\n obsolete_res.raccess.immutable = False\n obsolete_res.raccess.save()\n\n # need to update quota usage when a resource is deleted\n update_quota_usage(res)\n\n res.delete()\n return pk\n\n\ndef get_resource_file_name(f):\n \"\"\"\n get the file name of a specific ResourceFile object f\n Args:\n f: the ResourceFile object to return name for\n Returns:\n the file name of the ResourceFile object f\n \"\"\"\n return f.storage_path\n\n\ndef delete_resource_file_only(resource, f):\n \"\"\"\n Delete the single resource file f from the resource without sending signals and\n without deleting related metadata element. 
This function is called by delete_resource_file()\n function as well as from pre-delete signal handler for specific resource types\n (e.g., netCDF, raster, and feature) where when one resource file is deleted,\n some other resource files needs to be deleted as well.\n Args:\n resource: the resource from which the file f is to be deleted\n f: the ResourceFile object to be deleted\n Returns: unqualified relative path to file that has been deleted\n \"\"\"\n short_path = f.short_path\n f.delete()\n # need to update quota usage when a file is deleted\n update_quota_usage(resource)\n return short_path\n\n\ndef delete_format_metadata_after_delete_file(resource, file_name):\n \"\"\"\n delete format metadata as appropriate after a file is deleted.\n :param resource: BaseResource object representing a HydroShare resource\n :param file_name: the file name to be deleted\n :return:\n \"\"\"\n delete_file_mime_type = utils.get_file_mime_type(file_name)\n delete_file_extension = os.path.splitext(file_name)[1]\n\n # if there is no other resource file with the same extension as the\n # file just deleted then delete the matching format metadata element for the resource\n resource_file_extensions = [os.path.splitext(get_resource_file_name(f))[1] for f in\n resource.files.all()]\n if delete_file_extension not in resource_file_extensions:\n format_element = resource.metadata.formats.filter(value=delete_file_mime_type).first()\n if format_element:\n resource.metadata.delete_element(format_element.term, format_element.id)\n\n\n# TODO: test in-folder delete of short path\ndef filter_condition(filename_or_id, fl):\n \"\"\"\n Converted lambda definition of filter_condition into def to conform to pep8 E731 rule: do not\n assign a lambda expression, use a def\n :param filename_or_id: passed in filename_or id as the filter\n :param fl: the ResourceFile object to filter against\n :return: boolean indicating whether fl conforms to filename_or_id\n \"\"\"\n try:\n file_id = int(filename_or_id)\n return fl.id == file_id\n except ValueError:\n return fl.short_path == filename_or_id\n\n\n# TODO: Remove option for file id, not needed since names are unique.\n# TODO: Test that short_path deletes properly.\ndef delete_resource_file(pk, filename_or_id, user, delete_logical_file=True):\n \"\"\"\n Deletes an individual file from a HydroShare resource. 
If the file does not exist,\n the Exceptions.NotFound exception is raised.\n\n REST URL: DELETE /resource/{pid}/files/{filename}\n\n Parameters:\n :param pk: The unique HydroShare identifier for the resource from which the file will be deleted\n :param filename_or_id: Name of the file or id of the file to be deleted from the resource\n :param user: requesting user\n :param delete_logical_file: If True then if the ResourceFile object to be deleted is part of a\n LogicalFile object then the LogicalFile object will be deleted which deletes all associated\n ResourceFile objects and file type metadata objects.\n\n :returns: The name or id of the file which was deleted\n\n Return Type: string or integer\n\n Raises:\n Exceptions.NotAuthorized - The user is not authorized\n Exceptions.NotFound - The resource identified by pid does not exist or the file identified by\n file does not exist\n Exception.ServiceFailure - The service is unable to process the request\n\n Note: This does not handle immutability as previously intended.\n \"\"\"\n resource = utils.get_resource_by_shortkey(pk)\n res_cls = resource.__class__\n\n for f in ResourceFile.objects.filter(object_id=resource.id):\n if filter_condition(filename_or_id, f):\n if delete_logical_file:\n if f.logical_file is not None:\n # logical_delete() calls this function (delete_resource_file())\n # to delete each of its contained ResourceFile objects\n f.logical_file.logical_delete(user)\n return filename_or_id\n\n signals.pre_delete_file_from_resource.send(sender=res_cls, file=f,\n resource=resource, user=user)\n\n # Pabitra: better to use f.delete() here and get rid of the\n # delete_resource_file_only() util function\n # Hong: now that I am adding update_quota_usage() call in delete_resource_file_only(),\n # there is merit to keep file deletion call in a util function so that some action\n # can be bundled together with a file deletion operation\n file_name = delete_resource_file_only(resource, f)\n\n # This presumes that the file is no longer in django\n delete_format_metadata_after_delete_file(resource, file_name)\n\n signals.post_delete_file_from_resource.send(sender=res_cls, resource=resource)\n\n # set to private if necessary -- AFTER post_delete_file handling\n resource.update_public_and_discoverable() # set to False if necessary\n\n # generate bag\n utils.resource_modified(resource, user, overwrite_bag=False)\n\n return filename_or_id\n\n # if execution gets here, file was not found\n raise ObjectDoesNotExist(str.format(\"resource {}, file {} not found\",\n resource.short_id, filename_or_id))\n\n\ndef get_resource_doi(res_id, flag=''):\n doi_str = \"https://doi.org/10.4211/hs.{shortkey}\".format(shortkey=res_id)\n if flag:\n return \"{doi}{append_flag}\".format(doi=doi_str, append_flag=flag)\n else:\n return doi_str\n\n\ndef get_activated_doi(doi):\n \"\"\"\n Get activated DOI with flags removed. 
The following two flags are appended\n to the DOI string to indicate publication status for internal use:\n 'pending' flag indicates the metadata deposition with CrossRef succeeds, but\n pending activation with CrossRef for DOI to take effect.\n 'failure' flag indicates the metadata deposition failed with CrossRef due to\n network or system issues with CrossRef\n\n Args:\n doi: the DOI string with possible status flags appended\n\n Returns:\n the activated DOI with all flags removed if any\n \"\"\"\n idx1 = doi.find('pending')\n idx2 = doi.find('failure')\n if idx1 >= 0:\n return doi[:idx1]\n elif idx2 >= 0:\n return doi[:idx2]\n else:\n return doi\n\n\ndef get_crossref_url():\n main_url = 'https://test.crossref.org/'\n if not settings.USE_CROSSREF_TEST:\n main_url = 'https://doi.crossref.org/'\n return main_url\n\n\ndef deposit_res_metadata_with_crossref(res):\n \"\"\"\n Deposit resource metadata with CrossRef DOI registration agency.\n Args:\n res: the resource object with its metadata to be deposited for publication\n\n Returns:\n response returned for the metadata deposition request from CrossRef\n\n \"\"\"\n xml_file_name = '{uuid}_deposit_metadata.xml'.format(uuid=res.short_id)\n # using HTTP to POST deposit xml file to crossref\n post_data = {\n 'operation': 'doMDUpload',\n 'login_id': settings.CROSSREF_LOGIN_ID,\n 'login_passwd': settings.CROSSREF_LOGIN_PWD\n }\n files = {'file': (xml_file_name, res.get_crossref_deposit_xml())}\n # exceptions will be raised if POST request fails\n main_url = get_crossref_url()\n post_url = '{MAIN_URL}servlet/deposit'.format(MAIN_URL=main_url)\n response = requests.post(post_url, data=post_data, files=files)\n return response\n\n\ndef publish_resource(user, pk):\n \"\"\"\n Formally publishes a resource in HydroShare. Triggers the creation of a DOI for the resource,\n and triggers the exposure of the resource to the HydroShare DataONE Member Node. The user must\n be an owner of a resource or an administrator to perform this action.\n\n Parameters:\n user - requesting user to publish the resource who must be one of the owners of the resource\n pk - Unique HydroShare identifier for the resource to be formally published.\n\n Returns: The id of the resource that was published\n\n Return Type: string\n\n Raises:\n Exceptions.NotAuthorized - The user is not authorized\n Exceptions.NotFound - The resource identified by pid does not exist\n Exception.ServiceFailure - The service is unable to process the request\n and other general exceptions\n\n Note: This is different than just giving public access to a resource via access control rule\n \"\"\"\n resource = utils.get_resource_by_shortkey(pk)\n\n # TODO: whether a resource can be published is not considered in can_be_published\n # TODO: can_be_published is currently an alias for can_be_public_or_discoverable\n if not resource.can_be_published:\n raise ValidationError(\"This resource cannot be published since it does not have required \"\n \"metadata or content files or this resource type is not allowed \"\n \"for publication.\")\n\n # append pending to the doi field to indicate DOI is not activated yet. 
Upon successful\n # activation, \"pending\" will be removed from DOI field\n resource.doi = get_resource_doi(pk, 'pending')\n resource.save()\n\n response = deposit_res_metadata_with_crossref(resource)\n if not response.status_code == status.HTTP_200_OK:\n # resource metadata deposition failed from CrossRef - set failure flag to be retried in a\n # crontab celery task\n resource.doi = get_resource_doi(pk, 'failure')\n resource.save()\n\n resource.set_public(True) # also sets discoverable to True\n resource.raccess.immutable = True\n resource.raccess.shareable = False\n resource.raccess.published = True\n resource.raccess.save()\n\n # change \"Publisher\" element of science metadata to CUAHSI\n md_args = {'name': 'Consortium of Universities for the Advancement of Hydrologic Science, '\n 'Inc. (CUAHSI)',\n 'url': 'https://www.cuahsi.org'}\n resource.metadata.create_element('Publisher', **md_args)\n\n # create published date\n resource.metadata.create_element('date', type='published', start_date=resource.updated)\n\n # add doi to \"Identifier\" element of science metadata\n md_args = {'name': 'doi',\n 'url': get_activated_doi(resource.doi)}\n resource.metadata.create_element('Identifier', **md_args)\n\n utils.resource_modified(resource, user, overwrite_bag=False)\n\n return pk\n\n\ndef resolve_doi(doi):\n \"\"\"\n Takes as input a DOI and returns the internal HydroShare identifier (pid) for a resource.\n This method will be used to get the HydroShare pid for a resource identified by a doi for\n further operations using the web service API.\n\n REST URL: GET /resolveDOI/{doi}\n\n Parameters: doi - A doi assigned to a resource in HydroShare.\n\n Returns: The pid of the resource that was published\n\n Return Type: pid\n\n Raises:\n Exceptions.NotAuthorized - The user is not authorized\n Exceptions.NotFound - The resource identified by pid does not exist\n Exception.ServiceFailure - The service is unable to process the request\n\n Note: All HydroShare methods (except this one) will use HydroShare internal identifiers\n (pids). 
This method exists so that a program can resolve the pid for a DOI.\n \"\"\"\n return utils.get_resource_by_doi(doi).short_id\n\n\ndef create_metadata_element(resource_short_id, element_model_name, **kwargs):\n \"\"\"\n Creates a specific type of metadata element for a given resource\n\n :param resource_short_id: id of the resource for which a metadata element needs to be created\n :param element_model_name: metadata element name (e.g., creator)\n :param kwargs: metadata element attribute name/value pairs for all those attributes that\n require a value\n :return:\n \"\"\"\n res = utils.get_resource_by_shortkey(resource_short_id)\n res.metadata.create_element(element_model_name, **kwargs)\n\n\ndef update_metadata_element(resource_short_id, element_model_name, element_id, **kwargs):\n \"\"\"\n Updates the data associated with a metadata element for a specified resource\n\n :param resource_short_id: id of the resource for which a metadata element needs to be updated\n :param element_model_name: metadata element name (e.g., creator)\n :param element_id: id of the metadata element to be updated\n :param kwargs: metadata element attribute name/value pairs for all those attributes that need\n update\n :return:\n \"\"\"\n res = utils.get_resource_by_shortkey(resource_short_id)\n res.metadata.update_element(element_model_name, element_id, **kwargs)\n\n\ndef delete_metadata_element(resource_short_id, element_model_name, element_id):\n \"\"\"\n Deletes a specific type of metadata element for a specified resource\n\n :param resource_short_id: id of the resource for which metadata element to be deleted\n :param element_model_name: metadata element name (e.g., creator)\n :param element_id: id of the metadata element to be deleted\n :return:\n \"\"\"\n res = utils.get_resource_by_shortkey(resource_short_id)\n res.metadata.delete_element(element_model_name, element_id)\n", "path": "hs_core/hydroshare/resource.py" } ]
diff --git a/docs/bagit/readme.txt b/docs/bagit/readme.txt index 0195f76bd1..2251b08a91 100644 --- a/docs/bagit/readme.txt +++ b/docs/bagit/readme.txt @@ -10,7 +10,7 @@ You can find the full BagIt specification here: https://tools.ietf.org/html/draf You can also find a much more detailed description of HydroShare's resource data model and packaging scheme in the following paper: -Horsburgh, J. S., Morsy, M. M., Castronova, A., Goodall, J. L., Gan, T., Yi, H., Stealey, M. J., and D.G. Tarboton (2015). HydroShare: Sharing diverse hydrologic data types and models as social objects within a Hydrologic Information System, Journal Of the American Water Resources Association(JAWRA), http://dx.doi.org/10.1111/1752-1688.12363. +Horsburgh, J. S., Morsy, M. M., Castronova, A., Goodall, J. L., Gan, T., Yi, H., Stealey, M. J., and D.G. Tarboton (2015). HydroShare: Sharing diverse hydrologic data types and models as social objects within a Hydrologic Information System, Journal Of the American Water Resources Association(JAWRA), https://doi.org/10.1111/1752-1688.12363. We've summarized the important points below. diff --git a/hs_core/hydroshare/resource.py b/hs_core/hydroshare/resource.py index 35c397d976..a45edb86f7 100755 --- a/hs_core/hydroshare/resource.py +++ b/hs_core/hydroshare/resource.py @@ -909,7 +909,7 @@ def delete_resource_file(pk, filename_or_id, user, delete_logical_file=True): def get_resource_doi(res_id, flag=''): - doi_str = "http://dx.doi.org/10.4211/hs.{shortkey}".format(shortkey=res_id) + doi_str = "https://doi.org/10.4211/hs.{shortkey}".format(shortkey=res_id) if flag: return "{doi}{append_flag}".format(doi=doi_str, append_flag=flag) else: diff --git a/hs_core/tests/api/native/test_core_metadata.py b/hs_core/tests/api/native/test_core_metadata.py index 91e1a5d847..25d2262bc3 100755 --- a/hs_core/tests/api/native/test_core_metadata.py +++ b/hs_core/tests/api/native/test_core_metadata.py @@ -959,7 +959,7 @@ def test_identifier(self): # test adding an identifier with name 'DOI' when the resource does not have a DOI - should raise an exception self.res.doi = None self.res.save() - url_doi = "http://dx.doi.org/10.4211/hs.{res_id}".format(res_id=self.res.short_id) + url_doi = "https://doi.org/10.4211/hs.{res_id}".format(res_id=self.res.short_id) self.assertRaises(Exception, lambda: resource.create_metadata_element(self.res.short_id,'identifier', name='DOI', url=url_doi)) @@ -974,7 +974,7 @@ def test_identifier(self): doi_idf.id, name='DOI-1')) # test that 'DOI' identifier url can be changed - resource.update_metadata_element(self.res.short_id, 'identifier', doi_idf.id, url='http://doi.org/001') + resource.update_metadata_element(self.res.short_id, 'identifier', doi_idf.id, url='https://doi.org/001') # test that hydroshareidentifier can't be deleted - raise exception hs_idf = self.res.metadata.identifiers.all().filter(name='hydroShareIdentifier').first() @@ -1495,7 +1495,7 @@ def test_get_xml(self): # add 'DOI' identifier self.res.doi='doi1000100010001' self.res.save() - self.res.metadata.create_element('identifier', name='DOI', url="http://dx.doi.org/001") + self.res.metadata.create_element('identifier', name='DOI', url="https://doi.org/001") # no need to add a language element - language element is created at the time of resource creation diff --git a/theme/templates/resource-landing-page/citation.html b/theme/templates/resource-landing-page/citation.html index 20cceaa391..fbd4d1ded2 100644 --- a/theme/templates/resource-landing-page/citation.html +++ 
b/theme/templates/resource-landing-page/citation.html @@ -20,7 +20,7 @@ <h3>How to cite</h3> <div> <em> When permanently published, this resource will have a formal Digital Object Identifier (DOI) and will be - accessible at the following URL: http://doi.org/10.4211/hs.{{ cm.short_id }}. When you are + accessible at the following URL: https://doi.org/10.4211/hs.{{ cm.short_id }}. When you are ready to permanently publish, click the Publish button at the top of the page to request your DOI. Reminder: You may no longer edit your resource, once you have permanently published it. </em>
nvaccess__nvda-8771
UnicodeDecodeError when NVDA running on non-English outdated systems
When running the latest NVDA snapshots on Russian versions of outdated Windows (XP/Vista), I have the following exception in the log:

```
Traceback (most recent call last):
  File "nvda.pyw", line 64, in <module>
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc4 in position 0: ordinal not in range(128)
```

Probably we need to explicitly specify the system encoding, since in Russian Windows it is cp1251. For example, like so:

```py
import winVersion
import locale
if not winVersion.isSupportedOS():
    winUser.MessageBox(0, unicode(ctypes.FormatError(winUser.ERROR_OLD_WIN_VERSION), locale.getpreferredencoding()), None, winUser.MB_ICONERROR)
    sys.exit(1)
```

Now the dialog will be shown correctly.
[ { "content": "#winUser.py\r\n#A part of NonVisual Desktop Access (NVDA)\r\n#Copyright (C) 2006-2017 NV Access Limited, Babbage B.V.\r\n#This file is covered by the GNU General Public License.\r\n#See the file COPYING for more details.\r\n\r\n\"\"\"Functions that wrap Windows API functions from user32.dll\"\"\"\r\n\r\nfrom ctypes import *\r\nfrom ctypes.wintypes import *\r\n\r\n#dll handles\r\nuser32=windll.user32\r\n\r\nLRESULT=c_long\r\nHCURSOR=c_long\r\n\r\n#Standard window class stuff\r\n\r\nWNDPROC=WINFUNCTYPE(LRESULT,HWND,c_uint,WPARAM,LPARAM)\r\n\r\nclass WNDCLASSEXW(Structure):\r\n\t_fields_=[\r\n\t\t('cbSize',c_uint),\r\n\t\t('style',c_uint),\r\n\t\t('lpfnWndProc',WNDPROC),\r\n\t\t('cbClsExtra',c_int),\r\n\t\t('cbWndExtra',c_int),\r\n\t\t('hInstance',HINSTANCE),\r\n\t\t('hIcon',HICON),\r\n\t\t('HCURSOR',HCURSOR),\r\n\t\t('hbrBackground',HBRUSH),\r\n\t\t('lpszMenuName',LPWSTR),\r\n\t\t('lpszClassName',LPWSTR),\r\n\t\t('hIconSm',HICON),\r\n\t]\r\n\r\nclass NMHdrStruct(Structure):\r\n\t_fields_=[\r\n\t\t('hwndFrom',HWND),\r\n\t\t('idFrom',c_uint),\r\n\t\t('code',c_uint),\r\n\t]\r\n\r\nclass GUITHREADINFO(Structure):\r\n\t_fields_=[\r\n\t\t('cbSize',DWORD),\r\n\t\t('flags',DWORD),\r\n\t\t('hwndActive',HWND),\r\n \t\t('hwndFocus',HWND),\r\n\t\t('hwndCapture',HWND),\r\n\t\t('hwndMenuOwner',HWND),\r\n\t\t('hwndMoveSize',HWND),\r\n\t\t('hwndCaret',HWND),\r\n\t\t('rcCaret',RECT),\r\n\t]\r\n\r\n#constants\r\nERROR_OLD_WIN_VERSION=1150\r\nMOUSEEVENTF_LEFTDOWN=0x0002 \r\nMOUSEEVENTF_LEFTUP=0x0004 \r\nMOUSEEVENTF_RIGHTDOWN=0x0008\r\nMOUSEEVENTF_RIGHTUP=0x0010\r\nMOUSEEVENTF_MIDDLEDOWN=0x0020\r\nMOUSEEVENTF_MIDDLEUP=0x0040\r\nMOUSEEVENTF_XDOWN=0x0080\r\nMOUSEEVENTF_XUP=0x0100\r\nGUI_CARETBLINKING=0x00000001\r\nGUI_INMOVESIZE=0x00000002\r\nGUI_INMENUMODE=0x00000004\r\nGUI_SYSTEMMENUMODE=0x00000008\r\nGUI_POPUPMENUMODE=0x00000010\r\nSPI_GETSTICKYKEYS=0x003A\r\nSPI_GETSCREENREADER=70\r\nSPI_SETSCREENREADER=71\r\nSPIF_UPDATEINIFILE=1\r\nSPIF_SENDCHANGE=2\r\nWS_DISABLED=0x8000000\r\nWS_VISIBLE=0x10000000\r\nWS_POPUP=0x80000000\r\nWS_GROUP=0x20000\r\nWS_THICKFRAME=0x40000\r\nWS_SIZEBOX=WS_THICKFRAME\r\nWS_SYSMENU=0x80000\r\nWS_HSCROLL=0x100000\r\nWS_VSCROLL=0x200000\r\nWS_CAPTION=0xC00000\r\nWS_EX_TOPMOST=0x00000008\r\nBS_GROUPBOX=7\r\nES_MULTILINE=4\r\nLBS_OWNERDRAWFIXED=0x0010\r\nLBS_OWNERDRAWVARIABLE=0x0020\r\nLBS_HASSTRINGS=0x0040\r\nCBS_OWNERDRAWFIXED=0x0010\r\nCBS_OWNERDRAWVARIABLE=0x0020\r\nCBS_HASSTRINGS=0x00200\r\nWM_NULL=0\r\nWM_COPYDATA=74\r\nWM_NOTIFY=78\r\nWM_USER=1024\r\n#PeekMessage\r\nPM_REMOVE=1\r\nPM_NOYIELD=2\r\n#sendMessageTimeout\r\nSMTO_ABORTIFHUNG=0x0002\r\n#getAncestor\r\nGA_PARENT=1\r\nGA_ROOT=2\r\nGA_ROOTOWNER=3\r\n#getWindowLong\r\nGWL_ID=-12\r\nGWL_STYLE=-16\r\nGWL_EXSTYLE=-20\r\n#getWindow\r\nGW_HWNDNEXT=2\r\nGW_HWNDPREV=3\r\nGW_OWNER=4\r\n#Window messages\r\nWM_GETTEXT=13\r\nWM_GETTEXTLENGTH=14\r\nWM_PAINT=0x000F\r\nWM_GETOBJECT=0x003D\r\n#Edit control window messages\r\nEM_GETSEL=176\r\nEM_SETSEL=177\r\nEM_SCROLLCARET=0xb7\r\nEM_GETLINE=196\r\nEM_GETLINECOUNT=186\r\nEM_LINEFROMCHAR=201\r\nEM_LINEINDEX=187\r\nEM_LINELENGTH=193\r\nEM_POSFROMCHAR=214 \r\nEM_CHARFROMPOS=215\r\nEM_GETFIRSTVISIBLELINE=0x0ce\r\n#Clipboard formats\r\nCF_TEXT=1\r\n#mapVirtualKey constants\r\nMAPVK_VK_TO_CHAR=2\r\nMAPVK_VSC_TO_VK_EX=3\r\n#Virtual key 
codes\r\nVK_LBUTTON=1\r\nVK_RBUTTON=2\r\nVK_CANCEL=3\r\nVK_MBUTTON=4\r\nVK_XBUTTON1=5\r\nVK_XBUTTON2=6\r\nVK_BACK=8\r\nVK_TAB=9\r\nVK_CLEAR=12\r\nVK_RETURN=13\r\nVK_SHIFT=16\r\nVK_CONTROL=17\r\nVK_MENU=18\r\nVK_PAUSE=19\r\nVK_CAPITAL=20\r\nVK_FINAL=0x18\r\nVK_ESCAPE=0x1B\r\nVK_CONVERT=0x1C\r\nVK_NONCONVERT=0x1D\r\nVK_ACCEPT=0x1E\r\nVK_MODECHANGE=0x1F\r\nVK_SPACE=32\r\nVK_PRIOR=33\r\nVK_NEXT=34\r\nVK_END=35\r\nVK_HOME=36\r\nVK_LEFT=37\r\nVK_UP=38\r\nVK_RIGHT=39\r\nVK_DOWN=40\r\nVK_SELECT=41\r\nVK_PRINT=42\r\nVK_EXECUTE=43\r\nVK_SNAPSHOT=44\r\nVK_INSERT=45\r\nVK_DELETE=46\r\nVK_HELP=47\r\nVK_LWIN=0x5B\r\nVK_RWIN=0x5C\r\nVK_APPS=0x5D\r\nVK_SLEEP=0x5F\r\nVK_NUMPAD0=0x60\r\nVK_NUMPAD1=0x61\r\nVK_NUMPAD2=0x62\r\nVK_NUMPAD3=0x63\r\nVK_NUMPAD4=0x64\r\nVK_NUMPAD5=0x65\r\nVK_NUMPAD6=0x66\r\nVK_NUMPAD7=0x67\r\nVK_NUMPAD8=0x68\r\nVK_NUMPAD9=0x69\r\nVK_MULTIPLY=0x6A\r\nVK_ADD=0x6B\r\nVK_SEPARATOR=0x6C\r\nVK_SUBTRACT=0x6D\r\nVK_DECIMAL=0x6E\r\nVK_DIVIDE=0x6F\r\nVK_F1=0x70\r\nVK_F2=0x71\r\nVK_F3=0x72\r\nVK_F4=0x73\r\nVK_F5=0x74\r\nVK_F6=0x75\r\nVK_F7=0x76\r\nVK_F8=0x77\r\nVK_F9=0x78\r\nVK_F10=0x79\r\nVK_F11=0x7A\r\nVK_F12=0x7B\r\nVK_F13=0x7C\r\nVK_F14=0x7D\r\nVK_F15=0x7E\r\nVK_F16=0x7F\r\nVK_F17=0x80\r\nVK_F18=0x81\r\nVK_F19=0x82\r\nVK_F20=0x83\r\nVK_F21=0x84\r\nVK_F22=0x85\r\nVK_F23=0x86\r\nVK_F24=0x87\r\nVK_NUMLOCK=0x90\r\nVK_SCROLL=0x91\r\nVK_LSHIFT=0xA0\r\nVK_RSHIFT=0xA1\r\nVK_LCONTROL=0xA2\r\nVK_RCONTROL=0xA3\r\nVK_LMENU=0xA4\r\nVK_RMENU=0xA5\r\nVK_VOLUME_MUTE=0xAD\r\nVK_VOLUME_DOWN=0xAE\r\nVK_VOLUME_UP=0xAF\r\n\r\n#Windows hooks\r\nWH_KEYBOARD=2\r\nWH_MOUSE=7\r\n#win events\r\nEVENT_SYSTEM_SOUND=0x1\r\nEVENT_SYSTEM_ALERT=0x2\r\nEVENT_SYSTEM_FOREGROUND=0x3\r\nEVENT_SYSTEM_MENUSTART=0x4\r\nEVENT_SYSTEM_MENUEND=0x5\r\nEVENT_SYSTEM_MENUPOPUPSTART=0x6\r\nEVENT_SYSTEM_MENUPOPUPEND=0x7\r\nEVENT_SYSTEM_CAPTURESTART=0x8\r\nEVENT_SYSTEM_CAPTUREEND=0x9\r\nEVENT_SYSTEM_MOVESIZESTART=0xa\r\nEVENT_SYSTEM_MOVESIZEEND=0xb\r\nEVENT_SYSTEM_CONTEXTHELPSTART=0xc\r\nEVENT_SYSTEM_CONTEXTHELPEND=0xd\r\nEVENT_SYSTEM_DRAGDROPSTART=0xe\r\nEVENT_SYSTEM_DRAGDROPEND=0xf\r\nEVENT_SYSTEM_DIALOGSTART=0x10\r\nEVENT_SYSTEM_DIALOGEND=0x11\r\nEVENT_SYSTEM_SCROLLINGSTART=0x12\r\nEVENT_SYSTEM_SCROLLINGEND=0x13\r\nEVENT_SYSTEM_SWITCHSTART=0x14\r\nEVENT_SYSTEM_SWITCHEND=0x15\r\nEVENT_SYSTEM_MINIMIZESTART=0x16\r\nEVENT_SYSTEM_MINIMIZEEND=0x17\r\nEVENT_OBJECT_CREATE=0x8000\r\nEVENT_OBJECT_DESTROY=0x8001\r\nEVENT_OBJECT_SHOW=0x8002\r\nEVENT_OBJECT_HIDE=0x8003\r\nEVENT_OBJECT_REORDER=0x8004\r\nEVENT_OBJECT_FOCUS=0x8005\r\nEVENT_OBJECT_SELECTION=0x8006\r\nEVENT_OBJECT_SELECTIONADD=0x8007\r\nEVENT_OBJECT_SELECTIONREMOVE=0x8008\r\nEVENT_OBJECT_SELECTIONWITHIN=0x8009\r\nEVENT_OBJECT_STATECHANGE=0x800a\r\nEVENT_OBJECT_LOCATIONCHANGE=0x800b\r\nEVENT_OBJECT_NAMECHANGE=0x800c\r\nEVENT_OBJECT_DESCRIPTIONCHANGE=0x800d\r\nEVENT_OBJECT_VALUECHANGE=0x800e\r\nEVENT_OBJECT_PARENTCHANGE=0x800f\r\nEVENT_OBJECT_HELPCHANGE=0x8010\r\nEVENT_OBJECT_DEFACTIONCHANGE=0x8011\r\nEVENT_OBJECT_ACCELERATORCHANGE=0x8012\r\nEVENT_OBJECT_LIVEREGIONCHANGED=0x8019\r\nEVENT_SYSTEM_DESKTOPSWITCH=0x20\r\nEVENT_OBJECT_INVOKED=0x8013\r\nEVENT_OBJECT_TEXTSELECTIONCHANGED=0x8014\r\nEVENT_OBJECT_CONTENTSCROLLED=0x8015\r\n\r\nEVENT_CONSOLE_CARET=0x4001\r\nEVENT_CONSOLE_UPDATE_REGION=0x4002\r\nEVENT_CONSOLE_UPDATE_SIMPLE=0x4003\r\nEVENT_CONSOLE_UPDATE_SCROLL=0x4004\r\nEVENT_CONSOLE_LAYOUT=0x4005\r\nEVENT_CONSOLE_START_APPLICATION=0x4006\r\nEVENT_CONSOLE_END_APPLICATION=0x4007\r\n#IAccessible Child IDs\r\nCHILDID_SELF=0\r\n#IAccessible Object 
IDs\r\nOBJID_WINDOW=0\r\nOBJID_SYSMENU=-1\r\nOBJID_TITLEBAR=-2\r\nOBJID_MENU=-3\r\nOBJID_CLIENT=-4\r\nOBJID_VSCROLL=-5\r\nOBJID_HSCROLL=-6\r\nOBJID_SIZEGRIP=-7\r\nOBJID_CARET=-8\r\nOBJID_CURSOR=-9\r\nOBJID_ALERT=-10\r\nOBJID_SOUND=-11\r\nOBJID_NATIVEOM=-16\r\n\r\n# ShowWindow() commands\r\nSW_HIDE = 0\r\nSW_SHOWNORMAL = 1\r\n\r\n# RedrawWindow() flags\r\nRDW_INVALIDATE = 0x0001\r\nRDW_UPDATENOW = 0x0100\r\n# MsgWaitForMultipleObjectsEx\r\nQS_ALLINPUT = 0x04ff\r\nMWMO_ALERTABLE = 0x0002\r\n\r\ndef setSystemScreenReaderFlag(val):\r\n\tuser32.SystemParametersInfoW(SPI_SETSCREENREADER,val,0,SPIF_UPDATEINIFILE|SPIF_SENDCHANGE)\r\n\r\ndef getSystemScreenReaderFlag():\r\n\tval = BOOL()\r\n\tuser32.SystemParametersInfoW(SPI_GETSCREENREADER, 0, byref(val), 0)\r\n\treturn bool(val.value)\r\n\r\ndef LOBYTE(word):\r\n\treturn word&0xFF\r\n \r\ndef HIBYTE(word):\r\n\treturn word>>8\r\n\r\ndef MAKEWORD(lo,hi):\r\n\treturn (hi<<8)+lo\r\n\r\ndef LOWORD(long):\r\n\treturn long&0xFFFF\r\n\r\ndef HIWORD(long):\r\n\treturn long>>16\r\n\r\ndef GET_X_LPARAM(lp):\r\n\treturn c_short(LOWORD(lp)).value\r\n\r\ndef GET_Y_LPARAM(lp):\r\n\treturn c_short(HIWORD(lp)).value\r\n\r\ndef MAKELONG(lo,hi):\r\n\treturn (hi<<16)+lo\r\n\r\ndef waitMessage():\r\n\treturn user32.WaitMessage()\r\n\r\ndef getMessage(*args):\r\n\treturn user32.GetMessageW(*args)\r\n\r\ndef translateMessage(*args):\r\n\treturn user32.TranslateMessage(*args)\r\n\r\ndef dispatchMessage(*args):\r\n\treturn user32.DispatchMessageW(*args)\r\n\r\ndef peekMessage(*args):\r\n\ttry:\r\n\t\tres=user32.PeekMessageW(*args)\r\n\texcept:\r\n\t\tres=0\r\n\treturn res\r\n\r\ndef registerWindowMessage(name):\r\n\treturn user32.RegisterWindowMessageW(name)\r\n\r\ndef getAsyncKeyState(v):\r\n\treturn user32.GetAsyncKeyState(v)\r\n\r\ndef getKeyState(v):\r\n\treturn user32.GetKeyState(v)\r\n\r\ndef isWindow(hwnd):\r\n\treturn user32.IsWindow(hwnd)\r\n\r\ndef isDescendantWindow(parentHwnd,childHwnd):\r\n\tif (parentHwnd==childHwnd) or user32.IsChild(parentHwnd,childHwnd):\r\n\t\treturn True\r\n\telse:\r\n\t\treturn False\r\n\r\ndef getForegroundWindow():\r\n\treturn user32.GetForegroundWindow()\r\n\r\ndef setForegroundWindow(hwnd):\r\n\tuser32.SetForegroundWindow(hwnd)\r\n\r\ndef setFocus(hwnd):\r\n\tuser32.SetFocus(hwnd)\r\n\r\ndef getDesktopWindow():\r\n\treturn user32.GetDesktopWindow()\r\n\r\ndef getControlID(hwnd):\r\n\treturn user32.GetWindowLongW(hwnd,GWL_ID)\r\n\r\n\r\ndef getClientRect(hwnd):\r\n\treturn user32.GetClientRect(hwnd)\r\n\r\nHWINEVENTHOOK=HANDLE\r\n\r\nWINEVENTPROC=WINFUNCTYPE(None,HWINEVENTHOOK,DWORD,HWND,c_long,c_long,DWORD,DWORD)\r\n\r\ndef setWinEventHook(*args):\r\n\t\treturn user32.SetWinEventHook(*args)\r\n\r\ndef unhookWinEvent(*args):\r\n\treturn user32.UnhookWinEvent(*args)\r\n\r\ndef sendMessage(hwnd,msg,param1,param2):\r\n\treturn user32.SendMessageW(hwnd,msg,param1,param2)\r\n\r\ndef getWindowThreadProcessID(hwnd):\r\n\tprocessID=c_int()\r\n\tthreadID=user32.GetWindowThreadProcessId(hwnd,byref(processID))\r\n\treturn (processID.value,threadID)\r\n\r\ndef getClassName(window):\r\n\tbuf=create_unicode_buffer(256)\r\n\tuser32.GetClassNameW(window,buf,255)\r\n\treturn buf.value\r\n\r\ndef keybd_event(*args):\r\n\treturn user32.keybd_event(*args)\r\n\r\ndef mouse_event(*args):\r\n\treturn user32.mouse_event(*args)\r\n\r\ndef getAncestor(hwnd,flags):\r\n\treturn user32.GetAncestor(hwnd,flags)\r\n\r\ntry:\r\n\t# Windows >= Vista\r\n\t_getCursorPos = user32.GetPhysicalCursorPos\r\n\t_setCursorPos = user32.SetPhysicalCursorPos\r\nexcept 
AttributeError:\r\n\t_getCursorPos = user32.GetCursorPos\r\n\t_setCursorPos = user32.SetCursorPos\r\n\r\ndef setCursorPos(x,y):\r\n\t_setCursorPos(x,y)\r\n\r\ndef getCursorPos():\r\n\tpoint=POINT()\r\n\t_getCursorPos(byref(point))\r\n\treturn [point.x,point.y]\r\n\r\ndef getCaretPos():\r\n\tpoint=POINT()\r\n\tuser32.GetCaretPos(byref(point))\r\n\treturn [point.x,point.y]\r\n\r\ndef getTopWindow(hwnd):\r\n\treturn user32.GetTopWindow(hwnd)\r\n\r\ndef getWindowText(hwnd):\r\n\tbuf=create_unicode_buffer(1024)\r\n\tuser32.InternalGetWindowText(hwnd,buf,1023)\r\n\treturn buf.value\r\n\r\ndef getWindow(window,relation):\r\n\treturn user32.GetWindow(window,relation)\r\n\r\ndef isWindowVisible(window):\r\n\treturn bool(user32.IsWindowVisible(window))\r\n\r\ndef isWindowEnabled(window):\r\n\treturn bool(user32.IsWindowEnabled(window))\r\n\r\ndef getGUIThreadInfo(threadID):\r\n\tinfo=GUITHREADINFO(cbSize=sizeof(GUITHREADINFO))\r\n\tuser32.GetGUIThreadInfo(threadID,byref(info))\r\n\treturn info\r\n\r\ndef getWindowStyle(hwnd):\r\n\treturn user32.GetWindowLongW(hwnd,GWL_STYLE)\r\n\r\ndef getPreviousWindow(hwnd):\r\n\ttry:\r\n\t\treturn user32.GetWindow(hwnd,GW_HWNDPREV)\r\n\texcept WindowsError:\r\n\t\treturn 0\r\n\r\ndef getKeyboardLayout(idThread=0):\r\n\treturn user32.GetKeyboardLayout(idThread)\r\n\r\ndef RedrawWindow(hwnd, rcUpdate, rgnUpdate, flags):\r\n\treturn user32.RedrawWindow(hwnd, byref(rcUpdate), rgnUpdate, flags)\r\n\r\ndef getKeyNameText(scanCode,extended):\r\n\tbuf=create_unicode_buffer(32)\r\n\tuser32.GetKeyNameTextW((scanCode<<16)|(extended<<24),buf,31)\r\n\treturn buf.value\r\n\r\ndef FindWindow(className, windowName):\r\n\tres = user32.FindWindowW(className, windowName)\r\n\tif res == 0:\r\n\t\traise WinError()\r\n\treturn res\r\n\r\nMB_RETRYCANCEL=5\r\nMB_ICONERROR=0x10\r\nMB_SYSTEMMODAL=0x1000\r\nIDRETRY=4\r\nIDCANCEL=3\r\n\r\ndef MessageBox(hwnd, text, caption, type):\r\n\tres = user32.MessageBoxW(hwnd, text, caption, type)\r\n\tif res == 0:\r\n\t\traise WinError()\r\n\treturn res\r\n\r\ndef PostMessage(hwnd, msg, wParam, lParam):\r\n\tif not user32.PostMessageW(hwnd, msg, wParam, lParam):\r\n\t\traise WinError()\r\n\r\nuser32.VkKeyScanExW.restype = SHORT\r\ndef VkKeyScanEx(ch, hkl):\r\n\tres = user32.VkKeyScanExW(WCHAR(ch), hkl)\r\n\tif res == -1:\r\n\t\traise LookupError\r\n\treturn res >> 8, res & 0xFF\r\n\r\ndef ScreenToClient(hwnd, x, y):\r\n\tpoint = POINT(x, y)\r\n\tuser32.ScreenToClient(hwnd, byref(point))\r\n\treturn point.x, point.y\r\n\r\ndef ClientToScreen(hwnd, x, y):\r\n\tpoint = POINT(x, y)\r\n\tuser32.ClientToScreen(hwnd, byref(point))\r\n\treturn point.x, point.y\r\n\r\ndef NotifyWinEvent(event, hwnd, idObject, idChild):\r\n\tuser32.NotifyWinEvent(event, hwnd, idObject, idChild)\r\n\r\nclass STICKYKEYS(Structure):\r\n\t_fields_ = (\r\n\t\t(\"cbSize\", DWORD),\r\n\t\t(\"dwFlags\", DWORD),\r\n\t)\r\n\tdef __init__(self, **kwargs):\r\n\t\tsuper(STICKYKEYS, self).__init__(cbSize=sizeof(self), **kwargs)\r\nSKF_STICKYKEYSON = 0x00000001\r\nSKF_AUDIBLEFEEDBACK = 0x00000040\r\nSKF_TRISTATE = 0x00000080\r\nSKF_TWOKEYSOFF = 0x00000100\r\n\r\ndef getSystemStickyKeys():\r\n\tsk = STICKYKEYS()\r\n\tuser32.SystemParametersInfoW(SPI_GETSTICKYKEYS, 0, byref(sk), 0)\r\n\treturn sk\r\n\r\n\r\n# START SENDINPUT TYPE DECLARATIONS\r\nPUL = POINTER(c_ulong)\r\nclass KeyBdInput(Structure):\r\n _fields_ = [(\"wVk\", c_ushort),\r\n (\"wScan\", c_ushort),\r\n (\"dwFlags\", c_ulong),\r\n (\"time\", c_ulong),\r\n (\"dwExtraInfo\", PUL)]\r\n\r\nclass HardwareInput(Structure):\r\n _fields_ 
= [(\"uMsg\", c_ulong),\r\n (\"wParamL\", c_short),\r\n (\"wParamH\", c_ushort)]\r\n\r\nclass MouseInput(Structure):\r\n _fields_ = [(\"dx\", c_long),\r\n (\"dy\", c_long),\r\n (\"mouseData\", c_ulong),\r\n (\"dwFlags\", c_ulong),\r\n (\"time\",c_ulong),\r\n (\"dwExtraInfo\", PUL)]\r\n\r\nclass Input_I(Union):\r\n _fields_ = [(\"ki\", KeyBdInput),\r\n (\"mi\", MouseInput),\r\n (\"hi\", HardwareInput)]\r\n\r\nclass Input(Structure):\r\n _fields_ = [(\"type\", c_ulong),\r\n (\"ii\", Input_I)]\r\n\r\nINPUT_KEYBOARD = 1\r\nKEYEVENTF_KEYUP = 0x0002\r\nKEYEVENTF_UNICODE = 0x04\r\n# END SENDINPUT TYPE DECLARATIONS\r\n\r\ndef SendInput(inputs):\r\n\tn = len(inputs)\r\n\tarr = (Input * n)(*inputs)\r\n\tuser32.SendInput(n, arr, sizeof(Input))\r\n", "path": "source/winUser.py" } ]
[ { "content": "#winUser.py\r\n#A part of NonVisual Desktop Access (NVDA)\r\n#Copyright (C) 2006-2017 NV Access Limited, Babbage B.V.\r\n#This file is covered by the GNU General Public License.\r\n#See the file COPYING for more details.\r\n\r\n\"\"\"Functions that wrap Windows API functions from user32.dll\"\"\"\r\n\r\nfrom ctypes import *\r\nfrom ctypes.wintypes import *\r\n\r\n#dll handles\r\nuser32=windll.user32\r\n\r\nLRESULT=c_long\r\nHCURSOR=c_long\r\n\r\n#Standard window class stuff\r\n\r\nWNDPROC=WINFUNCTYPE(LRESULT,HWND,c_uint,WPARAM,LPARAM)\r\n\r\nclass WNDCLASSEXW(Structure):\r\n\t_fields_=[\r\n\t\t('cbSize',c_uint),\r\n\t\t('style',c_uint),\r\n\t\t('lpfnWndProc',WNDPROC),\r\n\t\t('cbClsExtra',c_int),\r\n\t\t('cbWndExtra',c_int),\r\n\t\t('hInstance',HINSTANCE),\r\n\t\t('hIcon',HICON),\r\n\t\t('HCURSOR',HCURSOR),\r\n\t\t('hbrBackground',HBRUSH),\r\n\t\t('lpszMenuName',LPWSTR),\r\n\t\t('lpszClassName',LPWSTR),\r\n\t\t('hIconSm',HICON),\r\n\t]\r\n\r\nclass NMHdrStruct(Structure):\r\n\t_fields_=[\r\n\t\t('hwndFrom',HWND),\r\n\t\t('idFrom',c_uint),\r\n\t\t('code',c_uint),\r\n\t]\r\n\r\nclass GUITHREADINFO(Structure):\r\n\t_fields_=[\r\n\t\t('cbSize',DWORD),\r\n\t\t('flags',DWORD),\r\n\t\t('hwndActive',HWND),\r\n \t\t('hwndFocus',HWND),\r\n\t\t('hwndCapture',HWND),\r\n\t\t('hwndMenuOwner',HWND),\r\n\t\t('hwndMoveSize',HWND),\r\n\t\t('hwndCaret',HWND),\r\n\t\t('rcCaret',RECT),\r\n\t]\r\n\r\n#constants\r\nERROR_OLD_WIN_VERSION=1150\r\nMOUSEEVENTF_LEFTDOWN=0x0002 \r\nMOUSEEVENTF_LEFTUP=0x0004 \r\nMOUSEEVENTF_RIGHTDOWN=0x0008\r\nMOUSEEVENTF_RIGHTUP=0x0010\r\nMOUSEEVENTF_MIDDLEDOWN=0x0020\r\nMOUSEEVENTF_MIDDLEUP=0x0040\r\nMOUSEEVENTF_XDOWN=0x0080\r\nMOUSEEVENTF_XUP=0x0100\r\nGUI_CARETBLINKING=0x00000001\r\nGUI_INMOVESIZE=0x00000002\r\nGUI_INMENUMODE=0x00000004\r\nGUI_SYSTEMMENUMODE=0x00000008\r\nGUI_POPUPMENUMODE=0x00000010\r\nSPI_GETSTICKYKEYS=0x003A\r\nSPI_GETSCREENREADER=70\r\nSPI_SETSCREENREADER=71\r\nSPIF_UPDATEINIFILE=1\r\nSPIF_SENDCHANGE=2\r\nWS_DISABLED=0x8000000\r\nWS_VISIBLE=0x10000000\r\nWS_POPUP=0x80000000\r\nWS_GROUP=0x20000\r\nWS_THICKFRAME=0x40000\r\nWS_SIZEBOX=WS_THICKFRAME\r\nWS_SYSMENU=0x80000\r\nWS_HSCROLL=0x100000\r\nWS_VSCROLL=0x200000\r\nWS_CAPTION=0xC00000\r\nWS_EX_TOPMOST=0x00000008\r\nBS_GROUPBOX=7\r\nES_MULTILINE=4\r\nLBS_OWNERDRAWFIXED=0x0010\r\nLBS_OWNERDRAWVARIABLE=0x0020\r\nLBS_HASSTRINGS=0x0040\r\nCBS_OWNERDRAWFIXED=0x0010\r\nCBS_OWNERDRAWVARIABLE=0x0020\r\nCBS_HASSTRINGS=0x00200\r\nWM_NULL=0\r\nWM_COPYDATA=74\r\nWM_NOTIFY=78\r\nWM_USER=1024\r\n#PeekMessage\r\nPM_REMOVE=1\r\nPM_NOYIELD=2\r\n#sendMessageTimeout\r\nSMTO_ABORTIFHUNG=0x0002\r\n#getAncestor\r\nGA_PARENT=1\r\nGA_ROOT=2\r\nGA_ROOTOWNER=3\r\n#getWindowLong\r\nGWL_ID=-12\r\nGWL_STYLE=-16\r\nGWL_EXSTYLE=-20\r\n#getWindow\r\nGW_HWNDNEXT=2\r\nGW_HWNDPREV=3\r\nGW_OWNER=4\r\n#Window messages\r\nWM_GETTEXT=13\r\nWM_GETTEXTLENGTH=14\r\nWM_PAINT=0x000F\r\nWM_GETOBJECT=0x003D\r\n#Edit control window messages\r\nEM_GETSEL=176\r\nEM_SETSEL=177\r\nEM_SCROLLCARET=0xb7\r\nEM_GETLINE=196\r\nEM_GETLINECOUNT=186\r\nEM_LINEFROMCHAR=201\r\nEM_LINEINDEX=187\r\nEM_LINELENGTH=193\r\nEM_POSFROMCHAR=214 \r\nEM_CHARFROMPOS=215\r\nEM_GETFIRSTVISIBLELINE=0x0ce\r\n#Clipboard formats\r\nCF_TEXT=1\r\n#mapVirtualKey constants\r\nMAPVK_VK_TO_CHAR=2\r\nMAPVK_VSC_TO_VK_EX=3\r\n#Virtual key 
codes\r\nVK_LBUTTON=1\r\nVK_RBUTTON=2\r\nVK_CANCEL=3\r\nVK_MBUTTON=4\r\nVK_XBUTTON1=5\r\nVK_XBUTTON2=6\r\nVK_BACK=8\r\nVK_TAB=9\r\nVK_CLEAR=12\r\nVK_RETURN=13\r\nVK_SHIFT=16\r\nVK_CONTROL=17\r\nVK_MENU=18\r\nVK_PAUSE=19\r\nVK_CAPITAL=20\r\nVK_FINAL=0x18\r\nVK_ESCAPE=0x1B\r\nVK_CONVERT=0x1C\r\nVK_NONCONVERT=0x1D\r\nVK_ACCEPT=0x1E\r\nVK_MODECHANGE=0x1F\r\nVK_SPACE=32\r\nVK_PRIOR=33\r\nVK_NEXT=34\r\nVK_END=35\r\nVK_HOME=36\r\nVK_LEFT=37\r\nVK_UP=38\r\nVK_RIGHT=39\r\nVK_DOWN=40\r\nVK_SELECT=41\r\nVK_PRINT=42\r\nVK_EXECUTE=43\r\nVK_SNAPSHOT=44\r\nVK_INSERT=45\r\nVK_DELETE=46\r\nVK_HELP=47\r\nVK_LWIN=0x5B\r\nVK_RWIN=0x5C\r\nVK_APPS=0x5D\r\nVK_SLEEP=0x5F\r\nVK_NUMPAD0=0x60\r\nVK_NUMPAD1=0x61\r\nVK_NUMPAD2=0x62\r\nVK_NUMPAD3=0x63\r\nVK_NUMPAD4=0x64\r\nVK_NUMPAD5=0x65\r\nVK_NUMPAD6=0x66\r\nVK_NUMPAD7=0x67\r\nVK_NUMPAD8=0x68\r\nVK_NUMPAD9=0x69\r\nVK_MULTIPLY=0x6A\r\nVK_ADD=0x6B\r\nVK_SEPARATOR=0x6C\r\nVK_SUBTRACT=0x6D\r\nVK_DECIMAL=0x6E\r\nVK_DIVIDE=0x6F\r\nVK_F1=0x70\r\nVK_F2=0x71\r\nVK_F3=0x72\r\nVK_F4=0x73\r\nVK_F5=0x74\r\nVK_F6=0x75\r\nVK_F7=0x76\r\nVK_F8=0x77\r\nVK_F9=0x78\r\nVK_F10=0x79\r\nVK_F11=0x7A\r\nVK_F12=0x7B\r\nVK_F13=0x7C\r\nVK_F14=0x7D\r\nVK_F15=0x7E\r\nVK_F16=0x7F\r\nVK_F17=0x80\r\nVK_F18=0x81\r\nVK_F19=0x82\r\nVK_F20=0x83\r\nVK_F21=0x84\r\nVK_F22=0x85\r\nVK_F23=0x86\r\nVK_F24=0x87\r\nVK_NUMLOCK=0x90\r\nVK_SCROLL=0x91\r\nVK_LSHIFT=0xA0\r\nVK_RSHIFT=0xA1\r\nVK_LCONTROL=0xA2\r\nVK_RCONTROL=0xA3\r\nVK_LMENU=0xA4\r\nVK_RMENU=0xA5\r\nVK_VOLUME_MUTE=0xAD\r\nVK_VOLUME_DOWN=0xAE\r\nVK_VOLUME_UP=0xAF\r\n\r\n#Windows hooks\r\nWH_KEYBOARD=2\r\nWH_MOUSE=7\r\n#win events\r\nEVENT_SYSTEM_SOUND=0x1\r\nEVENT_SYSTEM_ALERT=0x2\r\nEVENT_SYSTEM_FOREGROUND=0x3\r\nEVENT_SYSTEM_MENUSTART=0x4\r\nEVENT_SYSTEM_MENUEND=0x5\r\nEVENT_SYSTEM_MENUPOPUPSTART=0x6\r\nEVENT_SYSTEM_MENUPOPUPEND=0x7\r\nEVENT_SYSTEM_CAPTURESTART=0x8\r\nEVENT_SYSTEM_CAPTUREEND=0x9\r\nEVENT_SYSTEM_MOVESIZESTART=0xa\r\nEVENT_SYSTEM_MOVESIZEEND=0xb\r\nEVENT_SYSTEM_CONTEXTHELPSTART=0xc\r\nEVENT_SYSTEM_CONTEXTHELPEND=0xd\r\nEVENT_SYSTEM_DRAGDROPSTART=0xe\r\nEVENT_SYSTEM_DRAGDROPEND=0xf\r\nEVENT_SYSTEM_DIALOGSTART=0x10\r\nEVENT_SYSTEM_DIALOGEND=0x11\r\nEVENT_SYSTEM_SCROLLINGSTART=0x12\r\nEVENT_SYSTEM_SCROLLINGEND=0x13\r\nEVENT_SYSTEM_SWITCHSTART=0x14\r\nEVENT_SYSTEM_SWITCHEND=0x15\r\nEVENT_SYSTEM_MINIMIZESTART=0x16\r\nEVENT_SYSTEM_MINIMIZEEND=0x17\r\nEVENT_OBJECT_CREATE=0x8000\r\nEVENT_OBJECT_DESTROY=0x8001\r\nEVENT_OBJECT_SHOW=0x8002\r\nEVENT_OBJECT_HIDE=0x8003\r\nEVENT_OBJECT_REORDER=0x8004\r\nEVENT_OBJECT_FOCUS=0x8005\r\nEVENT_OBJECT_SELECTION=0x8006\r\nEVENT_OBJECT_SELECTIONADD=0x8007\r\nEVENT_OBJECT_SELECTIONREMOVE=0x8008\r\nEVENT_OBJECT_SELECTIONWITHIN=0x8009\r\nEVENT_OBJECT_STATECHANGE=0x800a\r\nEVENT_OBJECT_LOCATIONCHANGE=0x800b\r\nEVENT_OBJECT_NAMECHANGE=0x800c\r\nEVENT_OBJECT_DESCRIPTIONCHANGE=0x800d\r\nEVENT_OBJECT_VALUECHANGE=0x800e\r\nEVENT_OBJECT_PARENTCHANGE=0x800f\r\nEVENT_OBJECT_HELPCHANGE=0x8010\r\nEVENT_OBJECT_DEFACTIONCHANGE=0x8011\r\nEVENT_OBJECT_ACCELERATORCHANGE=0x8012\r\nEVENT_OBJECT_LIVEREGIONCHANGED=0x8019\r\nEVENT_SYSTEM_DESKTOPSWITCH=0x20\r\nEVENT_OBJECT_INVOKED=0x8013\r\nEVENT_OBJECT_TEXTSELECTIONCHANGED=0x8014\r\nEVENT_OBJECT_CONTENTSCROLLED=0x8015\r\n\r\nEVENT_CONSOLE_CARET=0x4001\r\nEVENT_CONSOLE_UPDATE_REGION=0x4002\r\nEVENT_CONSOLE_UPDATE_SIMPLE=0x4003\r\nEVENT_CONSOLE_UPDATE_SCROLL=0x4004\r\nEVENT_CONSOLE_LAYOUT=0x4005\r\nEVENT_CONSOLE_START_APPLICATION=0x4006\r\nEVENT_CONSOLE_END_APPLICATION=0x4007\r\n#IAccessible Child IDs\r\nCHILDID_SELF=0\r\n#IAccessible Object 
IDs\r\nOBJID_WINDOW=0\r\nOBJID_SYSMENU=-1\r\nOBJID_TITLEBAR=-2\r\nOBJID_MENU=-3\r\nOBJID_CLIENT=-4\r\nOBJID_VSCROLL=-5\r\nOBJID_HSCROLL=-6\r\nOBJID_SIZEGRIP=-7\r\nOBJID_CARET=-8\r\nOBJID_CURSOR=-9\r\nOBJID_ALERT=-10\r\nOBJID_SOUND=-11\r\nOBJID_NATIVEOM=-16\r\n\r\n# ShowWindow() commands\r\nSW_HIDE = 0\r\nSW_SHOWNORMAL = 1\r\n\r\n# RedrawWindow() flags\r\nRDW_INVALIDATE = 0x0001\r\nRDW_UPDATENOW = 0x0100\r\n# MsgWaitForMultipleObjectsEx\r\nQS_ALLINPUT = 0x04ff\r\nMWMO_ALERTABLE = 0x0002\r\n\r\ndef setSystemScreenReaderFlag(val):\r\n\tuser32.SystemParametersInfoW(SPI_SETSCREENREADER,val,0,SPIF_UPDATEINIFILE|SPIF_SENDCHANGE)\r\n\r\ndef getSystemScreenReaderFlag():\r\n\tval = BOOL()\r\n\tuser32.SystemParametersInfoW(SPI_GETSCREENREADER, 0, byref(val), 0)\r\n\treturn bool(val.value)\r\n\r\ndef LOBYTE(word):\r\n\treturn word&0xFF\r\n \r\ndef HIBYTE(word):\r\n\treturn word>>8\r\n\r\ndef MAKEWORD(lo,hi):\r\n\treturn (hi<<8)+lo\r\n\r\ndef LOWORD(long):\r\n\treturn long&0xFFFF\r\n\r\ndef HIWORD(long):\r\n\treturn long>>16\r\n\r\ndef GET_X_LPARAM(lp):\r\n\treturn c_short(LOWORD(lp)).value\r\n\r\ndef GET_Y_LPARAM(lp):\r\n\treturn c_short(HIWORD(lp)).value\r\n\r\ndef MAKELONG(lo,hi):\r\n\treturn (hi<<16)+lo\r\n\r\ndef waitMessage():\r\n\treturn user32.WaitMessage()\r\n\r\ndef getMessage(*args):\r\n\treturn user32.GetMessageW(*args)\r\n\r\ndef translateMessage(*args):\r\n\treturn user32.TranslateMessage(*args)\r\n\r\ndef dispatchMessage(*args):\r\n\treturn user32.DispatchMessageW(*args)\r\n\r\ndef peekMessage(*args):\r\n\ttry:\r\n\t\tres=user32.PeekMessageW(*args)\r\n\texcept:\r\n\t\tres=0\r\n\treturn res\r\n\r\ndef registerWindowMessage(name):\r\n\treturn user32.RegisterWindowMessageW(name)\r\n\r\ndef getAsyncKeyState(v):\r\n\treturn user32.GetAsyncKeyState(v)\r\n\r\ndef getKeyState(v):\r\n\treturn user32.GetKeyState(v)\r\n\r\ndef isWindow(hwnd):\r\n\treturn user32.IsWindow(hwnd)\r\n\r\ndef isDescendantWindow(parentHwnd,childHwnd):\r\n\tif (parentHwnd==childHwnd) or user32.IsChild(parentHwnd,childHwnd):\r\n\t\treturn True\r\n\telse:\r\n\t\treturn False\r\n\r\ndef getForegroundWindow():\r\n\treturn user32.GetForegroundWindow()\r\n\r\ndef setForegroundWindow(hwnd):\r\n\tuser32.SetForegroundWindow(hwnd)\r\n\r\ndef setFocus(hwnd):\r\n\tuser32.SetFocus(hwnd)\r\n\r\ndef getDesktopWindow():\r\n\treturn user32.GetDesktopWindow()\r\n\r\ndef getControlID(hwnd):\r\n\treturn user32.GetWindowLongW(hwnd,GWL_ID)\r\n\r\n\r\ndef getClientRect(hwnd):\r\n\treturn user32.GetClientRect(hwnd)\r\n\r\nHWINEVENTHOOK=HANDLE\r\n\r\nWINEVENTPROC=WINFUNCTYPE(None,HWINEVENTHOOK,DWORD,HWND,c_long,c_long,DWORD,DWORD)\r\n\r\ndef setWinEventHook(*args):\r\n\t\treturn user32.SetWinEventHook(*args)\r\n\r\ndef unhookWinEvent(*args):\r\n\treturn user32.UnhookWinEvent(*args)\r\n\r\ndef sendMessage(hwnd,msg,param1,param2):\r\n\treturn user32.SendMessageW(hwnd,msg,param1,param2)\r\n\r\ndef getWindowThreadProcessID(hwnd):\r\n\tprocessID=c_int()\r\n\tthreadID=user32.GetWindowThreadProcessId(hwnd,byref(processID))\r\n\treturn (processID.value,threadID)\r\n\r\ndef getClassName(window):\r\n\tbuf=create_unicode_buffer(256)\r\n\tuser32.GetClassNameW(window,buf,255)\r\n\treturn buf.value\r\n\r\ndef keybd_event(*args):\r\n\treturn user32.keybd_event(*args)\r\n\r\ndef mouse_event(*args):\r\n\treturn user32.mouse_event(*args)\r\n\r\ndef getAncestor(hwnd,flags):\r\n\treturn user32.GetAncestor(hwnd,flags)\r\n\r\ntry:\r\n\t# Windows >= Vista\r\n\t_getCursorPos = user32.GetPhysicalCursorPos\r\n\t_setCursorPos = user32.SetPhysicalCursorPos\r\nexcept 
AttributeError:\r\n\t_getCursorPos = user32.GetCursorPos\r\n\t_setCursorPos = user32.SetCursorPos\r\n\r\ndef setCursorPos(x,y):\r\n\t_setCursorPos(x,y)\r\n\r\ndef getCursorPos():\r\n\tpoint=POINT()\r\n\t_getCursorPos(byref(point))\r\n\treturn [point.x,point.y]\r\n\r\ndef getCaretPos():\r\n\tpoint=POINT()\r\n\tuser32.GetCaretPos(byref(point))\r\n\treturn [point.x,point.y]\r\n\r\ndef getTopWindow(hwnd):\r\n\treturn user32.GetTopWindow(hwnd)\r\n\r\ndef getWindowText(hwnd):\r\n\tbuf=create_unicode_buffer(1024)\r\n\tuser32.InternalGetWindowText(hwnd,buf,1023)\r\n\treturn buf.value\r\n\r\ndef getWindow(window,relation):\r\n\treturn user32.GetWindow(window,relation)\r\n\r\ndef isWindowVisible(window):\r\n\treturn bool(user32.IsWindowVisible(window))\r\n\r\ndef isWindowEnabled(window):\r\n\treturn bool(user32.IsWindowEnabled(window))\r\n\r\ndef getGUIThreadInfo(threadID):\r\n\tinfo=GUITHREADINFO(cbSize=sizeof(GUITHREADINFO))\r\n\tuser32.GetGUIThreadInfo(threadID,byref(info))\r\n\treturn info\r\n\r\ndef getWindowStyle(hwnd):\r\n\treturn user32.GetWindowLongW(hwnd,GWL_STYLE)\r\n\r\ndef getPreviousWindow(hwnd):\r\n\ttry:\r\n\t\treturn user32.GetWindow(hwnd,GW_HWNDPREV)\r\n\texcept WindowsError:\r\n\t\treturn 0\r\n\r\ndef getKeyboardLayout(idThread=0):\r\n\treturn user32.GetKeyboardLayout(idThread)\r\n\r\ndef RedrawWindow(hwnd, rcUpdate, rgnUpdate, flags):\r\n\treturn user32.RedrawWindow(hwnd, byref(rcUpdate), rgnUpdate, flags)\r\n\r\ndef getKeyNameText(scanCode,extended):\r\n\tbuf=create_unicode_buffer(32)\r\n\tuser32.GetKeyNameTextW((scanCode<<16)|(extended<<24),buf,31)\r\n\treturn buf.value\r\n\r\ndef FindWindow(className, windowName):\r\n\tres = user32.FindWindowW(className, windowName)\r\n\tif res == 0:\r\n\t\traise WinError()\r\n\treturn res\r\n\r\nMB_RETRYCANCEL=5\r\nMB_ICONERROR=0x10\r\nMB_SYSTEMMODAL=0x1000\r\nIDRETRY=4\r\nIDCANCEL=3\r\n\r\ndef MessageBox(hwnd, text, caption, type):\r\n\tif isinstance(text, bytes):\r\n\t\ttext = text.decode('mbcs')\r\n\tif isinstance(caption, bytes):\r\n\t\tcaption = caption.decode('mbcs')\r\n\tres = user32.MessageBoxW(hwnd, text, caption, type)\r\n\tif res == 0:\r\n\t\traise WinError()\r\n\treturn res\r\n\r\ndef PostMessage(hwnd, msg, wParam, lParam):\r\n\tif not user32.PostMessageW(hwnd, msg, wParam, lParam):\r\n\t\traise WinError()\r\n\r\nuser32.VkKeyScanExW.restype = SHORT\r\ndef VkKeyScanEx(ch, hkl):\r\n\tres = user32.VkKeyScanExW(WCHAR(ch), hkl)\r\n\tif res == -1:\r\n\t\traise LookupError\r\n\treturn res >> 8, res & 0xFF\r\n\r\ndef ScreenToClient(hwnd, x, y):\r\n\tpoint = POINT(x, y)\r\n\tuser32.ScreenToClient(hwnd, byref(point))\r\n\treturn point.x, point.y\r\n\r\ndef ClientToScreen(hwnd, x, y):\r\n\tpoint = POINT(x, y)\r\n\tuser32.ClientToScreen(hwnd, byref(point))\r\n\treturn point.x, point.y\r\n\r\ndef NotifyWinEvent(event, hwnd, idObject, idChild):\r\n\tuser32.NotifyWinEvent(event, hwnd, idObject, idChild)\r\n\r\nclass STICKYKEYS(Structure):\r\n\t_fields_ = (\r\n\t\t(\"cbSize\", DWORD),\r\n\t\t(\"dwFlags\", DWORD),\r\n\t)\r\n\tdef __init__(self, **kwargs):\r\n\t\tsuper(STICKYKEYS, self).__init__(cbSize=sizeof(self), **kwargs)\r\nSKF_STICKYKEYSON = 0x00000001\r\nSKF_AUDIBLEFEEDBACK = 0x00000040\r\nSKF_TRISTATE = 0x00000080\r\nSKF_TWOKEYSOFF = 0x00000100\r\n\r\ndef getSystemStickyKeys():\r\n\tsk = STICKYKEYS()\r\n\tuser32.SystemParametersInfoW(SPI_GETSTICKYKEYS, 0, byref(sk), 0)\r\n\treturn sk\r\n\r\n\r\n# START SENDINPUT TYPE DECLARATIONS\r\nPUL = POINTER(c_ulong)\r\nclass KeyBdInput(Structure):\r\n _fields_ = [(\"wVk\", c_ushort),\r\n (\"wScan\", 
c_ushort),\r\n (\"dwFlags\", c_ulong),\r\n (\"time\", c_ulong),\r\n (\"dwExtraInfo\", PUL)]\r\n\r\nclass HardwareInput(Structure):\r\n _fields_ = [(\"uMsg\", c_ulong),\r\n (\"wParamL\", c_short),\r\n (\"wParamH\", c_ushort)]\r\n\r\nclass MouseInput(Structure):\r\n _fields_ = [(\"dx\", c_long),\r\n (\"dy\", c_long),\r\n (\"mouseData\", c_ulong),\r\n (\"dwFlags\", c_ulong),\r\n (\"time\",c_ulong),\r\n (\"dwExtraInfo\", PUL)]\r\n\r\nclass Input_I(Union):\r\n _fields_ = [(\"ki\", KeyBdInput),\r\n (\"mi\", MouseInput),\r\n (\"hi\", HardwareInput)]\r\n\r\nclass Input(Structure):\r\n _fields_ = [(\"type\", c_ulong),\r\n (\"ii\", Input_I)]\r\n\r\nINPUT_KEYBOARD = 1\r\nKEYEVENTF_KEYUP = 0x0002\r\nKEYEVENTF_UNICODE = 0x04\r\n# END SENDINPUT TYPE DECLARATIONS\r\n\r\ndef SendInput(inputs):\r\n\tn = len(inputs)\r\n\tarr = (Input * n)(*inputs)\r\n\tuser32.SendInput(n, arr, sizeof(Input))\r\n", "path": "source/winUser.py" } ]
diff --git a/source/nvda.pyw b/source/nvda.pyw index 80d158c3d24..a37f5f0ceda 100755 --- a/source/nvda.pyw +++ b/source/nvda.pyw @@ -74,7 +74,7 @@ globalVars.startTime=time.time() # Check OS version requirements import winVersion if not winVersion.isSupportedOS(): - winUser.MessageBox(0, unicode(ctypes.FormatError(winUser.ERROR_OLD_WIN_VERSION)), None, winUser.MB_ICONERROR) + winUser.MessageBox(0, ctypes.FormatError(winUser.ERROR_OLD_WIN_VERSION), None, winUser.MB_ICONERROR) sys.exit(1) def decodeMbcs(string): diff --git a/source/winUser.py b/source/winUser.py index a8ad3091ce4..ecd81bc5185 100644 --- a/source/winUser.py +++ b/source/winUser.py @@ -514,6 +514,10 @@ def FindWindow(className, windowName): IDCANCEL=3 def MessageBox(hwnd, text, caption, type): + if isinstance(text, bytes): + text = text.decode('mbcs') + if isinstance(caption, bytes): + caption = caption.decode('mbcs') res = user32.MessageBoxW(hwnd, text, caption, type) if res == 0: raise WinError() diff --git a/user_docs/en/changes.t2t b/user_docs/en/changes.t2t index 152ab2b490c..d229d12c0dd 100644 --- a/user_docs/en/changes.t2t +++ b/user_docs/en/changes.t2t @@ -22,6 +22,7 @@ What's New in NVDA - When NVDA is set to languages such as Kirgyz, Mongolian or Macedonian, it no longer shows a dialog on start-up warning that the language is not supported by the Operating System. (#8064) - Moving the mouse to the navigator object will now much more accurately move the mouse to the browse mode position in Mozilla Firefox, Google Chrome and Acrobat Reader DC. (#6460) - Interacting with combo boxes on the web in Firefox, Chrome and Internet Explorer has been improved. (#8664) +- If running on the Japanese version of Windows XP or Vista, NVDA now displays the alert of OS version requirements as expected. (#8771) == Changes for Developers ==
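For reference, below is a minimal sketch of the decoding approach the diff above applies, assuming a Windows-only environment: byte strings are decoded with the active ANSI code page (`mbcs`) before being passed to the wide-character `MessageBoxW` API. The helper names `to_text` and `show_old_windows_error` are illustrative assumptions, not part of NVDA; only the `decode('mbcs')` call, the `MB_ICONERROR` flag, and the `ERROR_OLD_WIN_VERSION` constant (1150) come from the sources above.

```python
import sys
import ctypes

ERROR_OLD_WIN_VERSION = 1150  # same value as winUser.ERROR_OLD_WIN_VERSION above
MB_ICONERROR = 0x10


def to_text(value):
    """Return text, decoding byte strings with the Windows ANSI code page."""
    if isinstance(value, bytes):
        # "mbcs" is the active ANSI code page (e.g. cp1251 on Russian Windows),
        # so a localized FormatError() message decodes without UnicodeDecodeError.
        # Note: the "mbcs" codec only exists on Windows.
        return value.decode("mbcs")
    return value


def show_old_windows_error(hwnd=0, caption=None):
    # Illustrative helper (not an NVDA function): format the localized error
    # message and display it via the wide-character MessageBoxW API.
    text = to_text(ctypes.FormatError(ERROR_OLD_WIN_VERSION))
    ctypes.windll.user32.MessageBoxW(hwnd, text, caption, MB_ICONERROR)


if __name__ == "__main__" and sys.platform == "win32":
    show_old_windows_error()
```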
modal-labs__modal-examples-556
apply #556 manually

I manually applied the patch from #556. Not sure what's up with that PR
[ { "content": "# # Hello, world!\n#\n# This is a trivial example of a Modal function, but it illustrates a few features:\n#\n# * You can print things to stdout and stderr.\n# * You can return data.\n# * You can map over a function.\n#\n# ## Import Modal and define the app\n#\n# Let's start with the top level imports.\n# You need to import Modal and define the app.\n# A stub is an object that defines everything that will be run.\n\nimport sys\n\nimport modal\n\nstub = modal.Stub(\"example-hello-world\")\n\n# ## Defining a function\n#\n# Here we define a Modal function using the `modal.function` decorator.\n# The body of the function will automatically be run remotely.\n# This particular function is pretty silly: it just prints \"hello\"\n# and \"world\" alternatingly to standard out and standard error.\n\n\[email protected]()\ndef f(i):\n if i % 2 == 0:\n print(\"hello\", i)\n else:\n print(\"world\", i, file=sys.stderr)\n\n return i * i\n\n\n# ## Running it\n#\n# Finally, let's actually invoke it.\n# We put this invocation code inside a `@stub.local_entrypoint()`.\n# This is because this module will be imported in the cloud, and we don't want\n# this code to be executed a second time in the cloud.\n#\n# Run `modal run hello_world.py` and the `@stub.local_entrypoint()` decorator will handle\n# starting the Modal app and then executing the wrapped function body.\n#\n# Inside the `main()` function body, we are calling the function `f` in three ways:\n#\n# 1 As a simple local call, `f(1000)`\n# 2. As a simple *remote* call `f.remote(1000)`\n# 3. By mapping over the integers `0..19`\n\n\[email protected]_entrypoint()\ndef main():\n # Call the function locally.\n print(f.local(1000))\n\n # Call the function remotely.\n print(f.remote(1000))\n\n # Parallel map.\n total = 0\n for ret in f.map(range(20)):\n total += ret\n\n print(total)\n\n\n# ## What happens?\n#\n# When you do `.remote` on function `f`, Modal will execute `f` **in the cloud,**\n# not locally on your computer. It will take the code, put it inside a\n# container, run it, and stream all the output back to your local\n# computer.\n#\n# Try doing one of these things next.\n#\n# ### Change the code and run again\n#\n# For instance, change the `print` statement in the function `f`.\n# You can see that the latest code is always run.\n#\n# Modal's goal is to make running code in the cloud feel like you're\n# running code locally. You don't need to run any commands to rebuild,\n# push containers, or go to a web UI to download logs.\n#\n# ### Map over a larger dataset\n#\n# Change the map range from 20 to some large number. You can see that\n# Modal will create and run more containers in parallel.\n#\n# The function `f` is obviously silly and doesn't do much, but you could\n# imagine something more significant, like:\n#\n# * Training a machine learning model\n# * Transcoding media\n# * Backtesting a trading algorithm.\n#\n# Modal lets you parallelize that operation trivially by running hundreds or\n# thousands of containers in the cloud.\n", "path": "01_getting_started/hello_world.py" } ]
[ { "content": "# # Hello, world!\n#\n# This is a trivial example of a Modal function, but it illustrates a few features:\n#\n# * You can print things to stdout and stderr.\n# * You can return data.\n# * You can map over a function.\n#\n# ## Import Modal and define the app\n#\n# Let's start with the top level imports.\n# You need to import Modal and define the app.\n# A stub is an object that defines everything that will be run.\n\nimport sys\n\nimport modal\n\nstub = modal.Stub(\"example-hello-world\")\n\n# ## Defining a function\n#\n# Here we define a Modal function using the `modal.function` decorator.\n# The body of the function will automatically be run remotely.\n# This particular function is pretty silly: it just prints \"hello\"\n# and \"world\" alternatingly to standard out and standard error.\n\n\[email protected]()\ndef f(i):\n if i % 2 == 0:\n print(\"hello\", i)\n else:\n print(\"world\", i, file=sys.stderr)\n\n return i * i\n\n\n# ## Running it\n#\n# Finally, let's actually invoke it.\n# We put this invocation code inside a `@stub.local_entrypoint()`.\n# This is because this module will be imported in the cloud, and we don't want\n# this code to be executed a second time in the cloud.\n#\n# Run `modal run hello_world.py` and the `@stub.local_entrypoint()` decorator will handle\n# starting the Modal app and then executing the wrapped function body.\n#\n# Inside the `main()` function body, we are calling the function `f` in three ways:\n#\n# 1 As a simple local call, `f.local(1000)`\n# 2. As a simple *remote* call `f.remote(1000)`\n# 3. By mapping over the integers `0..19`\n\n\[email protected]_entrypoint()\ndef main():\n # Call the function locally.\n print(f.local(1000))\n\n # Call the function remotely.\n print(f.remote(1000))\n\n # Parallel map.\n total = 0\n for ret in f.map(range(20)):\n total += ret\n\n print(total)\n\n\n# ## What happens?\n#\n# When you do `.remote` on function `f`, Modal will execute `f` **in the cloud,**\n# not locally on your computer. It will take the code, put it inside a\n# container, run it, and stream all the output back to your local\n# computer.\n#\n# Try doing one of these things next.\n#\n# ### Change the code and run again\n#\n# For instance, change the `print` statement in the function `f`.\n# You can see that the latest code is always run.\n#\n# Modal's goal is to make running code in the cloud feel like you're\n# running code locally. You don't need to run any commands to rebuild,\n# push containers, or go to a web UI to download logs.\n#\n# ### Map over a larger dataset\n#\n# Change the map range from 20 to some large number. You can see that\n# Modal will create and run more containers in parallel.\n#\n# The function `f` is obviously silly and doesn't do much, but you could\n# imagine something more significant, like:\n#\n# * Training a machine learning model\n# * Transcoding media\n# * Backtesting a trading algorithm.\n#\n# Modal lets you parallelize that operation trivially by running hundreds or\n# thousands of containers in the cloud.\n", "path": "01_getting_started/hello_world.py" } ]
diff --git a/01_getting_started/hello_world.py b/01_getting_started/hello_world.py index 1ef43d63e..a3fe60329 100644 --- a/01_getting_started/hello_world.py +++ b/01_getting_started/hello_world.py @@ -48,7 +48,7 @@ def f(i): # # Inside the `main()` function body, we are calling the function `f` in three ways: # -# 1 As a simple local call, `f(1000)` +# 1 As a simple local call, `f.local(1000)` # 2. As a simple *remote* call `f.remote(1000)` # 3. By mapping over the integers `0..19` diff --git a/11_notebooks/basic.ipynb b/11_notebooks/basic.ipynb index 739b52181..0e77e9893 100644 --- a/11_notebooks/basic.ipynb +++ b/11_notebooks/basic.ipynb @@ -91,7 +91,7 @@ "\n", "\n", "with stub.run():\n", - " print(quadruple(100))\n", + " print(quadruple.local(100))\n", " print(quadruple.remote(100)) # run remotely\n", " result = quadruple.remote(10_000_000)" ]
sunpy__sunpy-1818
convert_data_to_pixel issue

This line in convert_data_to_pixel:

    pixelx = (x - crval[0]) / cdelt[0] + (crpix[1] - 1)

should be:

    pixelx = (x - crval[0]) / cdelt[0] + (crpix[0] - 1)

I found the problem using 0.6.3, but looking at the source, it persists in 0.6.4.
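A quick way to see the effect (a minimal sketch with made-up WCS values, not sunpy code): a data→pixel→data round trip only closes on the x axis when crpix[0] happens to equal crpix[1].

```python
import numpy as np

# Hypothetical WCS values, chosen so crpix[0] != crpix[1] and the bug shows up.
cdelt = np.array([0.6, 0.6])       # pixel scale (arcsec/pixel)
crpix = np.array([512.5, 300.5])   # reference pixel (counted from 1), deliberately asymmetric
crval = np.array([10.0, -20.0])    # data coordinate at the reference pixel (arcsec)

def data_to_pixel_buggy(x, y):
    pixelx = (x - crval[0]) / cdelt[0] + (crpix[1] - 1)   # uses crpix[1] for the x axis
    pixely = (y - crval[1]) / cdelt[1] + (crpix[1] - 1)
    return pixelx, pixely

def data_to_pixel_fixed(x, y):
    pixelx = (x - crval[0]) / cdelt[0] + (crpix[0] - 1)   # correct index
    pixely = (y - crval[1]) / cdelt[1] + (crpix[1] - 1)
    return pixelx, pixely

def pixel_to_data(px, py):
    coordx = (px - (crpix[0] - 1)) * cdelt[0] + crval[0]
    coordy = (py - (crpix[1] - 1)) * cdelt[1] + crval[1]
    return coordx, coordy

x, y = 25.0, -5.0
print(pixel_to_data(*data_to_pixel_buggy(x, y)))  # x is off by (crpix[1] - crpix[0]) * cdelt[0] = -127.2
print(pixel_to_data(*data_to_pixel_fixed(x, y)))  # recovers (25.0, -5.0) up to floating point
```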
[ { "content": "from __future__ import absolute_import\n\nimport numpy as np\nimport sunpy.sun as sun\n\nimport astropy.units as u\n\nrsun_meters = sun.constants.radius.si.value\n\n__all__ = ['_convert_angle_units', 'convert_pixel_to_data', 'convert_hpc_hg',\n 'convert_data_to_pixel', 'convert_hpc_hcc', 'convert_hcc_hpc',\n 'convert_hcc_hg', 'convert_hg_hcc', 'proj_tan',\n 'convert_hg_hpc', 'convert_to_coord',\n 'get_center']\n\ndef _convert_angle_units(unit='arcsec'):\n \"\"\"Determine the conversion factor between the data units and radians.\"\"\"\n if unit == 'degrees':\n return np.deg2rad(1)\n elif unit == 'arcmin':\n return np.deg2rad(1) / 60.0\n elif unit == 'arcsec':\n return np.deg2rad(1) / (60 * 60.0)\n elif unit == 'mas':\n return np.deg2rad(1) / (60 * 60 * 1000.0)\n else:\n raise ValueError(\"The units specified are either invalid or is not supported at this time.\")\n\ndef convert_pixel_to_data(size, scale, reference_pixel,\n reference_coordinate, x=None, y=None):\n \"\"\"Calculate the data coordinate for particular pixel indices.\n\n Parameters\n ----------\n size : 2d ndarray\n Number of pixels in width and height.\n scale : 2d ndarray\n The size of a pixel (dx,dy) in data coordinates (equivalent to WCS/CDELT)\n reference_pixel : 2d ndarray\n The reference pixel (x,y) at which the reference coordinate is given (equivalent to WCS/CRPIX)\n reference_coordinate : 2d ndarray\n The data coordinate (x, y) as measured at the reference pixel (equivalent to WCS/CRVAL)\n x,y : int or ndarray\n The pixel values at which data coordinates are requested. If none are given,\n returns coordinates for every pixel.\n\n Returns\n -------\n out : ndarray\n The data coordinates at pixel (x,y).\n\n Notes\n -----\n This function assumes a gnomic projection which is correct for a detector at the focus\n of an optic observing the Sun.\n\n Examples\n --------\n\n \"\"\"\n cdelt = np.array(scale)\n crpix = np.array(reference_pixel)\n crval = np.array(reference_coordinate)\n\n # first assume that coord is just [x,y]\n if (x is None) and (y is None):\n x, y = np.meshgrid(np.arange(size[0]), np.arange(size[1]))\n\n # note that crpix[] counts pixels starting at 1\n\n coordx = (x - (crpix[0] - 1)) * cdelt[0] + crval[0]\n coordy = (y - (crpix[1] - 1)) * cdelt[1] + crval[1]\n\n # Correct for Gnomic projection\n coordx, coordy = proj_tan(coordx, coordy)\n\n return coordx, coordy\n\ndef get_center(size, scale, reference_pixel, reference_coordinate):\n \"\"\"Returns the center of the image in data coordinates.\n\n Parameters\n ----------\n size : 2d ndarray\n Number of pixels in width and height.\n scale : 2d ndarray\n The size of a pixel (dx,dy) in data coordinates (equivalent to WCS/CDELT)\n reference_pixel : 2d ndarray\n The reference pixel (x,y) at which the reference coordinate is given (equivalent to WCS/CRPIX)\n reference_coordinate : 2d ndarray\n The data coordinate (x, y) as measured at the reference pixel (equivalent to WCS/CRVAL)\n\n Returns\n -------\n out : ndarray\n The data coordinates\n\n Examples\n --------\n\n \"\"\"\n return scale * (size - 1 * u.pix) / 2. 
+ reference_coordinate - (reference_pixel - 1 * u.pix) * scale\n\ndef convert_data_to_pixel(x, y, scale, reference_pixel, reference_coordinate):\n \"\"\"Calculate the pixel indices for a given data coordinate.\n\n Parameters\n ----------\n x, y : float\n Data coordinate in same units as reference coordinate\n scale : 2d ndarray\n The size of a pixel (dx,dy) in data coordinates (equivalent to WCS/CDELT)\n reference_pixel : 2d ndarray\n The reference pixel (x,y) at which the reference coordinate is given (equivalent to WCS/CRPIX)\n reference_coordinate : 2d ndarray\n The data coordinate (x, y) as measured at the reference pixel (equivalent to WCS/CRVAL)\n\n Returns\n -------\n out : ndarray\n The pixel coordinates (x,y) at that data coordinate.\n\n Examples\n --------\n\n \"\"\"\n\n # TODO: Needs to check what coordinate system the data is given in\n cdelt = np.array(scale)\n crpix = np.array(reference_pixel)\n crval = np.array(reference_coordinate)\n # De-apply any tabular projections.\n # coord = inv_proj_tan(coord)\n\n # note that crpix[] counts pixels starting at 1\n pixelx = (x - crval[0]) / cdelt[0] + (crpix[1] - 1)\n pixely = (y - crval[1]) / cdelt[1] + (crpix[1] - 1)\n\n return pixelx, pixely\n\ndef convert_hpc_hcc(x, y, dsun_meters=None, angle_units='arcsec', z=False):\n \"\"\"Converts from Helioprojective-Cartesian (HPC) coordinates into\n Heliocentric-Cartesian (HCC) coordinates. Returns all three dimensions, x, y, z in\n meters.\n\n Parameters\n ----------\n x, y : float\n Data coordinate in angle units (default is arcsec)\n dsun_meters : float\n Distance from the observer to the Sun in meters. Default is 1 AU.\n angle_units : str\n Units of the data coordinates (e.g. arcsec, arcmin, degrees). Default is arcsec.\n z : Bool\n If true return the z coordinate as well.\n\n Returns\n -------\n out : ndarray\n The data coordinates (x,y,z) in heliocentric cartesian coordinates in meters.\n\n Notes\n -----\n Implements Eq. (15) of Thompson (2006), A&A, 449, 791.\n\n Examples\n --------\n >>> import sunpy.wcs\n >>> sunpy.wcs.convert_hpc_hcc(40.0, 32.0, z=True)\n (28876152.176423457, 23100922.071266972, 694524220.8157959)\n\n \"\"\"\n c = np.array([_convert_angle_units(unit=angle_units),\n _convert_angle_units(unit=angle_units)])\n\n cosx = np.cos(x * c[0])\n sinx = np.sin(x * c[0])\n cosy = np.cos(y * c[1])\n siny = np.sin(y * c[1])\n\n if dsun_meters is None:\n dsun_meters = sun.constants.au.si.value\n elif isinstance(dsun_meters, u.Quantity):\n dsun_meters = dsun_meters.si.value\n\n q = dsun_meters * cosy * cosx\n distance = q ** 2 - dsun_meters ** 2 + rsun_meters ** 2\n # distance[np.where(distance < 0)] = np.sqrt(-1)\n distance = q - np.sqrt(distance)\n\n rx = distance * cosy * sinx\n ry = distance * siny\n rz = dsun_meters - distance * cosy * cosx\n\n\n if np.all(z == True):\n return rx, ry, rz\n else:\n return rx, ry\n\ndef convert_hcc_hpc(x, y, dsun_meters=None, angle_units='arcsec'):\n \"\"\"Convert Heliocentric-Cartesian (HCC) to angular\n Helioprojective-Cartesian (HPC) coordinates (in degrees).\n\n Parameters\n ----------\n x, y : float (meters)\n Data coordinate in meters.\n dsun_meters : float\n Distance from the observer to the Sun in meters. Default is 1 AU.\n angle_units : str\n Units of the data coordinates (e.g. arcsec, arcmin, degrees). Default is arcsec.\n\n Returns\n -------\n out : ndarray\n The data coordinates (x,y) in helioprojective cartesian coordinates in arcsec.\n\n Notes\n -----\n Implements Eq. 
(16) of Thompson (2006), A&A, 449, 791.\n\n Examples\n --------\n >>> import sunpy.wcs\n >>> sunpy.wcs.convert_hcc_hpc(28748691, 22998953)\n (39.823439773829705, 31.858751644835717)\n\n \"\"\"\n\n # Calculate the z coordinate by assuming that it is on the surface of the Sun\n z = np.sqrt(rsun_meters ** 2 - x ** 2 - y ** 2)\n\n if dsun_meters is None:\n dsun_meters = sun.constants.au.si.value\n elif isinstance(dsun_meters, u.Quantity):\n dsun_meters = dsun_meters.si.value\n\n zeta = dsun_meters - z\n distance = np.sqrt(x**2 + y**2 + zeta**2)\n hpcx = np.rad2deg(np.arctan2(x, zeta))\n hpcy = np.rad2deg(np.arcsin(y / distance))\n\n if angle_units == 'arcsec':\n hpcx = 60 * 60 * hpcx\n hpcy = 60 * 60 * hpcy\n elif angle_units == 'arcmin':\n hpcx = 60 * hpcx\n hpcy = 60 * hpcy\n\n return hpcx, hpcy\n\ndef convert_hcc_hg(x, y, z=None, b0_deg=0, l0_deg=0, radius=False):\n \"\"\"Convert from Heliocentric-Cartesian (HCC) (given in meters) to\n Stonyhurst Heliographic coordinates (HG) given in degrees, with\n radial output in meters.\n\n Parameters\n ----------\n x, y : float (meters)\n Data coordinate in meters.\n z : float (meters)\n Data coordinate in meters. If None, then the z-coordinate is assumed\n to be on the Sun.\n b0_deg : float (degrees)\n Tilt of the solar North rotational axis toward the observer\n (heliographic latitude of the observer). Usually given as SOLAR_B0,\n HGLT_OBS, or CRLT_OBS. Default is 0.\n l0_deg : float (degrees)\n Carrington longitude of central meridian as seen from Earth. Default is 0.\n radius : Bool\n If true, forces the output to return a triple of (lon, lat, r). If\n false, return (lon, lat) only.\n\n Returns\n -------\n out : ndarray (degrees, meters)\n if radius is false, return the data coordinates (lon, lat). If\n radius=True, return the data coordinates (lon, lat, r). The quantities\n (lon, lat) are the heliographic coordinates in degrees. The quantity\n 'r' is the heliographic radius in meters.\n\n Notes\n -----\n Implements Eq. (12) of Thompson (2006), A&A, 449, 791.\n\n Examples\n --------\n >>> import sunpy.wcs\n >>> sunpy.wcs.convert_hcc_hg(230000.0,45000000.0,\n ... z=695508000.0 + 8000000.0, radius=True)\n (0.01873188196651189, 3.6599471896203317, 704945784.41465974)\n \"\"\"\n if z is None:\n z = np.sqrt(rsun_meters**2 - x**2 - y**2)\n\n cosb = np.cos(np.deg2rad(b0_deg))\n sinb = np.sin(np.deg2rad(b0_deg))\n\n hecr = np.sqrt(x**2 + y**2 + z**2)\n hgln = np.arctan2(x, z * cosb - y * sinb) + np.deg2rad(l0_deg)\n hglt = np.arcsin((y * cosb + z * sinb) / hecr)\n\n if radius:\n return np.rad2deg(hgln), np.rad2deg(hglt), hecr\n else:\n return np.rad2deg(hgln), np.rad2deg(hglt)\n\ndef convert_hg_hcc(hglon_deg, hglat_deg, b0_deg=0, l0_deg=0, occultation=False,\n z=False, r=rsun_meters):\n \"\"\"Convert from Stonyhurst Heliographic coordinates (given in degrees) to\n Heliocentric-Cartesian coordinates (given in meters).\n\n Parameters\n ----------\n hglon_deg, hglat_deg : float (degrees)\n Heliographic longitude and latitude in degrees.\n b0_deg : float (degrees)\n Tilt of the solar North rotational axis toward the observer\n (heliographic latitude of the observer). Usually given as SOLAR_B0,\n HGLT_OBS, or CRLT_OBS. Default is 0.\n l0_deg : float (degrees)\n Carrington longitude of central meridian as seen from Earth. Default is 0.\n occultation : Bool\n If true set all points behind the Sun (e.g. 
not visible) to Nan.\n z : Bool\n If true return the z coordinate as well.\n r : float (meters)\n Heliographic radius\n\n Returns\n -------\n out : ndarray (meters)\n The data coordinates in Heliocentric-Cartesian coordinates.\n\n Notes\n -----\n Implements Eq. (11) of Thompson (2006), A&A, 449, 791, with the default\n assumption that the value 'r' in Eq. (11) is identical to the radius of the\n Sun.\n\n Examples\n --------\n >>> import sunpy.wcs\n >>> sunpy.wcs.convert_hg_hcc(0.01873188196651189, 3.6599471896203317,\n ... r=704945784.41465974, z=True)\n (230000.0, 45000000.0, 703508000.0)\n \"\"\"\n lon = np.deg2rad(hglon_deg)\n lat = np.deg2rad(hglat_deg)\n\n cosb = np.cos(np.deg2rad(b0_deg))\n sinb = np.sin(np.deg2rad(b0_deg))\n\n lon = lon - np.deg2rad(l0_deg)\n\n cosx = np.cos(lon)\n sinx = np.sin(lon)\n cosy = np.cos(lat)\n siny = np.sin(lat)\n\n # Perform the conversion.\n x = r * cosy * sinx\n y = r * (siny * cosb - cosy * cosx * sinb)\n zz = r * (siny * sinb + cosy * cosx * cosb)\n\n if occultation:\n x[zz < 0] = np.nan\n y[zz < 0] = np.nan\n\n if np.all(z == True):\n return x, y, zz\n else:\n return x, y\n\ndef convert_hg_hpc(hglon_deg, hglat_deg, b0_deg=0, l0_deg=0, dsun_meters=None, angle_units='arcsec',\n occultation=False):\n \"\"\"Convert from Heliographic coordinates (HG) to Helioprojective-Cartesian\n (HPC).\n\n Parameters\n ----------\n hglon_deg, hglat_deg : float (degrees)\n Heliographic longitude and latitude in degrees.\n b0_deg : float (degrees)\n Tilt of the solar North rotational axis toward the observer\n (heliographic latitude of the observer). Usually given as SOLAR_B0,\n HGLT_OBS, or CRLT_OBS. Default is 0.\n l0_deg : float (degrees)\n Carrington longitude of central meridian as seen from Earth. Default is 0.\n occultation : Bool\n If true set all points behind the Sun (e.g. not visible) to Nan.\n dsun_meters : float (meters)\n Distance between the observer and the Sun.\n angle_units : str\n\n\n Returns\n -------\n out : ndarray (arcsec)\n The data coordinates (x,y) in Helioprojective-Cartesian coordinates.\n\n Notes\n -----\n Uses equations 11 and 16 in Thompson (2006), A&A, 449, 791-803.\n\n Examples\n --------\n >>> import sunpy.wcs\n >>> sunpy.wcs.convert_hg_hpc(34.0, 45.0, b0_deg=-7.064078, l0_deg=0.0)\n (380.05656560308898, 743.78281283290016)\n \"\"\"\n\n tempx, tempy = convert_hg_hcc(hglon_deg, hglat_deg, b0_deg=b0_deg, l0_deg=l0_deg, occultation=occultation)\n x, y = convert_hcc_hpc(tempx, tempy, dsun_meters=dsun_meters, angle_units=angle_units)\n return x, y\n\ndef convert_hpc_hg(x, y, b0_deg=0, l0_deg=0, dsun_meters=None, angle_units='arcsec'):\n \"\"\"Convert from Helioprojective-Cartesian (HPC) to Heliographic coordinates\n (HG) in degrees.\n\n Parameters\n ----------\n x, y : float ()\n Data coordinate in angle units.\n b0 : float (degrees)\n Tilt of the solar North rotational axis toward the observer\n (heliographic latitude of the observer). Usually given as SOLAR_B0,\n HGLT_OBS, or CRLT_OBS. Default is 0.\n l0 : float (degrees)\n Carrington longitude of central meridian as seen from Earth. Default is 0.\n dsun_meters : float (meters)\n Distance between the observer and the Sun.\n angle_units : str\n Units used for input x and y. 
Default is arcsec.\n\n Returns\n -------\n out : ndarray (degrees)\n The data coordinates (hglongitude, hglatitude) in Heliographic coordinates.\n\n Notes\n -----\n Uses equations 15 and 12 in Thompson (2006), A&A, 449, 791-803.\n\n Examples\n --------\n >>> import sunpy.wcs\n >>> sunpy.wcs.convert_hpc_hg(382, 748, b0_deg=-7.064078, l0_deg=0.0)\n (34.504653439914669, 45.443143275518182)\n \"\"\"\n tempx, tempy = convert_hpc_hcc(x, y, dsun_meters=dsun_meters, angle_units=angle_units)\n lon, lat = convert_hcc_hg(tempx, tempy, b0_deg=b0_deg, l0_deg=l0_deg)\n return lon, lat\n\ndef proj_tan(x, y, force=False):\n \"\"\"Applies the gnomonic (TAN) projection to intermediate relative\n coordinates. This function is not currently implemented!\"\"\"\n # if pixels are within 3 degrees of the Sun then skip the calculation unless\n # force is True. This applies to all sdo images so this function is just\n # here as a place holder for the future\n # TODO: write proj_tan function\n return x, y\n\ndef convert_to_coord(x, y, from_coord, to_coord, b0_deg=0, l0_deg=0, dsun_meters=None, angle_units='arcsec'):\n \"\"\"Apply a coordinate transform to coordinates. Right now can only do hpc\n to hcc to hg\"\"\"\n\n if (from_coord == 'hcc') and (to_coord == 'hg'):\n rx, ry = convert_hcc_hg(x, y, b0_deg=b0_deg, l0_deg=l0_deg)\n elif (from_coord == 'hpc') and (to_coord == 'hg'):\n rx, ry = convert_hpc_hg(x, y, b0_deg=b0_deg, l0_deg=l0_deg, dsun_meters=dsun_meters, angle_units=angle_units)\n elif (from_coord == 'hg') and (to_coord == 'hcc'):\n rx, ry = convert_hg_hcc(x, y, b0_deg=b0_deg, l0_deg=l0_deg)\n elif (from_coord == 'hcc') and (to_coord == 'hpc'):\n rx, ry = convert_hcc_hpc(x, y, dsun_meters=dsun_meters, angle_units=angle_units)\n elif (from_coord == 'hg') and (to_coord == 'hpc'):\n rx, ry = convert_hg_hpc(x, y, b0_deg=b0_deg, l0_deg=l0_deg, dsun_meters=dsun_meters, angle_units=angle_units)\n elif (from_coord == 'hpc') and (to_coord == 'hcc'):\n rx, ry = convert_hpc_hcc(x, y, dsun_meters=dsun_meters, angle_units=angle_units)\n\n return rx, ry\n", "path": "sunpy/wcs/wcs.py" } ]
[ { "content": "from __future__ import absolute_import\n\nimport numpy as np\nimport sunpy.sun as sun\n\nimport astropy.units as u\n\nrsun_meters = sun.constants.radius.si.value\n\n__all__ = ['_convert_angle_units', 'convert_pixel_to_data', 'convert_hpc_hg',\n 'convert_data_to_pixel', 'convert_hpc_hcc', 'convert_hcc_hpc',\n 'convert_hcc_hg', 'convert_hg_hcc', 'proj_tan',\n 'convert_hg_hpc', 'convert_to_coord',\n 'get_center']\n\ndef _convert_angle_units(unit='arcsec'):\n \"\"\"Determine the conversion factor between the data units and radians.\"\"\"\n if unit == 'degrees':\n return np.deg2rad(1)\n elif unit == 'arcmin':\n return np.deg2rad(1) / 60.0\n elif unit == 'arcsec':\n return np.deg2rad(1) / (60 * 60.0)\n elif unit == 'mas':\n return np.deg2rad(1) / (60 * 60 * 1000.0)\n else:\n raise ValueError(\"The units specified are either invalid or is not supported at this time.\")\n\ndef convert_pixel_to_data(size, scale, reference_pixel,\n reference_coordinate, x=None, y=None):\n \"\"\"Calculate the data coordinate for particular pixel indices.\n\n Parameters\n ----------\n size : 2d ndarray\n Number of pixels in width and height.\n scale : 2d ndarray\n The size of a pixel (dx,dy) in data coordinates (equivalent to WCS/CDELT)\n reference_pixel : 2d ndarray\n The reference pixel (x,y) at which the reference coordinate is given (equivalent to WCS/CRPIX)\n reference_coordinate : 2d ndarray\n The data coordinate (x, y) as measured at the reference pixel (equivalent to WCS/CRVAL)\n x,y : int or ndarray\n The pixel values at which data coordinates are requested. If none are given,\n returns coordinates for every pixel.\n\n Returns\n -------\n out : ndarray\n The data coordinates at pixel (x,y).\n\n Notes\n -----\n This function assumes a gnomic projection which is correct for a detector at the focus\n of an optic observing the Sun.\n\n Examples\n --------\n\n \"\"\"\n cdelt = np.array(scale)\n crpix = np.array(reference_pixel)\n crval = np.array(reference_coordinate)\n\n # first assume that coord is just [x,y]\n if (x is None) and (y is None):\n x, y = np.meshgrid(np.arange(size[0]), np.arange(size[1]))\n\n # note that crpix[] counts pixels starting at 1\n\n coordx = (x - (crpix[0] - 1)) * cdelt[0] + crval[0]\n coordy = (y - (crpix[1] - 1)) * cdelt[1] + crval[1]\n\n # Correct for Gnomic projection\n coordx, coordy = proj_tan(coordx, coordy)\n\n return coordx, coordy\n\ndef get_center(size, scale, reference_pixel, reference_coordinate):\n \"\"\"Returns the center of the image in data coordinates.\n\n Parameters\n ----------\n size : 2d ndarray\n Number of pixels in width and height.\n scale : 2d ndarray\n The size of a pixel (dx,dy) in data coordinates (equivalent to WCS/CDELT)\n reference_pixel : 2d ndarray\n The reference pixel (x,y) at which the reference coordinate is given (equivalent to WCS/CRPIX)\n reference_coordinate : 2d ndarray\n The data coordinate (x, y) as measured at the reference pixel (equivalent to WCS/CRVAL)\n\n Returns\n -------\n out : ndarray\n The data coordinates\n\n Examples\n --------\n\n \"\"\"\n return scale * (size - 1 * u.pix) / 2. 
+ reference_coordinate - (reference_pixel - 1 * u.pix) * scale\n\ndef convert_data_to_pixel(x, y, scale, reference_pixel, reference_coordinate):\n \"\"\"Calculate the pixel indices for a given data coordinate.\n\n Parameters\n ----------\n x, y : float\n Data coordinate in same units as reference coordinate\n scale : 2d ndarray\n The size of a pixel (dx,dy) in data coordinates (equivalent to WCS/CDELT)\n reference_pixel : 2d ndarray\n The reference pixel (x,y) at which the reference coordinate is given (equivalent to WCS/CRPIX)\n reference_coordinate : 2d ndarray\n The data coordinate (x, y) as measured at the reference pixel (equivalent to WCS/CRVAL)\n\n Returns\n -------\n out : ndarray\n The pixel coordinates (x,y) at that data coordinate.\n\n Examples\n --------\n\n \"\"\"\n\n # TODO: Needs to check what coordinate system the data is given in\n cdelt = np.array(scale)\n crpix = np.array(reference_pixel)\n crval = np.array(reference_coordinate)\n # De-apply any tabular projections.\n # coord = inv_proj_tan(coord)\n\n # note that crpix[] counts pixels starting at 1\n pixelx = (x - crval[0]) / cdelt[0] + (crpix[0] - 1)\n pixely = (y - crval[1]) / cdelt[1] + (crpix[1] - 1)\n\n return pixelx, pixely\n\ndef convert_hpc_hcc(x, y, dsun_meters=None, angle_units='arcsec', z=False):\n \"\"\"Converts from Helioprojective-Cartesian (HPC) coordinates into\n Heliocentric-Cartesian (HCC) coordinates. Returns all three dimensions, x, y, z in\n meters.\n\n Parameters\n ----------\n x, y : float\n Data coordinate in angle units (default is arcsec)\n dsun_meters : float\n Distance from the observer to the Sun in meters. Default is 1 AU.\n angle_units : str\n Units of the data coordinates (e.g. arcsec, arcmin, degrees). Default is arcsec.\n z : Bool\n If true return the z coordinate as well.\n\n Returns\n -------\n out : ndarray\n The data coordinates (x,y,z) in heliocentric cartesian coordinates in meters.\n\n Notes\n -----\n Implements Eq. (15) of Thompson (2006), A&A, 449, 791.\n\n Examples\n --------\n >>> import sunpy.wcs\n >>> sunpy.wcs.convert_hpc_hcc(40.0, 32.0, z=True)\n (28876152.176423457, 23100922.071266972, 694524220.8157959)\n\n \"\"\"\n c = np.array([_convert_angle_units(unit=angle_units),\n _convert_angle_units(unit=angle_units)])\n\n cosx = np.cos(x * c[0])\n sinx = np.sin(x * c[0])\n cosy = np.cos(y * c[1])\n siny = np.sin(y * c[1])\n\n if dsun_meters is None:\n dsun_meters = sun.constants.au.si.value\n elif isinstance(dsun_meters, u.Quantity):\n dsun_meters = dsun_meters.si.value\n\n q = dsun_meters * cosy * cosx\n distance = q ** 2 - dsun_meters ** 2 + rsun_meters ** 2\n # distance[np.where(distance < 0)] = np.sqrt(-1)\n distance = q - np.sqrt(distance)\n\n rx = distance * cosy * sinx\n ry = distance * siny\n rz = dsun_meters - distance * cosy * cosx\n\n\n if np.all(z == True):\n return rx, ry, rz\n else:\n return rx, ry\n\ndef convert_hcc_hpc(x, y, dsun_meters=None, angle_units='arcsec'):\n \"\"\"Convert Heliocentric-Cartesian (HCC) to angular\n Helioprojective-Cartesian (HPC) coordinates (in degrees).\n\n Parameters\n ----------\n x, y : float (meters)\n Data coordinate in meters.\n dsun_meters : float\n Distance from the observer to the Sun in meters. Default is 1 AU.\n angle_units : str\n Units of the data coordinates (e.g. arcsec, arcmin, degrees). Default is arcsec.\n\n Returns\n -------\n out : ndarray\n The data coordinates (x,y) in helioprojective cartesian coordinates in arcsec.\n\n Notes\n -----\n Implements Eq. 
(16) of Thompson (2006), A&A, 449, 791.\n\n Examples\n --------\n >>> import sunpy.wcs\n >>> sunpy.wcs.convert_hcc_hpc(28748691, 22998953)\n (39.823439773829705, 31.858751644835717)\n\n \"\"\"\n\n # Calculate the z coordinate by assuming that it is on the surface of the Sun\n z = np.sqrt(rsun_meters ** 2 - x ** 2 - y ** 2)\n\n if dsun_meters is None:\n dsun_meters = sun.constants.au.si.value\n elif isinstance(dsun_meters, u.Quantity):\n dsun_meters = dsun_meters.si.value\n\n zeta = dsun_meters - z\n distance = np.sqrt(x**2 + y**2 + zeta**2)\n hpcx = np.rad2deg(np.arctan2(x, zeta))\n hpcy = np.rad2deg(np.arcsin(y / distance))\n\n if angle_units == 'arcsec':\n hpcx = 60 * 60 * hpcx\n hpcy = 60 * 60 * hpcy\n elif angle_units == 'arcmin':\n hpcx = 60 * hpcx\n hpcy = 60 * hpcy\n\n return hpcx, hpcy\n\ndef convert_hcc_hg(x, y, z=None, b0_deg=0, l0_deg=0, radius=False):\n \"\"\"Convert from Heliocentric-Cartesian (HCC) (given in meters) to\n Stonyhurst Heliographic coordinates (HG) given in degrees, with\n radial output in meters.\n\n Parameters\n ----------\n x, y : float (meters)\n Data coordinate in meters.\n z : float (meters)\n Data coordinate in meters. If None, then the z-coordinate is assumed\n to be on the Sun.\n b0_deg : float (degrees)\n Tilt of the solar North rotational axis toward the observer\n (heliographic latitude of the observer). Usually given as SOLAR_B0,\n HGLT_OBS, or CRLT_OBS. Default is 0.\n l0_deg : float (degrees)\n Carrington longitude of central meridian as seen from Earth. Default is 0.\n radius : Bool\n If true, forces the output to return a triple of (lon, lat, r). If\n false, return (lon, lat) only.\n\n Returns\n -------\n out : ndarray (degrees, meters)\n if radius is false, return the data coordinates (lon, lat). If\n radius=True, return the data coordinates (lon, lat, r). The quantities\n (lon, lat) are the heliographic coordinates in degrees. The quantity\n 'r' is the heliographic radius in meters.\n\n Notes\n -----\n Implements Eq. (12) of Thompson (2006), A&A, 449, 791.\n\n Examples\n --------\n >>> import sunpy.wcs\n >>> sunpy.wcs.convert_hcc_hg(230000.0,45000000.0,\n ... z=695508000.0 + 8000000.0, radius=True)\n (0.01873188196651189, 3.6599471896203317, 704945784.41465974)\n \"\"\"\n if z is None:\n z = np.sqrt(rsun_meters**2 - x**2 - y**2)\n\n cosb = np.cos(np.deg2rad(b0_deg))\n sinb = np.sin(np.deg2rad(b0_deg))\n\n hecr = np.sqrt(x**2 + y**2 + z**2)\n hgln = np.arctan2(x, z * cosb - y * sinb) + np.deg2rad(l0_deg)\n hglt = np.arcsin((y * cosb + z * sinb) / hecr)\n\n if radius:\n return np.rad2deg(hgln), np.rad2deg(hglt), hecr\n else:\n return np.rad2deg(hgln), np.rad2deg(hglt)\n\ndef convert_hg_hcc(hglon_deg, hglat_deg, b0_deg=0, l0_deg=0, occultation=False,\n z=False, r=rsun_meters):\n \"\"\"Convert from Stonyhurst Heliographic coordinates (given in degrees) to\n Heliocentric-Cartesian coordinates (given in meters).\n\n Parameters\n ----------\n hglon_deg, hglat_deg : float (degrees)\n Heliographic longitude and latitude in degrees.\n b0_deg : float (degrees)\n Tilt of the solar North rotational axis toward the observer\n (heliographic latitude of the observer). Usually given as SOLAR_B0,\n HGLT_OBS, or CRLT_OBS. Default is 0.\n l0_deg : float (degrees)\n Carrington longitude of central meridian as seen from Earth. Default is 0.\n occultation : Bool\n If true set all points behind the Sun (e.g. 
not visible) to Nan.\n z : Bool\n If true return the z coordinate as well.\n r : float (meters)\n Heliographic radius\n\n Returns\n -------\n out : ndarray (meters)\n The data coordinates in Heliocentric-Cartesian coordinates.\n\n Notes\n -----\n Implements Eq. (11) of Thompson (2006), A&A, 449, 791, with the default\n assumption that the value 'r' in Eq. (11) is identical to the radius of the\n Sun.\n\n Examples\n --------\n >>> import sunpy.wcs\n >>> sunpy.wcs.convert_hg_hcc(0.01873188196651189, 3.6599471896203317,\n ... r=704945784.41465974, z=True)\n (230000.0, 45000000.0, 703508000.0)\n \"\"\"\n lon = np.deg2rad(hglon_deg)\n lat = np.deg2rad(hglat_deg)\n\n cosb = np.cos(np.deg2rad(b0_deg))\n sinb = np.sin(np.deg2rad(b0_deg))\n\n lon = lon - np.deg2rad(l0_deg)\n\n cosx = np.cos(lon)\n sinx = np.sin(lon)\n cosy = np.cos(lat)\n siny = np.sin(lat)\n\n # Perform the conversion.\n x = r * cosy * sinx\n y = r * (siny * cosb - cosy * cosx * sinb)\n zz = r * (siny * sinb + cosy * cosx * cosb)\n\n if occultation:\n x[zz < 0] = np.nan\n y[zz < 0] = np.nan\n\n if np.all(z == True):\n return x, y, zz\n else:\n return x, y\n\ndef convert_hg_hpc(hglon_deg, hglat_deg, b0_deg=0, l0_deg=0, dsun_meters=None, angle_units='arcsec',\n occultation=False):\n \"\"\"Convert from Heliographic coordinates (HG) to Helioprojective-Cartesian\n (HPC).\n\n Parameters\n ----------\n hglon_deg, hglat_deg : float (degrees)\n Heliographic longitude and latitude in degrees.\n b0_deg : float (degrees)\n Tilt of the solar North rotational axis toward the observer\n (heliographic latitude of the observer). Usually given as SOLAR_B0,\n HGLT_OBS, or CRLT_OBS. Default is 0.\n l0_deg : float (degrees)\n Carrington longitude of central meridian as seen from Earth. Default is 0.\n occultation : Bool\n If true set all points behind the Sun (e.g. not visible) to Nan.\n dsun_meters : float (meters)\n Distance between the observer and the Sun.\n angle_units : str\n\n\n Returns\n -------\n out : ndarray (arcsec)\n The data coordinates (x,y) in Helioprojective-Cartesian coordinates.\n\n Notes\n -----\n Uses equations 11 and 16 in Thompson (2006), A&A, 449, 791-803.\n\n Examples\n --------\n >>> import sunpy.wcs\n >>> sunpy.wcs.convert_hg_hpc(34.0, 45.0, b0_deg=-7.064078, l0_deg=0.0)\n (380.05656560308898, 743.78281283290016)\n \"\"\"\n\n tempx, tempy = convert_hg_hcc(hglon_deg, hglat_deg, b0_deg=b0_deg, l0_deg=l0_deg, occultation=occultation)\n x, y = convert_hcc_hpc(tempx, tempy, dsun_meters=dsun_meters, angle_units=angle_units)\n return x, y\n\ndef convert_hpc_hg(x, y, b0_deg=0, l0_deg=0, dsun_meters=None, angle_units='arcsec'):\n \"\"\"Convert from Helioprojective-Cartesian (HPC) to Heliographic coordinates\n (HG) in degrees.\n\n Parameters\n ----------\n x, y : float ()\n Data coordinate in angle units.\n b0 : float (degrees)\n Tilt of the solar North rotational axis toward the observer\n (heliographic latitude of the observer). Usually given as SOLAR_B0,\n HGLT_OBS, or CRLT_OBS. Default is 0.\n l0 : float (degrees)\n Carrington longitude of central meridian as seen from Earth. Default is 0.\n dsun_meters : float (meters)\n Distance between the observer and the Sun.\n angle_units : str\n Units used for input x and y. 
Default is arcsec.\n\n Returns\n -------\n out : ndarray (degrees)\n The data coordinates (hglongitude, hglatitude) in Heliographic coordinates.\n\n Notes\n -----\n Uses equations 15 and 12 in Thompson (2006), A&A, 449, 791-803.\n\n Examples\n --------\n >>> import sunpy.wcs\n >>> sunpy.wcs.convert_hpc_hg(382, 748, b0_deg=-7.064078, l0_deg=0.0)\n (34.504653439914669, 45.443143275518182)\n \"\"\"\n tempx, tempy = convert_hpc_hcc(x, y, dsun_meters=dsun_meters, angle_units=angle_units)\n lon, lat = convert_hcc_hg(tempx, tempy, b0_deg=b0_deg, l0_deg=l0_deg)\n return lon, lat\n\ndef proj_tan(x, y, force=False):\n \"\"\"Applies the gnomonic (TAN) projection to intermediate relative\n coordinates. This function is not currently implemented!\"\"\"\n # if pixels are within 3 degrees of the Sun then skip the calculation unless\n # force is True. This applies to all sdo images so this function is just\n # here as a place holder for the future\n # TODO: write proj_tan function\n return x, y\n\ndef convert_to_coord(x, y, from_coord, to_coord, b0_deg=0, l0_deg=0, dsun_meters=None, angle_units='arcsec'):\n \"\"\"Apply a coordinate transform to coordinates. Right now can only do hpc\n to hcc to hg\"\"\"\n\n if (from_coord == 'hcc') and (to_coord == 'hg'):\n rx, ry = convert_hcc_hg(x, y, b0_deg=b0_deg, l0_deg=l0_deg)\n elif (from_coord == 'hpc') and (to_coord == 'hg'):\n rx, ry = convert_hpc_hg(x, y, b0_deg=b0_deg, l0_deg=l0_deg, dsun_meters=dsun_meters, angle_units=angle_units)\n elif (from_coord == 'hg') and (to_coord == 'hcc'):\n rx, ry = convert_hg_hcc(x, y, b0_deg=b0_deg, l0_deg=l0_deg)\n elif (from_coord == 'hcc') and (to_coord == 'hpc'):\n rx, ry = convert_hcc_hpc(x, y, dsun_meters=dsun_meters, angle_units=angle_units)\n elif (from_coord == 'hg') and (to_coord == 'hpc'):\n rx, ry = convert_hg_hpc(x, y, b0_deg=b0_deg, l0_deg=l0_deg, dsun_meters=dsun_meters, angle_units=angle_units)\n elif (from_coord == 'hpc') and (to_coord == 'hcc'):\n rx, ry = convert_hpc_hcc(x, y, dsun_meters=dsun_meters, angle_units=angle_units)\n\n return rx, ry\n", "path": "sunpy/wcs/wcs.py" } ]
diff --git a/CHANGELOG.md b/CHANGELOG.md index 775bc5f5f53..770484b5b94 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -1,5 +1,10 @@ -Latest ------- +0.7.1 +----- + +* Fix bug in `wcs.convert_data_to_pixel` where crpix[1] was used for both axes. + +0.7.0 +----- * Added `timeout` parameter in `sunpy.data.download_sample_data()` * Fixed `aiaprep` to return properly sized map. * Deprecation warnings fixed when using image coalignment. diff --git a/sunpy/wcs/wcs.py b/sunpy/wcs/wcs.py index f74f8120e9b..0d5f152c194 100644 --- a/sunpy/wcs/wcs.py +++ b/sunpy/wcs/wcs.py @@ -133,7 +133,7 @@ def convert_data_to_pixel(x, y, scale, reference_pixel, reference_coordinate): # coord = inv_proj_tan(coord) # note that crpix[] counts pixels starting at 1 - pixelx = (x - crval[0]) / cdelt[0] + (crpix[1] - 1) + pixelx = (x - crval[0]) / cdelt[0] + (crpix[0] - 1) pixely = (y - crval[1]) / cdelt[1] + (crpix[1] - 1) return pixelx, pixely
huggingface__transformers-7282
weights partially missing for CamembertForMaskedLM

## Environment info

- `transformers` version: 3.1.0
- Platform: Darwin-18.7.0-x86_64-i386-64bit
- Python version: 3.7.5
- PyTorch version (GPU?): 1.6.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no

### Who can help

@louismartin

## Information

When loading "camembert-base" with `CamembertForMaskedLM` with:

    from transformers import CamembertForMaskedLM
    model = CamembertForMaskedLM.from_pretrained("camembert-base")

the bias of the LM head decoder is not loaded:

    Some weights of CamembertForMaskedLM were not initialized from the model checkpoint at camembert-base and are newly initialized: ['lm_head.decoder.bias']
    You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.

As I understand `lm_head.decoder.bias` is therefore initialized randomly. I checked the original `camembert-base` model as published by the author, and the lm_head decoder bias is missing too, which is not discussed in the camembert or roberta publication.
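For context, the LM head in the file below (`RobertaLMHead`) keeps a standalone `bias` parameter and ties it onto a decoder `Linear` created with `bias=False`. A minimal stand-alone sketch of that tying pattern (made-up sizes, not the transformers class itself) shows why the same tensor can live under two state-dict keys, and why a checkpoint that stores only one of them may still fully determine the value:

```python
import torch
import torch.nn as nn

class TiedLMHead(nn.Module):
    """Minimal sketch of the tied-bias pattern; hidden/vocab sizes are made up."""

    def __init__(self, hidden_size=8, vocab_size=11):
        super().__init__()
        self.decoder = nn.Linear(hidden_size, vocab_size, bias=False)
        self.bias = nn.Parameter(torch.zeros(vocab_size))
        self.decoder.bias = self.bias  # tie: both names now point at the same Parameter

    def forward(self, hidden_states):
        return self.decoder(hidden_states)

head = TiedLMHead()
print(head.decoder.bias is head.bias)    # True: a single underlying parameter
print(sorted(head.state_dict().keys()))  # ['bias', 'decoder.bias', 'decoder.weight']
```

Assuming the tie is made in `__init__` as in the file below, loading a checkpoint value into `lm_head.bias` also updates the tied `lm_head.decoder.bias`, so the "newly initialized" warning can be a key-matching artifact rather than genuinely untrained weights; whether that holds for camembert-base is what this issue asks.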
[ { "content": "# coding=utf-8\n# Copyright 2018 The Google AI Language Team Authors and The HuggingFace Inc. team.\n# Copyright (c) 2018, NVIDIA CORPORATION. All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"PyTorch RoBERTa model. \"\"\"\n\n\nimport warnings\n\nimport torch\nimport torch.nn as nn\nfrom torch.nn import CrossEntropyLoss, MSELoss\n\nfrom .configuration_roberta import RobertaConfig\nfrom .file_utils import (\n add_code_sample_docstrings,\n add_start_docstrings,\n add_start_docstrings_to_callable,\n replace_return_docstrings,\n)\nfrom .modeling_bert import BertEmbeddings, BertLayerNorm, BertModel, BertPreTrainedModel, gelu\nfrom .modeling_outputs import (\n CausalLMOutput,\n MaskedLMOutput,\n MultipleChoiceModelOutput,\n QuestionAnsweringModelOutput,\n SequenceClassifierOutput,\n TokenClassifierOutput,\n)\nfrom .utils import logging\n\n\nlogger = logging.get_logger(__name__)\n\n_CONFIG_FOR_DOC = \"RobertaConfig\"\n_TOKENIZER_FOR_DOC = \"RobertaTokenizer\"\n\nROBERTA_PRETRAINED_MODEL_ARCHIVE_LIST = [\n \"roberta-base\",\n \"roberta-large\",\n \"roberta-large-mnli\",\n \"distilroberta-base\",\n \"roberta-base-openai-detector\",\n \"roberta-large-openai-detector\",\n # See all RoBERTa models at https://huggingface.co/models?filter=roberta\n]\n\n\nclass RobertaEmbeddings(BertEmbeddings):\n \"\"\"\n Same as BertEmbeddings with a tiny tweak for positional embeddings indexing.\n \"\"\"\n\n def __init__(self, config):\n super().__init__(config)\n self.padding_idx = config.pad_token_id\n self.word_embeddings = nn.Embedding(config.vocab_size, config.hidden_size, padding_idx=self.padding_idx)\n self.position_embeddings = nn.Embedding(\n config.max_position_embeddings, config.hidden_size, padding_idx=self.padding_idx\n )\n\n def forward(self, input_ids=None, token_type_ids=None, position_ids=None, inputs_embeds=None):\n if position_ids is None:\n if input_ids is not None:\n # Create the position ids from the input token ids. Any padded tokens remain padded.\n position_ids = create_position_ids_from_input_ids(input_ids, self.padding_idx).to(input_ids.device)\n else:\n position_ids = self.create_position_ids_from_inputs_embeds(inputs_embeds)\n\n return super().forward(\n input_ids, token_type_ids=token_type_ids, position_ids=position_ids, inputs_embeds=inputs_embeds\n )\n\n def create_position_ids_from_inputs_embeds(self, inputs_embeds):\n \"\"\"We are provided embeddings directly. 
We cannot infer which are padded so just generate\n sequential position ids.\n\n :param torch.Tensor inputs_embeds:\n :return torch.Tensor:\n \"\"\"\n input_shape = inputs_embeds.size()[:-1]\n sequence_length = input_shape[1]\n\n position_ids = torch.arange(\n self.padding_idx + 1, sequence_length + self.padding_idx + 1, dtype=torch.long, device=inputs_embeds.device\n )\n return position_ids.unsqueeze(0).expand(input_shape)\n\n\nROBERTA_START_DOCSTRING = r\"\"\"\n\n This model is a PyTorch `torch.nn.Module <https://pytorch.org/docs/stable/nn.html#torch.nn.Module>`_ sub-class.\n Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general\n usage and behavior.\n\n Parameters:\n config (:class:`~transformers.RobertaConfig`): Model configuration class with all the parameters of the\n model. Initializing with a config file does not load the weights associated with the model, only the configuration.\n Check out the :meth:`~transformers.PreTrainedModel.from_pretrained` method to load the model weights.\n\"\"\"\n\nROBERTA_INPUTS_DOCSTRING = r\"\"\"\n Args:\n input_ids (:obj:`torch.LongTensor` of shape :obj:`{0}`):\n Indices of input sequence tokens in the vocabulary.\n\n Indices can be obtained using :class:`transformers.RobertaTokenizer`.\n See :func:`transformers.PreTrainedTokenizer.encode` and\n :func:`transformers.PreTrainedTokenizer.__call__` for details.\n\n `What are input IDs? <../glossary.html#input-ids>`__\n attention_mask (:obj:`torch.FloatTensor` of shape :obj:`{0}`, `optional`):\n Mask to avoid performing attention on padding token indices.\n Mask values selected in ``[0, 1]``:\n ``1`` for tokens that are NOT MASKED, ``0`` for MASKED tokens.\n\n `What are attention masks? <../glossary.html#attention-mask>`__\n token_type_ids (:obj:`torch.LongTensor` of shape :obj:`{0}`, `optional`):\n Segment token indices to indicate first and second portions of the inputs.\n Indices are selected in ``[0, 1]``: ``0`` corresponds to a `sentence A` token, ``1``\n corresponds to a `sentence B` token\n\n `What are token type IDs? <../glossary.html#token-type-ids>`_\n position_ids (:obj:`torch.LongTensor` of shape :obj:`{0}`, `optional`):\n Indices of positions of each input sequence tokens in the position embeddings.\n Selected in the range ``[0, config.max_position_embeddings - 1]``.\n\n `What are position IDs? <../glossary.html#position-ids>`_\n head_mask (:obj:`torch.FloatTensor` of shape :obj:`(num_heads,)` or :obj:`(num_layers, num_heads)`, `optional`):\n Mask to nullify selected heads of the self-attention modules.\n Mask values selected in ``[0, 1]``:\n :obj:`1` indicates the head is **not masked**, :obj:`0` indicates the head is **masked**.\n inputs_embeds (:obj:`torch.FloatTensor` of shape :obj:`(batch_size, sequence_length, hidden_size)`, `optional`):\n Optionally, instead of passing :obj:`input_ids` you can choose to directly pass an embedded representation.\n This is useful if you want more control over how to convert `input_ids` indices into associated vectors\n than the model's internal embedding lookup matrix.\n output_attentions (:obj:`bool`, `optional`):\n If set to ``True``, the attentions tensors of all attention layers are returned. See ``attentions`` under returned tensors for more detail.\n output_hidden_states (:obj:`bool`, `optional`):\n If set to ``True``, the hidden states of all layers are returned. 
See ``hidden_states`` under returned tensors for more detail.\n return_dict (:obj:`bool`, `optional`):\n If set to ``True``, the model will return a :class:`~transformers.file_utils.ModelOutput` instead of a\n plain tuple.\n\"\"\"\n\n\n@add_start_docstrings(\n \"The bare RoBERTa Model transformer outputting raw hidden-states without any specific head on top.\",\n ROBERTA_START_DOCSTRING,\n)\nclass RobertaModel(BertModel):\n \"\"\"\n This class overrides :class:`~transformers.BertModel`. Please check the\n superclass for the appropriate documentation alongside usage examples.\n \"\"\"\n\n config_class = RobertaConfig\n base_model_prefix = \"roberta\"\n\n def __init__(self, config):\n super().__init__(config)\n\n self.embeddings = RobertaEmbeddings(config)\n self.init_weights()\n\n def get_input_embeddings(self):\n return self.embeddings.word_embeddings\n\n def set_input_embeddings(self, value):\n self.embeddings.word_embeddings = value\n\n\n@add_start_docstrings(\n \"\"\"RoBERTa Model with a `language modeling` head on top for CLM fine-tuning. \"\"\", ROBERTA_START_DOCSTRING\n)\nclass RobertaForCausalLM(BertPreTrainedModel):\n config_class = RobertaConfig\n base_model_prefix = \"roberta\"\n\n def __init__(self, config):\n super().__init__(config)\n\n if not config.is_decoder:\n logger.warning(\"If you want to use `RobertaLMHeadModel` as a standalone, add `is_decoder=True.`\")\n\n self.roberta = RobertaModel(config)\n self.lm_head = RobertaLMHead(config)\n\n self.init_weights()\n\n def get_output_embeddings(self):\n return self.lm_head.decoder\n\n @add_start_docstrings_to_callable(ROBERTA_INPUTS_DOCSTRING.format(\"(batch_size, sequence_length)\"))\n @replace_return_docstrings(output_type=CausalLMOutput, config_class=_CONFIG_FOR_DOC)\n def forward(\n self,\n input_ids=None,\n attention_mask=None,\n token_type_ids=None,\n position_ids=None,\n head_mask=None,\n inputs_embeds=None,\n encoder_hidden_states=None,\n encoder_attention_mask=None,\n labels=None,\n output_attentions=None,\n output_hidden_states=None,\n return_dict=None,\n ):\n r\"\"\"\n encoder_hidden_states (:obj:`torch.FloatTensor` of shape :obj:`(batch_size, sequence_length, hidden_size)`, `optional`):\n Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention\n if the model is configured as a decoder.\n encoder_attention_mask (:obj:`torch.FloatTensor` of shape :obj:`(batch_size, sequence_length)`, `optional`):\n Mask to avoid performing attention on the padding token indices of the encoder input. 
This mask\n is used in the cross-attention if the model is configured as a decoder.\n Mask values selected in ``[0, 1]``:\n ``1`` for tokens that are NOT MASKED, ``0`` for MASKED tokens.\n labels (:obj:`torch.LongTensor` of shape :obj:`(batch_size, sequence_length)`, `optional`):\n Labels for computing the left-to-right language modeling loss (next word prediction).\n Indices should be in ``[-100, 0, ..., config.vocab_size]`` (see ``input_ids`` docstring)\n Tokens with indices set to ``-100`` are ignored (masked), the loss is only computed for the tokens with labels\n in ``[0, ..., config.vocab_size]``\n\n Returns:\n\n Example::\n\n >>> from transformers import RobertaTokenizer, RobertaLMHeadModel, RobertaConfig\n >>> import torch\n\n >>> tokenizer = RobertaTokenizer.from_pretrained('roberta-base')\n >>> config = RobertaConfig.from_pretrained(\"roberta-base\")\n >>> config.is_decoder = True\n >>> model = RobertaLMHeadModel.from_pretrained('roberta-base', config=config, return_dict=True)\n\n >>> inputs = tokenizer(\"Hello, my dog is cute\", return_tensors=\"pt\")\n >>> outputs = model(**inputs)\n\n >>> prediction_logits = outputs.logits\n \"\"\"\n return_dict = return_dict if return_dict is not None else self.config.use_return_dict\n\n outputs = self.roberta(\n input_ids,\n attention_mask=attention_mask,\n token_type_ids=token_type_ids,\n position_ids=position_ids,\n head_mask=head_mask,\n inputs_embeds=inputs_embeds,\n encoder_hidden_states=encoder_hidden_states,\n encoder_attention_mask=encoder_attention_mask,\n output_attentions=output_attentions,\n output_hidden_states=output_hidden_states,\n return_dict=return_dict,\n )\n\n sequence_output = outputs[0]\n prediction_scores = self.lm_head(sequence_output)\n\n lm_loss = None\n if labels is not None:\n # we are doing next-token prediction; shift prediction scores and input ids by one\n shifted_prediction_scores = prediction_scores[:, :-1, :].contiguous()\n labels = labels[:, 1:].contiguous()\n loss_fct = CrossEntropyLoss()\n lm_loss = loss_fct(shifted_prediction_scores.view(-1, self.config.vocab_size), labels.view(-1))\n\n if not return_dict:\n output = (prediction_scores,) + outputs[2:]\n return ((lm_loss,) + output) if lm_loss is not None else output\n\n return CausalLMOutput(\n loss=lm_loss,\n logits=prediction_scores,\n hidden_states=outputs.hidden_states,\n attentions=outputs.attentions,\n )\n\n def prepare_inputs_for_generation(self, input_ids, attention_mask=None, **model_kwargs):\n input_shape = input_ids.shape\n\n # if model is used as a decoder in encoder-decoder model, the decoder attention mask is created on the fly\n if attention_mask is None:\n attention_mask = input_ids.new_ones(input_shape)\n\n return {\"input_ids\": input_ids, \"attention_mask\": attention_mask}\n\n\n@add_start_docstrings(\"\"\"RoBERTa Model with a `language modeling` head on top. 
\"\"\", ROBERTA_START_DOCSTRING)\nclass RobertaForMaskedLM(BertPreTrainedModel):\n config_class = RobertaConfig\n base_model_prefix = \"roberta\"\n\n def __init__(self, config):\n super().__init__(config)\n\n if config.is_decoder:\n logger.warning(\n \"If you want to use `RobertaForMaskedLM` make sure `config.is_decoder=False` for \"\n \"bi-directional self-attention.\"\n )\n\n self.roberta = RobertaModel(config)\n self.lm_head = RobertaLMHead(config)\n\n self.init_weights()\n\n def get_output_embeddings(self):\n return self.lm_head.decoder\n\n @add_start_docstrings_to_callable(ROBERTA_INPUTS_DOCSTRING.format(\"(batch_size, sequence_length)\"))\n @add_code_sample_docstrings(\n tokenizer_class=_TOKENIZER_FOR_DOC,\n checkpoint=\"roberta-base\",\n output_type=MaskedLMOutput,\n config_class=_CONFIG_FOR_DOC,\n )\n def forward(\n self,\n input_ids=None,\n attention_mask=None,\n token_type_ids=None,\n position_ids=None,\n head_mask=None,\n inputs_embeds=None,\n encoder_hidden_states=None,\n encoder_attention_mask=None,\n labels=None,\n output_attentions=None,\n output_hidden_states=None,\n return_dict=None,\n **kwargs\n ):\n r\"\"\"\n labels (:obj:`torch.LongTensor` of shape :obj:`(batch_size, sequence_length)`, `optional`):\n Labels for computing the masked language modeling loss.\n Indices should be in ``[-100, 0, ..., config.vocab_size]`` (see ``input_ids`` docstring)\n Tokens with indices set to ``-100`` are ignored (masked), the loss is only computed for the tokens with labels\n in ``[0, ..., config.vocab_size]``\n kwargs (:obj:`Dict[str, any]`, optional, defaults to `{}`):\n Used to hide legacy arguments that have been deprecated.\n \"\"\"\n if \"masked_lm_labels\" in kwargs:\n warnings.warn(\n \"The `masked_lm_labels` argument is deprecated and will be removed in a future version, use `labels` instead.\",\n FutureWarning,\n )\n labels = kwargs.pop(\"masked_lm_labels\")\n assert kwargs == {}, f\"Unexpected keyword arguments: {list(kwargs.keys())}.\"\n return_dict = return_dict if return_dict is not None else self.config.use_return_dict\n\n outputs = self.roberta(\n input_ids,\n attention_mask=attention_mask,\n token_type_ids=token_type_ids,\n position_ids=position_ids,\n head_mask=head_mask,\n inputs_embeds=inputs_embeds,\n encoder_hidden_states=encoder_hidden_states,\n encoder_attention_mask=encoder_attention_mask,\n output_attentions=output_attentions,\n output_hidden_states=output_hidden_states,\n return_dict=return_dict,\n )\n sequence_output = outputs[0]\n prediction_scores = self.lm_head(sequence_output)\n\n masked_lm_loss = None\n if labels is not None:\n loss_fct = CrossEntropyLoss()\n masked_lm_loss = loss_fct(prediction_scores.view(-1, self.config.vocab_size), labels.view(-1))\n\n if not return_dict:\n output = (prediction_scores,) + outputs[2:]\n return ((masked_lm_loss,) + output) if masked_lm_loss is not None else output\n\n return MaskedLMOutput(\n loss=masked_lm_loss,\n logits=prediction_scores,\n hidden_states=outputs.hidden_states,\n attentions=outputs.attentions,\n )\n\n\nclass RobertaLMHead(nn.Module):\n \"\"\"Roberta Head for masked language modeling.\"\"\"\n\n def __init__(self, config):\n super().__init__()\n self.dense = nn.Linear(config.hidden_size, config.hidden_size)\n self.layer_norm = BertLayerNorm(config.hidden_size, eps=config.layer_norm_eps)\n\n self.decoder = nn.Linear(config.hidden_size, config.vocab_size, bias=False)\n self.bias = nn.Parameter(torch.zeros(config.vocab_size))\n\n # Need a link between the two variables so that the bias is correctly resized 
with `resize_token_embeddings`\n self.decoder.bias = self.bias\n\n def forward(self, features, **kwargs):\n x = self.dense(features)\n x = gelu(x)\n x = self.layer_norm(x)\n\n # project back to size of vocabulary with bias\n x = self.decoder(x)\n\n return x\n\n\n@add_start_docstrings(\n \"\"\"RoBERTa Model transformer with a sequence classification/regression head on top (a linear layer\n on top of the pooled output) e.g. for GLUE tasks. \"\"\",\n ROBERTA_START_DOCSTRING,\n)\nclass RobertaForSequenceClassification(BertPreTrainedModel):\n config_class = RobertaConfig\n base_model_prefix = \"roberta\"\n\n def __init__(self, config):\n super().__init__(config)\n self.num_labels = config.num_labels\n\n self.roberta = RobertaModel(config)\n self.classifier = RobertaClassificationHead(config)\n\n self.init_weights()\n\n @add_start_docstrings_to_callable(ROBERTA_INPUTS_DOCSTRING.format(\"(batch_size, sequence_length)\"))\n @add_code_sample_docstrings(\n tokenizer_class=_TOKENIZER_FOR_DOC,\n checkpoint=\"roberta-base\",\n output_type=SequenceClassifierOutput,\n config_class=_CONFIG_FOR_DOC,\n )\n def forward(\n self,\n input_ids=None,\n attention_mask=None,\n token_type_ids=None,\n position_ids=None,\n head_mask=None,\n inputs_embeds=None,\n labels=None,\n output_attentions=None,\n output_hidden_states=None,\n return_dict=None,\n ):\n r\"\"\"\n labels (:obj:`torch.LongTensor` of shape :obj:`(batch_size,)`, `optional`):\n Labels for computing the sequence classification/regression loss.\n Indices should be in :obj:`[0, ..., config.num_labels - 1]`.\n If :obj:`config.num_labels == 1` a regression loss is computed (Mean-Square loss),\n If :obj:`config.num_labels > 1` a classification loss is computed (Cross-Entropy).\n \"\"\"\n return_dict = return_dict if return_dict is not None else self.config.use_return_dict\n\n outputs = self.roberta(\n input_ids,\n attention_mask=attention_mask,\n token_type_ids=token_type_ids,\n position_ids=position_ids,\n head_mask=head_mask,\n inputs_embeds=inputs_embeds,\n output_attentions=output_attentions,\n output_hidden_states=output_hidden_states,\n return_dict=return_dict,\n )\n sequence_output = outputs[0]\n logits = self.classifier(sequence_output)\n\n loss = None\n if labels is not None:\n if self.num_labels == 1:\n # We are doing regression\n loss_fct = MSELoss()\n loss = loss_fct(logits.view(-1), labels.view(-1))\n else:\n loss_fct = CrossEntropyLoss()\n loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1))\n\n if not return_dict:\n output = (logits,) + outputs[2:]\n return ((loss,) + output) if loss is not None else output\n\n return SequenceClassifierOutput(\n loss=loss,\n logits=logits,\n hidden_states=outputs.hidden_states,\n attentions=outputs.attentions,\n )\n\n\n@add_start_docstrings(\n \"\"\"Roberta Model with a multiple choice classification head on top (a linear layer on top of\n the pooled output and a softmax) e.g. for RocStories/SWAG tasks. 
\"\"\",\n ROBERTA_START_DOCSTRING,\n)\nclass RobertaForMultipleChoice(BertPreTrainedModel):\n config_class = RobertaConfig\n base_model_prefix = \"roberta\"\n\n def __init__(self, config):\n super().__init__(config)\n\n self.roberta = RobertaModel(config)\n self.dropout = nn.Dropout(config.hidden_dropout_prob)\n self.classifier = nn.Linear(config.hidden_size, 1)\n\n self.init_weights()\n\n @add_start_docstrings_to_callable(ROBERTA_INPUTS_DOCSTRING.format(\"(batch_size, num_choices, sequence_length)\"))\n @add_code_sample_docstrings(\n tokenizer_class=_TOKENIZER_FOR_DOC,\n checkpoint=\"roberta-base\",\n output_type=MultipleChoiceModelOutput,\n config_class=_CONFIG_FOR_DOC,\n )\n def forward(\n self,\n input_ids=None,\n token_type_ids=None,\n attention_mask=None,\n labels=None,\n position_ids=None,\n head_mask=None,\n inputs_embeds=None,\n output_attentions=None,\n output_hidden_states=None,\n return_dict=None,\n ):\n r\"\"\"\n labels (:obj:`torch.LongTensor` of shape :obj:`(batch_size,)`, `optional`):\n Labels for computing the multiple choice classification loss.\n Indices should be in ``[0, ..., num_choices]`` where `num_choices` is the size of the second dimension\n of the input tensors. (see `input_ids` above)\n \"\"\"\n return_dict = return_dict if return_dict is not None else self.config.use_return_dict\n num_choices = input_ids.shape[1] if input_ids is not None else inputs_embeds.shape[1]\n\n flat_input_ids = input_ids.view(-1, input_ids.size(-1)) if input_ids is not None else None\n flat_position_ids = position_ids.view(-1, position_ids.size(-1)) if position_ids is not None else None\n flat_token_type_ids = token_type_ids.view(-1, token_type_ids.size(-1)) if token_type_ids is not None else None\n flat_attention_mask = attention_mask.view(-1, attention_mask.size(-1)) if attention_mask is not None else None\n flat_inputs_embeds = (\n inputs_embeds.view(-1, inputs_embeds.size(-2), inputs_embeds.size(-1))\n if inputs_embeds is not None\n else None\n )\n\n outputs = self.roberta(\n flat_input_ids,\n position_ids=flat_position_ids,\n token_type_ids=flat_token_type_ids,\n attention_mask=flat_attention_mask,\n head_mask=head_mask,\n inputs_embeds=flat_inputs_embeds,\n output_attentions=output_attentions,\n output_hidden_states=output_hidden_states,\n return_dict=return_dict,\n )\n pooled_output = outputs[1]\n\n pooled_output = self.dropout(pooled_output)\n logits = self.classifier(pooled_output)\n reshaped_logits = logits.view(-1, num_choices)\n\n loss = None\n if labels is not None:\n loss_fct = CrossEntropyLoss()\n loss = loss_fct(reshaped_logits, labels)\n\n if not return_dict:\n output = (reshaped_logits,) + outputs[2:]\n return ((loss,) + output) if loss is not None else output\n\n return MultipleChoiceModelOutput(\n loss=loss,\n logits=reshaped_logits,\n hidden_states=outputs.hidden_states,\n attentions=outputs.attentions,\n )\n\n\n@add_start_docstrings(\n \"\"\"Roberta Model with a token classification head on top (a linear layer on top of\n the hidden-states output) e.g. for Named-Entity-Recognition (NER) tasks. 
\"\"\",\n ROBERTA_START_DOCSTRING,\n)\nclass RobertaForTokenClassification(BertPreTrainedModel):\n config_class = RobertaConfig\n base_model_prefix = \"roberta\"\n\n def __init__(self, config):\n super().__init__(config)\n self.num_labels = config.num_labels\n\n self.roberta = RobertaModel(config)\n self.dropout = nn.Dropout(config.hidden_dropout_prob)\n self.classifier = nn.Linear(config.hidden_size, config.num_labels)\n\n self.init_weights()\n\n @add_start_docstrings_to_callable(ROBERTA_INPUTS_DOCSTRING.format(\"(batch_size, sequence_length)\"))\n @add_code_sample_docstrings(\n tokenizer_class=_TOKENIZER_FOR_DOC,\n checkpoint=\"roberta-base\",\n output_type=TokenClassifierOutput,\n config_class=_CONFIG_FOR_DOC,\n )\n def forward(\n self,\n input_ids=None,\n attention_mask=None,\n token_type_ids=None,\n position_ids=None,\n head_mask=None,\n inputs_embeds=None,\n labels=None,\n output_attentions=None,\n output_hidden_states=None,\n return_dict=None,\n ):\n r\"\"\"\n labels (:obj:`torch.LongTensor` of shape :obj:`(batch_size, sequence_length)`, `optional`):\n Labels for computing the token classification loss.\n Indices should be in ``[0, ..., config.num_labels - 1]``.\n \"\"\"\n return_dict = return_dict if return_dict is not None else self.config.use_return_dict\n\n outputs = self.roberta(\n input_ids,\n attention_mask=attention_mask,\n token_type_ids=token_type_ids,\n position_ids=position_ids,\n head_mask=head_mask,\n inputs_embeds=inputs_embeds,\n output_attentions=output_attentions,\n output_hidden_states=output_hidden_states,\n return_dict=return_dict,\n )\n\n sequence_output = outputs[0]\n\n sequence_output = self.dropout(sequence_output)\n logits = self.classifier(sequence_output)\n\n loss = None\n if labels is not None:\n loss_fct = CrossEntropyLoss()\n # Only keep active parts of the loss\n if attention_mask is not None:\n active_loss = attention_mask.view(-1) == 1\n active_logits = logits.view(-1, self.num_labels)\n active_labels = torch.where(\n active_loss, labels.view(-1), torch.tensor(loss_fct.ignore_index).type_as(labels)\n )\n loss = loss_fct(active_logits, active_labels)\n else:\n loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1))\n\n if not return_dict:\n output = (logits,) + outputs[2:]\n return ((loss,) + output) if loss is not None else output\n\n return TokenClassifierOutput(\n loss=loss,\n logits=logits,\n hidden_states=outputs.hidden_states,\n attentions=outputs.attentions,\n )\n\n\nclass RobertaClassificationHead(nn.Module):\n \"\"\"Head for sentence-level classification tasks.\"\"\"\n\n def __init__(self, config):\n super().__init__()\n self.dense = nn.Linear(config.hidden_size, config.hidden_size)\n self.dropout = nn.Dropout(config.hidden_dropout_prob)\n self.out_proj = nn.Linear(config.hidden_size, config.num_labels)\n\n def forward(self, features, **kwargs):\n x = features[:, 0, :] # take <s> token (equiv. to [CLS])\n x = self.dropout(x)\n x = self.dense(x)\n x = torch.tanh(x)\n x = self.dropout(x)\n x = self.out_proj(x)\n return x\n\n\n@add_start_docstrings(\n \"\"\"Roberta Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear layers on top of\n the hidden-states output to compute `span start logits` and `span end logits`). 
\"\"\",\n ROBERTA_START_DOCSTRING,\n)\nclass RobertaForQuestionAnswering(BertPreTrainedModel):\n config_class = RobertaConfig\n base_model_prefix = \"roberta\"\n\n def __init__(self, config):\n super().__init__(config)\n self.num_labels = config.num_labels\n\n self.roberta = RobertaModel(config)\n self.qa_outputs = nn.Linear(config.hidden_size, config.num_labels)\n\n self.init_weights()\n\n @add_start_docstrings_to_callable(ROBERTA_INPUTS_DOCSTRING.format(\"(batch_size, sequence_length)\"))\n @add_code_sample_docstrings(\n tokenizer_class=_TOKENIZER_FOR_DOC,\n checkpoint=\"roberta-base\",\n output_type=QuestionAnsweringModelOutput,\n config_class=_CONFIG_FOR_DOC,\n )\n def forward(\n self,\n input_ids=None,\n attention_mask=None,\n token_type_ids=None,\n position_ids=None,\n head_mask=None,\n inputs_embeds=None,\n start_positions=None,\n end_positions=None,\n output_attentions=None,\n output_hidden_states=None,\n return_dict=None,\n ):\n r\"\"\"\n start_positions (:obj:`torch.LongTensor` of shape :obj:`(batch_size,)`, `optional`):\n Labels for position (index) of the start of the labelled span for computing the token classification loss.\n Positions are clamped to the length of the sequence (`sequence_length`).\n Position outside of the sequence are not taken into account for computing the loss.\n end_positions (:obj:`torch.LongTensor` of shape :obj:`(batch_size,)`, `optional`):\n Labels for position (index) of the end of the labelled span for computing the token classification loss.\n Positions are clamped to the length of the sequence (`sequence_length`).\n Position outside of the sequence are not taken into account for computing the loss.\n \"\"\"\n return_dict = return_dict if return_dict is not None else self.config.use_return_dict\n\n outputs = self.roberta(\n input_ids,\n attention_mask=attention_mask,\n token_type_ids=token_type_ids,\n position_ids=position_ids,\n head_mask=head_mask,\n inputs_embeds=inputs_embeds,\n output_attentions=output_attentions,\n output_hidden_states=output_hidden_states,\n return_dict=return_dict,\n )\n\n sequence_output = outputs[0]\n\n logits = self.qa_outputs(sequence_output)\n start_logits, end_logits = logits.split(1, dim=-1)\n start_logits = start_logits.squeeze(-1)\n end_logits = end_logits.squeeze(-1)\n\n total_loss = None\n if start_positions is not None and end_positions is not None:\n # If we are on multi-GPU, split add a dimension\n if len(start_positions.size()) > 1:\n start_positions = start_positions.squeeze(-1)\n if len(end_positions.size()) > 1:\n end_positions = end_positions.squeeze(-1)\n # sometimes the start/end positions are outside our model inputs, we ignore these terms\n ignored_index = start_logits.size(1)\n start_positions.clamp_(0, ignored_index)\n end_positions.clamp_(0, ignored_index)\n\n loss_fct = CrossEntropyLoss(ignore_index=ignored_index)\n start_loss = loss_fct(start_logits, start_positions)\n end_loss = loss_fct(end_logits, end_positions)\n total_loss = (start_loss + end_loss) / 2\n\n if not return_dict:\n output = (start_logits, end_logits) + outputs[2:]\n return ((total_loss,) + output) if total_loss is not None else output\n\n return QuestionAnsweringModelOutput(\n loss=total_loss,\n start_logits=start_logits,\n end_logits=end_logits,\n hidden_states=outputs.hidden_states,\n attentions=outputs.attentions,\n )\n\n\ndef create_position_ids_from_input_ids(input_ids, padding_idx):\n \"\"\"Replace non-padding symbols with their position numbers. Position numbers begin at\n padding_idx+1. Padding symbols are ignored. 
This is modified from fairseq's\n `utils.make_positions`.\n\n :param torch.Tensor x:\n :return torch.Tensor:\n \"\"\"\n # The series of casts and type-conversions here are carefully balanced to both work with ONNX export and XLA.\n mask = input_ids.ne(padding_idx).int()\n incremental_indices = torch.cumsum(mask, dim=1).type_as(mask) * mask\n return incremental_indices.long() + padding_idx\n", "path": "src/transformers/modeling_roberta.py" } ]
[ { "content": "# coding=utf-8\n# Copyright 2018 The Google AI Language Team Authors and The HuggingFace Inc. team.\n# Copyright (c) 2018, NVIDIA CORPORATION. All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"PyTorch RoBERTa model. \"\"\"\n\n\nimport warnings\n\nimport torch\nimport torch.nn as nn\nfrom torch.nn import CrossEntropyLoss, MSELoss\n\nfrom .configuration_roberta import RobertaConfig\nfrom .file_utils import (\n add_code_sample_docstrings,\n add_start_docstrings,\n add_start_docstrings_to_callable,\n replace_return_docstrings,\n)\nfrom .modeling_bert import BertEmbeddings, BertLayerNorm, BertModel, BertPreTrainedModel, gelu\nfrom .modeling_outputs import (\n CausalLMOutput,\n MaskedLMOutput,\n MultipleChoiceModelOutput,\n QuestionAnsweringModelOutput,\n SequenceClassifierOutput,\n TokenClassifierOutput,\n)\nfrom .utils import logging\n\n\nlogger = logging.get_logger(__name__)\n\n_CONFIG_FOR_DOC = \"RobertaConfig\"\n_TOKENIZER_FOR_DOC = \"RobertaTokenizer\"\n\nROBERTA_PRETRAINED_MODEL_ARCHIVE_LIST = [\n \"roberta-base\",\n \"roberta-large\",\n \"roberta-large-mnli\",\n \"distilroberta-base\",\n \"roberta-base-openai-detector\",\n \"roberta-large-openai-detector\",\n # See all RoBERTa models at https://huggingface.co/models?filter=roberta\n]\n\n\nclass RobertaEmbeddings(BertEmbeddings):\n \"\"\"\n Same as BertEmbeddings with a tiny tweak for positional embeddings indexing.\n \"\"\"\n\n def __init__(self, config):\n super().__init__(config)\n self.padding_idx = config.pad_token_id\n self.word_embeddings = nn.Embedding(config.vocab_size, config.hidden_size, padding_idx=self.padding_idx)\n self.position_embeddings = nn.Embedding(\n config.max_position_embeddings, config.hidden_size, padding_idx=self.padding_idx\n )\n\n def forward(self, input_ids=None, token_type_ids=None, position_ids=None, inputs_embeds=None):\n if position_ids is None:\n if input_ids is not None:\n # Create the position ids from the input token ids. Any padded tokens remain padded.\n position_ids = create_position_ids_from_input_ids(input_ids, self.padding_idx).to(input_ids.device)\n else:\n position_ids = self.create_position_ids_from_inputs_embeds(inputs_embeds)\n\n return super().forward(\n input_ids, token_type_ids=token_type_ids, position_ids=position_ids, inputs_embeds=inputs_embeds\n )\n\n def create_position_ids_from_inputs_embeds(self, inputs_embeds):\n \"\"\"We are provided embeddings directly. 
We cannot infer which are padded so just generate\n sequential position ids.\n\n :param torch.Tensor inputs_embeds:\n :return torch.Tensor:\n \"\"\"\n input_shape = inputs_embeds.size()[:-1]\n sequence_length = input_shape[1]\n\n position_ids = torch.arange(\n self.padding_idx + 1, sequence_length + self.padding_idx + 1, dtype=torch.long, device=inputs_embeds.device\n )\n return position_ids.unsqueeze(0).expand(input_shape)\n\n\nROBERTA_START_DOCSTRING = r\"\"\"\n\n This model is a PyTorch `torch.nn.Module <https://pytorch.org/docs/stable/nn.html#torch.nn.Module>`_ sub-class.\n Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general\n usage and behavior.\n\n Parameters:\n config (:class:`~transformers.RobertaConfig`): Model configuration class with all the parameters of the\n model. Initializing with a config file does not load the weights associated with the model, only the configuration.\n Check out the :meth:`~transformers.PreTrainedModel.from_pretrained` method to load the model weights.\n\"\"\"\n\nROBERTA_INPUTS_DOCSTRING = r\"\"\"\n Args:\n input_ids (:obj:`torch.LongTensor` of shape :obj:`{0}`):\n Indices of input sequence tokens in the vocabulary.\n\n Indices can be obtained using :class:`transformers.RobertaTokenizer`.\n See :func:`transformers.PreTrainedTokenizer.encode` and\n :func:`transformers.PreTrainedTokenizer.__call__` for details.\n\n `What are input IDs? <../glossary.html#input-ids>`__\n attention_mask (:obj:`torch.FloatTensor` of shape :obj:`{0}`, `optional`):\n Mask to avoid performing attention on padding token indices.\n Mask values selected in ``[0, 1]``:\n ``1`` for tokens that are NOT MASKED, ``0`` for MASKED tokens.\n\n `What are attention masks? <../glossary.html#attention-mask>`__\n token_type_ids (:obj:`torch.LongTensor` of shape :obj:`{0}`, `optional`):\n Segment token indices to indicate first and second portions of the inputs.\n Indices are selected in ``[0, 1]``: ``0`` corresponds to a `sentence A` token, ``1``\n corresponds to a `sentence B` token\n\n `What are token type IDs? <../glossary.html#token-type-ids>`_\n position_ids (:obj:`torch.LongTensor` of shape :obj:`{0}`, `optional`):\n Indices of positions of each input sequence tokens in the position embeddings.\n Selected in the range ``[0, config.max_position_embeddings - 1]``.\n\n `What are position IDs? <../glossary.html#position-ids>`_\n head_mask (:obj:`torch.FloatTensor` of shape :obj:`(num_heads,)` or :obj:`(num_layers, num_heads)`, `optional`):\n Mask to nullify selected heads of the self-attention modules.\n Mask values selected in ``[0, 1]``:\n :obj:`1` indicates the head is **not masked**, :obj:`0` indicates the head is **masked**.\n inputs_embeds (:obj:`torch.FloatTensor` of shape :obj:`(batch_size, sequence_length, hidden_size)`, `optional`):\n Optionally, instead of passing :obj:`input_ids` you can choose to directly pass an embedded representation.\n This is useful if you want more control over how to convert `input_ids` indices into associated vectors\n than the model's internal embedding lookup matrix.\n output_attentions (:obj:`bool`, `optional`):\n If set to ``True``, the attentions tensors of all attention layers are returned. See ``attentions`` under returned tensors for more detail.\n output_hidden_states (:obj:`bool`, `optional`):\n If set to ``True``, the hidden states of all layers are returned. 
See ``hidden_states`` under returned tensors for more detail.\n return_dict (:obj:`bool`, `optional`):\n If set to ``True``, the model will return a :class:`~transformers.file_utils.ModelOutput` instead of a\n plain tuple.\n\"\"\"\n\n\n@add_start_docstrings(\n \"The bare RoBERTa Model transformer outputting raw hidden-states without any specific head on top.\",\n ROBERTA_START_DOCSTRING,\n)\nclass RobertaModel(BertModel):\n \"\"\"\n This class overrides :class:`~transformers.BertModel`. Please check the\n superclass for the appropriate documentation alongside usage examples.\n \"\"\"\n\n config_class = RobertaConfig\n base_model_prefix = \"roberta\"\n\n def __init__(self, config):\n super().__init__(config)\n\n self.embeddings = RobertaEmbeddings(config)\n self.init_weights()\n\n def get_input_embeddings(self):\n return self.embeddings.word_embeddings\n\n def set_input_embeddings(self, value):\n self.embeddings.word_embeddings = value\n\n\n@add_start_docstrings(\n \"\"\"RoBERTa Model with a `language modeling` head on top for CLM fine-tuning. \"\"\", ROBERTA_START_DOCSTRING\n)\nclass RobertaForCausalLM(BertPreTrainedModel):\n config_class = RobertaConfig\n base_model_prefix = \"roberta\"\n\n def __init__(self, config):\n super().__init__(config)\n\n if not config.is_decoder:\n logger.warning(\"If you want to use `RobertaLMHeadModel` as a standalone, add `is_decoder=True.`\")\n\n self.roberta = RobertaModel(config)\n self.lm_head = RobertaLMHead(config)\n\n self.init_weights()\n\n def get_output_embeddings(self):\n return self.lm_head.decoder\n\n @add_start_docstrings_to_callable(ROBERTA_INPUTS_DOCSTRING.format(\"(batch_size, sequence_length)\"))\n @replace_return_docstrings(output_type=CausalLMOutput, config_class=_CONFIG_FOR_DOC)\n def forward(\n self,\n input_ids=None,\n attention_mask=None,\n token_type_ids=None,\n position_ids=None,\n head_mask=None,\n inputs_embeds=None,\n encoder_hidden_states=None,\n encoder_attention_mask=None,\n labels=None,\n output_attentions=None,\n output_hidden_states=None,\n return_dict=None,\n ):\n r\"\"\"\n encoder_hidden_states (:obj:`torch.FloatTensor` of shape :obj:`(batch_size, sequence_length, hidden_size)`, `optional`):\n Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention\n if the model is configured as a decoder.\n encoder_attention_mask (:obj:`torch.FloatTensor` of shape :obj:`(batch_size, sequence_length)`, `optional`):\n Mask to avoid performing attention on the padding token indices of the encoder input. 
This mask\n is used in the cross-attention if the model is configured as a decoder.\n Mask values selected in ``[0, 1]``:\n ``1`` for tokens that are NOT MASKED, ``0`` for MASKED tokens.\n labels (:obj:`torch.LongTensor` of shape :obj:`(batch_size, sequence_length)`, `optional`):\n Labels for computing the left-to-right language modeling loss (next word prediction).\n Indices should be in ``[-100, 0, ..., config.vocab_size]`` (see ``input_ids`` docstring)\n Tokens with indices set to ``-100`` are ignored (masked), the loss is only computed for the tokens with labels\n in ``[0, ..., config.vocab_size]``\n\n Returns:\n\n Example::\n\n >>> from transformers import RobertaTokenizer, RobertaLMHeadModel, RobertaConfig\n >>> import torch\n\n >>> tokenizer = RobertaTokenizer.from_pretrained('roberta-base')\n >>> config = RobertaConfig.from_pretrained(\"roberta-base\")\n >>> config.is_decoder = True\n >>> model = RobertaLMHeadModel.from_pretrained('roberta-base', config=config, return_dict=True)\n\n >>> inputs = tokenizer(\"Hello, my dog is cute\", return_tensors=\"pt\")\n >>> outputs = model(**inputs)\n\n >>> prediction_logits = outputs.logits\n \"\"\"\n return_dict = return_dict if return_dict is not None else self.config.use_return_dict\n\n outputs = self.roberta(\n input_ids,\n attention_mask=attention_mask,\n token_type_ids=token_type_ids,\n position_ids=position_ids,\n head_mask=head_mask,\n inputs_embeds=inputs_embeds,\n encoder_hidden_states=encoder_hidden_states,\n encoder_attention_mask=encoder_attention_mask,\n output_attentions=output_attentions,\n output_hidden_states=output_hidden_states,\n return_dict=return_dict,\n )\n\n sequence_output = outputs[0]\n prediction_scores = self.lm_head(sequence_output)\n\n lm_loss = None\n if labels is not None:\n # we are doing next-token prediction; shift prediction scores and input ids by one\n shifted_prediction_scores = prediction_scores[:, :-1, :].contiguous()\n labels = labels[:, 1:].contiguous()\n loss_fct = CrossEntropyLoss()\n lm_loss = loss_fct(shifted_prediction_scores.view(-1, self.config.vocab_size), labels.view(-1))\n\n if not return_dict:\n output = (prediction_scores,) + outputs[2:]\n return ((lm_loss,) + output) if lm_loss is not None else output\n\n return CausalLMOutput(\n loss=lm_loss,\n logits=prediction_scores,\n hidden_states=outputs.hidden_states,\n attentions=outputs.attentions,\n )\n\n def prepare_inputs_for_generation(self, input_ids, attention_mask=None, **model_kwargs):\n input_shape = input_ids.shape\n\n # if model is used as a decoder in encoder-decoder model, the decoder attention mask is created on the fly\n if attention_mask is None:\n attention_mask = input_ids.new_ones(input_shape)\n\n return {\"input_ids\": input_ids, \"attention_mask\": attention_mask}\n\n\n@add_start_docstrings(\"\"\"RoBERTa Model with a `language modeling` head on top. 
\"\"\", ROBERTA_START_DOCSTRING)\nclass RobertaForMaskedLM(BertPreTrainedModel):\n config_class = RobertaConfig\n base_model_prefix = \"roberta\"\n authorized_missing_keys = [r\"position_ids\", r\"lm_head\\.decoder\\.bias\"]\n\n def __init__(self, config):\n super().__init__(config)\n\n if config.is_decoder:\n logger.warning(\n \"If you want to use `RobertaForMaskedLM` make sure `config.is_decoder=False` for \"\n \"bi-directional self-attention.\"\n )\n\n self.roberta = RobertaModel(config)\n self.lm_head = RobertaLMHead(config)\n\n self.init_weights()\n\n def get_output_embeddings(self):\n return self.lm_head.decoder\n\n @add_start_docstrings_to_callable(ROBERTA_INPUTS_DOCSTRING.format(\"(batch_size, sequence_length)\"))\n @add_code_sample_docstrings(\n tokenizer_class=_TOKENIZER_FOR_DOC,\n checkpoint=\"roberta-base\",\n output_type=MaskedLMOutput,\n config_class=_CONFIG_FOR_DOC,\n )\n def forward(\n self,\n input_ids=None,\n attention_mask=None,\n token_type_ids=None,\n position_ids=None,\n head_mask=None,\n inputs_embeds=None,\n encoder_hidden_states=None,\n encoder_attention_mask=None,\n labels=None,\n output_attentions=None,\n output_hidden_states=None,\n return_dict=None,\n **kwargs\n ):\n r\"\"\"\n labels (:obj:`torch.LongTensor` of shape :obj:`(batch_size, sequence_length)`, `optional`):\n Labels for computing the masked language modeling loss.\n Indices should be in ``[-100, 0, ..., config.vocab_size]`` (see ``input_ids`` docstring)\n Tokens with indices set to ``-100`` are ignored (masked), the loss is only computed for the tokens with labels\n in ``[0, ..., config.vocab_size]``\n kwargs (:obj:`Dict[str, any]`, optional, defaults to `{}`):\n Used to hide legacy arguments that have been deprecated.\n \"\"\"\n if \"masked_lm_labels\" in kwargs:\n warnings.warn(\n \"The `masked_lm_labels` argument is deprecated and will be removed in a future version, use `labels` instead.\",\n FutureWarning,\n )\n labels = kwargs.pop(\"masked_lm_labels\")\n assert kwargs == {}, f\"Unexpected keyword arguments: {list(kwargs.keys())}.\"\n return_dict = return_dict if return_dict is not None else self.config.use_return_dict\n\n outputs = self.roberta(\n input_ids,\n attention_mask=attention_mask,\n token_type_ids=token_type_ids,\n position_ids=position_ids,\n head_mask=head_mask,\n inputs_embeds=inputs_embeds,\n encoder_hidden_states=encoder_hidden_states,\n encoder_attention_mask=encoder_attention_mask,\n output_attentions=output_attentions,\n output_hidden_states=output_hidden_states,\n return_dict=return_dict,\n )\n sequence_output = outputs[0]\n prediction_scores = self.lm_head(sequence_output)\n\n masked_lm_loss = None\n if labels is not None:\n loss_fct = CrossEntropyLoss()\n masked_lm_loss = loss_fct(prediction_scores.view(-1, self.config.vocab_size), labels.view(-1))\n\n if not return_dict:\n output = (prediction_scores,) + outputs[2:]\n return ((masked_lm_loss,) + output) if masked_lm_loss is not None else output\n\n return MaskedLMOutput(\n loss=masked_lm_loss,\n logits=prediction_scores,\n hidden_states=outputs.hidden_states,\n attentions=outputs.attentions,\n )\n\n\nclass RobertaLMHead(nn.Module):\n \"\"\"Roberta Head for masked language modeling.\"\"\"\n\n def __init__(self, config):\n super().__init__()\n self.dense = nn.Linear(config.hidden_size, config.hidden_size)\n self.layer_norm = BertLayerNorm(config.hidden_size, eps=config.layer_norm_eps)\n\n self.decoder = nn.Linear(config.hidden_size, config.vocab_size, bias=False)\n self.bias = nn.Parameter(torch.zeros(config.vocab_size))\n\n 
# Need a link between the two variables so that the bias is correctly resized with `resize_token_embeddings`\n self.decoder.bias = self.bias\n\n def forward(self, features, **kwargs):\n x = self.dense(features)\n x = gelu(x)\n x = self.layer_norm(x)\n\n # project back to size of vocabulary with bias\n x = self.decoder(x)\n\n return x\n\n\n@add_start_docstrings(\n \"\"\"RoBERTa Model transformer with a sequence classification/regression head on top (a linear layer\n on top of the pooled output) e.g. for GLUE tasks. \"\"\",\n ROBERTA_START_DOCSTRING,\n)\nclass RobertaForSequenceClassification(BertPreTrainedModel):\n config_class = RobertaConfig\n base_model_prefix = \"roberta\"\n\n def __init__(self, config):\n super().__init__(config)\n self.num_labels = config.num_labels\n\n self.roberta = RobertaModel(config)\n self.classifier = RobertaClassificationHead(config)\n\n self.init_weights()\n\n @add_start_docstrings_to_callable(ROBERTA_INPUTS_DOCSTRING.format(\"(batch_size, sequence_length)\"))\n @add_code_sample_docstrings(\n tokenizer_class=_TOKENIZER_FOR_DOC,\n checkpoint=\"roberta-base\",\n output_type=SequenceClassifierOutput,\n config_class=_CONFIG_FOR_DOC,\n )\n def forward(\n self,\n input_ids=None,\n attention_mask=None,\n token_type_ids=None,\n position_ids=None,\n head_mask=None,\n inputs_embeds=None,\n labels=None,\n output_attentions=None,\n output_hidden_states=None,\n return_dict=None,\n ):\n r\"\"\"\n labels (:obj:`torch.LongTensor` of shape :obj:`(batch_size,)`, `optional`):\n Labels for computing the sequence classification/regression loss.\n Indices should be in :obj:`[0, ..., config.num_labels - 1]`.\n If :obj:`config.num_labels == 1` a regression loss is computed (Mean-Square loss),\n If :obj:`config.num_labels > 1` a classification loss is computed (Cross-Entropy).\n \"\"\"\n return_dict = return_dict if return_dict is not None else self.config.use_return_dict\n\n outputs = self.roberta(\n input_ids,\n attention_mask=attention_mask,\n token_type_ids=token_type_ids,\n position_ids=position_ids,\n head_mask=head_mask,\n inputs_embeds=inputs_embeds,\n output_attentions=output_attentions,\n output_hidden_states=output_hidden_states,\n return_dict=return_dict,\n )\n sequence_output = outputs[0]\n logits = self.classifier(sequence_output)\n\n loss = None\n if labels is not None:\n if self.num_labels == 1:\n # We are doing regression\n loss_fct = MSELoss()\n loss = loss_fct(logits.view(-1), labels.view(-1))\n else:\n loss_fct = CrossEntropyLoss()\n loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1))\n\n if not return_dict:\n output = (logits,) + outputs[2:]\n return ((loss,) + output) if loss is not None else output\n\n return SequenceClassifierOutput(\n loss=loss,\n logits=logits,\n hidden_states=outputs.hidden_states,\n attentions=outputs.attentions,\n )\n\n\n@add_start_docstrings(\n \"\"\"Roberta Model with a multiple choice classification head on top (a linear layer on top of\n the pooled output and a softmax) e.g. for RocStories/SWAG tasks. 
\"\"\",\n ROBERTA_START_DOCSTRING,\n)\nclass RobertaForMultipleChoice(BertPreTrainedModel):\n config_class = RobertaConfig\n base_model_prefix = \"roberta\"\n\n def __init__(self, config):\n super().__init__(config)\n\n self.roberta = RobertaModel(config)\n self.dropout = nn.Dropout(config.hidden_dropout_prob)\n self.classifier = nn.Linear(config.hidden_size, 1)\n\n self.init_weights()\n\n @add_start_docstrings_to_callable(ROBERTA_INPUTS_DOCSTRING.format(\"(batch_size, num_choices, sequence_length)\"))\n @add_code_sample_docstrings(\n tokenizer_class=_TOKENIZER_FOR_DOC,\n checkpoint=\"roberta-base\",\n output_type=MultipleChoiceModelOutput,\n config_class=_CONFIG_FOR_DOC,\n )\n def forward(\n self,\n input_ids=None,\n token_type_ids=None,\n attention_mask=None,\n labels=None,\n position_ids=None,\n head_mask=None,\n inputs_embeds=None,\n output_attentions=None,\n output_hidden_states=None,\n return_dict=None,\n ):\n r\"\"\"\n labels (:obj:`torch.LongTensor` of shape :obj:`(batch_size,)`, `optional`):\n Labels for computing the multiple choice classification loss.\n Indices should be in ``[0, ..., num_choices]`` where `num_choices` is the size of the second dimension\n of the input tensors. (see `input_ids` above)\n \"\"\"\n return_dict = return_dict if return_dict is not None else self.config.use_return_dict\n num_choices = input_ids.shape[1] if input_ids is not None else inputs_embeds.shape[1]\n\n flat_input_ids = input_ids.view(-1, input_ids.size(-1)) if input_ids is not None else None\n flat_position_ids = position_ids.view(-1, position_ids.size(-1)) if position_ids is not None else None\n flat_token_type_ids = token_type_ids.view(-1, token_type_ids.size(-1)) if token_type_ids is not None else None\n flat_attention_mask = attention_mask.view(-1, attention_mask.size(-1)) if attention_mask is not None else None\n flat_inputs_embeds = (\n inputs_embeds.view(-1, inputs_embeds.size(-2), inputs_embeds.size(-1))\n if inputs_embeds is not None\n else None\n )\n\n outputs = self.roberta(\n flat_input_ids,\n position_ids=flat_position_ids,\n token_type_ids=flat_token_type_ids,\n attention_mask=flat_attention_mask,\n head_mask=head_mask,\n inputs_embeds=flat_inputs_embeds,\n output_attentions=output_attentions,\n output_hidden_states=output_hidden_states,\n return_dict=return_dict,\n )\n pooled_output = outputs[1]\n\n pooled_output = self.dropout(pooled_output)\n logits = self.classifier(pooled_output)\n reshaped_logits = logits.view(-1, num_choices)\n\n loss = None\n if labels is not None:\n loss_fct = CrossEntropyLoss()\n loss = loss_fct(reshaped_logits, labels)\n\n if not return_dict:\n output = (reshaped_logits,) + outputs[2:]\n return ((loss,) + output) if loss is not None else output\n\n return MultipleChoiceModelOutput(\n loss=loss,\n logits=reshaped_logits,\n hidden_states=outputs.hidden_states,\n attentions=outputs.attentions,\n )\n\n\n@add_start_docstrings(\n \"\"\"Roberta Model with a token classification head on top (a linear layer on top of\n the hidden-states output) e.g. for Named-Entity-Recognition (NER) tasks. 
\"\"\",\n ROBERTA_START_DOCSTRING,\n)\nclass RobertaForTokenClassification(BertPreTrainedModel):\n config_class = RobertaConfig\n base_model_prefix = \"roberta\"\n\n def __init__(self, config):\n super().__init__(config)\n self.num_labels = config.num_labels\n\n self.roberta = RobertaModel(config)\n self.dropout = nn.Dropout(config.hidden_dropout_prob)\n self.classifier = nn.Linear(config.hidden_size, config.num_labels)\n\n self.init_weights()\n\n @add_start_docstrings_to_callable(ROBERTA_INPUTS_DOCSTRING.format(\"(batch_size, sequence_length)\"))\n @add_code_sample_docstrings(\n tokenizer_class=_TOKENIZER_FOR_DOC,\n checkpoint=\"roberta-base\",\n output_type=TokenClassifierOutput,\n config_class=_CONFIG_FOR_DOC,\n )\n def forward(\n self,\n input_ids=None,\n attention_mask=None,\n token_type_ids=None,\n position_ids=None,\n head_mask=None,\n inputs_embeds=None,\n labels=None,\n output_attentions=None,\n output_hidden_states=None,\n return_dict=None,\n ):\n r\"\"\"\n labels (:obj:`torch.LongTensor` of shape :obj:`(batch_size, sequence_length)`, `optional`):\n Labels for computing the token classification loss.\n Indices should be in ``[0, ..., config.num_labels - 1]``.\n \"\"\"\n return_dict = return_dict if return_dict is not None else self.config.use_return_dict\n\n outputs = self.roberta(\n input_ids,\n attention_mask=attention_mask,\n token_type_ids=token_type_ids,\n position_ids=position_ids,\n head_mask=head_mask,\n inputs_embeds=inputs_embeds,\n output_attentions=output_attentions,\n output_hidden_states=output_hidden_states,\n return_dict=return_dict,\n )\n\n sequence_output = outputs[0]\n\n sequence_output = self.dropout(sequence_output)\n logits = self.classifier(sequence_output)\n\n loss = None\n if labels is not None:\n loss_fct = CrossEntropyLoss()\n # Only keep active parts of the loss\n if attention_mask is not None:\n active_loss = attention_mask.view(-1) == 1\n active_logits = logits.view(-1, self.num_labels)\n active_labels = torch.where(\n active_loss, labels.view(-1), torch.tensor(loss_fct.ignore_index).type_as(labels)\n )\n loss = loss_fct(active_logits, active_labels)\n else:\n loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1))\n\n if not return_dict:\n output = (logits,) + outputs[2:]\n return ((loss,) + output) if loss is not None else output\n\n return TokenClassifierOutput(\n loss=loss,\n logits=logits,\n hidden_states=outputs.hidden_states,\n attentions=outputs.attentions,\n )\n\n\nclass RobertaClassificationHead(nn.Module):\n \"\"\"Head for sentence-level classification tasks.\"\"\"\n\n def __init__(self, config):\n super().__init__()\n self.dense = nn.Linear(config.hidden_size, config.hidden_size)\n self.dropout = nn.Dropout(config.hidden_dropout_prob)\n self.out_proj = nn.Linear(config.hidden_size, config.num_labels)\n\n def forward(self, features, **kwargs):\n x = features[:, 0, :] # take <s> token (equiv. to [CLS])\n x = self.dropout(x)\n x = self.dense(x)\n x = torch.tanh(x)\n x = self.dropout(x)\n x = self.out_proj(x)\n return x\n\n\n@add_start_docstrings(\n \"\"\"Roberta Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear layers on top of\n the hidden-states output to compute `span start logits` and `span end logits`). 
\"\"\",\n ROBERTA_START_DOCSTRING,\n)\nclass RobertaForQuestionAnswering(BertPreTrainedModel):\n config_class = RobertaConfig\n base_model_prefix = \"roberta\"\n\n def __init__(self, config):\n super().__init__(config)\n self.num_labels = config.num_labels\n\n self.roberta = RobertaModel(config)\n self.qa_outputs = nn.Linear(config.hidden_size, config.num_labels)\n\n self.init_weights()\n\n @add_start_docstrings_to_callable(ROBERTA_INPUTS_DOCSTRING.format(\"(batch_size, sequence_length)\"))\n @add_code_sample_docstrings(\n tokenizer_class=_TOKENIZER_FOR_DOC,\n checkpoint=\"roberta-base\",\n output_type=QuestionAnsweringModelOutput,\n config_class=_CONFIG_FOR_DOC,\n )\n def forward(\n self,\n input_ids=None,\n attention_mask=None,\n token_type_ids=None,\n position_ids=None,\n head_mask=None,\n inputs_embeds=None,\n start_positions=None,\n end_positions=None,\n output_attentions=None,\n output_hidden_states=None,\n return_dict=None,\n ):\n r\"\"\"\n start_positions (:obj:`torch.LongTensor` of shape :obj:`(batch_size,)`, `optional`):\n Labels for position (index) of the start of the labelled span for computing the token classification loss.\n Positions are clamped to the length of the sequence (`sequence_length`).\n Position outside of the sequence are not taken into account for computing the loss.\n end_positions (:obj:`torch.LongTensor` of shape :obj:`(batch_size,)`, `optional`):\n Labels for position (index) of the end of the labelled span for computing the token classification loss.\n Positions are clamped to the length of the sequence (`sequence_length`).\n Position outside of the sequence are not taken into account for computing the loss.\n \"\"\"\n return_dict = return_dict if return_dict is not None else self.config.use_return_dict\n\n outputs = self.roberta(\n input_ids,\n attention_mask=attention_mask,\n token_type_ids=token_type_ids,\n position_ids=position_ids,\n head_mask=head_mask,\n inputs_embeds=inputs_embeds,\n output_attentions=output_attentions,\n output_hidden_states=output_hidden_states,\n return_dict=return_dict,\n )\n\n sequence_output = outputs[0]\n\n logits = self.qa_outputs(sequence_output)\n start_logits, end_logits = logits.split(1, dim=-1)\n start_logits = start_logits.squeeze(-1)\n end_logits = end_logits.squeeze(-1)\n\n total_loss = None\n if start_positions is not None and end_positions is not None:\n # If we are on multi-GPU, split add a dimension\n if len(start_positions.size()) > 1:\n start_positions = start_positions.squeeze(-1)\n if len(end_positions.size()) > 1:\n end_positions = end_positions.squeeze(-1)\n # sometimes the start/end positions are outside our model inputs, we ignore these terms\n ignored_index = start_logits.size(1)\n start_positions.clamp_(0, ignored_index)\n end_positions.clamp_(0, ignored_index)\n\n loss_fct = CrossEntropyLoss(ignore_index=ignored_index)\n start_loss = loss_fct(start_logits, start_positions)\n end_loss = loss_fct(end_logits, end_positions)\n total_loss = (start_loss + end_loss) / 2\n\n if not return_dict:\n output = (start_logits, end_logits) + outputs[2:]\n return ((total_loss,) + output) if total_loss is not None else output\n\n return QuestionAnsweringModelOutput(\n loss=total_loss,\n start_logits=start_logits,\n end_logits=end_logits,\n hidden_states=outputs.hidden_states,\n attentions=outputs.attentions,\n )\n\n\ndef create_position_ids_from_input_ids(input_ids, padding_idx):\n \"\"\"Replace non-padding symbols with their position numbers. Position numbers begin at\n padding_idx+1. Padding symbols are ignored. 
This is modified from fairseq's\n `utils.make_positions`.\n\n :param torch.Tensor x:\n :return torch.Tensor:\n \"\"\"\n # The series of casts and type-conversions here are carefully balanced to both work with ONNX export and XLA.\n mask = input_ids.ne(padding_idx).int()\n incremental_indices = torch.cumsum(mask, dim=1).type_as(mask) * mask\n return incremental_indices.long() + padding_idx\n", "path": "src/transformers/modeling_roberta.py" } ]
diff --git a/src/transformers/modeling_roberta.py b/src/transformers/modeling_roberta.py index f0be480e4be0..76b7b430d3b4 100644 --- a/src/transformers/modeling_roberta.py +++ b/src/transformers/modeling_roberta.py @@ -303,6 +303,7 @@ def prepare_inputs_for_generation(self, input_ids, attention_mask=None, **model_ class RobertaForMaskedLM(BertPreTrainedModel): config_class = RobertaConfig base_model_prefix = "roberta" + authorized_missing_keys = [r"position_ids", r"lm_head\.decoder\.bias"] def __init__(self, config): super().__init__(config)
avocado-framework__avocado-4576
TestSuite() initialization config param missing error

In the definition of the TestSuite class, the "config" parameter has a default value of None, and initialization fails when that default is used, forcing us to pass a dummy config to get it working.

Error example:
```
'NoneType' object has no attribute 'get'
<class 'AttributeError'>
'NoneType' object has no attribute 'get'
File "/Library/Python/3.7/site-packages/bluegen/common/main.py", line 65, in main
ret = command.get(opts.cmd)(opts)
File "/Library/Python/3.7/site-packages/bluegen/common/command.py", line 142, in __call__
return self.func(opts, *args, **kwargs)
File "/Library/Python/3.7/site-packages/bluegen/commands/test.py", line 34, in test
test_suites = avocado.suites_generator(opts.sequence_file, opts.tests_ref, opts.parallel, opts.cfg_dir)
File "/Library/Python/3.7/site-packages/bluegen/utils/avocado.py", line 125, in suites_generator
job_config=JOB_CONFIG)
File "/Users/marioalvarado/Library/Python/3.7/lib/python/site-packages/avocado/core/suite.py", line 106, in __init__
if (config.get('run.dry_run.enabled') and
```

Code example:
```
test_suite = TestSuite(name="BlueGen Sequential Execution",
                       tests=runnables_with_param,
                       job_config=JOB_CONFIG)
```
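For illustration only (this is not part of the original report): the "dummy input" workaround mentioned above amounts to passing an explicit, even empty, config dict, which sidesteps the crash because the failing `config.get(...)` call in `__init__` only breaks while the raw `config` argument is still `None`. `runnables_with_param` and `JOB_CONFIG` are the same placeholders as in the code example above.

```python
from avocado.core.suite import TestSuite

# Hypothetical workaround sketch: supply a dummy config dict instead of
# relying on the default config=None.
test_suite = TestSuite(name="BlueGen Sequential Execution",
                       config={},  # empty dict avoids config.get() on None
                       tests=runnables_with_param,
                       job_config=JOB_CONFIG)
```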
[ { "content": "# This program is free software; you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation; either version 2 of the License, or\n# (at your option) any later version.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.\n#\n# See LICENSE for more details.\n#\n# Copyright: Red Hat Inc. 2020\n# Author: Beraldo Leal <[email protected]>\n\nimport os\nfrom enum import Enum\nfrom uuid import uuid4\n\nfrom .dispatcher import RunnerDispatcher\nfrom .exceptions import (JobTestSuiteReferenceResolutionError,\n OptionValidationError)\nfrom .loader import (DiscoverMode, LoaderError, LoaderUnhandledReferenceError,\n loader)\nfrom .parser import HintParser\nfrom .resolver import ReferenceResolutionResult, resolve\nfrom .settings import settings\nfrom .tags import filter_test_tags, filter_test_tags_runnable\nfrom .test import DryRunTest, Test\nfrom .varianter import Varianter\n\n\nclass TestSuiteError(Exception):\n pass\n\n\nclass TestSuiteStatus(Enum):\n RESOLUTION_NOT_STARTED = object()\n TESTS_NOT_FOUND = object()\n TESTS_FOUND = object()\n UNKNOWN = object()\n\n\ndef resolutions_to_runnables(resolutions, config):\n \"\"\"\n Transforms resolver resolutions into runnables suitable for a suite\n\n A resolver resolution\n (:class:`avocado.core.resolver.ReferenceResolution`) contains\n information about the resolution process (if it was successful\n or not) and in case of successful resolutions a list of\n resolutions. It's expected that the resolution contain one\n or more :class:`avocado.core.nrunner.Runnable`.\n\n This function sets the runnable specific configuration for each\n runnable. 
It also performs tag based filtering on the runnables\n for possibly excluding some of the Runnables.\n\n :param resolutions: possible multiple resolutions for multiple\n references\n :type resolutions: list of :class:`avocado.core.resolver.ReferenceResolution`\n :param config: job configuration\n :type config: dict\n :returns: the resolutions converted to tasks\n :rtype: list of :class:`avocado.core.nrunner.Task`\n \"\"\"\n result = []\n filter_by_tags = config.get(\"filter.by_tags.tags\")\n include_empty = config.get(\"filter.by_tags.include_empty\")\n include_empty_key = config.get('filter.by_tags.include_empty_key')\n runner_config = settings.filter_config(config, r'^runner\\.')\n for resolution in resolutions:\n if resolution.result != ReferenceResolutionResult.SUCCESS:\n continue\n for runnable in resolution.resolutions:\n if filter_by_tags:\n if not filter_test_tags_runnable(runnable,\n filter_by_tags,\n include_empty,\n include_empty_key):\n continue\n runnable.config = runner_config\n result.append(runnable)\n return result\n\n\nclass TestSuite:\n def __init__(self, name, config=None, tests=None, job_config=None,\n resolutions=None):\n self.name = name\n self.tests = tests\n self.resolutions = resolutions\n\n # Create a complete config dict with all registered options + custom\n # config\n self.config = settings.as_dict()\n if job_config:\n self.config.update(job_config)\n if config:\n self.config.update(config)\n\n self._variants = None\n self._references = None\n self._runner = None\n self._test_parameters = None\n\n if (config.get('run.dry_run.enabled') and\n self.config.get('run.test_runner') == 'runner'):\n self._convert_to_dry_run()\n\n if self.size == 0:\n return\n\n def __len__(self):\n \"\"\"This is a convenient method to run `len()` over this object.\n\n With this you can run: len(a_suite) and will return the same as\n `len(a_suite.tests)`.\n \"\"\"\n return self.size\n\n def _convert_to_dry_run(self):\n for i in range(self.size):\n self.tests[i] = [DryRunTest, self.tests[i][1]]\n\n @classmethod\n def _from_config_with_loader(cls, config, name=None):\n references = config.get('run.references')\n ignore_missing = config.get('run.ignore_missing_references')\n verbose = config.get('core.verbose')\n subcommand = config.get('subcommand')\n\n # To-be-removed: For some reason, avocado list will display more tests\n # if in verbose mode. IMO, this is a little inconsistent with the 'run'\n # command. 
This hack was needed to make one specific test happy.\n tests_mode = DiscoverMode.DEFAULT\n if subcommand == 'list':\n if verbose:\n tests_mode = DiscoverMode.ALL\n else:\n tests_mode = DiscoverMode.AVAILABLE\n\n try:\n loader.load_plugins(config)\n tests = loader.discover(references,\n force=ignore_missing,\n which_tests=tests_mode)\n if config.get(\"filter.by_tags.tags\"):\n tests = filter_test_tags(\n tests,\n config.get(\"filter.by_tags.tags\"),\n config.get(\"filter.by_tags.include_empty\"),\n config.get('filter.by_tags.include_empty_key'))\n except (LoaderUnhandledReferenceError, LoaderError) as details:\n raise TestSuiteError(details)\n\n if name is None:\n name = str(uuid4())\n return cls(name=name, config=config, tests=tests)\n\n @classmethod\n def _from_config_with_resolver(cls, config, name=None):\n ignore_missing = config.get('run.ignore_missing_references')\n references = config.get('run.references')\n try:\n hint = None\n hint_filepath = '.avocado.hint'\n if os.path.exists(hint_filepath):\n hint = HintParser(hint_filepath)\n resolutions = resolve(references,\n hint=hint,\n ignore_missing=ignore_missing)\n except JobTestSuiteReferenceResolutionError as details:\n raise TestSuiteError(details)\n\n runnables = resolutions_to_runnables(resolutions, config)\n\n if name is None:\n name = str(uuid4())\n return cls(name=name, config=config, tests=runnables,\n resolutions=resolutions)\n\n def _get_stats_from_nrunner(self):\n stats = {}\n for test in self.tests:\n stats = self._increment_dict_key_counter(stats, test.kind)\n return stats\n\n def _get_stats_from_runner(self):\n stats = {}\n mapping = loader.get_type_label_mapping()\n\n for cls, _ in self.tests:\n if isinstance(cls, str):\n cls = Test\n stats = self._increment_dict_key_counter(stats, mapping[cls])\n return stats\n\n def _get_tags_stats_from_nrunner(self):\n stats = {}\n for runnable in self.tests:\n if runnable is None:\n continue\n tags = runnable.tags or {}\n for tag in tags:\n stats = self._increment_dict_key_counter(stats, tag)\n return stats\n\n def _get_tags_stats_from_runner(self):\n stats = {}\n for test in self.tests:\n params = test[1]\n for tag in params.get('tags', {}):\n stats = self._increment_dict_key_counter(stats, tag)\n return stats\n\n @staticmethod\n def _increment_dict_key_counter(dict_object, key):\n try:\n dict_object[key.lower()] += 1\n except KeyError:\n dict_object[key.lower()] = 1\n return dict_object\n\n @property\n def references(self):\n if self._references is None:\n self._references = self.config.get('run.references')\n return self._references\n\n @property\n def runner(self):\n if self._runner is None:\n runner_name = self.config.get('run.test_runner') or 'runner'\n try:\n runner_extension = RunnerDispatcher()[runner_name]\n self._runner = runner_extension.obj\n except KeyError:\n raise TestSuiteError(\"Runner not implemented.\")\n return self._runner\n\n @property\n def size(self):\n \"\"\"The overall length/size of this test suite.\"\"\"\n if self.tests is None:\n return 0\n return len(self.tests)\n\n @property\n def stats(self):\n \"\"\"Return a statistics dict with the current tests.\"\"\"\n runner_name = self.config.get('run.test_runner') or 'runner'\n if runner_name == 'runner':\n return self._get_stats_from_runner()\n elif runner_name == 'nrunner':\n return self._get_stats_from_nrunner()\n return {}\n\n @property\n def status(self):\n if self.tests is None:\n return TestSuiteStatus.RESOLUTION_NOT_STARTED\n elif self.size == 0:\n return TestSuiteStatus.TESTS_NOT_FOUND\n elif 
self.size > 0:\n return TestSuiteStatus.TESTS_FOUND\n else:\n return TestSuiteStatus.UNKNOWN\n\n @property\n def tags_stats(self):\n \"\"\"Return a statistics dict with the current tests tags.\"\"\"\n runner_name = self.config.get('run.test_runner') or 'runner'\n if runner_name == 'runner':\n return self._get_tags_stats_from_runner()\n elif runner_name == 'nrunner':\n return self._get_tags_stats_from_nrunner()\n return {}\n\n @property\n def test_parameters(self):\n \"\"\"Placeholder for test parameters.\n\n This is related to --test-parameters command line option or\n (run.test_parameters).\n \"\"\"\n if self._test_parameters is None:\n self._test_parameters = {name: value for name, value\n in self.config.get('run.test_parameters',\n [])}\n return self._test_parameters\n\n @property\n def variants(self):\n if self._variants is None:\n variants = Varianter()\n if not variants.is_parsed():\n try:\n variants.parse(self.config)\n except (IOError, ValueError) as details:\n raise OptionValidationError(\"Unable to parse \"\n \"variant: %s\" % details)\n self._variants = variants\n return self._variants\n\n def run(self, job):\n \"\"\"Run this test suite with the job context in mind.\n\n :param job: A :class:`avocado.core.job.Job` instance.\n :rtype: set\n \"\"\"\n return self.runner.run_suite(job, self)\n\n @classmethod\n def from_config(cls, config, name=None, job_config=None):\n \"\"\"Helper method to create a TestSuite from config dicts.\n\n This is different from the TestSuite() initialization because here we\n are assuming that you need some help to build the test suite. Avocado\n will try to resolve tests based on the configuration information\n instead of assuming pre populated tests.\n\n If you need to create a custom TestSuite, please use the TestSuite()\n constructor instead of this method.\n\n :param config: A config dict to be used on the desired test suite.\n :type config: dict\n :param name: The name of the test suite. This is optional and default\n is a random uuid.\n :type name: str\n :param job_config: The job config dict (a global config). Use this to\n avoid huge configs per test suite. This is also\n optional.\n :type job_config: dict\n \"\"\"\n suite_config = config\n config = settings.as_dict()\n config.update(suite_config)\n if job_config:\n config.update(job_config)\n runner = config.get('run.test_runner') or 'runner'\n if runner == 'nrunner':\n suite = cls._from_config_with_resolver(config, name)\n else:\n suite = cls._from_config_with_loader(config, name)\n\n if not config.get('run.ignore_missing_references'):\n if not suite.tests:\n msg = (\"Test Suite could not be create. No test references \"\n \"provided nor any other arguments resolved into tests\")\n raise TestSuiteError(msg)\n\n return suite\n", "path": "avocado/core/suite.py" } ]
[ { "content": "# This program is free software; you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation; either version 2 of the License, or\n# (at your option) any later version.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.\n#\n# See LICENSE for more details.\n#\n# Copyright: Red Hat Inc. 2020\n# Author: Beraldo Leal <[email protected]>\n\nimport os\nfrom enum import Enum\nfrom uuid import uuid4\n\nfrom .dispatcher import RunnerDispatcher\nfrom .exceptions import (JobTestSuiteReferenceResolutionError,\n OptionValidationError)\nfrom .loader import (DiscoverMode, LoaderError, LoaderUnhandledReferenceError,\n loader)\nfrom .parser import HintParser\nfrom .resolver import ReferenceResolutionResult, resolve\nfrom .settings import settings\nfrom .tags import filter_test_tags, filter_test_tags_runnable\nfrom .test import DryRunTest, Test\nfrom .varianter import Varianter\n\n\nclass TestSuiteError(Exception):\n pass\n\n\nclass TestSuiteStatus(Enum):\n RESOLUTION_NOT_STARTED = object()\n TESTS_NOT_FOUND = object()\n TESTS_FOUND = object()\n UNKNOWN = object()\n\n\ndef resolutions_to_runnables(resolutions, config):\n \"\"\"\n Transforms resolver resolutions into runnables suitable for a suite\n\n A resolver resolution\n (:class:`avocado.core.resolver.ReferenceResolution`) contains\n information about the resolution process (if it was successful\n or not) and in case of successful resolutions a list of\n resolutions. It's expected that the resolution contain one\n or more :class:`avocado.core.nrunner.Runnable`.\n\n This function sets the runnable specific configuration for each\n runnable. 
It also performs tag based filtering on the runnables\n for possibly excluding some of the Runnables.\n\n :param resolutions: possible multiple resolutions for multiple\n references\n :type resolutions: list of :class:`avocado.core.resolver.ReferenceResolution`\n :param config: job configuration\n :type config: dict\n :returns: the resolutions converted to tasks\n :rtype: list of :class:`avocado.core.nrunner.Task`\n \"\"\"\n result = []\n filter_by_tags = config.get(\"filter.by_tags.tags\")\n include_empty = config.get(\"filter.by_tags.include_empty\")\n include_empty_key = config.get('filter.by_tags.include_empty_key')\n runner_config = settings.filter_config(config, r'^runner\\.')\n for resolution in resolutions:\n if resolution.result != ReferenceResolutionResult.SUCCESS:\n continue\n for runnable in resolution.resolutions:\n if filter_by_tags:\n if not filter_test_tags_runnable(runnable,\n filter_by_tags,\n include_empty,\n include_empty_key):\n continue\n runnable.config = runner_config\n result.append(runnable)\n return result\n\n\nclass TestSuite:\n def __init__(self, name, config=None, tests=None, job_config=None,\n resolutions=None):\n self.name = name\n self.tests = tests\n self.resolutions = resolutions\n\n # Create a complete config dict with all registered options + custom\n # config\n self.config = settings.as_dict()\n if job_config:\n self.config.update(job_config)\n if config:\n self.config.update(config)\n\n self._variants = None\n self._references = None\n self._runner = None\n self._test_parameters = None\n\n if (self.config.get('run.dry_run.enabled') and\n self.config.get('run.test_runner') == 'runner'):\n self._convert_to_dry_run()\n\n if self.size == 0:\n return\n\n def __len__(self):\n \"\"\"This is a convenient method to run `len()` over this object.\n\n With this you can run: len(a_suite) and will return the same as\n `len(a_suite.tests)`.\n \"\"\"\n return self.size\n\n def _convert_to_dry_run(self):\n for i in range(self.size):\n self.tests[i] = [DryRunTest, self.tests[i][1]]\n\n @classmethod\n def _from_config_with_loader(cls, config, name=None):\n references = config.get('run.references')\n ignore_missing = config.get('run.ignore_missing_references')\n verbose = config.get('core.verbose')\n subcommand = config.get('subcommand')\n\n # To-be-removed: For some reason, avocado list will display more tests\n # if in verbose mode. IMO, this is a little inconsistent with the 'run'\n # command. 
This hack was needed to make one specific test happy.\n tests_mode = DiscoverMode.DEFAULT\n if subcommand == 'list':\n if verbose:\n tests_mode = DiscoverMode.ALL\n else:\n tests_mode = DiscoverMode.AVAILABLE\n\n try:\n loader.load_plugins(config)\n tests = loader.discover(references,\n force=ignore_missing,\n which_tests=tests_mode)\n if config.get(\"filter.by_tags.tags\"):\n tests = filter_test_tags(\n tests,\n config.get(\"filter.by_tags.tags\"),\n config.get(\"filter.by_tags.include_empty\"),\n config.get('filter.by_tags.include_empty_key'))\n except (LoaderUnhandledReferenceError, LoaderError) as details:\n raise TestSuiteError(details)\n\n if name is None:\n name = str(uuid4())\n return cls(name=name, config=config, tests=tests)\n\n @classmethod\n def _from_config_with_resolver(cls, config, name=None):\n ignore_missing = config.get('run.ignore_missing_references')\n references = config.get('run.references')\n try:\n hint = None\n hint_filepath = '.avocado.hint'\n if os.path.exists(hint_filepath):\n hint = HintParser(hint_filepath)\n resolutions = resolve(references,\n hint=hint,\n ignore_missing=ignore_missing)\n except JobTestSuiteReferenceResolutionError as details:\n raise TestSuiteError(details)\n\n runnables = resolutions_to_runnables(resolutions, config)\n\n if name is None:\n name = str(uuid4())\n return cls(name=name, config=config, tests=runnables,\n resolutions=resolutions)\n\n def _get_stats_from_nrunner(self):\n stats = {}\n for test in self.tests:\n stats = self._increment_dict_key_counter(stats, test.kind)\n return stats\n\n def _get_stats_from_runner(self):\n stats = {}\n mapping = loader.get_type_label_mapping()\n\n for cls, _ in self.tests:\n if isinstance(cls, str):\n cls = Test\n stats = self._increment_dict_key_counter(stats, mapping[cls])\n return stats\n\n def _get_tags_stats_from_nrunner(self):\n stats = {}\n for runnable in self.tests:\n if runnable is None:\n continue\n tags = runnable.tags or {}\n for tag in tags:\n stats = self._increment_dict_key_counter(stats, tag)\n return stats\n\n def _get_tags_stats_from_runner(self):\n stats = {}\n for test in self.tests:\n params = test[1]\n for tag in params.get('tags', {}):\n stats = self._increment_dict_key_counter(stats, tag)\n return stats\n\n @staticmethod\n def _increment_dict_key_counter(dict_object, key):\n try:\n dict_object[key.lower()] += 1\n except KeyError:\n dict_object[key.lower()] = 1\n return dict_object\n\n @property\n def references(self):\n if self._references is None:\n self._references = self.config.get('run.references')\n return self._references\n\n @property\n def runner(self):\n if self._runner is None:\n runner_name = self.config.get('run.test_runner') or 'runner'\n try:\n runner_extension = RunnerDispatcher()[runner_name]\n self._runner = runner_extension.obj\n except KeyError:\n raise TestSuiteError(\"Runner not implemented.\")\n return self._runner\n\n @property\n def size(self):\n \"\"\"The overall length/size of this test suite.\"\"\"\n if self.tests is None:\n return 0\n return len(self.tests)\n\n @property\n def stats(self):\n \"\"\"Return a statistics dict with the current tests.\"\"\"\n runner_name = self.config.get('run.test_runner') or 'runner'\n if runner_name == 'runner':\n return self._get_stats_from_runner()\n elif runner_name == 'nrunner':\n return self._get_stats_from_nrunner()\n return {}\n\n @property\n def status(self):\n if self.tests is None:\n return TestSuiteStatus.RESOLUTION_NOT_STARTED\n elif self.size == 0:\n return TestSuiteStatus.TESTS_NOT_FOUND\n elif 
self.size > 0:\n return TestSuiteStatus.TESTS_FOUND\n else:\n return TestSuiteStatus.UNKNOWN\n\n @property\n def tags_stats(self):\n \"\"\"Return a statistics dict with the current tests tags.\"\"\"\n runner_name = self.config.get('run.test_runner') or 'runner'\n if runner_name == 'runner':\n return self._get_tags_stats_from_runner()\n elif runner_name == 'nrunner':\n return self._get_tags_stats_from_nrunner()\n return {}\n\n @property\n def test_parameters(self):\n \"\"\"Placeholder for test parameters.\n\n This is related to --test-parameters command line option or\n (run.test_parameters).\n \"\"\"\n if self._test_parameters is None:\n self._test_parameters = {name: value for name, value\n in self.config.get('run.test_parameters',\n [])}\n return self._test_parameters\n\n @property\n def variants(self):\n if self._variants is None:\n variants = Varianter()\n if not variants.is_parsed():\n try:\n variants.parse(self.config)\n except (IOError, ValueError) as details:\n raise OptionValidationError(\"Unable to parse \"\n \"variant: %s\" % details)\n self._variants = variants\n return self._variants\n\n def run(self, job):\n \"\"\"Run this test suite with the job context in mind.\n\n :param job: A :class:`avocado.core.job.Job` instance.\n :rtype: set\n \"\"\"\n return self.runner.run_suite(job, self)\n\n @classmethod\n def from_config(cls, config, name=None, job_config=None):\n \"\"\"Helper method to create a TestSuite from config dicts.\n\n This is different from the TestSuite() initialization because here we\n are assuming that you need some help to build the test suite. Avocado\n will try to resolve tests based on the configuration information\n instead of assuming pre populated tests.\n\n If you need to create a custom TestSuite, please use the TestSuite()\n constructor instead of this method.\n\n :param config: A config dict to be used on the desired test suite.\n :type config: dict\n :param name: The name of the test suite. This is optional and default\n is a random uuid.\n :type name: str\n :param job_config: The job config dict (a global config). Use this to\n avoid huge configs per test suite. This is also\n optional.\n :type job_config: dict\n \"\"\"\n suite_config = config\n config = settings.as_dict()\n config.update(suite_config)\n if job_config:\n config.update(job_config)\n runner = config.get('run.test_runner') or 'runner'\n if runner == 'nrunner':\n suite = cls._from_config_with_resolver(config, name)\n else:\n suite = cls._from_config_with_loader(config, name)\n\n if not config.get('run.ignore_missing_references'):\n if not suite.tests:\n msg = (\"Test Suite could not be create. No test references \"\n \"provided nor any other arguments resolved into tests\")\n raise TestSuiteError(msg)\n\n return suite\n", "path": "avocado/core/suite.py" } ]
diff --git a/avocado/core/suite.py b/avocado/core/suite.py index ace1b5bf40..4247957e1d 100644 --- a/avocado/core/suite.py +++ b/avocado/core/suite.py @@ -103,7 +103,7 @@ def __init__(self, name, config=None, tests=None, job_config=None, self._runner = None self._test_parameters = None - if (config.get('run.dry_run.enabled') and + if (self.config.get('run.dry_run.enabled') and self.config.get('run.test_runner') == 'runner'): self._convert_to_dry_run()
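For context, the hunk above moves the dry-run check from the raw `config` argument to the merged `self.config`. Below is a minimal, self-contained illustration of why that matters; the dictionary values are hypothetical and only mirror the merge order performed in `TestSuite.__init__` (defaults, then `job_config`, then `config`).

```python
# Hypothetical values illustrating the merge order used in TestSuite.__init__.
defaults = {"run.dry_run.enabled": False, "run.test_runner": "runner"}  # settings.as_dict()
job_config = {"run.dry_run.enabled": True}  # e.g. the job was started with --dry-run
config = None  # a suite may be created without suite-specific overrides

merged = dict(defaults)
if job_config:
    merged.update(job_config)
if config:
    merged.update(config)

# config.get(...) would raise AttributeError here (config is None) and, even when
# config is a dict, it can miss a flag that only lives in job_config or defaults.
# Reading the merged dict returns the effective value.
assert merged.get("run.dry_run.enabled") is True
```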
edgedb__edgedb-6268
EdgeDB server FIPS incompatibility <!-- Please search existing issues to avoid creating duplicates. --> <!-- For the EdgeDB Version: run `edgedb query 'select sys::get_version_as_str()'` from your project directory (or run `select sys::get_version_as_str();` in the EdgeDB interactive shell). For the EdgeDB CLI Version: Run `edgedb --version` from anywhere --> - EdgeDB Version: `3.4+75c51ce` - EdgeDB CLI Version: `3.4.0+97cad0e` - OS Version: RHEL 8.8 with FIPS enabled Steps to Reproduce: 1. Init a project with a local instance with the "Initial schema" below. Modify the schema to the "Changed schema" below, and `edgedb migration create` to create a migration. Then `edgedb project unlink` to unlink the project from the local instance. 2. Follow the EdgeDB bare metal deployment instructions on a RHEL machine with FIPS enabled. We're using an AWS EC2 instance for this. 3. Now back on your local machine, run `edgedb project init` and connect to the remote instance. 4. You should get an error `ValueError: [digital envelope routines: EVP_DigestInit_ex] disabled for FIPS` There's probably an easier way to reproduce but I'm new to EdgeDB and this is the way I was able to do it. It seems to be related to constraints and long names. We're evaluating EdgeDB for an enterprise use case where EdgeDB server will need to run on a FIPS-compliant host. The issue is that `_edgedb_name_to_pg_name` uses MD5 to hash the given name to ensure it's small enough to fit in a Postgres column name. MD5 is disabled on FIPS-compliant systems, even when you're not using it for something security related. Very annoying. On that note, the comment in the function mentions that Postgres doesn't have a sha1 implementation in all versions. SHA-1 is not currently disabled, but [will be](https://www.nist.gov/news-events/news/2022/12/nist-retires-sha-1-cryptographic-algorithm) at some point in the near future. I understand the desire to maintain backwards compatibility with older Postgres versions, but would it be possible to detect the Postgres version and use non-MD5 hash if supported? Given that SHA-1 won't be supported soon, ideally there would be a way to opt in to e.g. SHA-224. I realize that, given the objective of this function, it's counterproductive to use something like SHA-224 where (after base 64 encoding) you'll use up 38 characters vs. 27 for SHA-1 and 22 for MD5, but it's the (annoying) reality of FIPS compliance. Edit: My reading of [NIST's guidance](https://csrc.nist.gov/projects/hash-functions) is that SHAKE-128 and SHAKE-256 are also acceptable for non-security related applications (like this one). Perhaps they could be used with a chosen digest length to limit the size of the hash digests. 
<!-- If the issue is about a query error, please also provide your schema --> Initial schema: ``` module default { type Person { required name: str; } type Movie { title: str; multi actors: Person; } }; ``` Changed schema: ``` module default { type Person { required nameasdfgluhasdlfiuhsdafkjlndfkjlsadhflksdanfksdabfnljksdabfljkdsa: str { constraint exclusive; }; } type Movie { title: str; multi actors: Person; } }; ``` The full traceback: ``` Server traceback: Traceback (most recent call last): File "/usr/lib64/edgedb-server-3/lib/python3.11/site-packages/edb/server/compiler_pool/worker.py", line 186, in compile_in_tx units, cstate = COMPILER.compile_in_tx(cstate, *args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/lib64/edgedb-server-3/lib/python3.11/site-packages/edb/server/compiler/compiler.py", line 913, in compile_in_tx return compile(ctx=ctx, source=source), ctx.state ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/lib64/edgedb-server-3/lib/python3.11/site-packages/edb/server/compiler/compiler.py", line 2068, in compile return _try_compile(ctx=ctx, source=source) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/lib64/edgedb-server-3/lib/python3.11/site-packages/edb/server/compiler/compiler.py", line 2136, in _try_compile comp, capabilities = _compile_dispatch_ql( ^^^^^^^^^^^^^^^^^^^^^ File "/usr/lib64/edgedb-server-3/lib/python3.11/site-packages/edb/server/compiler/compiler.py", line 1975, in _compile_dispatch_ql query = ddl.compile_dispatch_ql_migration( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/lib64/edgedb-server-3/lib/python3.11/site-packages/edb/server/compiler/ddl.py", line 379, in compile_dispatch_ql_migration return compile_and_apply_ddl_stmt(ctx, ql) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/lib64/edgedb-server-3/lib/python3.11/site-packages/edb/server/compiler/ddl.py", line 209, in compile_and_apply_ddl_stmt block, new_types, config_ops = _process_delta(ctx, delta) ^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/lib64/edgedb-server-3/lib/python3.11/site-packages/edb/server/compiler/ddl.py", line 345, in _process_delta pgdelta.generate(block) File "/usr/lib64/edgedb-server-3/lib/python3.11/site-packages/edb/pgsql/delta.py", line 7210, in generate op.generate(block) File "/usr/lib64/edgedb-server-3/lib/python3.11/site-packages/edb/pgsql/delta.py", line 211, in generate op.generate(block) File "/usr/lib64/edgedb-server-3/lib/python3.11/site-packages/edb/pgsql/delta.py", line 211, in generate op.generate(block) File "/usr/lib64/edgedb-server-3/lib/python3.11/site-packages/edb/pgsql/delta.py", line 211, in generate op.generate(block) [Previous line repeated 1 more time] File "/usr/lib64/edgedb-server-3/lib/python3.11/site-packages/edb/pgsql/dbops/base.py", line 296, in generate self_block = self.generate_self_block(block) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/lib64/edgedb-server-3/lib/python3.11/site-packages/edb/pgsql/dbops/base.py", line 335, in generate_self_block cmd.generate(self_block) File "/usr/lib64/edgedb-server-3/lib/python3.11/site-packages/edb/pgsql/dbops/base.py", line 296, in generate self_block = self.generate_self_block(block) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/lib64/edgedb-server-3/lib/python3.11/site-packages/edb/pgsql/dbops/base.py", line 335, in generate_self_block cmd.generate(self_block) File "/usr/lib64/edgedb-server-3/lib/python3.11/site-packages/edb/pgsql/deltadbops.py", line 550, in generate self.create_constraint(self._constraint) File "/usr/lib64/edgedb-server-3/lib/python3.11/site-packages/edb/pgsql/deltadbops.py", 
line 489, in create_constraint cr_trigger = self.create_constr_trigger( ^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/lib64/edgedb-server-3/lib/python3.11/site-packages/edb/pgsql/deltadbops.py", line 410, in create_constr_trigger ins_trigger, upd_trigger = self._get_triggers( ^^^^^^^^^^^^^^^^^^^ File "/usr/lib64/edgedb-server-3/lib/python3.11/site-packages/edb/pgsql/deltadbops.py", line 389, in _get_triggers ins_trigger_name = common.edgedb_name_to_pg_name(cname + '_instrigger') ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/lib64/edgedb-server-3/lib/python3.11/site-packages/edb/pgsql/common.py", line 195, in edgedb_name_to_pg_name return _edgedb_name_to_pg_name(name, prefix_length) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/lib64/edgedb-server-3/lib/python3.11/site-packages/edb/pgsql/common.py", line 168, in _edgedb_name_to_pg_name hashlib.md5(name.encode()).digest() ^^^^^^^^^^^^^^^^^^^^^^^^^^ ValueError: [digital envelope routines: EVP_DigestInit_ex] disabled for FIPS ```
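The report's two suggestions can be sketched briefly. This is not EdgeDB's actual code: the helper name `_pg_name_hash` is made up, and the eventual fix (see the diff further below) simply passes `usedforsecurity=False` to `hashlib.md5`, which is the documented way (Python 3.9+) to mark a digest as non-cryptographic so FIPS builds of OpenSSL allow it. The SHAKE-128 branch shows the reporter's alternative of a FIPS-approved hash truncated to a chosen digest length.

```python
import base64
import hashlib

def _pg_name_hash(name: str) -> str:
    # Hypothetical helper: produce a short, deterministic hash suitable for
    # squeezing long EdgeDB names under Postgres' 63-character identifier limit.
    try:
        # MD5 is fine here (not security-relevant); usedforsecurity=False keeps
        # FIPS-enabled OpenSSL from rejecting it.
        digest = hashlib.md5(name.encode(), usedforsecurity=False).digest()
    except ValueError:
        # Fallback along the lines suggested above: SHAKE-128 with a fixed
        # digest length (16 bytes keeps the base64 form as short as MD5's).
        digest = hashlib.shake_128(name.encode()).digest(16)
    return base64.b64encode(digest).decode().rstrip("=")
```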
[ { "content": "#\n# This source file is part of the EdgeDB open source project.\n#\n# Copyright 2008-present MagicStack Inc. and the EdgeDB authors.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\n\n\nfrom __future__ import annotations\n\nimport binascii\nimport functools\nimport hashlib\nimport base64\nimport re\nfrom typing import *\nfrom typing import overload\nimport uuid\n\nfrom edb.common import uuidgen\nfrom edb.schema import casts as s_casts\nfrom edb.schema import constraints as s_constr\nfrom edb.schema import defines as s_def\nfrom edb.schema import functions as s_func\nfrom edb.schema import indexes as s_indexes\nfrom edb.schema import name as s_name\nfrom edb.schema import objects as so\nfrom edb.schema import objtypes as s_objtypes\nfrom edb.schema import operators as s_opers\nfrom edb.schema import pointers as s_pointers\nfrom edb.schema import scalars as s_scalars\nfrom edb.schema import types as s_types\nfrom edb.schema import schema as s_schema\n\nfrom edb.pgsql import ast as pgast\n\nfrom . import keywords as pg_keywords\n\n\ndef quote_e_literal(string: str) -> str:\n def escape_sq(s):\n split = re.split(r\"(\\n|\\\\\\\\|\\\\')\", s)\n\n if len(split) == 1:\n return s.replace(r\"'\", r\"\\'\")\n\n return ''.join((r if i % 2 else r.replace(r\"'\", r\"\\'\"))\n for i, r in enumerate(split))\n\n return \"E'\" + escape_sq(string) + \"'\"\n\n\ndef quote_literal(string):\n return \"'\" + string.replace(\"'\", \"''\") + \"'\"\n\n\ndef _quote_ident(string: str) -> str:\n return '\"' + string.replace('\"', '\"\"') + '\"'\n\n\ndef quote_ident(ident: str | pgast.Star, *, force=False, column=False) -> str:\n if isinstance(ident, pgast.Star):\n return \"*\"\n return (\n _quote_ident(ident)\n if needs_quoting(ident, column=column) or force else ident\n )\n\n\ndef quote_col(ident: str | pgast.Star) -> str:\n return quote_ident(ident, column=True)\n\n\ndef quote_bytea_literal(data: bytes) -> str:\n \"\"\"Return valid SQL representation of a bytes value.\"\"\"\n\n if data:\n b = binascii.b2a_hex(data).decode('ascii')\n return f\"'\\\\x{b}'::bytea\"\n else:\n return \"''::bytea\"\n\n\ndef needs_quoting(string: str, column: bool=False) -> bool:\n isalnum = (string and not string[0].isdecimal() and\n string.replace('_', 'a').isalnum())\n return (\n not isalnum or\n string.lower() in pg_keywords.by_type[\n pg_keywords.RESERVED_KEYWORD] or\n string.lower() in pg_keywords.by_type[\n pg_keywords.TYPE_FUNC_NAME_KEYWORD] or\n (column and string.lower() in pg_keywords.by_type[\n pg_keywords.COL_NAME_KEYWORD]) or\n string.lower() != string\n )\n\n\ndef qname(*parts: str | pgast.Star, column: bool=False) -> str:\n assert len(parts) <= 3, parts\n return '.'.join([quote_ident(q, column=column) for q in parts])\n\n\ndef quote_type(type_: Tuple[str, ...] | str):\n if isinstance(type_, tuple):\n first = qname(*type_[:-1]) + '.' 
if len(type_) > 1 else ''\n last = type_[-1]\n else:\n first = ''\n last = type_\n\n is_rowtype = last.endswith('%ROWTYPE')\n if is_rowtype:\n last = last[:-8]\n\n is_array = last.endswith('[]')\n if is_array:\n last = last[:-2]\n\n param = None\n if '(' in last:\n last, param = last.split('(', 1)\n param = '(' + param\n\n last = quote_ident(last)\n\n if is_rowtype:\n last += '%ROWTYPE'\n\n if param:\n last += param\n\n if is_array:\n last += '[]'\n\n return first + last\n\n\ndef get_module_backend_name(module: s_name.Name) -> str:\n # standard modules go into \"edgedbstd\", user ones into \"edgedbpub\"\n return \"edgedbstd\" if module in s_schema.STD_MODULES else \"edgedbpub\"\n\n\ndef get_unique_random_name() -> str:\n return base64.b64encode(uuidgen.uuid1mc().bytes).rstrip(b'=').decode()\n\n\[email protected]_cache()\ndef _edgedb_name_to_pg_name(name: str, prefix_length: int = 0) -> str:\n # Note: PostgreSQL doesn't have a sha1 implementation as a\n # built-in function available in all versions, hence we use md5.\n #\n # Although sha1 would be slightly better as it's marginally faster than\n # md5 (and it doesn't matter which function is better cryptographically\n # in this case.)\n hashed = base64.b64encode(\n hashlib.md5(name.encode()).digest()\n ).decode().rstrip('=')\n\n return (\n name[:prefix_length] +\n hashed +\n ':' +\n name[-(s_def.MAX_NAME_LENGTH - prefix_length - 1 - len(hashed)):]\n )\n\n\ndef edgedb_name_to_pg_name(name: str, prefix_length: int = 0) -> str:\n \"\"\"Convert EdgeDB name to a valid PostgresSQL column name.\n\n PostgreSQL has a limit of 63 characters for column names.\n\n @param name: EdgeDB name to convert\n @return: PostgreSQL column name\n \"\"\"\n if not (0 <= prefix_length < s_def.MAX_NAME_LENGTH):\n raise ValueError('supplied name is too long '\n 'to be kept in original form')\n\n name = str(name)\n if len(name) <= s_def.MAX_NAME_LENGTH - prefix_length:\n return name\n\n return _edgedb_name_to_pg_name(name, prefix_length)\n\n\ndef convert_name(name: s_name.QualName, suffix='', catenate=True):\n schema = get_module_backend_name(name.get_module_name())\n if suffix:\n sname = f'{name.name}_{suffix}'\n else:\n sname = name.name\n\n dbname = edgedb_name_to_pg_name(sname)\n\n if catenate:\n return qname(schema, dbname)\n else:\n return schema, dbname\n\n\ndef get_database_backend_name(db_name: str, *, tenant_id: str) -> str:\n return f'{tenant_id}_{db_name}'\n\n\ndef get_role_backend_name(role_name: str, *, tenant_id: str) -> str:\n return f'{tenant_id}_{role_name}'\n\n\ndef update_aspect(name, aspect):\n \"\"\"Update the aspect on a non catenated name.\n\n It also needs to be from an object that uses ids for names\"\"\"\n suffix = get_aspect_suffix(aspect)\n stripped = name[1].rsplit(\"_\", 1)[0]\n if suffix:\n return (name[0], f'{stripped}_{suffix}')\n else:\n return (name[0], stripped)\n\n\ndef get_scalar_backend_name(id, module_name, catenate=True, *, aspect=None):\n if aspect is None:\n aspect = 'domain'\n if aspect not in (\n \"domain\",\n \"sequence\",\n \"enum\",\n \"enum-cast-into-str\",\n \"enum-cast-from-str\",\n \"source-del-imm-otl-f\",\n \"source-del-imm-otl-t\",\n ):\n raise ValueError(\n f'unexpected aspect for scalar backend name: {aspect!r}')\n name = s_name.QualName(module=module_name, name=str(id))\n\n if aspect.startswith(\"enum-cast-\"):\n suffix = \"_into_str\" if aspect == \"enum-cast-into-str\" else \"_from_str\"\n name = s_name.QualName(name.module, name.name + suffix)\n return get_cast_backend_name(name, catenate, 
aspect=\"function\")\n\n return convert_name(name, aspect, catenate)\n\n\ndef get_aspect_suffix(aspect):\n if aspect == 'table':\n return ''\n elif aspect == 'inhview':\n return 't'\n else:\n return aspect\n\n\ndef is_inhview_name(name: str) -> bool:\n return name.endswith('_t')\n\n\ndef get_objtype_backend_name(\n id: uuid.UUID,\n module_name: str,\n *,\n catenate: bool = True,\n aspect: Optional[str] = None,\n):\n if aspect is None:\n aspect = 'table'\n if aspect not in {'table', 'inhview', 'dummy'} and not re.match(\n r'(source|target)-del-(def|imm)-(inl|otl)-(f|t)', aspect):\n raise ValueError(\n f'unexpected aspect for object type backend name: {aspect!r}')\n\n name = s_name.QualName(module=module_name, name=str(id))\n\n suffix = get_aspect_suffix(aspect)\n return convert_name(name, suffix=suffix, catenate=catenate)\n\n\ndef get_pointer_backend_name(id, module_name, *, catenate=False, aspect=None):\n if aspect is None:\n aspect = 'table'\n\n if aspect not in ('table', 'index', 'inhview', 'dummy'):\n raise ValueError(\n f'unexpected aspect for pointer backend name: {aspect!r}')\n\n name = s_name.QualName(module=module_name, name=str(id))\n\n suffix = get_aspect_suffix(aspect)\n return convert_name(name, suffix=suffix, catenate=catenate)\n\n\n_operator_map = {\n s_name.name_from_string('std::AND'): 'AND',\n s_name.name_from_string('std::OR'): 'OR',\n s_name.name_from_string('std::NOT'): 'NOT',\n s_name.name_from_string('std::?='): 'IS NOT DISTINCT FROM',\n s_name.name_from_string('std::?!='): 'IS DISTINCT FROM',\n s_name.name_from_string('std::LIKE'): 'LIKE',\n s_name.name_from_string('std::ILIKE'): 'ILIKE',\n s_name.name_from_string('std::NOT LIKE'): 'NOT LIKE',\n s_name.name_from_string('std::NOT ILIKE'): 'NOT ILIKE',\n}\n\n\ndef get_operator_backend_name(name, catenate=False, *, aspect=None):\n if aspect is None:\n aspect = 'operator'\n\n if aspect == 'function':\n return convert_name(name, 'f', catenate=catenate)\n elif aspect != 'operator':\n raise ValueError(\n f'unexpected aspect for operator backend name: {aspect!r}')\n\n oper_name = _operator_map.get(name)\n if oper_name is None:\n oper_name = name.name\n if re.search(r'[a-zA-Z]', oper_name):\n # Alphanumeric operator, cannot be expressed in Postgres as-is\n # Since this is a rare occasion, we hard-code the translation\n # table.\n if oper_name == 'OR':\n oper_name = '|||'\n elif oper_name == 'AND':\n oper_name = '&&&'\n else:\n raise ValueError(\n f'cannot represent operator {oper_name} in Postgres')\n\n oper_name = f'`{oper_name}`'\n schema = 'edgedb'\n else:\n schema = ''\n\n if catenate:\n return qname(schema, oper_name)\n else:\n return schema, oper_name\n\n\ndef get_cast_backend_name(\n fullname: s_name.QualName, catenate=False, *, aspect=None\n):\n if aspect == \"function\":\n return convert_name(fullname, \"f\", catenate=catenate)\n else:\n raise ValueError(\n f'unexpected aspect for cast backend name: {aspect!r}')\n\n\ndef get_function_backend_name(name, backend_name, catenate=False):\n real_name = backend_name or name.name\n\n fullname = s_name.QualName(module=name.module, name=real_name)\n schema, func_name = convert_name(fullname, catenate=False)\n if catenate:\n return qname(schema, func_name)\n else:\n return schema, func_name\n\n\ndef get_constraint_backend_name(\n id, module_name, catenate=True, *, aspect=None):\n if aspect not in ('trigproc', 'index'):\n raise ValueError(\n f'unexpected aspect for constraint backend name: {aspect!r}')\n\n sname = str(id)\n if aspect == 'index':\n aspect = None\n sname = 
get_constraint_raw_name(id)\n name = s_name.QualName(module=module_name, name=sname)\n return convert_name(name, aspect, catenate)\n\n\ndef get_constraint_raw_name(id):\n return f'{id};schemaconstr'\n\n\ndef get_index_backend_name(id, module_name, catenate=True, *, aspect=None):\n if aspect is None:\n aspect = 'index'\n name = s_name.QualName(module=module_name, name=str(id))\n return convert_name(name, aspect, catenate)\n\n\ndef get_tuple_backend_name(\n id, catenate=True, *, aspect=None\n) -> Union[str, tuple[str, str]]:\n\n name = s_name.QualName(module='edgedb', name=f'{id}_t')\n return convert_name(name, aspect, catenate)\n\n\n@overload\ndef get_backend_name(\n schema: s_schema.Schema,\n obj: so.Object,\n catenate: Literal[True]=True,\n *,\n aspect: Optional[str]=None\n) -> str:\n ...\n\n\n@overload\ndef get_backend_name(\n schema: s_schema.Schema,\n obj: so.Object,\n catenate: Literal[False],\n *,\n aspect: Optional[str]=None\n) -> tuple[str, str]:\n ...\n\n\ndef get_backend_name(\n schema: s_schema.Schema,\n obj: so.Object,\n catenate: bool=True,\n *,\n aspect: Optional[str]=None\n) -> Union[str, tuple[str, str]]:\n name: Union[s_name.QualName, s_name.Name]\n if isinstance(obj, s_objtypes.ObjectType):\n name = obj.get_name(schema)\n return get_objtype_backend_name(\n obj.id, name.module, catenate=catenate, aspect=aspect)\n\n elif isinstance(obj, s_pointers.Pointer):\n name = obj.get_name(schema)\n return get_pointer_backend_name(obj.id, name.module, catenate=catenate,\n aspect=aspect)\n\n elif isinstance(obj, s_scalars.ScalarType):\n name = obj.get_name(schema)\n return get_scalar_backend_name(obj.id, name.module, catenate=catenate,\n aspect=aspect)\n\n elif isinstance(obj, s_opers.Operator):\n name = obj.get_shortname(schema)\n return get_operator_backend_name(\n name, catenate, aspect=aspect)\n\n elif isinstance(obj, s_casts.Cast):\n name = obj.get_name(schema)\n return get_cast_backend_name(\n name, catenate, aspect=aspect)\n\n elif isinstance(obj, s_func.Function):\n name = obj.get_shortname(schema)\n backend_name = obj.get_backend_name(schema)\n return get_function_backend_name(\n name, backend_name, catenate)\n\n elif isinstance(obj, s_constr.Constraint):\n name = obj.get_name(schema)\n return get_constraint_backend_name(\n obj.id, name.module, catenate, aspect=aspect)\n\n elif isinstance(obj, s_indexes.Index):\n name = obj.get_name(schema)\n return get_index_backend_name(\n obj.id, name.module, catenate, aspect=aspect)\n\n elif isinstance(obj, s_types.Tuple):\n return get_tuple_backend_name(\n obj.id, catenate, aspect=aspect)\n\n else:\n raise ValueError(f'cannot determine backend name for {obj!r}')\n\n\ndef get_object_from_backend_name(schema, metaclass, name, *, aspect=None):\n\n if issubclass(metaclass, s_objtypes.ObjectType):\n table_name = name[1]\n obj_id = uuidgen.UUID(table_name)\n return schema.get_by_id(obj_id)\n\n elif issubclass(metaclass, s_pointers.Pointer):\n obj_id = uuidgen.UUID(name)\n return schema.get_by_id(obj_id)\n\n else:\n raise ValueError(\n f'cannot determine object from backend name for {metaclass!r}')\n", "path": "edb/pgsql/common.py" } ]
[ { "content": "#\n# This source file is part of the EdgeDB open source project.\n#\n# Copyright 2008-present MagicStack Inc. and the EdgeDB authors.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\n\n\nfrom __future__ import annotations\n\nimport binascii\nimport functools\nimport hashlib\nimport base64\nimport re\nfrom typing import *\nfrom typing import overload\nimport uuid\n\nfrom edb.common import uuidgen\nfrom edb.schema import casts as s_casts\nfrom edb.schema import constraints as s_constr\nfrom edb.schema import defines as s_def\nfrom edb.schema import functions as s_func\nfrom edb.schema import indexes as s_indexes\nfrom edb.schema import name as s_name\nfrom edb.schema import objects as so\nfrom edb.schema import objtypes as s_objtypes\nfrom edb.schema import operators as s_opers\nfrom edb.schema import pointers as s_pointers\nfrom edb.schema import scalars as s_scalars\nfrom edb.schema import types as s_types\nfrom edb.schema import schema as s_schema\n\nfrom edb.pgsql import ast as pgast\n\nfrom . import keywords as pg_keywords\n\n\ndef quote_e_literal(string: str) -> str:\n def escape_sq(s):\n split = re.split(r\"(\\n|\\\\\\\\|\\\\')\", s)\n\n if len(split) == 1:\n return s.replace(r\"'\", r\"\\'\")\n\n return ''.join((r if i % 2 else r.replace(r\"'\", r\"\\'\"))\n for i, r in enumerate(split))\n\n return \"E'\" + escape_sq(string) + \"'\"\n\n\ndef quote_literal(string):\n return \"'\" + string.replace(\"'\", \"''\") + \"'\"\n\n\ndef _quote_ident(string: str) -> str:\n return '\"' + string.replace('\"', '\"\"') + '\"'\n\n\ndef quote_ident(ident: str | pgast.Star, *, force=False, column=False) -> str:\n if isinstance(ident, pgast.Star):\n return \"*\"\n return (\n _quote_ident(ident)\n if needs_quoting(ident, column=column) or force else ident\n )\n\n\ndef quote_col(ident: str | pgast.Star) -> str:\n return quote_ident(ident, column=True)\n\n\ndef quote_bytea_literal(data: bytes) -> str:\n \"\"\"Return valid SQL representation of a bytes value.\"\"\"\n\n if data:\n b = binascii.b2a_hex(data).decode('ascii')\n return f\"'\\\\x{b}'::bytea\"\n else:\n return \"''::bytea\"\n\n\ndef needs_quoting(string: str, column: bool=False) -> bool:\n isalnum = (string and not string[0].isdecimal() and\n string.replace('_', 'a').isalnum())\n return (\n not isalnum or\n string.lower() in pg_keywords.by_type[\n pg_keywords.RESERVED_KEYWORD] or\n string.lower() in pg_keywords.by_type[\n pg_keywords.TYPE_FUNC_NAME_KEYWORD] or\n (column and string.lower() in pg_keywords.by_type[\n pg_keywords.COL_NAME_KEYWORD]) or\n string.lower() != string\n )\n\n\ndef qname(*parts: str | pgast.Star, column: bool=False) -> str:\n assert len(parts) <= 3, parts\n return '.'.join([quote_ident(q, column=column) for q in parts])\n\n\ndef quote_type(type_: Tuple[str, ...] | str):\n if isinstance(type_, tuple):\n first = qname(*type_[:-1]) + '.' 
if len(type_) > 1 else ''\n last = type_[-1]\n else:\n first = ''\n last = type_\n\n is_rowtype = last.endswith('%ROWTYPE')\n if is_rowtype:\n last = last[:-8]\n\n is_array = last.endswith('[]')\n if is_array:\n last = last[:-2]\n\n param = None\n if '(' in last:\n last, param = last.split('(', 1)\n param = '(' + param\n\n last = quote_ident(last)\n\n if is_rowtype:\n last += '%ROWTYPE'\n\n if param:\n last += param\n\n if is_array:\n last += '[]'\n\n return first + last\n\n\ndef get_module_backend_name(module: s_name.Name) -> str:\n # standard modules go into \"edgedbstd\", user ones into \"edgedbpub\"\n return \"edgedbstd\" if module in s_schema.STD_MODULES else \"edgedbpub\"\n\n\ndef get_unique_random_name() -> str:\n return base64.b64encode(uuidgen.uuid1mc().bytes).rstrip(b'=').decode()\n\n\[email protected]_cache()\ndef _edgedb_name_to_pg_name(name: str, prefix_length: int = 0) -> str:\n # Note: PostgreSQL doesn't have a sha1 implementation as a\n # built-in function available in all versions, hence we use md5.\n #\n # Although sha1 would be slightly better as it's marginally faster than\n # md5 (and it doesn't matter which function is better cryptographically\n # in this case.)\n hashed = base64.b64encode(\n hashlib.md5(name.encode(), usedforsecurity=False).digest()\n ).decode().rstrip('=')\n\n return (\n name[:prefix_length] +\n hashed +\n ':' +\n name[-(s_def.MAX_NAME_LENGTH - prefix_length - 1 - len(hashed)):]\n )\n\n\ndef edgedb_name_to_pg_name(name: str, prefix_length: int = 0) -> str:\n \"\"\"Convert EdgeDB name to a valid PostgresSQL column name.\n\n PostgreSQL has a limit of 63 characters for column names.\n\n @param name: EdgeDB name to convert\n @return: PostgreSQL column name\n \"\"\"\n if not (0 <= prefix_length < s_def.MAX_NAME_LENGTH):\n raise ValueError('supplied name is too long '\n 'to be kept in original form')\n\n name = str(name)\n if len(name) <= s_def.MAX_NAME_LENGTH - prefix_length:\n return name\n\n return _edgedb_name_to_pg_name(name, prefix_length)\n\n\ndef convert_name(name: s_name.QualName, suffix='', catenate=True):\n schema = get_module_backend_name(name.get_module_name())\n if suffix:\n sname = f'{name.name}_{suffix}'\n else:\n sname = name.name\n\n dbname = edgedb_name_to_pg_name(sname)\n\n if catenate:\n return qname(schema, dbname)\n else:\n return schema, dbname\n\n\ndef get_database_backend_name(db_name: str, *, tenant_id: str) -> str:\n return f'{tenant_id}_{db_name}'\n\n\ndef get_role_backend_name(role_name: str, *, tenant_id: str) -> str:\n return f'{tenant_id}_{role_name}'\n\n\ndef update_aspect(name, aspect):\n \"\"\"Update the aspect on a non catenated name.\n\n It also needs to be from an object that uses ids for names\"\"\"\n suffix = get_aspect_suffix(aspect)\n stripped = name[1].rsplit(\"_\", 1)[0]\n if suffix:\n return (name[0], f'{stripped}_{suffix}')\n else:\n return (name[0], stripped)\n\n\ndef get_scalar_backend_name(id, module_name, catenate=True, *, aspect=None):\n if aspect is None:\n aspect = 'domain'\n if aspect not in (\n \"domain\",\n \"sequence\",\n \"enum\",\n \"enum-cast-into-str\",\n \"enum-cast-from-str\",\n \"source-del-imm-otl-f\",\n \"source-del-imm-otl-t\",\n ):\n raise ValueError(\n f'unexpected aspect for scalar backend name: {aspect!r}')\n name = s_name.QualName(module=module_name, name=str(id))\n\n if aspect.startswith(\"enum-cast-\"):\n suffix = \"_into_str\" if aspect == \"enum-cast-into-str\" else \"_from_str\"\n name = s_name.QualName(name.module, name.name + suffix)\n return get_cast_backend_name(name, 
catenate, aspect=\"function\")\n\n return convert_name(name, aspect, catenate)\n\n\ndef get_aspect_suffix(aspect):\n if aspect == 'table':\n return ''\n elif aspect == 'inhview':\n return 't'\n else:\n return aspect\n\n\ndef is_inhview_name(name: str) -> bool:\n return name.endswith('_t')\n\n\ndef get_objtype_backend_name(\n id: uuid.UUID,\n module_name: str,\n *,\n catenate: bool = True,\n aspect: Optional[str] = None,\n):\n if aspect is None:\n aspect = 'table'\n if aspect not in {'table', 'inhview', 'dummy'} and not re.match(\n r'(source|target)-del-(def|imm)-(inl|otl)-(f|t)', aspect):\n raise ValueError(\n f'unexpected aspect for object type backend name: {aspect!r}')\n\n name = s_name.QualName(module=module_name, name=str(id))\n\n suffix = get_aspect_suffix(aspect)\n return convert_name(name, suffix=suffix, catenate=catenate)\n\n\ndef get_pointer_backend_name(id, module_name, *, catenate=False, aspect=None):\n if aspect is None:\n aspect = 'table'\n\n if aspect not in ('table', 'index', 'inhview', 'dummy'):\n raise ValueError(\n f'unexpected aspect for pointer backend name: {aspect!r}')\n\n name = s_name.QualName(module=module_name, name=str(id))\n\n suffix = get_aspect_suffix(aspect)\n return convert_name(name, suffix=suffix, catenate=catenate)\n\n\n_operator_map = {\n s_name.name_from_string('std::AND'): 'AND',\n s_name.name_from_string('std::OR'): 'OR',\n s_name.name_from_string('std::NOT'): 'NOT',\n s_name.name_from_string('std::?='): 'IS NOT DISTINCT FROM',\n s_name.name_from_string('std::?!='): 'IS DISTINCT FROM',\n s_name.name_from_string('std::LIKE'): 'LIKE',\n s_name.name_from_string('std::ILIKE'): 'ILIKE',\n s_name.name_from_string('std::NOT LIKE'): 'NOT LIKE',\n s_name.name_from_string('std::NOT ILIKE'): 'NOT ILIKE',\n}\n\n\ndef get_operator_backend_name(name, catenate=False, *, aspect=None):\n if aspect is None:\n aspect = 'operator'\n\n if aspect == 'function':\n return convert_name(name, 'f', catenate=catenate)\n elif aspect != 'operator':\n raise ValueError(\n f'unexpected aspect for operator backend name: {aspect!r}')\n\n oper_name = _operator_map.get(name)\n if oper_name is None:\n oper_name = name.name\n if re.search(r'[a-zA-Z]', oper_name):\n # Alphanumeric operator, cannot be expressed in Postgres as-is\n # Since this is a rare occasion, we hard-code the translation\n # table.\n if oper_name == 'OR':\n oper_name = '|||'\n elif oper_name == 'AND':\n oper_name = '&&&'\n else:\n raise ValueError(\n f'cannot represent operator {oper_name} in Postgres')\n\n oper_name = f'`{oper_name}`'\n schema = 'edgedb'\n else:\n schema = ''\n\n if catenate:\n return qname(schema, oper_name)\n else:\n return schema, oper_name\n\n\ndef get_cast_backend_name(\n fullname: s_name.QualName, catenate=False, *, aspect=None\n):\n if aspect == \"function\":\n return convert_name(fullname, \"f\", catenate=catenate)\n else:\n raise ValueError(\n f'unexpected aspect for cast backend name: {aspect!r}')\n\n\ndef get_function_backend_name(name, backend_name, catenate=False):\n real_name = backend_name or name.name\n\n fullname = s_name.QualName(module=name.module, name=real_name)\n schema, func_name = convert_name(fullname, catenate=False)\n if catenate:\n return qname(schema, func_name)\n else:\n return schema, func_name\n\n\ndef get_constraint_backend_name(\n id, module_name, catenate=True, *, aspect=None):\n if aspect not in ('trigproc', 'index'):\n raise ValueError(\n f'unexpected aspect for constraint backend name: {aspect!r}')\n\n sname = str(id)\n if aspect == 'index':\n aspect = None\n sname 
= get_constraint_raw_name(id)\n name = s_name.QualName(module=module_name, name=sname)\n return convert_name(name, aspect, catenate)\n\n\ndef get_constraint_raw_name(id):\n return f'{id};schemaconstr'\n\n\ndef get_index_backend_name(id, module_name, catenate=True, *, aspect=None):\n if aspect is None:\n aspect = 'index'\n name = s_name.QualName(module=module_name, name=str(id))\n return convert_name(name, aspect, catenate)\n\n\ndef get_tuple_backend_name(\n id, catenate=True, *, aspect=None\n) -> Union[str, tuple[str, str]]:\n\n name = s_name.QualName(module='edgedb', name=f'{id}_t')\n return convert_name(name, aspect, catenate)\n\n\n@overload\ndef get_backend_name(\n schema: s_schema.Schema,\n obj: so.Object,\n catenate: Literal[True]=True,\n *,\n aspect: Optional[str]=None\n) -> str:\n ...\n\n\n@overload\ndef get_backend_name(\n schema: s_schema.Schema,\n obj: so.Object,\n catenate: Literal[False],\n *,\n aspect: Optional[str]=None\n) -> tuple[str, str]:\n ...\n\n\ndef get_backend_name(\n schema: s_schema.Schema,\n obj: so.Object,\n catenate: bool=True,\n *,\n aspect: Optional[str]=None\n) -> Union[str, tuple[str, str]]:\n name: Union[s_name.QualName, s_name.Name]\n if isinstance(obj, s_objtypes.ObjectType):\n name = obj.get_name(schema)\n return get_objtype_backend_name(\n obj.id, name.module, catenate=catenate, aspect=aspect)\n\n elif isinstance(obj, s_pointers.Pointer):\n name = obj.get_name(schema)\n return get_pointer_backend_name(obj.id, name.module, catenate=catenate,\n aspect=aspect)\n\n elif isinstance(obj, s_scalars.ScalarType):\n name = obj.get_name(schema)\n return get_scalar_backend_name(obj.id, name.module, catenate=catenate,\n aspect=aspect)\n\n elif isinstance(obj, s_opers.Operator):\n name = obj.get_shortname(schema)\n return get_operator_backend_name(\n name, catenate, aspect=aspect)\n\n elif isinstance(obj, s_casts.Cast):\n name = obj.get_name(schema)\n return get_cast_backend_name(\n name, catenate, aspect=aspect)\n\n elif isinstance(obj, s_func.Function):\n name = obj.get_shortname(schema)\n backend_name = obj.get_backend_name(schema)\n return get_function_backend_name(\n name, backend_name, catenate)\n\n elif isinstance(obj, s_constr.Constraint):\n name = obj.get_name(schema)\n return get_constraint_backend_name(\n obj.id, name.module, catenate, aspect=aspect)\n\n elif isinstance(obj, s_indexes.Index):\n name = obj.get_name(schema)\n return get_index_backend_name(\n obj.id, name.module, catenate, aspect=aspect)\n\n elif isinstance(obj, s_types.Tuple):\n return get_tuple_backend_name(\n obj.id, catenate, aspect=aspect)\n\n else:\n raise ValueError(f'cannot determine backend name for {obj!r}')\n\n\ndef get_object_from_backend_name(schema, metaclass, name, *, aspect=None):\n\n if issubclass(metaclass, s_objtypes.ObjectType):\n table_name = name[1]\n obj_id = uuidgen.UUID(table_name)\n return schema.get_by_id(obj_id)\n\n elif issubclass(metaclass, s_pointers.Pointer):\n obj_id = uuidgen.UUID(name)\n return schema.get_by_id(obj_id)\n\n else:\n raise ValueError(\n f'cannot determine object from backend name for {metaclass!r}')\n", "path": "edb/pgsql/common.py" } ]
diff --git a/edb/pgsql/common.py b/edb/pgsql/common.py index 8304c0f067f..ca4a0f11445 100644 --- a/edb/pgsql/common.py +++ b/edb/pgsql/common.py @@ -165,7 +165,7 @@ def _edgedb_name_to_pg_name(name: str, prefix_length: int = 0) -> str: # md5 (and it doesn't matter which function is better cryptographically # in this case.) hashed = base64.b64encode( - hashlib.md5(name.encode()).digest() + hashlib.md5(name.encode(), usedforsecurity=False).digest() ).decode().rstrip('=') return (
abey79__vpype-103
`read` should discard invisible geometries
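As a hint of what this could look like, here is a minimal sketch (not vpype's actual API; the helper name `visible_elements` is made up) of skipping elements hidden via the SVG `visibility` attribute, which is essentially the check the updated `_extract_paths` below performs:

```python
import svgelements

def visible_elements(group: svgelements.Group):
    """Yield only elements that are not hidden via the SVG `visibility` attribute."""
    for elem in group.select():
        # svgelements exposes attribute values through `elem.values`, as the
        # importer already does for Inkscape layer labels.
        if elem.values.get("visibility", "") in ("hidden", "collapse"):
            continue
        yield elem
```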
[ { "content": "\"\"\"File import/export functions.\n\"\"\"\nimport copy\nimport datetime\nimport math\nimport re\nfrom typing import Iterator, List, Optional, TextIO, Tuple, Union\nfrom xml.etree import ElementTree\n\nimport click\nimport numpy as np\nimport svgelements\nimport svgwrite\nfrom multiprocess import Pool\nfrom shapely.geometry import LineString\nfrom svgwrite.extensions import Inkscape\n\nfrom .config import CONFIG_MANAGER, PaperConfig, PlotterConfig\nfrom .model import Document, LineCollection\nfrom .utils import UNITS\n\n__all__ = [\"read_svg\", \"read_multilayer_svg\", \"write_svg\", \"write_hpgl\"]\n\n\n_COLORS = [\n \"#00f\",\n \"#080\",\n \"#f00\",\n \"#0cc\",\n \"#0f0\",\n \"#c0c\",\n \"#cc0\",\n \"black\",\n]\n\n_DEFAULT_WIDTH = 1000\n_DEFAULT_HEIGHT = 1000\n\n\nclass _ComplexStack:\n \"\"\"Complex number stack implemented with a numpy array\"\"\"\n\n def __init__(self):\n self._alloc = 100\n self._stack = np.empty(shape=self._alloc, dtype=complex)\n self._len = 0\n\n def __len__(self) -> int:\n return self._len\n\n def _realloc(self, min_free: int = 1) -> None:\n self._alloc = max(self._alloc * 2, self._len + min_free)\n # noinspection PyTypeChecker\n self._stack.resize(self._alloc, refcheck=False)\n\n def append(self, c: complex) -> None:\n if self._len == self._alloc:\n self._realloc()\n self._stack[self._len] = c\n self._len += 1\n\n def extend(self, a: np.ndarray) -> None:\n len_a = len(a)\n if self._len + len_a > self._alloc:\n self._realloc(len_a)\n self._stack[self._len : self._len + len_a] = a\n self._len += len_a\n\n def ends_with(self, c: complex) -> bool:\n return self._stack[self._len - 1] == c if self._len > 0 else False\n\n def get(self) -> np.ndarray:\n self._alloc = self._len\n # noinspection PyTypeChecker\n self._stack.resize(self._alloc, refcheck=False)\n return self._stack\n\n\n_PathListType = List[\n Union[\n # for actual paths and shapes transformed into paths\n svgelements.Path,\n # for the special case of Polygon and Polylines\n List[Union[svgelements.PathSegment, svgelements.Polygon, svgelements.Polyline]],\n ]\n]\n\n\ndef _convert_flattened_paths(\n paths: _PathListType, quantization: float, simplify: bool, parallel: bool\n) -> \"LineCollection\":\n \"\"\"Convert a list of FlattenedPaths to a :class:`LineCollection`.\n\n Args:\n paths: list of FlattenedPaths\n quantization: maximum length of linear elements to approximate curve paths\n simplify: should Shapely's simplify be run on curved elements after quantization\n parallel: enable multiprocessing\n\n Returns:\n new :class:`LineCollection` instance containing the converted geometries\n \"\"\"\n\n def _process_path(path):\n if len(path) == 0:\n return []\n\n result = []\n point_stack = _ComplexStack()\n for seg in path:\n # handle cases of zero radius Arc\n if isinstance(seg, svgelements.Arc) and (seg.rx == 0 or seg.ry == 0):\n seg = svgelements.Line(start=seg.start, end=seg.end)\n\n if isinstance(seg, svgelements.Move):\n if len(point_stack) > 0:\n result.append(point_stack.get())\n point_stack = _ComplexStack()\n\n point_stack.append(complex(seg.end))\n elif isinstance(seg, (svgelements.Line, svgelements.Close)):\n start = complex(seg.start)\n end = complex(seg.end)\n if not point_stack.ends_with(start):\n point_stack.append(start)\n if end != start:\n point_stack.append(end)\n elif isinstance(seg, (svgelements.Polygon, svgelements.Polyline)):\n line = np.array(seg.points, dtype=float)\n line = line.view(dtype=complex).reshape(len(line))\n if point_stack.ends_with(line[0]):\n 
point_stack.extend(line[1:])\n else:\n point_stack.extend(line)\n else:\n # This is a curved element that we approximate with small segments\n step = max(2, int(math.ceil(seg.length() / quantization)))\n line = seg.npoint(np.linspace(0, 1, step))\n\n if simplify:\n line = np.array(LineString(line).simplify(tolerance=quantization))\n\n line = line.view(dtype=complex).reshape(len(line))\n\n if point_stack.ends_with(line[0]):\n point_stack.extend(line[1:])\n else:\n point_stack.extend(line)\n\n if len(point_stack) > 0:\n result.append(point_stack.get())\n\n return result\n\n # benchmarking indicated that parallel processing only makes sense if simplify is used\n if parallel:\n with Pool() as p:\n results = p.map(_process_path, paths)\n else:\n results = map(_process_path, paths)\n\n lc = LineCollection()\n for res in results:\n lc.extend(res)\n return lc\n\n\ndef _extract_paths(group: svgelements.Group, recursive) -> _PathListType:\n \"\"\"Extract everything from the provided SVG group.\"\"\"\n\n if recursive:\n everything = group.select()\n else:\n everything = group\n paths = []\n for elem in everything:\n if isinstance(elem, svgelements.Path):\n if len(elem) != 0:\n paths.append(elem)\n elif isinstance(elem, (svgelements.Polyline, svgelements.Polygon)):\n # Here we add a \"fake\" path containing just the Polyline/Polygon,\n # to be treated specifically by _convert_flattened_paths.\n path = [svgelements.Move(elem.points[0]), elem]\n if isinstance(elem, svgelements.Polygon):\n path.append(svgelements.Close(elem.points[-1], elem.points[0]))\n paths.append(path)\n elif isinstance(elem, svgelements.Shape):\n e = svgelements.Path(elem)\n e.reify() # In some cases the shape could not have reified, the path must.\n if len(e) != 0:\n paths.append(e)\n\n return paths\n\n\ndef read_svg(\n filename: str,\n quantization: float,\n crop: bool = True,\n simplify: bool = False,\n parallel: bool = False,\n default_width: float = _DEFAULT_WIDTH,\n default_height: float = _DEFAULT_HEIGHT,\n) -> Tuple[\"LineCollection\", float, float]:\n \"\"\"Read a SVG file an return its content as a :class:`LineCollection` instance.\n\n All curved geometries are chopped in segments no longer than the value of *quantization*.\n Optionally, the geometries are simplified using Shapely, using the value of *quantization*\n as tolerance.\n\n Args:\n filename: path of the SVG file\n quantization: maximum size of segment used to approximate curved geometries\n crop: crop the geometries to the SVG boundaries\n simplify: run Shapely's simplify on loaded geometry\n parallel: enable multiprocessing (only recommended for ``simplify=True`` and SVG with\n many curves)\n default_width: default width if not provided by SVG or if a percent width is provided\n default_height: default height if not provided by SVG or if a percent height is\n provided\n\n Returns:\n tuple containing a :class:`LineCollection` with the imported geometries as well as the\n width and height of the SVG\n \"\"\"\n\n # default width is for SVG with % width/height\n svg = svgelements.SVG.parse(filename, width=default_width, height=default_height)\n paths = _extract_paths(svg, recursive=True)\n lc = _convert_flattened_paths(paths, quantization, simplify, parallel)\n\n width = svg.viewbox.element_width or default_width\n height = svg.viewbox.element_height or default_height\n\n if crop:\n lc.crop(0, 0, width, height)\n\n return lc, width, height\n\n\ndef read_multilayer_svg(\n filename: str,\n quantization: float,\n crop: bool = True,\n simplify: bool = False,\n 
parallel: bool = False,\n default_width: float = _DEFAULT_WIDTH,\n default_height: float = _DEFAULT_HEIGHT,\n) -> \"Document\":\n \"\"\"Read a multilayer SVG file and return its content as a :class:`Document` instance\n retaining the SVG's layer structure and its dimension.\n\n Each top-level group is considered a layer. All non-group, top-level elements are imported\n in layer 1.\n\n Groups are matched to layer ID according their `inkscape:label` attribute, their `id`\n attribute or their appearing order, in that order of priority. Labels are stripped of\n non-numeric characters and the remaining is used as layer ID. Lacking numeric characters,\n the appearing order is used. If the label is 0, its changed to 1.\n\n All curved geometries are chopped in segments no longer than the value of *quantization*.\n Optionally, the geometries are simplified using Shapely, using the value of *quantization*\n as tolerance.\n\n Args:\n filename: path of the SVG file\n quantization: maximum size of segment used to approximate curved geometries\n crop: crop the geometries to the SVG boundaries\n simplify: run Shapely's simplify on loaded geometry\n parallel: enable multiprocessing (only recommended for ``simplify=True`` and SVG with\n many curves)\n default_width: default width if not provided by SVG or if a percent width is provided\n default_height: default height if not provided by SVG or if a percent height is\n provided\n\n Returns:\n :class:`Document` instance with the imported geometries and its page size set the the\n SVG dimensions\n \"\"\"\n\n svg = svgelements.SVG.parse(filename, width=default_width, height=default_height)\n\n document = Document()\n\n # non-group top level elements are loaded in layer 1\n lc = _convert_flattened_paths(\n _extract_paths(svg, recursive=False), quantization, simplify, parallel\n )\n if not lc.is_empty():\n document.add(lc, 1)\n\n def _find_groups(group: svgelements.Group) -> Iterator[svgelements.Group]:\n for elem in group:\n if isinstance(elem, svgelements.Group):\n yield elem\n\n for i, g in enumerate(_find_groups(svg)):\n # compute a decent layer ID\n lid_str = re.sub(\n \"[^0-9]\",\n \"\",\n g.values.get(\"{http://www.inkscape.org/namespaces/inkscape}label\") or \"\",\n )\n if not lid_str:\n lid_str = re.sub(\"[^0-9]\", \"\", g.values.get(\"id\") or \"\")\n if lid_str:\n lid = int(lid_str)\n if lid == 0:\n lid = 1\n else:\n lid = i + 1\n\n lc = _convert_flattened_paths(\n _extract_paths(g, recursive=True), quantization, simplify, parallel\n )\n if not lc.is_empty():\n document.add(lc, lid)\n\n width = svg.viewbox.element_width or default_width\n height = svg.viewbox.element_height or default_height\n\n document.page_size = (width, height)\n\n if crop:\n document.crop(0, 0, width, height)\n\n return document\n\n\ndef write_svg(\n output: TextIO,\n document: Document,\n page_size: Optional[Tuple[float, float]] = None,\n center: bool = False,\n source_string: str = \"\",\n layer_label_format: str = \"%d\",\n show_pen_up: bool = False,\n color_mode: str = \"none\",\n) -> None:\n \"\"\"Create a SVG from a :py:class:`Document` instance.\n\n If no page size is provided (or (0, 0) is passed), the SVG generated has bounds tightly\n fitted around the geometries. Otherwise the provided size (in pixel) is used. The width\n and height is capped to a minimum of 1 pixel.\n\n By default, no translation is applied on the geometry. 
If `center=True`, geometries are\n moved to the center of the page.\n\n No scaling or rotation is applied to geometries.\n\n Layers are named after `layer_label_format`, which may contain a C-style format specifier\n such as `%d` which will be replaced by the layer number.\n\n For previsualisation purposes, pen-up trajectories can be added to the SVG and path can\n be colored individually (``color_mode=\"path\"``) or layer-by-layer (``color_mode=\"layer\"``).\n\n Args:\n output: text-mode IO stream where SVG code will be written\n document: geometries to be written\n page_size: if provided, overrides document.page_size\n center: center geometries on page before export\n source_string: value of the `source` metadata\n layer_label_format: format string for layer label naming\n show_pen_up: add paths for the pen-up trajectories\n color_mode: \"none\" (no formatting), \"layer\" (one color per layer), \"path\" (one color\n per path)\n \"\"\"\n\n # compute bounds\n bounds = document.bounds()\n if bounds is None:\n # empty geometry, we provide fake bounds\n bounds = (0, 0, 1, 1)\n\n if page_size:\n size = page_size\n tight = page_size == (0.0, 0.0)\n elif document.page_size:\n size = document.page_size\n tight = False\n else:\n size = (bounds[2] - bounds[0], bounds[3] - bounds[1])\n tight = True\n\n if center:\n corrected_doc = copy.deepcopy(document)\n corrected_doc.translate(\n (size[0] - (bounds[2] - bounds[0])) / 2.0 - bounds[0],\n (size[1] - (bounds[3] - bounds[1])) / 2.0 - bounds[1],\n )\n elif tight:\n corrected_doc = copy.deepcopy(document)\n corrected_doc.translate(-bounds[0], -bounds[1])\n else:\n corrected_doc = document\n\n # output SVG, width/height are capped to 1px\n capped_size = tuple(max(1, s) for s in size)\n size_cm = tuple(f\"{round(s / UNITS['cm'], 8)}cm\" for s in capped_size)\n dwg = svgwrite.Drawing(size=size_cm, profile=\"tiny\", debug=False)\n inkscape = Inkscape(dwg)\n dwg.attribs.update(\n {\n \"viewBox\": f\"0 0 {capped_size[0]} {capped_size[1]}\",\n \"xmlns:dc\": \"http://purl.org/dc/elements/1.1/\",\n \"xmlns:cc\": \"http://creativecommons.org/ns#\",\n \"xmlns:rdf\": \"http://www.w3.org/1999/02/22-rdf-syntax-ns#\",\n }\n )\n\n # add metadata\n metadata = ElementTree.Element(\"rdf:RDF\")\n work = ElementTree.SubElement(metadata, \"cc:Work\")\n fmt = ElementTree.SubElement(work, \"dc:format\")\n fmt.text = \"image/svg+xml\"\n source = ElementTree.SubElement(work, \"dc:source\")\n source.text = source_string\n date = ElementTree.SubElement(work, \"dc:date\")\n date.text = datetime.datetime.now().isoformat()\n dwg.set_metadata(metadata)\n\n color_idx = 0\n if show_pen_up:\n group = inkscape.layer(label=\"% pen up trajectories\")\n group.attribs[\"fill\"] = \"none\"\n group.attribs[\"stroke\"] = \"black\"\n group.attribs[\"style\"] = \"display:inline; stroke-opacity: 50%; stroke-width: 0.5\"\n group.attribs[\"id\"] = \"pen_up_trajectories\"\n\n for layer in corrected_doc.layers.values():\n for line in layer.pen_up_trajectories():\n group.add(\n dwg.line((line[0].real, line[0].imag), (line[-1].real, line[-1].imag))\n )\n\n dwg.add(group)\n\n for layer_id in sorted(corrected_doc.layers.keys()):\n layer = corrected_doc.layers[layer_id]\n\n group = inkscape.layer(label=str(layer_label_format % layer_id))\n group.attribs[\"fill\"] = \"none\"\n if color_mode == \"layer\":\n group.attribs[\"stroke\"] = _COLORS[color_idx % len(_COLORS)]\n color_idx += 1\n else:\n group.attribs[\"stroke\"] = \"black\"\n group.attribs[\"style\"] = \"display:inline\"\n group.attribs[\"id\"] = 
f\"layer{layer_id}\"\n\n for line in layer:\n if len(line) <= 1:\n continue\n\n if len(line) == 2:\n path = dwg.line((line[0].real, line[0].imag), (line[1].real, line[1].imag))\n elif line[0] == line[-1]:\n path = dwg.polygon((c.real, c.imag) for c in line[:-1])\n else:\n path = dwg.polyline((c.real, c.imag) for c in line)\n\n if color_mode == \"path\":\n path.attribs[\"stroke\"] = _COLORS[color_idx % len(_COLORS)]\n color_idx += 1\n group.add(path)\n\n dwg.add(group)\n\n dwg.write(output, pretty=True)\n\n\ndef _get_hpgl_config(\n device: Optional[str], page_size: str\n) -> Tuple[PlotterConfig, PaperConfig]:\n if device is None:\n device = CONFIG_MANAGER.get_command_config(\"write\").get(\"default_hpgl_device\", None)\n plotter_config = CONFIG_MANAGER.get_plotter_config(str(device))\n if plotter_config is None:\n raise ValueError(f\"no configuration available for plotter '{device}'\")\n paper_config = plotter_config.paper_config(page_size)\n if paper_config is None:\n raise ValueError(\n f\"no configuration available for paper size '{page_size}' with plotter \"\n f\"'{device}'\"\n )\n\n return plotter_config, paper_config\n\n\ndef write_hpgl(\n output: TextIO,\n document: Document,\n page_size: str,\n landscape: bool,\n center: bool,\n device: Optional[str],\n velocity: Optional[float],\n quiet: bool = False,\n) -> None:\n \"\"\"Create a HPGL file from the :class:`Document` instance.\n\n The ``device``/``page_size`` combination must be defined in the built-in or user-provided\n config files or an exception will be raised.\n\n By default, no translation is applied on the geometry. If `center=True`, geometries are\n moved to the center of the page.\n\n No scaling or rotation is applied to geometries.\n\n Args:\n output: text-mode IO stream where SVG code will be written\n document: geometries to be written\n page_size: page size string (it must be configured for the selected device)\n landscape: if True, the geometries are generated in landscape orientation\n center: center geometries on page before export\n device: name of the device to use (the corresponding config must exists). 
If not\n provided, a default device must be configured, which will be used.\n velocity: if provided, a VS command will be generated with the corresponding value\n quiet: if True, do not print the plotter/paper info strings\n \"\"\"\n\n # empty HPGL is acceptable there are no geometries to plot\n if document.is_empty():\n return\n\n plotter_config, paper_config = _get_hpgl_config(device, page_size)\n if not quiet:\n if plotter_config.info:\n # use of echo instead of print needed for testability\n # https://github.com/pallets/click/issues/1678\n click.echo(plotter_config.info, err=True)\n if paper_config.info:\n click.echo(paper_config.info, err=True)\n\n # are plotter coordinate placed in landscape or portrait orientation?\n coords_landscape = paper_config.paper_size[0] > paper_config.paper_size[1]\n\n # document preprocessing:\n # - make a copy\n # - deal with orientation mismatch\n # - optionally center on paper\n # - convert to plotter units\n # - crop to plotter limits\n document = copy.deepcopy(document)\n\n if landscape != coords_landscape:\n document.rotate(-math.pi / 2)\n document.translate(0, paper_config.paper_size[1])\n\n if paper_config.rotate_180:\n document.scale(-1, -1)\n document.translate(*paper_config.paper_size)\n\n if center:\n bounds = document.bounds()\n if bounds is not None:\n document.translate(\n (paper_config.paper_size[0] - (bounds[2] - bounds[0])) / 2.0 - bounds[0],\n (paper_config.paper_size[1] - (bounds[3] - bounds[1])) / 2.0 - bounds[1],\n )\n\n document.translate(-paper_config.origin_location[0], -paper_config.origin_location[1])\n unit_per_pixel = 1 / plotter_config.plotter_unit_length\n document.scale(\n unit_per_pixel, -unit_per_pixel if paper_config.y_axis_up else unit_per_pixel\n )\n document.crop(\n paper_config.x_range[0],\n paper_config.y_range[0],\n paper_config.x_range[1],\n paper_config.y_range[1],\n )\n\n # output HPGL\n def complex_to_str(p: complex) -> str:\n return f\"{int(round(p.real))},{int(round(p.imag))}\"\n\n output.write(\"IN;DF;\")\n if velocity is not None:\n output.write(f\"VS{velocity};\")\n if paper_config.set_ps is not None:\n output.write(f\"PS{int(paper_config.set_ps)};\")\n\n for layer_id in sorted(document.layers.keys()):\n pen_id = 1 + (layer_id - 1) % plotter_config.pen_count\n output.write(f\"SP{pen_id};\")\n\n for line in document.layers[layer_id]:\n if len(line) < 2:\n continue\n output.write(f\"PU{complex_to_str(line[0])};\")\n output.write(f\"PD\")\n output.write(\",\".join(complex_to_str(p) for p in line[1:]))\n output.write(\";\")\n\n output.write(\n f\"PU{paper_config.final_pu_params if paper_config.final_pu_params else ''};\"\n )\n\n output.write(\"SP0;IN;\\n\")\n", "path": "vpype/io.py" } ]
[ { "content": "\"\"\"File import/export functions.\n\"\"\"\nimport copy\nimport datetime\nimport math\nimport re\nfrom typing import Iterator, List, Optional, TextIO, Tuple, Union\nfrom xml.etree import ElementTree\n\nimport click\nimport numpy as np\nimport svgelements\nimport svgwrite\nfrom multiprocess import Pool\nfrom shapely.geometry import LineString\nfrom svgwrite.extensions import Inkscape\n\nfrom .config import CONFIG_MANAGER, PaperConfig, PlotterConfig\nfrom .model import Document, LineCollection\nfrom .utils import UNITS\n\n__all__ = [\"read_svg\", \"read_multilayer_svg\", \"write_svg\", \"write_hpgl\"]\n\n\n_COLORS = [\n \"#00f\",\n \"#080\",\n \"#f00\",\n \"#0cc\",\n \"#0f0\",\n \"#c0c\",\n \"#cc0\",\n \"black\",\n]\n\n_DEFAULT_WIDTH = 1000\n_DEFAULT_HEIGHT = 1000\n\n\nclass _ComplexStack:\n \"\"\"Complex number stack implemented with a numpy array\"\"\"\n\n def __init__(self):\n self._alloc = 100\n self._stack = np.empty(shape=self._alloc, dtype=complex)\n self._len = 0\n\n def __len__(self) -> int:\n return self._len\n\n def _realloc(self, min_free: int = 1) -> None:\n self._alloc = max(self._alloc * 2, self._len + min_free)\n # noinspection PyTypeChecker\n self._stack.resize(self._alloc, refcheck=False)\n\n def append(self, c: complex) -> None:\n if self._len == self._alloc:\n self._realloc()\n self._stack[self._len] = c\n self._len += 1\n\n def extend(self, a: np.ndarray) -> None:\n len_a = len(a)\n if self._len + len_a > self._alloc:\n self._realloc(len_a)\n self._stack[self._len : self._len + len_a] = a\n self._len += len_a\n\n def ends_with(self, c: complex) -> bool:\n return self._stack[self._len - 1] == c if self._len > 0 else False\n\n def get(self) -> np.ndarray:\n self._alloc = self._len\n # noinspection PyTypeChecker\n self._stack.resize(self._alloc, refcheck=False)\n return self._stack\n\n\n_PathListType = List[\n Union[\n # for actual paths and shapes transformed into paths\n svgelements.Path,\n # for the special case of Polygon and Polylines\n List[Union[svgelements.PathSegment, svgelements.Polygon, svgelements.Polyline]],\n ]\n]\n\n\ndef _convert_flattened_paths(\n paths: _PathListType, quantization: float, simplify: bool, parallel: bool\n) -> \"LineCollection\":\n \"\"\"Convert a list of FlattenedPaths to a :class:`LineCollection`.\n\n Args:\n paths: list of FlattenedPaths\n quantization: maximum length of linear elements to approximate curve paths\n simplify: should Shapely's simplify be run on curved elements after quantization\n parallel: enable multiprocessing\n\n Returns:\n new :class:`LineCollection` instance containing the converted geometries\n \"\"\"\n\n def _process_path(path):\n if len(path) == 0:\n return []\n\n result = []\n point_stack = _ComplexStack()\n for seg in path:\n # handle cases of zero radius Arc\n if isinstance(seg, svgelements.Arc) and (seg.rx == 0 or seg.ry == 0):\n seg = svgelements.Line(start=seg.start, end=seg.end)\n\n if isinstance(seg, svgelements.Move):\n if len(point_stack) > 0:\n result.append(point_stack.get())\n point_stack = _ComplexStack()\n\n point_stack.append(complex(seg.end))\n elif isinstance(seg, (svgelements.Line, svgelements.Close)):\n start = complex(seg.start)\n end = complex(seg.end)\n if not point_stack.ends_with(start):\n point_stack.append(start)\n if end != start:\n point_stack.append(end)\n elif isinstance(seg, (svgelements.Polygon, svgelements.Polyline)):\n line = np.array(seg.points, dtype=float)\n line = line.view(dtype=complex).reshape(len(line))\n if point_stack.ends_with(line[0]):\n 
point_stack.extend(line[1:])\n else:\n point_stack.extend(line)\n else:\n # This is a curved element that we approximate with small segments\n step = max(2, int(math.ceil(seg.length() / quantization)))\n line = seg.npoint(np.linspace(0, 1, step))\n\n if simplify:\n line = np.array(LineString(line).simplify(tolerance=quantization))\n\n line = line.view(dtype=complex).reshape(len(line))\n\n if point_stack.ends_with(line[0]):\n point_stack.extend(line[1:])\n else:\n point_stack.extend(line)\n\n if len(point_stack) > 0:\n result.append(point_stack.get())\n\n return result\n\n # benchmarking indicated that parallel processing only makes sense if simplify is used\n if parallel:\n with Pool() as p:\n results = p.map(_process_path, paths)\n else:\n results = map(_process_path, paths)\n\n lc = LineCollection()\n for res in results:\n lc.extend(res)\n return lc\n\n\ndef _extract_paths(group: svgelements.Group, recursive) -> _PathListType:\n \"\"\"Extract everything from the provided SVG group.\"\"\"\n\n if recursive:\n everything = group.select()\n else:\n everything = group\n paths = []\n for elem in everything:\n if elem.values.get(\"visibility\", \"\") in (\"hidden\", \"collapse\"):\n continue\n\n if isinstance(elem, svgelements.Path):\n if len(elem) != 0:\n paths.append(elem)\n elif isinstance(elem, (svgelements.Polyline, svgelements.Polygon)):\n # Here we add a \"fake\" path containing just the Polyline/Polygon,\n # to be treated specifically by _convert_flattened_paths.\n path = [svgelements.Move(elem.points[0]), elem]\n if isinstance(elem, svgelements.Polygon):\n path.append(svgelements.Close(elem.points[-1], elem.points[0]))\n paths.append(path)\n elif isinstance(elem, svgelements.Shape):\n e = svgelements.Path(elem)\n e.reify() # In some cases the shape could not have reified, the path must.\n if len(e) != 0:\n paths.append(e)\n\n return paths\n\n\ndef read_svg(\n filename: str,\n quantization: float,\n crop: bool = True,\n simplify: bool = False,\n parallel: bool = False,\n default_width: float = _DEFAULT_WIDTH,\n default_height: float = _DEFAULT_HEIGHT,\n) -> Tuple[\"LineCollection\", float, float]:\n \"\"\"Read a SVG file an return its content as a :class:`LineCollection` instance.\n\n All curved geometries are chopped in segments no longer than the value of *quantization*.\n Optionally, the geometries are simplified using Shapely, using the value of *quantization*\n as tolerance.\n\n Args:\n filename: path of the SVG file\n quantization: maximum size of segment used to approximate curved geometries\n crop: crop the geometries to the SVG boundaries\n simplify: run Shapely's simplify on loaded geometry\n parallel: enable multiprocessing (only recommended for ``simplify=True`` and SVG with\n many curves)\n default_width: default width if not provided by SVG or if a percent width is provided\n default_height: default height if not provided by SVG or if a percent height is\n provided\n\n Returns:\n tuple containing a :class:`LineCollection` with the imported geometries as well as the\n width and height of the SVG\n \"\"\"\n\n # default width is for SVG with % width/height\n svg = svgelements.SVG.parse(filename, width=default_width, height=default_height)\n paths = _extract_paths(svg, recursive=True)\n lc = _convert_flattened_paths(paths, quantization, simplify, parallel)\n\n width = svg.viewbox.element_width or default_width\n height = svg.viewbox.element_height or default_height\n\n if crop:\n lc.crop(0, 0, width, height)\n\n return lc, width, height\n\n\ndef read_multilayer_svg(\n 
filename: str,\n quantization: float,\n crop: bool = True,\n simplify: bool = False,\n parallel: bool = False,\n default_width: float = _DEFAULT_WIDTH,\n default_height: float = _DEFAULT_HEIGHT,\n) -> \"Document\":\n \"\"\"Read a multilayer SVG file and return its content as a :class:`Document` instance\n retaining the SVG's layer structure and its dimension.\n\n Each top-level group is considered a layer. All non-group, top-level elements are imported\n in layer 1.\n\n Groups are matched to layer ID according their `inkscape:label` attribute, their `id`\n attribute or their appearing order, in that order of priority. Labels are stripped of\n non-numeric characters and the remaining is used as layer ID. Lacking numeric characters,\n the appearing order is used. If the label is 0, its changed to 1.\n\n All curved geometries are chopped in segments no longer than the value of *quantization*.\n Optionally, the geometries are simplified using Shapely, using the value of *quantization*\n as tolerance.\n\n Args:\n filename: path of the SVG file\n quantization: maximum size of segment used to approximate curved geometries\n crop: crop the geometries to the SVG boundaries\n simplify: run Shapely's simplify on loaded geometry\n parallel: enable multiprocessing (only recommended for ``simplify=True`` and SVG with\n many curves)\n default_width: default width if not provided by SVG or if a percent width is provided\n default_height: default height if not provided by SVG or if a percent height is\n provided\n\n Returns:\n :class:`Document` instance with the imported geometries and its page size set the the\n SVG dimensions\n \"\"\"\n\n svg = svgelements.SVG.parse(filename, width=default_width, height=default_height)\n\n document = Document()\n\n # non-group top level elements are loaded in layer 1\n lc = _convert_flattened_paths(\n _extract_paths(svg, recursive=False), quantization, simplify, parallel\n )\n if not lc.is_empty():\n document.add(lc, 1)\n\n def _find_groups(group: svgelements.Group) -> Iterator[svgelements.Group]:\n for elem in group:\n if isinstance(elem, svgelements.Group):\n yield elem\n\n for i, g in enumerate(_find_groups(svg)):\n # compute a decent layer ID\n lid_str = re.sub(\n \"[^0-9]\",\n \"\",\n g.values.get(\"{http://www.inkscape.org/namespaces/inkscape}label\") or \"\",\n )\n if not lid_str:\n lid_str = re.sub(\"[^0-9]\", \"\", g.values.get(\"id\") or \"\")\n if lid_str:\n lid = int(lid_str)\n if lid == 0:\n lid = 1\n else:\n lid = i + 1\n\n lc = _convert_flattened_paths(\n _extract_paths(g, recursive=True), quantization, simplify, parallel\n )\n if not lc.is_empty():\n document.add(lc, lid)\n\n width = svg.viewbox.element_width or default_width\n height = svg.viewbox.element_height or default_height\n\n document.page_size = (width, height)\n\n if crop:\n document.crop(0, 0, width, height)\n\n return document\n\n\ndef write_svg(\n output: TextIO,\n document: Document,\n page_size: Optional[Tuple[float, float]] = None,\n center: bool = False,\n source_string: str = \"\",\n layer_label_format: str = \"%d\",\n show_pen_up: bool = False,\n color_mode: str = \"none\",\n) -> None:\n \"\"\"Create a SVG from a :py:class:`Document` instance.\n\n If no page size is provided (or (0, 0) is passed), the SVG generated has bounds tightly\n fitted around the geometries. Otherwise the provided size (in pixel) is used. The width\n and height is capped to a minimum of 1 pixel.\n\n By default, no translation is applied on the geometry. 
If `center=True`, geometries are\n moved to the center of the page.\n\n No scaling or rotation is applied to geometries.\n\n Layers are named after `layer_label_format`, which may contain a C-style format specifier\n such as `%d` which will be replaced by the layer number.\n\n For previsualisation purposes, pen-up trajectories can be added to the SVG and path can\n be colored individually (``color_mode=\"path\"``) or layer-by-layer (``color_mode=\"layer\"``).\n\n Args:\n output: text-mode IO stream where SVG code will be written\n document: geometries to be written\n page_size: if provided, overrides document.page_size\n center: center geometries on page before export\n source_string: value of the `source` metadata\n layer_label_format: format string for layer label naming\n show_pen_up: add paths for the pen-up trajectories\n color_mode: \"none\" (no formatting), \"layer\" (one color per layer), \"path\" (one color\n per path)\n \"\"\"\n\n # compute bounds\n bounds = document.bounds()\n if bounds is None:\n # empty geometry, we provide fake bounds\n bounds = (0, 0, 1, 1)\n\n if page_size:\n size = page_size\n tight = page_size == (0.0, 0.0)\n elif document.page_size:\n size = document.page_size\n tight = False\n else:\n size = (bounds[2] - bounds[0], bounds[3] - bounds[1])\n tight = True\n\n if center:\n corrected_doc = copy.deepcopy(document)\n corrected_doc.translate(\n (size[0] - (bounds[2] - bounds[0])) / 2.0 - bounds[0],\n (size[1] - (bounds[3] - bounds[1])) / 2.0 - bounds[1],\n )\n elif tight:\n corrected_doc = copy.deepcopy(document)\n corrected_doc.translate(-bounds[0], -bounds[1])\n else:\n corrected_doc = document\n\n # output SVG, width/height are capped to 1px\n capped_size = tuple(max(1, s) for s in size)\n size_cm = tuple(f\"{round(s / UNITS['cm'], 8)}cm\" for s in capped_size)\n dwg = svgwrite.Drawing(size=size_cm, profile=\"tiny\", debug=False)\n inkscape = Inkscape(dwg)\n dwg.attribs.update(\n {\n \"viewBox\": f\"0 0 {capped_size[0]} {capped_size[1]}\",\n \"xmlns:dc\": \"http://purl.org/dc/elements/1.1/\",\n \"xmlns:cc\": \"http://creativecommons.org/ns#\",\n \"xmlns:rdf\": \"http://www.w3.org/1999/02/22-rdf-syntax-ns#\",\n }\n )\n\n # add metadata\n metadata = ElementTree.Element(\"rdf:RDF\")\n work = ElementTree.SubElement(metadata, \"cc:Work\")\n fmt = ElementTree.SubElement(work, \"dc:format\")\n fmt.text = \"image/svg+xml\"\n source = ElementTree.SubElement(work, \"dc:source\")\n source.text = source_string\n date = ElementTree.SubElement(work, \"dc:date\")\n date.text = datetime.datetime.now().isoformat()\n dwg.set_metadata(metadata)\n\n color_idx = 0\n if show_pen_up:\n group = inkscape.layer(label=\"% pen up trajectories\")\n group.attribs[\"fill\"] = \"none\"\n group.attribs[\"stroke\"] = \"black\"\n group.attribs[\"style\"] = \"display:inline; stroke-opacity: 50%; stroke-width: 0.5\"\n group.attribs[\"id\"] = \"pen_up_trajectories\"\n\n for layer in corrected_doc.layers.values():\n for line in layer.pen_up_trajectories():\n group.add(\n dwg.line((line[0].real, line[0].imag), (line[-1].real, line[-1].imag))\n )\n\n dwg.add(group)\n\n for layer_id in sorted(corrected_doc.layers.keys()):\n layer = corrected_doc.layers[layer_id]\n\n group = inkscape.layer(label=str(layer_label_format % layer_id))\n group.attribs[\"fill\"] = \"none\"\n if color_mode == \"layer\":\n group.attribs[\"stroke\"] = _COLORS[color_idx % len(_COLORS)]\n color_idx += 1\n else:\n group.attribs[\"stroke\"] = \"black\"\n group.attribs[\"style\"] = \"display:inline\"\n group.attribs[\"id\"] = 
f\"layer{layer_id}\"\n\n for line in layer:\n if len(line) <= 1:\n continue\n\n if len(line) == 2:\n path = dwg.line((line[0].real, line[0].imag), (line[1].real, line[1].imag))\n elif line[0] == line[-1]:\n path = dwg.polygon((c.real, c.imag) for c in line[:-1])\n else:\n path = dwg.polyline((c.real, c.imag) for c in line)\n\n if color_mode == \"path\":\n path.attribs[\"stroke\"] = _COLORS[color_idx % len(_COLORS)]\n color_idx += 1\n group.add(path)\n\n dwg.add(group)\n\n dwg.write(output, pretty=True)\n\n\ndef _get_hpgl_config(\n device: Optional[str], page_size: str\n) -> Tuple[PlotterConfig, PaperConfig]:\n if device is None:\n device = CONFIG_MANAGER.get_command_config(\"write\").get(\"default_hpgl_device\", None)\n plotter_config = CONFIG_MANAGER.get_plotter_config(str(device))\n if plotter_config is None:\n raise ValueError(f\"no configuration available for plotter '{device}'\")\n paper_config = plotter_config.paper_config(page_size)\n if paper_config is None:\n raise ValueError(\n f\"no configuration available for paper size '{page_size}' with plotter \"\n f\"'{device}'\"\n )\n\n return plotter_config, paper_config\n\n\ndef write_hpgl(\n output: TextIO,\n document: Document,\n page_size: str,\n landscape: bool,\n center: bool,\n device: Optional[str],\n velocity: Optional[float],\n quiet: bool = False,\n) -> None:\n \"\"\"Create a HPGL file from the :class:`Document` instance.\n\n The ``device``/``page_size`` combination must be defined in the built-in or user-provided\n config files or an exception will be raised.\n\n By default, no translation is applied on the geometry. If `center=True`, geometries are\n moved to the center of the page.\n\n No scaling or rotation is applied to geometries.\n\n Args:\n output: text-mode IO stream where SVG code will be written\n document: geometries to be written\n page_size: page size string (it must be configured for the selected device)\n landscape: if True, the geometries are generated in landscape orientation\n center: center geometries on page before export\n device: name of the device to use (the corresponding config must exists). 
If not\n provided, a default device must be configured, which will be used.\n velocity: if provided, a VS command will be generated with the corresponding value\n quiet: if True, do not print the plotter/paper info strings\n \"\"\"\n\n # empty HPGL is acceptable there are no geometries to plot\n if document.is_empty():\n return\n\n plotter_config, paper_config = _get_hpgl_config(device, page_size)\n if not quiet:\n if plotter_config.info:\n # use of echo instead of print needed for testability\n # https://github.com/pallets/click/issues/1678\n click.echo(plotter_config.info, err=True)\n if paper_config.info:\n click.echo(paper_config.info, err=True)\n\n # are plotter coordinate placed in landscape or portrait orientation?\n coords_landscape = paper_config.paper_size[0] > paper_config.paper_size[1]\n\n # document preprocessing:\n # - make a copy\n # - deal with orientation mismatch\n # - optionally center on paper\n # - convert to plotter units\n # - crop to plotter limits\n document = copy.deepcopy(document)\n\n if landscape != coords_landscape:\n document.rotate(-math.pi / 2)\n document.translate(0, paper_config.paper_size[1])\n\n if paper_config.rotate_180:\n document.scale(-1, -1)\n document.translate(*paper_config.paper_size)\n\n if center:\n bounds = document.bounds()\n if bounds is not None:\n document.translate(\n (paper_config.paper_size[0] - (bounds[2] - bounds[0])) / 2.0 - bounds[0],\n (paper_config.paper_size[1] - (bounds[3] - bounds[1])) / 2.0 - bounds[1],\n )\n\n document.translate(-paper_config.origin_location[0], -paper_config.origin_location[1])\n unit_per_pixel = 1 / plotter_config.plotter_unit_length\n document.scale(\n unit_per_pixel, -unit_per_pixel if paper_config.y_axis_up else unit_per_pixel\n )\n document.crop(\n paper_config.x_range[0],\n paper_config.y_range[0],\n paper_config.x_range[1],\n paper_config.y_range[1],\n )\n\n # output HPGL\n def complex_to_str(p: complex) -> str:\n return f\"{int(round(p.real))},{int(round(p.imag))}\"\n\n output.write(\"IN;DF;\")\n if velocity is not None:\n output.write(f\"VS{velocity};\")\n if paper_config.set_ps is not None:\n output.write(f\"PS{int(paper_config.set_ps)};\")\n\n for layer_id in sorted(document.layers.keys()):\n pen_id = 1 + (layer_id - 1) % plotter_config.pen_count\n output.write(f\"SP{pen_id};\")\n\n for line in document.layers[layer_id]:\n if len(line) < 2:\n continue\n output.write(f\"PU{complex_to_str(line[0])};\")\n output.write(f\"PD\")\n output.write(\",\".join(complex_to_str(p) for p in line[1:]))\n output.write(\";\")\n\n output.write(\n f\"PU{paper_config.final_pu_params if paper_config.final_pu_params else ''};\"\n )\n\n output.write(\"SP0;IN;\\n\")\n", "path": "vpype/io.py" } ]
diff --git a/CHANGELOG.md b/CHANGELOG.md index 9bd3bb65..56da2e7b 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -1,5 +1,6 @@ #### 1.1.0 (UNRELEASED) +* Invisible SVG elements are now discarded (#103) * Fixed `write` to cap SVG width and height to a minimum of 1px (#102) * Fixed grouping of `stat` command in `vpype --help` * Bump svgelements from 1.3.2 to 1.3.4 (#101) diff --git a/tests/test_files.py b/tests/test_files.py index 5b2d734b..ec5e0bc0 100644 --- a/tests/test_files.py +++ b/tests/test_files.py @@ -58,3 +58,47 @@ def test_write_is_idempotent(runner, path, tmp_path): for line in difflib.unified_diff(txt1.split("\n"), txt2.split("\n"), lineterm=""): print(line) assert False + + [email protected]( + ("svg_content", "line_count"), + [ + ('<circle cx="500" cy="500" r="40"/>', 1), + ('<circle cx="500" cy="500" r="40" style="visibility:collapse"/>', 0), + ('<circle cx="500" cy="500" r="40" style="visibility:hidden"/>', 0), + ('<circle cx="500" cy="500" r="40" style="display:none"/>', 0), + ('<g style="visibility: hidden"><circle cx="500" cy="500" r="40"/></g>', 0), + ('<g style="visibility: collapse"><circle cx="500" cy="500" r="40"/></g>', 0), + ( + """<g style="visibility: collapse"> + <circle cx="500" cy="500" r="40" style="visibility:visible" /> + </g>""", + 1, + ), + ( + """<g style="visibility: hidden"> + <circle cx="500" cy="500" r="40" style="visibility:visible" /> + </g>""", + 1, + ), + ( + """<g style="display: none"> + <circle cx="500" cy="500" r="40" style="visibility:visible" /> + </g>""", + 0, + ), + ], +) +def test_read_svg_visibility(svg_content, line_count, tmp_path): + svg = f"""<?xml version="1.0"?> +<svg xmlns:xlink="http://www.w3.org/1999/xlink" xmlns="http://www.w3.org/2000/svg" + width="1000" height="1000"> + {svg_content} +</svg> +""" + path = str(tmp_path / "file.svg") + with open(path, "w") as fp: + fp.write(svg) + + lc, _, _ = vp.read_svg(path, 1.0) + assert len(lc) == line_count diff --git a/vpype/io.py b/vpype/io.py index b79c9819..8d87a7e9 100644 --- a/vpype/io.py +++ b/vpype/io.py @@ -174,6 +174,9 @@ def _extract_paths(group: svgelements.Group, recursive) -> _PathListType: everything = group paths = [] for elem in everything: + if elem.values.get("visibility", "") in ("hidden", "collapse"): + continue + if isinstance(elem, svgelements.Path): if len(elem) != 0: paths.append(elem)
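For context, the behaviour added by this diff can be exercised through the public `read_svg` API roughly as follows. This is an illustrative sketch mirroring the parametrized test above; the file name and SVG content are invented for the example.

```python
# Illustrative sketch: hidden SVG elements are skipped on import.
# "hidden.svg" and its content are made up for this example.
import vpype as vp

with open("hidden.svg", "w") as fp:
    fp.write(
        '<svg xmlns="http://www.w3.org/2000/svg" width="1000" height="1000">'
        '<circle cx="500" cy="500" r="40" style="visibility:hidden"/>'
        "</svg>"
    )

lc, width, height = vp.read_svg("hidden.svg", quantization=1.0)
assert len(lc) == 0  # the hidden circle is not imported
```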
comic__grand-challenge.org-2531
Markdown Editor jumps around too much

When editing in Markdown, the Grand Challenge editor jumps around a lot, so the writer loses their place.

https://user-images.githubusercontent.com/12661555/173570208-c2567b82-bb78-441c-9286-2f70a8f66745.mp4

It would be better if the size of the text area were kept constant, and the content only resized when switching to the preview tab, like on GitHub:

https://user-images.githubusercontent.com/12661555/173570346-97f51fa9-fc23-49e8-b587-f656ff3cb7db.mp4
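One possible mitigation, sketched below purely for illustration and not presented as the project's actual fix: django-markdownx documents a `MARKDOWNX_EDITOR_RESIZABLE` setting that controls the automatic resizing of the editor textarea, so disabling it in the settings module would keep the text area at a constant size while typing. Whether that alone reproduces the GitHub-like behaviour described above is an assumption.

```python
# Hypothetical settings tweak (illustration only, not the repository's fix):
# django-markdownx auto-resizes the textarea while typing; turning that off
# keeps the editing area at a fixed size, addressing the "jumping" symptom.
MARKDOWNX_EDITOR_RESIZABLE = False
```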
[ { "content": "import os\nimport re\nfrom datetime import datetime, timedelta\nfrom itertools import product\n\nimport sentry_sdk\nfrom disposable_email_domains import blocklist\nfrom django.contrib.messages import constants as messages\nfrom django.urls import reverse\nfrom machina import MACHINA_MAIN_STATIC_DIR, MACHINA_MAIN_TEMPLATE_DIR\nfrom sentry_sdk.integrations.celery import CeleryIntegration\nfrom sentry_sdk.integrations.django import DjangoIntegration\nfrom sentry_sdk.integrations.logging import ignore_logger\n\nfrom config.denylist import USERNAME_DENYLIST\nfrom grandchallenge.algorithms.exceptions import ImageImportError\nfrom grandchallenge.components.exceptions import PriorStepFailed\nfrom grandchallenge.core.utils import strtobool\nfrom grandchallenge.core.utils.markdown import BS4Extension\n\nMEGABYTE = 1024 * 1024\nGIGABYTE = 1024 * MEGABYTE\nTERABYTE = 1024 * GIGABYTE\n\nDEBUG = strtobool(os.environ.get(\"DEBUG\", \"False\"))\n\nCOMMIT_ID = os.environ.get(\"COMMIT_ID\", \"unknown\")\n\nADMINS = (\n # ('Your Name', '[email protected]'),\n)\n\n# Who gets the 404 notifications?\nmanager_email = os.environ.get(\"MANAGER_EMAIL\", None)\nif manager_email:\n MANAGERS = [(\"Manager\", manager_email)]\n\nIGNORABLE_404_URLS = [\n re.compile(r\".*\\.(php|cgi|asp).*\"),\n re.compile(r\"^/phpmyadmin.*\"),\n re.compile(r\"^/gen204.*\"),\n re.compile(r\"^/wp-content.*\"),\n re.compile(r\"^/wp.*\"),\n re.compile(r\"^/wordpress/.*\"),\n re.compile(r\"^/old/.*\", flags=re.IGNORECASE),\n re.compile(r\".*/trackback.*\"),\n re.compile(r\"^/site/.*\"),\n re.compile(r\"^/media/cache/.*\"),\n re.compile(r\"^/favicon.ico$\"),\n]\n\n# Used as starting points for various other paths. realpath(__file__) starts in\n# the config dir. We need to go one dir higher so path.join(\"..\")\nSITE_ROOT = os.path.dirname(os.path.dirname(os.path.realpath(__file__)))\n\nDATABASES = {\n \"default\": {\n \"ENGINE\": \"django.db.backends.postgresql_psycopg2\",\n \"NAME\": os.environ.get(\"POSTGRES_DB\", \"grandchallenge\"),\n \"USER\": os.environ.get(\"POSTGRES_USER\", \"grandchallenge\"),\n \"PASSWORD\": os.environ.get(\"POSTGRES_PASSWORD\", \"secretpassword\"),\n \"HOST\": os.environ.get(\"POSTGRES_HOST\", \"postgres\"),\n \"PORT\": os.environ.get(\"POSTGRES_PORT\", \"\"),\n \"OPTIONS\": {\n \"sslmode\": os.environ.get(\"POSTGRES_SSL_MODE\", \"prefer\"),\n \"sslrootcert\": os.path.join(\n SITE_ROOT, \"config\", \"certs\", \"rds-ca-2019-root.pem\"\n ),\n },\n \"ATOMIC_REQUESTS\": strtobool(\n os.environ.get(\"ATOMIC_REQUESTS\", \"True\")\n ),\n }\n}\n\nEMAIL_BACKEND = \"djcelery_email.backends.CeleryEmailBackend\"\nCELERY_EMAIL_BACKEND = os.environ.get(\n \"EMAIL_BACKEND\", \"django.core.mail.backends.console.EmailBackend\"\n)\nDEFAULT_FROM_EMAIL = os.environ.get(\n \"DEFAULT_FROM_EMAIL\", \"grandchallenge@localhost\"\n)\nSERVER_EMAIL = os.environ.get(\"SERVER_EMAIL\", \"root@localhost\")\n\nANONYMOUS_USER_NAME = \"AnonymousUser\"\nREGISTERED_USERS_GROUP_NAME = \"__registered_users_group__\"\nREGISTERED_AND_ANON_USERS_GROUP_NAME = \"__registered_and_anonymous_users__\"\n\n# Local time zone for this installation. 
Choices can be found here:\n# http://en.wikipedia.org/wiki/List_of_tz_zones_by_name\n# although not all choices may be available on all operating systems.\n# On Unix systems, a value of None will cause Django to use the same\n# timezone as the operating system.\n# If running in a Windows environment this must be set to the same as your\n# system time zone.\nTIME_ZONE = \"UTC\"\n\n# Language code for this installation. All choices can be found here:\n# http://www.i18nguy.com/unicode/language-identifiers.html\nLANGUAGE_CODE = \"en-us\"\n\nSITE_ID = int(os.environ.get(\"SITE_ID\", \"1\"))\n\n# If you set this to False, Django will make some optimizations so as not\n# to load the internationalization machinery.\nUSE_I18N = True\n\n# If you set this to False, Django will not format dates, numbers and\n# calendars according to the current locale.\nUSE_L10N = True\n\n# If you set this to False, Django will not use timezone-aware datetimes.\nUSE_TZ = True\n\n# Use AutoField for backwards compatibility\nDEFAULT_AUTO_FIELD = \"django.db.models.AutoField\"\n\n# General forum\nDOCUMENTATION_HELP_FORUM_PK = os.environ.get(\n \"DOCUMENTATION_HELP_FORUM_PK\", \"1\"\n)\nDOCUMENTATION_HELP_FORUM_SLUG = os.environ.get(\n \"DOCUMENTATION_HELP_FORUM_SLUG\", \"general\"\n)\n\n# About Flatpage\nFLATPAGE_ABOUT_URL = os.environ.get(\"FLATPAGE_ABOUT_URL\", \"/about/\")\n\n# Costs (in US dollar cents)\nCHALLENGES_STORAGE_COST_CENTS_PER_TB_PER_YEAR = os.environ.get(\n \"CHALLENGES_STORAGE_COST_CENTS_PER_TB_PER_YEAR\", 4000\n)\nCHALLENGES_COMPUTE_COST_CENTS_PER_HOUR = os.environ.get(\n \"CHALLENGES_COMPUTE_COST_CENTS_PER_HOUR\", 100\n)\n\n##############################################################################\n#\n# Storage\n#\n##############################################################################\nDEFAULT_FILE_STORAGE = \"grandchallenge.core.storage.PublicS3Storage\"\n\n# Subdirectories on root for various files\nIMAGE_FILES_SUBDIRECTORY = \"images\"\nEVALUATION_FILES_SUBDIRECTORY = \"evaluation\"\nCOMPONENTS_FILES_SUBDIRECTORY = \"components\"\n\nAWS_S3_FILE_OVERWRITE = False\n# Note: deprecated in django storages 2.0\nAWS_BUCKET_ACL = \"private\"\nAWS_DEFAULT_ACL = \"private\"\nAWS_S3_MAX_MEMORY_SIZE = 1_048_576 # 100 MB\nAWS_S3_ENDPOINT_URL = os.environ.get(\"AWS_S3_ENDPOINT_URL\")\nAWS_DEFAULT_REGION = os.environ.get(\"AWS_DEFAULT_REGION\", \"eu-central-1\")\nAWS_S3_REGION_NAME = os.environ.get(\"AWS_S3_REGION_NAME\")\nAWS_S3_OBJECT_PARAMETERS = {\n # Note that these do not affect the Uploads bucket, which is configured separately.\n # See https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/s3.html#S3.Client.put_object\n \"StorageClass\": os.environ.get(\"AWS_S3_DEFAULT_STORAGE_CLASS\", \"STANDARD\")\n}\nAWS_CLOUDWATCH_REGION_NAME = os.environ.get(\"AWS_CLOUDWATCH_REGION_NAME\")\nAWS_CODEBUILD_REGION_NAME = os.environ.get(\"AWS_CODEBUILD_REGION_NAME\")\nAWS_SES_REGION_ENDPOINT = f'email.{os.environ.get(\"AWS_SES_REGION_NAME\", AWS_DEFAULT_REGION)}.amazonaws.com'\n\n# This is for storing files that should not be served to the public\nPRIVATE_S3_STORAGE_KWARGS = {\n \"bucket_name\": os.environ.get(\n \"PRIVATE_S3_STORAGE_BUCKET_NAME\", \"grand-challenge-private\"\n )\n}\n\nPROTECTED_S3_STORAGE_KWARGS = {\n \"bucket_name\": os.environ.get(\n \"PROTECTED_S3_STORAGE_BUCKET_NAME\", \"grand-challenge-protected\"\n ),\n # This is the domain where people will be able to go to download data\n # from this bucket. 
Usually we would use reverse to find this out,\n # but this needs to be defined before the database is populated\n \"custom_domain\": os.environ.get(\n \"PROTECTED_S3_CUSTOM_DOMAIN\", \"gc.localhost/media\"\n ),\n}\nPROTECTED_S3_STORAGE_USE_CLOUDFRONT = strtobool(\n os.environ.get(\"PROTECTED_S3_STORAGE_USE_CLOUDFRONT\", \"False\")\n)\nPROTECTED_S3_STORAGE_CLOUDFRONT_DOMAIN = os.environ.get(\n \"PROTECTED_S3_STORAGE_CLOUDFRONT_DOMAIN_NAME\", \"\"\n)\n\nPUBLIC_S3_STORAGE_KWARGS = {\n \"bucket_name\": os.environ.get(\n \"PUBLIC_S3_STORAGE_BUCKET_NAME\", \"grand-challenge-public\"\n ),\n # Public bucket so do not use querystring_auth\n \"querystring_auth\": False,\n \"default_acl\": \"public-read\",\n}\n\nUPLOADS_S3_BUCKET_NAME = os.environ.get(\n \"UPLOADS_S3_BUCKET_NAME\", \"grand-challenge-uploads\"\n)\nUPLOADS_S3_USE_ACCELERATE_ENDPOINT = strtobool(\n os.environ.get(\"UPLOADS_S3_USE_ACCELERATE_ENDPOINT\", \"False\")\n)\nUPLOADS_MAX_SIZE_UNVERIFIED = int(\n os.environ.get(\"UPLOADS_MAX_SIZE_UNVERIFIED\", 2 * GIGABYTE)\n)\nUPLOADS_MAX_SIZE_VERIFIED = int(\n os.environ.get(\"UPLOADS_MAX_SIZE_VERIFIED\", 128 * GIGABYTE)\n)\nUPLOADS_TIMEOUT_DAYS = int(os.environ.get(\"UPLOADS_TIMEOUT_DAYS\", 1))\n\nVERIFICATIONS_REVIEW_PERIOD_DAYS = int(\n os.environ.get(\"VERIFICATIONS_REVIEW_PERIOD_DAYS\", 10)\n)\n\n# Key pair used for signing CloudFront URLS, only used if\n# PROTECTED_S3_STORAGE_USE_CLOUDFRONT is True\nCLOUDFRONT_KEY_PAIR_ID = os.environ.get(\"CLOUDFRONT_KEY_PAIR_ID\", \"\")\nCLOUDFRONT_PRIVATE_KEY_BASE64 = os.environ.get(\n \"CLOUDFRONT_PRIVATE_KEY_BASE64\", \"\"\n)\nCLOUDFRONT_URL_EXPIRY_SECONDS = int(\n os.environ.get(\"CLOUDFRONT_URL_EXPIRY_SECONDS\", \"300\") # 5 mins\n)\n\n##############################################################################\n#\n# Caching\n#\n##############################################################################\nREDIS_ENDPOINT = os.environ.get(\"REDIS_ENDPOINT\", \"redis://redis:6379\")\n\nCACHES = {\n \"default\": {\n \"BACKEND\": \"django_redis.cache.RedisCache\",\n \"LOCATION\": f\"{REDIS_ENDPOINT}/0\",\n \"OPTIONS\": {\"CLIENT_CLASS\": \"django_redis.client.DefaultClient\"},\n },\n \"machina_attachments\": {\n \"BACKEND\": \"django.core.cache.backends.filebased.FileBasedCache\",\n \"LOCATION\": \"/tmp\",\n },\n}\n\nROOT_URLCONF = \"config.urls.root\"\nCHALLENGE_SUBDOMAIN_URL_CONF = \"config.urls.challenge_subdomain\"\nRENDERING_SUBDOMAIN_URL_CONF = \"config.urls.rendering_subdomain\"\nDEFAULT_SCHEME = os.environ.get(\"DEFAULT_SCHEME\", \"https\")\n\n# Workaround for https://github.com/ellmetha/django-machina/issues/219\nABSOLUTE_URL_OVERRIDES = {\n \"forum.forum\": lambda o: reverse(\n \"forum:forum\", kwargs={\"slug\": o.slug, \"pk\": o.pk}\n ),\n \"forum_conversation.topic\": lambda o: reverse(\n \"forum_conversation:topic\",\n kwargs={\n \"slug\": o.slug,\n \"pk\": o.pk,\n \"forum_slug\": o.forum.slug,\n \"forum_pk\": o.forum.pk,\n },\n ),\n}\n\nSESSION_COOKIE_DOMAIN = os.environ.get(\n \"SESSION_COOKIE_DOMAIN\", \".gc.localhost\"\n)\n# We're always running behind a proxy so set these to true\nSESSION_COOKIE_SECURE = True\nCSRF_COOKIE_SECURE = True\n# Trust all subdomains for CSRF, used for user uploads. 
Changed the name\n# of the CSRF token as existing ones are already in use.\nCSRF_COOKIE_DOMAIN = SESSION_COOKIE_DOMAIN\nCSRF_COOKIE_NAME = \"_csrftoken\"\nCSRF_TRUSTED_ORIGINS = [SESSION_COOKIE_DOMAIN]\nSECURE_PROXY_SSL_HEADER = (\"HTTP_X_FORWARDED_PROTO\", \"https\")\n\n# Set the allowed hosts to the cookie domain\nALLOWED_HOSTS = [SESSION_COOKIE_DOMAIN, \"web\"]\n\n# Security options\nSECURE_HSTS_SECONDS = int(os.environ.get(\"SECURE_HSTS_SECONDS\", \"0\"))\nSECURE_HSTS_INCLUDE_SUBDOMAINS = strtobool(\n os.environ.get(\"SECURE_HSTS_INCLUDE_SUBDOMAINS\", \"False\")\n)\nSECURE_HSTS_PRELOAD = strtobool(os.environ.get(\"SECURE_HSTS_PRELOAD\", \"True\"))\nSECURE_CONTENT_TYPE_NOSNIFF = strtobool(\n os.environ.get(\"SECURE_CONTENT_TYPE_NOSNIFF\", \"False\")\n)\nSECURE_BROWSER_XSS_FILTER = strtobool(\n os.environ.get(\"SECURE_BROWSER_XSS_FILTER\", \"False\")\n)\nX_FRAME_OPTIONS = os.environ.get(\"X_FRAME_OPTIONS\", \"DENY\")\n# \"strict-origin-when-cross-origin\" required for uploads for cross domain POSTs\nSECURE_REFERRER_POLICY = os.environ.get(\n \"SECURE_REFERRER_POLICY\", \"strict-origin-when-cross-origin\"\n)\n\nPERMISSIONS_POLICY = {\n \"accelerometer\": [],\n \"ambient-light-sensor\": [],\n \"autoplay\": [],\n \"camera\": [],\n \"display-capture\": [],\n \"document-domain\": [],\n \"encrypted-media\": [],\n \"fullscreen\": [\"self\"],\n \"geolocation\": [],\n \"gyroscope\": [],\n \"interest-cohort\": [],\n \"magnetometer\": [],\n \"microphone\": [],\n \"midi\": [],\n \"payment\": [],\n \"usb\": [],\n}\n\nIPWARE_META_PRECEDENCE_ORDER = (\n # Set by nginx\n \"HTTP_X_FORWARDED_FOR\",\n \"HTTP_X_REAL_IP\",\n)\n\n# Absolute path to the directory static files should be collected to.\n# Don't put anything in this directory yourself; store your static files\n# in apps' \"static/\" subdirectories and in STATICFILES_DIRS.\n# Example: \"/home/media/media.lawrence.com/static/\"\nSTATIC_ROOT = \"/static/\"\n\nSTATIC_HOST = os.environ.get(\"DJANGO_STATIC_HOST\", \"\")\nSTATIC_URL = f\"{STATIC_HOST}/static/\"\n\n# List of finder classes that know how to find static files in\n# various locations.\nSTATICFILES_FINDERS = (\n \"django.contrib.staticfiles.finders.FileSystemFinder\",\n \"django.contrib.staticfiles.finders.AppDirectoriesFinder\",\n)\n\n# Vendored static files will be put here\nSTATICFILES_DIRS = [\"/opt/static/\", MACHINA_MAIN_STATIC_DIR]\n\nSTATICFILES_STORAGE = \"whitenoise.storage.CompressedManifestStaticFilesStorage\"\n\n# Make this unique, and don't share it with anybody.\nSECRET_KEY = os.environ.get(\n \"SECRET_KEY\", \"d=%^l=xa02an9jn-$!*hy1)5yox$a-$2(ejt-2smimh=j4%8*b\"\n)\n\nTEMPLATES = [\n {\n \"BACKEND\": \"django.template.backends.django.DjangoTemplates\",\n \"DIRS\": [\n # Override the machina templates, everything else is found with\n # django.template.loaders.app_directories.Loader\n os.path.join(SITE_ROOT, \"grandchallenge/forums/templates/\"),\n MACHINA_MAIN_TEMPLATE_DIR,\n ],\n \"OPTIONS\": {\n \"context_processors\": [\n \"django.contrib.auth.context_processors.auth\",\n \"django.template.context_processors.debug\",\n \"django.template.context_processors.i18n\",\n \"django.template.context_processors.media\",\n \"django.template.context_processors.static\",\n \"django.template.context_processors.tz\",\n \"django.template.context_processors.request\",\n \"django.contrib.messages.context_processors.messages\",\n \"grandchallenge.core.context_processors.challenge\",\n \"grandchallenge.core.context_processors.deployment_info\",\n 
\"grandchallenge.core.context_processors.debug\",\n \"grandchallenge.core.context_processors.sentry_dsn\",\n \"grandchallenge.core.context_processors.footer_links\",\n \"grandchallenge.core.context_processors.help_forum\",\n \"grandchallenge.core.context_processors.about_page\",\n \"grandchallenge.core.context_processors.newsletter_signup\",\n \"grandchallenge.core.context_processors.viewport_names\",\n \"machina.core.context_processors.metadata\",\n ],\n \"loaders\": [\n \"django.template.loaders.filesystem.Loader\",\n \"django.template.loaders.app_directories.Loader\",\n ],\n },\n }\n]\n\nMIDDLEWARE = (\n \"django.middleware.security.SecurityMiddleware\", # Keep security at top\n \"whitenoise.middleware.WhiteNoiseMiddleware\",\n # Keep whitenoise after security and before all else\n \"aws_xray_sdk.ext.django.middleware.XRayMiddleware\", # xray near the top\n \"corsheaders.middleware.CorsMiddleware\", # Keep CORS near the top\n \"django.middleware.common.BrokenLinkEmailsMiddleware\",\n # Keep BrokenLinkEmailsMiddleware near the top\n \"django_permissions_policy.PermissionsPolicyMiddleware\",\n \"django.contrib.sessions.middleware.SessionMiddleware\",\n \"django.middleware.common.CommonMiddleware\",\n \"django.middleware.csrf.CsrfViewMiddleware\",\n \"django.contrib.auth.middleware.AuthenticationMiddleware\",\n \"django.contrib.messages.middleware.MessageMiddleware\",\n \"django.contrib.sites.middleware.CurrentSiteMiddleware\",\n \"django.middleware.clickjacking.XFrameOptionsMiddleware\",\n \"simple_history.middleware.HistoryRequestMiddleware\",\n # subdomain_middleware after CurrentSiteMiddleware\n \"grandchallenge.subdomains.middleware.subdomain_middleware\",\n \"grandchallenge.subdomains.middleware.challenge_subdomain_middleware\",\n \"grandchallenge.subdomains.middleware.subdomain_urlconf_middleware\",\n \"grandchallenge.timezones.middleware.TimezoneMiddleware\",\n \"machina.apps.forum_permission.middleware.ForumPermissionMiddleware\",\n # Flatpage fallback almost last\n \"django.contrib.flatpages.middleware.FlatpageFallbackMiddleware\",\n # Redirects last as they're a last resort\n \"django.contrib.redirects.middleware.RedirectFallbackMiddleware\",\n)\n\n# Python dotted path to the WSGI application used by Django's runserver.\nWSGI_APPLICATION = \"config.wsgi.application\"\n\nDJANGO_APPS = [\n \"django.contrib.auth\",\n \"django.contrib.contenttypes\",\n \"django.contrib.sessions\",\n \"django.contrib.sites\",\n \"django.contrib.messages\",\n \"whitenoise.runserver_nostatic\", # Keep whitenoise above staticfiles\n \"django.contrib.staticfiles\",\n \"django.contrib.humanize\",\n \"django.contrib.admin\",\n \"django.contrib.postgres\",\n \"django.contrib.flatpages\",\n \"django.contrib.sitemaps\",\n \"django.contrib.redirects\",\n]\n\nTHIRD_PARTY_APPS = [\n \"aws_xray_sdk.ext.django\", # tracing\n \"django_celery_results\", # database results backend\n \"django_celery_beat\", # periodic tasks\n \"djcelery_email\", # asynchronous emails\n \"guardian\", # per object permissions\n \"rest_framework\", # provides REST API\n \"knox\", # token auth for REST API\n \"crispy_forms\", # bootstrap forms\n \"django_select2\", # for multiple choice widgets\n \"django_summernote\", # for WYSIWYG page editing\n \"dal\", # for autocompletion of selection fields\n \"dal_select2\", # for autocompletion of selection fields\n \"django_extensions\", # custom extensions\n \"simple_history\", # for object history\n \"corsheaders\", # to allow api communication from subdomains\n \"markdownx\", # for editing 
markdown\n \"stdimage\",\n \"django_filters\",\n \"drf_spectacular\",\n \"allauth\",\n \"allauth.account\",\n \"allauth.socialaccount\",\n \"grandchallenge.profiles.providers.gmail\",\n # Notifications with overrides\n \"actstream\",\n \"grandchallenge.notifications\",\n # django-machina dependencies:\n \"mptt\",\n \"haystack\",\n \"widget_tweaks\",\n # djano-machina apps:\n \"machina\",\n \"machina.apps.forum\",\n \"machina.apps.forum_conversation.forum_attachments\",\n \"machina.apps.forum_conversation.forum_polls\",\n \"machina.apps.forum_feeds\",\n \"machina.apps.forum_moderation\",\n \"machina.apps.forum_search\",\n \"machina.apps.forum_tracking\",\n \"machina.apps.forum_permission\",\n # Overridden apps\n \"grandchallenge.forum_conversation\",\n \"grandchallenge.forum_member\",\n]\n\nLOCAL_APPS = [\n \"grandchallenge.admins\",\n \"grandchallenge.anatomy\",\n \"grandchallenge.api\",\n \"grandchallenge.api_tokens\",\n \"grandchallenge.challenges\",\n \"grandchallenge.core\",\n \"grandchallenge.evaluation\",\n \"grandchallenge.pages\",\n \"grandchallenge.participants\",\n \"grandchallenge.profiles\",\n \"grandchallenge.teams\",\n \"grandchallenge.uploads\",\n \"grandchallenge.cases\",\n \"grandchallenge.algorithms\",\n \"grandchallenge.components\",\n \"grandchallenge.statistics\",\n \"grandchallenge.archives\",\n \"grandchallenge.patients\",\n \"grandchallenge.studies\",\n \"grandchallenge.registrations\",\n \"grandchallenge.annotations\",\n \"grandchallenge.retina_api\",\n \"grandchallenge.workstations\",\n \"grandchallenge.workspaces\",\n \"grandchallenge.reader_studies\",\n \"grandchallenge.workstation_configs\",\n \"grandchallenge.policies\",\n \"grandchallenge.products\",\n \"grandchallenge.serving\",\n \"grandchallenge.blogs\",\n \"grandchallenge.publications\",\n \"grandchallenge.verifications\",\n \"grandchallenge.credits\",\n \"grandchallenge.task_categories\",\n \"grandchallenge.modalities\",\n \"grandchallenge.datatables\",\n \"grandchallenge.organizations\",\n \"grandchallenge.groups\",\n \"grandchallenge.github\",\n \"grandchallenge.codebuild\",\n \"grandchallenge.timezones\",\n \"grandchallenge.documentation\",\n \"grandchallenge.flatpages\",\n \"grandchallenge.emails\",\n \"grandchallenge.hanging_protocols\",\n]\n\nINSTALLED_APPS = DJANGO_APPS + LOCAL_APPS + THIRD_PARTY_APPS\n\nADMIN_URL = f'{os.environ.get(\"DJANGO_ADMIN_URL\", \"django-admin\")}/'\n\nAUTHENTICATION_BACKENDS = [\n \"django.contrib.auth.backends.ModelBackend\",\n \"allauth.account.auth_backends.AuthenticationBackend\",\n \"guardian.backends.ObjectPermissionBackend\",\n]\n\nGOOGLE_ANALYTICS_ID = os.environ.get(\"GOOGLE_ANALYTICS_ID\", \"GA_TRACKING_ID\")\n\n##############################################################################\n#\n# django-allauth\n#\n##############################################################################\n\nACCOUNT_ADAPTER = \"grandchallenge.profiles.adapters.AccountAdapter\"\nACCOUNT_SIGNUP_FORM_CLASS = \"grandchallenge.profiles.forms.SignupForm\"\n\nACCOUNT_EMAIL_CONFIRMATION_COOLDOWN = 30\nACCOUNT_AUTHENTICATION_METHOD = \"username_email\"\nACCOUNT_EMAIL_REQUIRED = True\nACCOUNT_EMAIL_VERIFICATION = \"mandatory\"\nACCOUNT_USERNAME_MIN_LENGTH = 4\nACCOUNT_DEFAULT_HTTP_PROTOCOL = \"https\"\nACCOUNT_LOGIN_ON_EMAIL_CONFIRMATION = True\nACCOUNT_USERNAME_BLACKLIST = USERNAME_DENYLIST\n\nSOCIALACCOUNT_ADAPTER = \"grandchallenge.profiles.adapters.SocialAccountAdapter\"\nSOCIALACCOUNT_AUTO_SIGNUP = False\nSOCIALACCOUNT_STORE_TOKENS = False\nSOCIALACCOUNT_PROVIDERS = {\n 
\"gmail\": {\n \"APP\": {\n \"client_id\": os.environ.get(\"SOCIAL_AUTH_GOOGLE_OAUTH2_KEY\", \"\"),\n \"secret\": os.environ.get(\"SOCIAL_AUTH_GOOGLE_OAUTH2_SECRET\", \"\"),\n }\n }\n}\n\n# Use full paths as view name lookups do not work on subdomains\nLOGIN_URL = \"/accounts/login/\"\nLOGOUT_URL = \"/accounts/logout/\"\nLOGIN_REDIRECT_URL = \"/users/profile/\"\n\n##############################################################################\n#\n# stdimage\n#\n##############################################################################\n\n# Re-render the existing images if these values change\n# https://github.com/codingjoe/django-stdimage#re-rendering-variations\nSTDIMAGE_LOGO_VARIATIONS = {\n # Must be square\n \"full\": (None, None, False),\n \"x20\": (640, 640, True),\n \"x15\": (480, 480, True),\n \"x10\": (320, 320, True),\n \"x02\": (64, 64, True),\n}\nSTDIMAGE_SOCIAL_VARIATIONS = {\n # Values from social sharing\n \"full\": (None, None, False),\n \"x20\": (1280, 640, False),\n \"x15\": (960, 480, False),\n \"x10\": (640, 320, False),\n}\nSTDIMAGE_BANNER_VARIATIONS = {\n # Fixed width, any height\n \"full\": (None, None, False),\n \"x20\": (2220, None, False),\n \"x15\": (1665, None, False),\n \"x10\": (1110, None, False),\n}\n\n##############################################################################\n#\n# actstream\n#\n##############################################################################\n\nACTSTREAM_ENABLE = strtobool(os.environ.get(\"ACTSTREAM_ENABLE\", \"True\"))\nACTSTREAM_SETTINGS = {\n \"MANAGER\": \"actstream.managers.ActionManager\",\n \"FETCH_RELATIONS\": True,\n \"USE_JSONFIELD\": True,\n}\n\n##############################################################################\n#\n# django-summernote\n#\n##############################################################################\n\n# WYSIWYG editing with Summernote\nSUMMERNOTE_THEME = \"bs4\"\nSUMMERNOTE_CONFIG = {\n \"attachment_model\": \"uploads.SummernoteAttachment\",\n \"attachment_require_authentication\": True,\n \"summernote\": {\n \"width\": \"100%\",\n \"toolbar\": [\n [\"style\", [\"style\"]],\n [\n \"font\",\n [\"bold\", \"italic\", \"underline\", \"strikethrough\", \"clear\"],\n ],\n [\"para\", [\"ul\", \"ol\", \"paragraph\"]],\n [\"insert\", [\"link\", \"picture\", \"hr\"]],\n [\"view\", [\"fullscreen\", \"codeview\"]],\n [\"help\", [\"help\"]],\n ],\n },\n}\n\n# Settings for allowed HTML\nBLEACH_ALLOWED_TAGS = [\n \"a\",\n \"abbr\",\n \"acronym\",\n \"b\",\n \"blockquote\",\n \"br\",\n \"code\",\n \"col\",\n \"div\",\n \"em\",\n \"h1\",\n \"h2\",\n \"h3\",\n \"h4\",\n \"h5\",\n \"h6\",\n \"hr\",\n \"i\",\n \"img\",\n \"li\",\n \"ol\",\n \"p\",\n \"pre\",\n \"span\",\n \"strike\",\n \"strong\",\n \"table\",\n \"tbody\",\n \"thead\",\n \"td\",\n \"th\",\n \"tr\",\n \"u\",\n \"ul\",\n \"video\",\n]\nBLEACH_ALLOWED_ATTRIBUTES = {\n \"*\": [\"class\", \"data-toggle\", \"id\", \"style\", \"role\"],\n \"a\": [\"href\", \"title\", \"target\", \"rel\"],\n \"abbr\": [\"title\"],\n \"acronym\": [\"title\"],\n \"img\": [\"height\", \"src\", \"width\"],\n # For bootstrap tables: https://getbootstrap.com/docs/4.3/content/tables/\n \"th\": [\"scope\", \"colspan\"],\n \"td\": [\"colspan\"],\n \"video\": [\"src\", \"loop\", \"controls\", \"poster\"],\n}\nBLEACH_ALLOWED_STYLES = [\"height\", \"margin-left\", \"text-align\", \"width\"]\nBLEACH_ALLOWED_PROTOCOLS = [\"http\", \"https\", \"mailto\"]\nBLEACH_STRIP = strtobool(os.environ.get(\"BLEACH_STRIP\", \"True\"))\n\n# The markdown 
processor\nMARKDOWNX_MEDIA_PATH = datetime.now().strftime(\"i/%Y/%m/%d/\")\nMARKDOWNX_MARKDOWN_EXTENSIONS = [\n \"markdown.extensions.fenced_code\",\n \"markdown.extensions.tables\",\n \"markdown.extensions.sane_lists\",\n \"markdown.extensions.codehilite\",\n \"markdown.extensions.attr_list\",\n BS4Extension(),\n]\nMARKDOWNX_MARKDOWNIFY_FUNCTION = (\n \"grandchallenge.core.templatetags.bleach.md2html\"\n)\nMARKDOWNX_MARKDOWN_EXTENSION_CONFIGS = {}\nMARKDOWNX_IMAGE_MAX_SIZE = {\"size\": (2000, 0), \"quality\": 90}\n\nHAYSTACK_CONNECTIONS = {\n \"default\": {\"ENGINE\": \"haystack.backends.simple_backend.SimpleEngine\"}\n}\n\nFORUMS_CHALLENGE_CATEGORY_NAME = \"Challenges\"\nMACHINA_BASE_TEMPLATE_NAME = \"base.html\"\nMACHINA_PROFILE_AVATARS_ENABLED = False\nMACHINA_FORUM_NAME = \"Grand Challenge Forums\"\n\nAUTH_PASSWORD_VALIDATORS = [\n {\n \"NAME\": \"django.contrib.auth.password_validation.UserAttributeSimilarityValidator\"\n },\n {\"NAME\": \"django.contrib.auth.password_validation.MinimumLengthValidator\"},\n {\n \"NAME\": \"django.contrib.auth.password_validation.CommonPasswordValidator\"\n },\n {\n \"NAME\": \"django.contrib.auth.password_validation.NumericPasswordValidator\"\n },\n]\n\n# A sample logging configuration. More info in configuration can be found at\n# https://docs.djangoproject.com/en/dev/topics/logging/ .\nLOGGING = {\n \"version\": 1,\n \"disable_existing_loggers\": False,\n \"handlers\": {\"console\": {\"class\": \"logging.StreamHandler\"}},\n \"loggers\": {\n \"grandchallenge\": {\n \"level\": os.environ.get(\"GRAND_CHALLENGE_LOG_LEVEL\", \"INFO\"),\n \"handlers\": [\"console\"],\n \"propagate\": True,\n },\n \"django\": {\n \"level\": os.environ.get(\"DJANGO_LOG_LEVEL\", \"INFO\"),\n \"handlers\": [\"console\"],\n \"propagate\": True,\n },\n \"werkzeug\": {\n \"handlers\": [\"console\"],\n \"level\": \"DEBUG\",\n \"propagate\": True,\n },\n # As AWS_XRAY_CONTEXT_MISSING can only be set to LOG_ERROR,\n # silence errors from this sdk as they flood the logs in\n # RedirectFallbackMiddleware\n \"aws_xray_sdk\": {\n \"handlers\": [\"console\"],\n \"level\": \"CRITICAL\",\n \"propagate\": True,\n },\n },\n}\n\n###############################################################################\n# SENTRY\n###############################################################################\n\nSENTRY_DSN = os.environ.get(\"DJANGO_SENTRY_DSN\", \"\")\nSENTRY_ENABLE_JS_REPORTING = strtobool(\n os.environ.get(\"SENTRY_ENABLE_JS_REPORTING\", \"False\")\n)\nWORKSTATION_SENTRY_DSN = os.environ.get(\"WORKSTATION_SENTRY_DSN\", \"\")\n\nif SENTRY_DSN:\n sentry_sdk.init(\n dsn=SENTRY_DSN,\n integrations=[DjangoIntegration(), CeleryIntegration()],\n release=COMMIT_ID,\n traces_sample_rate=float(\n os.environ.get(\"SENTRY_TRACES_SAMPLE_RATE\", \"0.0\")\n ),\n ignore_errors=[PriorStepFailed, ImageImportError],\n )\n ignore_logger(\"django.security.DisallowedHost\")\n ignore_logger(\"aws_xray_sdk\")\n\n###############################################################################\n# XRAY\n###############################################################################\nXRAY_RECORDER = {\n \"AWS_XRAY_CONTEXT_MISSING\": \"LOG_ERROR\",\n \"PLUGINS\": (\"ECSPlugin\",),\n \"AWS_XRAY_TRACING_NAME\": SESSION_COOKIE_DOMAIN.lstrip(\".\"),\n}\n\n###############################################################################\n#\n# django-rest-framework and drf-spectacular\n#\n###############################################################################\n\nREST_FRAMEWORK = {\n 
\"DEFAULT_PERMISSION_CLASSES\": (\"rest_framework.permissions.IsAdminUser\",),\n \"DEFAULT_AUTHENTICATION_CLASSES\": (\n \"knox.auth.TokenAuthentication\",\n \"rest_framework.authentication.SessionAuthentication\",\n ),\n \"DEFAULT_RENDERER_CLASSES\": [\"rest_framework.renderers.JSONRenderer\"],\n \"DEFAULT_PAGINATION_CLASS\": \"grandchallenge.api.pagination.MaxLimit1000OffsetPagination\",\n \"PAGE_SIZE\": 100,\n \"UNAUTHENTICATED_USER\": \"guardian.utils.get_anonymous_user\",\n \"DEFAULT_SCHEMA_CLASS\": \"drf_spectacular.openapi.AutoSchema\",\n}\n\nSPECTACULAR_SETTINGS = {\n \"SCHEMA_PATH_PREFIX\": r\"/api/v[0-9]\",\n \"TITLE\": f\"{SESSION_COOKIE_DOMAIN.lstrip('.')} API\",\n \"DESCRIPTION\": f\"The API for {SESSION_COOKIE_DOMAIN.lstrip('.')}.\",\n \"TOS\": f\"https://{SESSION_COOKIE_DOMAIN.lstrip('.')}/policies/terms-of-service/\",\n \"LICENSE\": {\"name\": \"Apache License 2.0\"},\n \"VERSION\": \"1.0.0\",\n}\n\nREST_KNOX = {\"AUTH_HEADER_PREFIX\": \"Bearer\"}\n\n###############################################################################\n#\n# CORS\n#\n###############################################################################\n\nVALID_SUBDOMAIN_REGEX = r\"[A-Za-z0-9](?:[A-Za-z0-9\\-]{0,61}[A-Za-z0-9])?\"\nCORS_ORIGIN_REGEX_WHITELIST = [\n rf\"^https:\\/\\/{VALID_SUBDOMAIN_REGEX}{re.escape(SESSION_COOKIE_DOMAIN)}$\",\n]\n# SESSION_COOKIE_SAMESITE should be set to \"lax\" so won't send credentials\n# across domains, but this will allow workstations to access the api\nCORS_ALLOW_CREDENTIALS = True\n\n###############################################################################\n#\n# celery\n#\n###############################################################################\n\nCELERY_TASK_DECORATOR_KWARGS = {\n \"acks-late-2xlarge\": {\n # For idempotent tasks that take a long time (<7200s)\n # or require a large amount of memory\n \"acks_late\": True,\n \"reject_on_worker_lost\": True,\n \"queue\": \"acks-late-2xlarge\",\n },\n \"acks-late-micro-short\": {\n # For idempotent tasks that take a short time (<300s)\n # and do not require a large amount of memory\n \"acks_late\": True,\n \"reject_on_worker_lost\": True,\n \"queue\": \"acks-late-micro-short\",\n },\n}\n\nCELERY_RESULT_BACKEND = os.environ.get(\"CELERY_RESULT_BACKEND\", \"django-db\")\nCELERY_RESULT_PERSISTENT = True\nCELERY_RESULT_EXPIRES = timedelta(days=7)\nCELERY_TASK_ACKS_LATE = strtobool(\n os.environ.get(\"CELERY_TASK_ACKS_LATE\", \"False\")\n)\nCELERY_WORKER_PREFETCH_MULTIPLIER = int(\n os.environ.get(\"CELERY_WORKER_PREFETCH_MULTIPLIER\", \"1\")\n)\nCELERY_TASK_SOFT_TIME_LIMIT = int(\n os.environ.get(\"CELERY_TASK_SOFT_TIME_LIMIT\", \"7200\")\n)\nCELERY_TASK_TIME_LIMIT = int(os.environ.get(\"CELERY_TASK_TIME_LIMIT\", \"7260\"))\nCELERY_BROKER_TRANSPORT_OPTIONS = {\n \"visibility_timeout\": int(1.1 * CELERY_TASK_TIME_LIMIT)\n}\nCELERY_BROKER_CONNECTION_MAX_RETRIES = 0\n\nif os.environ.get(\"BROKER_TYPE\", \"\").lower() == \"sqs\":\n CELERY_BROKER_URL = \"sqs://\"\n\n CELERY_WORKER_ENABLE_REMOTE_CONTROL = False\n CELERY_BROKER_USE_SSL = True\n\n CELERY_BROKER_TRANSPORT_OPTIONS.update(\n {\n \"queue_name_prefix\": os.environ.get(\n \"CELERY_BROKER_QUEUE_NAME_PREFIX\", \"gclocalhost-\"\n ),\n \"region\": os.environ.get(\n \"CELERY_BROKER_REGION\", AWS_DEFAULT_REGION\n ),\n \"polling_interval\": int(\n os.environ.get(\"CELERY_BROKER_POLLING_INTERVAL\", \"1\")\n ),\n }\n )\nelse:\n CELERY_BROKER_URL = os.environ.get(\"BROKER_URL\", f\"{REDIS_ENDPOINT}/1\")\n\n# Keep results of sent emails\nCELERY_EMAIL_CHUNK_SIZE = 
1\nCELERY_EMAIL_TASK_CONFIG = {\"ignore_result\": False}\n\nCOMPONENTS_DEFAULT_BACKEND = os.environ.get(\n \"COMPONENTS_DEFAULT_BACKEND\",\n \"grandchallenge.components.backends.amazon_ecs.AmazonECSExecutor\",\n)\nCOMPONENTS_REGISTRY_URL = os.environ.get(\n \"COMPONENTS_REGISTRY_URL\", \"registry:5000\"\n)\nCOMPONENTS_REGISTRY_PREFIX = os.environ.get(\n \"COMPONENTS_REGISTRY_PREFIX\", SESSION_COOKIE_DOMAIN.lstrip(\".\")\n)\nCOMPONENTS_REGISTRY_INSECURE = strtobool(\n os.environ.get(\"COMPONENTS_REGISTRY_INSECURE\", \"False\")\n)\nCOMPONENTS_SHIM_IMAGES = strtobool(\n os.environ.get(\"COMPONENTS_SHIM_IMAGES\", \"True\")\n)\nCOMPONENTS_CREATE_SAGEMAKER_MODEL = strtobool(\n os.environ.get(\"COMPONENTS_CREATE_SAGEMAKER_MODEL\", \"False\")\n)\nCOMPONENTS_INPUT_BUCKET_NAME = os.environ.get(\n \"COMPONENTS_INPUT_BUCKET_NAME\", \"grand-challenge-components-inputs\"\n)\nCOMPONENTS_OUTPUT_BUCKET_NAME = os.environ.get(\n \"COMPONENTS_OUTPUT_BUCKET_NAME\", \"grand-challenge-components-outputs\"\n)\nCOMPONENTS_MAXIMUM_IMAGE_SIZE = 10 * GIGABYTE\nCOMPONENTS_AMAZON_EFS_BLOCK_SIZE = 16 * MEGABYTE\nCOMPONENTS_AMAZON_EFS_BALANCE_TARGET_BYTES = int(\n os.environ.get(\n \"COMPONENTS_AMAZON_EFS_BALANCE_TARGET_BYTES\", 2.1 * TERABYTE\n )\n)\nCOMPONENTS_AMAZON_EFS_MAX_FILE_SIZE = int(\n os.environ.get(\"COMPONENTS_AMAZON_EFS_MAX_FILE_SIZE\", 100 * GIGABYTE)\n)\n# Minimum of 6 as there is no payback below this\nCOMPONENTS_AMAZON_EFS_TARGET_HOURS = max(\n int(os.environ.get(\"COMPONENTS_AMAZON_EFS_TARGET_HOURS\", 24)), 6\n)\nCOMPONENTS_AMAZON_EFS_FILE_SYSTEM_ID = os.environ.get(\n \"COMPONENTS_AMAZON_EFS_FILE_SYSTEM_ID\"\n)\nCOMPONENTS_AMAZON_ECR_REGION = os.environ.get(\"COMPONENTS_AMAZON_ECR_REGION\")\nCOMPONENTS_AMAZON_ECS_REGION = os.environ.get(\"COMPONENTS_AMAZON_ECS_REGION\")\nCOMPONENTS_AMAZON_ECS_NFS_MOUNT_POINT = os.environ.get(\n \"COMPONENTS_AMAZON_ECS_NFS_MOUNT_POINT\", \"/mnt/aws-batch-nfs/\"\n)\nCOMPONENTS_AMAZON_ECS_LOG_GROUP_NAME = os.environ.get(\n \"COMPONENTS_AMAZON_ECS_LOG_GROUP_NAME\", \"\"\n)\nCOMPONENTS_AMAZON_ECS_LOGS_REGION = os.environ.get(\n \"COMPONENTS_AMAZON_ECS_LOGS_REGION\"\n)\nCOMPONENTS_AMAZON_ECS_CPU_CLUSTER_ARN = os.environ.get(\n \"COMPONENTS_AMAZON_ECS_CPU_CLUSTER_ARN\", \"\"\n)\nCOMPONENTS_AMAZON_ECS_GPU_CLUSTER_ARN = os.environ.get(\n \"COMPONENTS_AMAZON_ECS_GPU_CLUSTER_ARN\", \"\"\n)\nCOMPONENTS_AMAZON_ECS_TASK_ROLE_ARN = os.environ.get(\n \"COMPONENTS_AMAZON_ECS_TASK_ROLE_ARN\", \"\"\n)\nCOMPONENTS_AMAZON_SAGEMAKER_EXECUTION_ROLE_ARN = os.environ.get(\n \"COMPONENTS_AMAZON_SAGEMAKER_EXECUTION_ROLE_ARN\", \"\"\n)\nCOMPONENTS_AMAZON_SAGEMAKER_SECURITY_GROUP_ID = os.environ.get(\n \"COMPONENTS_AMAZON_SAGEMAKER_SECURITY_GROUP_ID\", \"\"\n)\nCOMPONENTS_AMAZON_SAGEMAKER_SUBNETS = os.environ.get(\n \"COMPONENTS_AMAZON_SAGEMAKER_SUBNETS\", \"\"\n).split(\",\")\nCOMPONENTS_DOCKER_NETWORK_NAME = os.environ.get(\n \"COMPONENTS_DOCKER_NETWORK_NAME\", \"grand-challengeorg_components\"\n)\nCOMPONENTS_DOCKER_TASK_SET_AWS_ENV = strtobool(\n os.environ.get(\"COMPONENTS_DOCKER_TASK_SET_AWS_ENV\", \"True\")\n)\nCOMPONENTS_DOCKER_TASK_AWS_ACCESS_KEY_ID = os.environ.get(\n \"COMPONENTS_DOCKER_TASK_AWS_ACCESS_KEY_ID\", \"componentstask\"\n)\nCOMPONENTS_DOCKER_TASK_AWS_SECRET_ACCESS_KEY = os.environ.get(\n \"COMPONENTS_DOCKER_TASK_AWS_SECRET_ACCESS_KEY\", \"componentstask123\"\n)\nCOMPONENTS_PUBLISH_PORTS = strtobool(\n os.environ.get(\"COMPONENTS_PUBLISH_PORTS\", \"False\")\n)\nCOMPONENTS_PORT_ADDRESS = os.environ.get(\"COMPONENTS_PORT_ADDRESS\", \"0.0.0.0\")\n\nCOMPONENTS_MEMORY_LIMIT = 
int(os.environ.get(\"COMPONENTS_MEMORY_LIMIT\", \"4\"))\nCOMPONENTS_SHARED_MEMORY_SIZE = int(\n os.environ.get(\"COMPONENTS_SHARED_MEMORY_SIZE\", \"64\")\n)\nCOMPONENTS_CPU_QUOTA = int(os.environ.get(\"COMPONENTS_CPU_QUOTA\", \"100000\"))\nCOMPONENTS_CPU_PERIOD = int(os.environ.get(\"COMPONENTS_CPU_PERIOD\", \"100000\"))\nCOMPONENTS_PIDS_LIMIT = int(os.environ.get(\"COMPONENTS_PIDS_LIMIT\", \"128\"))\nCOMPONENTS_CPU_SHARES = int(\n os.environ.get(\"COMPONENTS_CPU_SHARES\", \"1024\") # Default weight\n)\nCOMPONENTS_CPUSET_CPUS = str(os.environ.get(\"COMPONENTS_CPUSET_CPUS\", \"\"))\nCOMPONENTS_DOCKER_RUNTIME = os.environ.get(\"COMPONENTS_DOCKER_RUNTIME\", None)\nCOMPONENTS_NVIDIA_VISIBLE_DEVICES = os.environ.get(\n \"COMPONENTS_NVIDIA_VISIBLE_DEVICES\", \"void\"\n)\n\n# Set which template pack to use for forms\nCRISPY_TEMPLATE_PACK = \"bootstrap4\"\n\n# When using bootstrap error messages need to be renamed to danger\nMESSAGE_TAGS = {messages.ERROR: \"danger\"}\n\n# The name of the group whose members will be able to create reader studies\nREADER_STUDY_CREATORS_GROUP_NAME = \"reader_study_creators\"\n\n###############################################################################\n#\n# workspaces\n#\n###############################################################################\n\nWORKBENCH_SECRET_KEY = os.environ.get(\"WORKBENCH_SECRET_KEY\")\nWORKBENCH_API_URL = os.environ.get(\"WORKBENCH_API_URL\")\nWORKBENCH_ADMIN_USERNAME = os.environ.get(\"WORKBENCH_ADMIN_USERNAME\", \"demo\")\n\n###############################################################################\n#\n# workstations\n#\n###############################################################################\n\n# The workstation that is accessible by all authorised users\nDEFAULT_WORKSTATION_SLUG = os.environ.get(\n \"DEFAULT_WORKSTATION_SLUG\", \"cirrus-core\"\n)\nWORKSTATIONS_DNS_RESOLVER = os.environ.get(\n \"WORKSTATIONS_DNS_RESOLVER\", \"1.1.1.1\"\n)\nWORKSTATIONS_BASE_IMAGE_QUERY_PARAM = \"image\"\nWORKSTATIONS_OVERLAY_QUERY_PARAM = \"overlay\"\nWORKSTATIONS_READY_STUDY_QUERY_PARAM = \"readerStudy\"\nWORKSTATIONS_ALGORITHM_JOB_QUERY_PARAM = \"algorithmJob\"\nWORKSTATIONS_ARCHIVE_ITEM_QUERY_PARAM = \"archiveItem\"\nWORKSTATIONS_CONFIG_QUERY_PARAM = \"config\"\nWORKSTATIONS_USER_QUERY_PARAM = \"viewAsUser\"\nWORKSTATIONS_DISPLAY_SET_QUERY_PARAM = \"displaySet\"\n# The name of the network that the workstations will be attached to\nWORKSTATIONS_NETWORK_NAME = os.environ.get(\n \"WORKSTATIONS_NETWORK_NAME\", \"grand-challengeorg_workstations\"\n)\n# The total limit on the number of sessions\nWORKSTATIONS_MAXIMUM_SESSIONS = int(\n os.environ.get(\"WORKSTATIONS_MAXIMUM_SESSIONS\", \"10\")\n)\n# The name of the group whose members will be able to create workstations\nWORKSTATIONS_CREATORS_GROUP_NAME = \"workstation_creators\"\nWORKSTATIONS_SESSION_DURATION_LIMIT = int(\n os.environ.get(\"WORKSTATIONS_SESSION_DURATION_LIMIT\", \"10000\")\n)\n# Which regions are available for workstations to run in\nWORKSTATIONS_ACTIVE_REGIONS = os.environ.get(\n \"WORKSTATIONS_ACTIVE_REGIONS\", AWS_DEFAULT_REGION\n).split(\",\")\nWORKSTATIONS_RENDERING_SUBDOMAINS = {\n # Possible AWS regions\n *[\n \"-\".join(z)\n for z in product(\n [\"us\", \"af\", \"ap\", \"ca\", \"cn\", \"eu\", \"me\", \"sa\"],\n [\n \"east\",\n \"west\",\n \"south\",\n \"north\",\n \"central\",\n \"northeast\",\n \"southeast\",\n \"northwest\",\n \"southwest\",\n ],\n [\"1\", \"2\", \"3\"],\n )\n ],\n # User defined regions\n \"eu-nl-1\",\n \"eu-nl-2\",\n}\n# Number of minutes 
grace period before the container is stopped\nWORKSTATIONS_GRACE_MINUTES = 5\n\nCELERY_BEAT_SCHEDULE = {\n \"ping_google\": {\n \"task\": \"grandchallenge.core.tasks.ping_google\",\n \"schedule\": timedelta(days=1),\n },\n \"update_publication_metadata\": {\n \"task\": \"grandchallenge.publications.tasks.update_publication_metadata\",\n \"schedule\": timedelta(days=1),\n },\n \"send_unread_notification_emails\": {\n \"task\": \"grandchallenge.notifications.tasks.send_unread_notification_emails\",\n \"schedule\": timedelta(days=1),\n },\n \"delete_old_user_uploads\": {\n \"task\": \"grandchallenge.uploads.tasks.delete_old_user_uploads\",\n \"schedule\": timedelta(hours=1),\n },\n \"clear_sessions\": {\n \"task\": \"grandchallenge.core.tasks.clear_sessions\",\n \"schedule\": timedelta(days=1),\n },\n \"update_challenge_results_cache\": {\n \"task\": \"grandchallenge.challenges.tasks.update_challenge_results_cache\",\n \"schedule\": timedelta(minutes=5),\n },\n \"update_associated_challenges\": {\n \"task\": \"grandchallenge.algorithms.tasks.update_associated_challenges\",\n \"schedule\": timedelta(days=1),\n },\n \"update_components_filesystem\": {\n \"task\": \"grandchallenge.components.tasks.update_filesystem\",\n \"schedule\": timedelta(hours=COMPONENTS_AMAZON_EFS_TARGET_HOURS),\n },\n **{\n f\"stop_expired_services_{region}\": {\n \"task\": \"grandchallenge.components.tasks.stop_expired_services\",\n \"kwargs\": {\n \"app_label\": \"workstations\",\n \"model_name\": \"session\",\n \"region\": region,\n },\n \"options\": {\"queue\": f\"workstations-{region}\"},\n \"schedule\": timedelta(minutes=WORKSTATIONS_GRACE_MINUTES),\n }\n for region in WORKSTATIONS_ACTIVE_REGIONS\n },\n}\n\nif strtobool(os.environ.get(\"PUSH_CLOUDWATCH_METRICS\", \"False\")):\n CELERY_BEAT_SCHEDULE[\"push_metrics_to_cloudwatch\"] = {\n \"task\": \"grandchallenge.core.tasks.put_cloudwatch_metrics\",\n \"schedule\": timedelta(seconds=15),\n }\n\n# The name of the group whose members will be able to create algorithms\nALGORITHMS_CREATORS_GROUP_NAME = \"algorithm_creators\"\n# Number of jobs that can be scheduled in one task\nALGORITHMS_JOB_BATCH_LIMIT = 256\n# Maximum and minimum values the user can set for algorithm requirements\n# Current limits of 4g/30g are restrictions from the instance types used on ECS\nALGORITHMS_MIN_MEMORY_GB = 4\nALGORITHMS_MAX_MEMORY_GB = 30\n\n# Disallow some challenge names due to subdomain or media folder clashes\nDISALLOWED_CHALLENGE_NAMES = {\n \"m\",\n IMAGE_FILES_SUBDIRECTORY,\n \"logos\",\n \"banners\",\n \"mugshots\",\n \"docker\",\n EVALUATION_FILES_SUBDIRECTORY,\n \"evaluation-supplementary\",\n \"favicon\",\n \"i\",\n \"cache\",\n \"challenge\",\n \"challenges\",\n *USERNAME_DENYLIST,\n *WORKSTATIONS_RENDERING_SUBDOMAINS,\n}\n\n# Disallow registration from certain domains\nDISALLOWED_EMAIL_DOMAINS = {\n \"qq.com\",\n \"aol.com\",\n \"usa.com\",\n \"yahoo.com\",\n \"yahoo.co.uk\",\n \"yahoo.it\",\n \"seznam.cz\",\n \"web.de\",\n \"gmx.de\",\n \"mail.com\",\n \"mail.ru\",\n \"verizon.net\",\n \"comcast.net\",\n \"nudt.edu.cn\",\n \"ihpc.a-star.edu.sg\",\n \"raysightmed.com\",\n \"csu.edu.cn\",\n \"cerist.dz\",\n \"ciitvehari.edu.pk\",\n \"mail.dcu.ie\",\n *blocklist,\n}\n\n# GitHub App\nGITHUB_APP_INSTALL_URL = os.environ.get(\"GITHUB_APP_INSTALL_URL\", \"\")\nGITHUB_APP_ID = os.environ.get(\"GITHUB_APP_ID\", \"\")\nGITHUB_CLIENT_ID = os.environ.get(\"GITHUB_CLIENT_ID\", \"\")\nGITHUB_CLIENT_SECRET = os.environ.get(\"GITHUB_CLIENT_SECRET\", \"\")\nGITHUB_PRIVATE_KEY_BASE64 = 
os.environ.get(\"GITHUB_PRIVATE_KEY_BASE64\", \"\")\nGITHUB_WEBHOOK_SECRET = os.environ.get(\"GITHUB_WEBHOOK_SECRET\", \"\")\n\nCODEBUILD_PROJECT_NAME = os.environ.get(\"CODEBUILD_PROJECT_NAME\", \"\")\n\n# License keys from https://github.com/licensee/licensee/tree/v9.15.1/vendor/choosealicense.com/_licenses\nOPEN_SOURCE_LICENSES = frozenset(\n (\n \"agpl-3.0\",\n \"apache-2.0\",\n \"bsd-2-clause\",\n \"bsd-3-clause\",\n \"bsd-3-clause-clear\",\n \"bsd-4-clause\",\n \"bsl-1.0\",\n \"gpl-3.0\",\n \"lgpl-3.0\",\n \"mit\",\n \"mpl-2.0\",\n \"unlicense\",\n )\n)\n\n# Set the post processors to use for the image imports\nCASES_POST_PROCESSORS = os.environ.get(\n \"CASES_POST_PROCESSORS\", \"panimg.post_processors.tiff_to_dzi\"\n).split(\",\")\n\n# Maximum file size in bytes to be opened by SimpleITK.ReadImage in cases_tests.utils.get_sitk_image()\nMAX_SITK_FILE_SIZE = 256 * MEGABYTE\n\n# The maximum size of all the files in an upload session in bytes\nUPLOAD_SESSION_MAX_BYTES = 10 * GIGABYTE\n\n# Some forms have a lot of data, such as a reader study update view\n# that can contain reports about the medical images\nDATA_UPLOAD_MAX_MEMORY_SIZE = 16 * MEGABYTE\n\n# Some forms have a lot of fields, such as uploads of images\n# with many slices\nDATA_UPLOAD_MAX_NUMBER_FIELDS = int(\n os.environ.get(\"DATA_UPLOAD_MAX_NUMBER_FIELDS\", \"2048\")\n)\n\n# Retina specific settings\nRETINA_GRADERS_GROUP_NAME = \"retina_graders\"\nRETINA_ADMINS_GROUP_NAME = \"retina_admins\"\n\nENABLE_DEBUG_TOOLBAR = False\n\nif DEBUG:\n # Allow localhost in development\n CORS_ORIGIN_REGEX_WHITELIST += [r\"^http://localhost:8888$\"]\n\n LOGGING[\"loggers\"][\"grandchallenge\"][\"level\"] = \"DEBUG\"\n\n PUBLIC_S3_STORAGE_KWARGS.update({\"secure_urls\": False})\n DEMO_ALGORITHM_IMAGE_PATH = os.path.join(SITE_ROOT, \"algorithm.tar.gz\")\n DEMO_ALGORITHM_SHA256 = \"sha256:5e81cef3738b7dbffc12c101990eb3b97f17642c09a2e0b64d5b3d4dd144e79b\"\n\n if ENABLE_DEBUG_TOOLBAR:\n INSTALLED_APPS += (\"debug_toolbar\",)\n\n MIDDLEWARE = (\n \"debug_toolbar.middleware.DebugToolbarMiddleware\",\n *MIDDLEWARE,\n )\n\n DEBUG_TOOLBAR_CONFIG = {\n \"SHOW_TOOLBAR_CALLBACK\": \"config.toolbar_callback\",\n \"RESULTS_CACHE_SIZE\": 100,\n }\n", "path": "app/config/settings.py" } ]
[ { "content": "import os\nimport re\nfrom datetime import datetime, timedelta\nfrom itertools import product\n\nimport sentry_sdk\nfrom disposable_email_domains import blocklist\nfrom django.contrib.messages import constants as messages\nfrom django.urls import reverse\nfrom machina import MACHINA_MAIN_STATIC_DIR, MACHINA_MAIN_TEMPLATE_DIR\nfrom sentry_sdk.integrations.celery import CeleryIntegration\nfrom sentry_sdk.integrations.django import DjangoIntegration\nfrom sentry_sdk.integrations.logging import ignore_logger\n\nfrom config.denylist import USERNAME_DENYLIST\nfrom grandchallenge.algorithms.exceptions import ImageImportError\nfrom grandchallenge.components.exceptions import PriorStepFailed\nfrom grandchallenge.core.utils import strtobool\nfrom grandchallenge.core.utils.markdown import BS4Extension\n\nMEGABYTE = 1024 * 1024\nGIGABYTE = 1024 * MEGABYTE\nTERABYTE = 1024 * GIGABYTE\n\nDEBUG = strtobool(os.environ.get(\"DEBUG\", \"False\"))\n\nCOMMIT_ID = os.environ.get(\"COMMIT_ID\", \"unknown\")\n\nADMINS = (\n # ('Your Name', '[email protected]'),\n)\n\n# Who gets the 404 notifications?\nmanager_email = os.environ.get(\"MANAGER_EMAIL\", None)\nif manager_email:\n MANAGERS = [(\"Manager\", manager_email)]\n\nIGNORABLE_404_URLS = [\n re.compile(r\".*\\.(php|cgi|asp).*\"),\n re.compile(r\"^/phpmyadmin.*\"),\n re.compile(r\"^/gen204.*\"),\n re.compile(r\"^/wp-content.*\"),\n re.compile(r\"^/wp.*\"),\n re.compile(r\"^/wordpress/.*\"),\n re.compile(r\"^/old/.*\", flags=re.IGNORECASE),\n re.compile(r\".*/trackback.*\"),\n re.compile(r\"^/site/.*\"),\n re.compile(r\"^/media/cache/.*\"),\n re.compile(r\"^/favicon.ico$\"),\n]\n\n# Used as starting points for various other paths. realpath(__file__) starts in\n# the config dir. We need to go one dir higher so path.join(\"..\")\nSITE_ROOT = os.path.dirname(os.path.dirname(os.path.realpath(__file__)))\n\nDATABASES = {\n \"default\": {\n \"ENGINE\": \"django.db.backends.postgresql_psycopg2\",\n \"NAME\": os.environ.get(\"POSTGRES_DB\", \"grandchallenge\"),\n \"USER\": os.environ.get(\"POSTGRES_USER\", \"grandchallenge\"),\n \"PASSWORD\": os.environ.get(\"POSTGRES_PASSWORD\", \"secretpassword\"),\n \"HOST\": os.environ.get(\"POSTGRES_HOST\", \"postgres\"),\n \"PORT\": os.environ.get(\"POSTGRES_PORT\", \"\"),\n \"OPTIONS\": {\n \"sslmode\": os.environ.get(\"POSTGRES_SSL_MODE\", \"prefer\"),\n \"sslrootcert\": os.path.join(\n SITE_ROOT, \"config\", \"certs\", \"rds-ca-2019-root.pem\"\n ),\n },\n \"ATOMIC_REQUESTS\": strtobool(\n os.environ.get(\"ATOMIC_REQUESTS\", \"True\")\n ),\n }\n}\n\nEMAIL_BACKEND = \"djcelery_email.backends.CeleryEmailBackend\"\nCELERY_EMAIL_BACKEND = os.environ.get(\n \"EMAIL_BACKEND\", \"django.core.mail.backends.console.EmailBackend\"\n)\nDEFAULT_FROM_EMAIL = os.environ.get(\n \"DEFAULT_FROM_EMAIL\", \"grandchallenge@localhost\"\n)\nSERVER_EMAIL = os.environ.get(\"SERVER_EMAIL\", \"root@localhost\")\n\nANONYMOUS_USER_NAME = \"AnonymousUser\"\nREGISTERED_USERS_GROUP_NAME = \"__registered_users_group__\"\nREGISTERED_AND_ANON_USERS_GROUP_NAME = \"__registered_and_anonymous_users__\"\n\n# Local time zone for this installation. 
Choices can be found here:\n# http://en.wikipedia.org/wiki/List_of_tz_zones_by_name\n# although not all choices may be available on all operating systems.\n# On Unix systems, a value of None will cause Django to use the same\n# timezone as the operating system.\n# If running in a Windows environment this must be set to the same as your\n# system time zone.\nTIME_ZONE = \"UTC\"\n\n# Language code for this installation. All choices can be found here:\n# http://www.i18nguy.com/unicode/language-identifiers.html\nLANGUAGE_CODE = \"en-us\"\n\nSITE_ID = int(os.environ.get(\"SITE_ID\", \"1\"))\n\n# If you set this to False, Django will make some optimizations so as not\n# to load the internationalization machinery.\nUSE_I18N = True\n\n# If you set this to False, Django will not format dates, numbers and\n# calendars according to the current locale.\nUSE_L10N = True\n\n# If you set this to False, Django will not use timezone-aware datetimes.\nUSE_TZ = True\n\n# Use AutoField for backwards compatibility\nDEFAULT_AUTO_FIELD = \"django.db.models.AutoField\"\n\n# General forum\nDOCUMENTATION_HELP_FORUM_PK = os.environ.get(\n \"DOCUMENTATION_HELP_FORUM_PK\", \"1\"\n)\nDOCUMENTATION_HELP_FORUM_SLUG = os.environ.get(\n \"DOCUMENTATION_HELP_FORUM_SLUG\", \"general\"\n)\n\n# About Flatpage\nFLATPAGE_ABOUT_URL = os.environ.get(\"FLATPAGE_ABOUT_URL\", \"/about/\")\n\n# Costs (in US dollar cents)\nCHALLENGES_STORAGE_COST_CENTS_PER_TB_PER_YEAR = os.environ.get(\n \"CHALLENGES_STORAGE_COST_CENTS_PER_TB_PER_YEAR\", 4000\n)\nCHALLENGES_COMPUTE_COST_CENTS_PER_HOUR = os.environ.get(\n \"CHALLENGES_COMPUTE_COST_CENTS_PER_HOUR\", 100\n)\n\n##############################################################################\n#\n# Storage\n#\n##############################################################################\nDEFAULT_FILE_STORAGE = \"grandchallenge.core.storage.PublicS3Storage\"\n\n# Subdirectories on root for various files\nIMAGE_FILES_SUBDIRECTORY = \"images\"\nEVALUATION_FILES_SUBDIRECTORY = \"evaluation\"\nCOMPONENTS_FILES_SUBDIRECTORY = \"components\"\n\nAWS_S3_FILE_OVERWRITE = False\n# Note: deprecated in django storages 2.0\nAWS_BUCKET_ACL = \"private\"\nAWS_DEFAULT_ACL = \"private\"\nAWS_S3_MAX_MEMORY_SIZE = 1_048_576 # 100 MB\nAWS_S3_ENDPOINT_URL = os.environ.get(\"AWS_S3_ENDPOINT_URL\")\nAWS_DEFAULT_REGION = os.environ.get(\"AWS_DEFAULT_REGION\", \"eu-central-1\")\nAWS_S3_REGION_NAME = os.environ.get(\"AWS_S3_REGION_NAME\")\nAWS_S3_OBJECT_PARAMETERS = {\n # Note that these do not affect the Uploads bucket, which is configured separately.\n # See https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/s3.html#S3.Client.put_object\n \"StorageClass\": os.environ.get(\"AWS_S3_DEFAULT_STORAGE_CLASS\", \"STANDARD\")\n}\nAWS_CLOUDWATCH_REGION_NAME = os.environ.get(\"AWS_CLOUDWATCH_REGION_NAME\")\nAWS_CODEBUILD_REGION_NAME = os.environ.get(\"AWS_CODEBUILD_REGION_NAME\")\nAWS_SES_REGION_ENDPOINT = f'email.{os.environ.get(\"AWS_SES_REGION_NAME\", AWS_DEFAULT_REGION)}.amazonaws.com'\n\n# This is for storing files that should not be served to the public\nPRIVATE_S3_STORAGE_KWARGS = {\n \"bucket_name\": os.environ.get(\n \"PRIVATE_S3_STORAGE_BUCKET_NAME\", \"grand-challenge-private\"\n )\n}\n\nPROTECTED_S3_STORAGE_KWARGS = {\n \"bucket_name\": os.environ.get(\n \"PROTECTED_S3_STORAGE_BUCKET_NAME\", \"grand-challenge-protected\"\n ),\n # This is the domain where people will be able to go to download data\n # from this bucket. 
Usually we would use reverse to find this out,\n # but this needs to be defined before the database is populated\n \"custom_domain\": os.environ.get(\n \"PROTECTED_S3_CUSTOM_DOMAIN\", \"gc.localhost/media\"\n ),\n}\nPROTECTED_S3_STORAGE_USE_CLOUDFRONT = strtobool(\n os.environ.get(\"PROTECTED_S3_STORAGE_USE_CLOUDFRONT\", \"False\")\n)\nPROTECTED_S3_STORAGE_CLOUDFRONT_DOMAIN = os.environ.get(\n \"PROTECTED_S3_STORAGE_CLOUDFRONT_DOMAIN_NAME\", \"\"\n)\n\nPUBLIC_S3_STORAGE_KWARGS = {\n \"bucket_name\": os.environ.get(\n \"PUBLIC_S3_STORAGE_BUCKET_NAME\", \"grand-challenge-public\"\n ),\n # Public bucket so do not use querystring_auth\n \"querystring_auth\": False,\n \"default_acl\": \"public-read\",\n}\n\nUPLOADS_S3_BUCKET_NAME = os.environ.get(\n \"UPLOADS_S3_BUCKET_NAME\", \"grand-challenge-uploads\"\n)\nUPLOADS_S3_USE_ACCELERATE_ENDPOINT = strtobool(\n os.environ.get(\"UPLOADS_S3_USE_ACCELERATE_ENDPOINT\", \"False\")\n)\nUPLOADS_MAX_SIZE_UNVERIFIED = int(\n os.environ.get(\"UPLOADS_MAX_SIZE_UNVERIFIED\", 2 * GIGABYTE)\n)\nUPLOADS_MAX_SIZE_VERIFIED = int(\n os.environ.get(\"UPLOADS_MAX_SIZE_VERIFIED\", 128 * GIGABYTE)\n)\nUPLOADS_TIMEOUT_DAYS = int(os.environ.get(\"UPLOADS_TIMEOUT_DAYS\", 1))\n\nVERIFICATIONS_REVIEW_PERIOD_DAYS = int(\n os.environ.get(\"VERIFICATIONS_REVIEW_PERIOD_DAYS\", 10)\n)\n\n# Key pair used for signing CloudFront URLS, only used if\n# PROTECTED_S3_STORAGE_USE_CLOUDFRONT is True\nCLOUDFRONT_KEY_PAIR_ID = os.environ.get(\"CLOUDFRONT_KEY_PAIR_ID\", \"\")\nCLOUDFRONT_PRIVATE_KEY_BASE64 = os.environ.get(\n \"CLOUDFRONT_PRIVATE_KEY_BASE64\", \"\"\n)\nCLOUDFRONT_URL_EXPIRY_SECONDS = int(\n os.environ.get(\"CLOUDFRONT_URL_EXPIRY_SECONDS\", \"300\") # 5 mins\n)\n\n##############################################################################\n#\n# Caching\n#\n##############################################################################\nREDIS_ENDPOINT = os.environ.get(\"REDIS_ENDPOINT\", \"redis://redis:6379\")\n\nCACHES = {\n \"default\": {\n \"BACKEND\": \"django_redis.cache.RedisCache\",\n \"LOCATION\": f\"{REDIS_ENDPOINT}/0\",\n \"OPTIONS\": {\"CLIENT_CLASS\": \"django_redis.client.DefaultClient\"},\n },\n \"machina_attachments\": {\n \"BACKEND\": \"django.core.cache.backends.filebased.FileBasedCache\",\n \"LOCATION\": \"/tmp\",\n },\n}\n\nROOT_URLCONF = \"config.urls.root\"\nCHALLENGE_SUBDOMAIN_URL_CONF = \"config.urls.challenge_subdomain\"\nRENDERING_SUBDOMAIN_URL_CONF = \"config.urls.rendering_subdomain\"\nDEFAULT_SCHEME = os.environ.get(\"DEFAULT_SCHEME\", \"https\")\n\n# Workaround for https://github.com/ellmetha/django-machina/issues/219\nABSOLUTE_URL_OVERRIDES = {\n \"forum.forum\": lambda o: reverse(\n \"forum:forum\", kwargs={\"slug\": o.slug, \"pk\": o.pk}\n ),\n \"forum_conversation.topic\": lambda o: reverse(\n \"forum_conversation:topic\",\n kwargs={\n \"slug\": o.slug,\n \"pk\": o.pk,\n \"forum_slug\": o.forum.slug,\n \"forum_pk\": o.forum.pk,\n },\n ),\n}\n\nSESSION_COOKIE_DOMAIN = os.environ.get(\n \"SESSION_COOKIE_DOMAIN\", \".gc.localhost\"\n)\n# We're always running behind a proxy so set these to true\nSESSION_COOKIE_SECURE = True\nCSRF_COOKIE_SECURE = True\n# Trust all subdomains for CSRF, used for user uploads. 
Changed the name\n# of the CSRF token as existing ones are already in use.\nCSRF_COOKIE_DOMAIN = SESSION_COOKIE_DOMAIN\nCSRF_COOKIE_NAME = \"_csrftoken\"\nCSRF_TRUSTED_ORIGINS = [SESSION_COOKIE_DOMAIN]\nSECURE_PROXY_SSL_HEADER = (\"HTTP_X_FORWARDED_PROTO\", \"https\")\n\n# Set the allowed hosts to the cookie domain\nALLOWED_HOSTS = [SESSION_COOKIE_DOMAIN, \"web\"]\n\n# Security options\nSECURE_HSTS_SECONDS = int(os.environ.get(\"SECURE_HSTS_SECONDS\", \"0\"))\nSECURE_HSTS_INCLUDE_SUBDOMAINS = strtobool(\n os.environ.get(\"SECURE_HSTS_INCLUDE_SUBDOMAINS\", \"False\")\n)\nSECURE_HSTS_PRELOAD = strtobool(os.environ.get(\"SECURE_HSTS_PRELOAD\", \"True\"))\nSECURE_CONTENT_TYPE_NOSNIFF = strtobool(\n os.environ.get(\"SECURE_CONTENT_TYPE_NOSNIFF\", \"False\")\n)\nSECURE_BROWSER_XSS_FILTER = strtobool(\n os.environ.get(\"SECURE_BROWSER_XSS_FILTER\", \"False\")\n)\nX_FRAME_OPTIONS = os.environ.get(\"X_FRAME_OPTIONS\", \"DENY\")\n# \"strict-origin-when-cross-origin\" required for uploads for cross domain POSTs\nSECURE_REFERRER_POLICY = os.environ.get(\n \"SECURE_REFERRER_POLICY\", \"strict-origin-when-cross-origin\"\n)\n\nPERMISSIONS_POLICY = {\n \"accelerometer\": [],\n \"ambient-light-sensor\": [],\n \"autoplay\": [],\n \"camera\": [],\n \"display-capture\": [],\n \"document-domain\": [],\n \"encrypted-media\": [],\n \"fullscreen\": [\"self\"],\n \"geolocation\": [],\n \"gyroscope\": [],\n \"interest-cohort\": [],\n \"magnetometer\": [],\n \"microphone\": [],\n \"midi\": [],\n \"payment\": [],\n \"usb\": [],\n}\n\nIPWARE_META_PRECEDENCE_ORDER = (\n # Set by nginx\n \"HTTP_X_FORWARDED_FOR\",\n \"HTTP_X_REAL_IP\",\n)\n\n# Absolute path to the directory static files should be collected to.\n# Don't put anything in this directory yourself; store your static files\n# in apps' \"static/\" subdirectories and in STATICFILES_DIRS.\n# Example: \"/home/media/media.lawrence.com/static/\"\nSTATIC_ROOT = \"/static/\"\n\nSTATIC_HOST = os.environ.get(\"DJANGO_STATIC_HOST\", \"\")\nSTATIC_URL = f\"{STATIC_HOST}/static/\"\n\n# List of finder classes that know how to find static files in\n# various locations.\nSTATICFILES_FINDERS = (\n \"django.contrib.staticfiles.finders.FileSystemFinder\",\n \"django.contrib.staticfiles.finders.AppDirectoriesFinder\",\n)\n\n# Vendored static files will be put here\nSTATICFILES_DIRS = [\"/opt/static/\", MACHINA_MAIN_STATIC_DIR]\n\nSTATICFILES_STORAGE = \"whitenoise.storage.CompressedManifestStaticFilesStorage\"\n\n# Make this unique, and don't share it with anybody.\nSECRET_KEY = os.environ.get(\n \"SECRET_KEY\", \"d=%^l=xa02an9jn-$!*hy1)5yox$a-$2(ejt-2smimh=j4%8*b\"\n)\n\nTEMPLATES = [\n {\n \"BACKEND\": \"django.template.backends.django.DjangoTemplates\",\n \"DIRS\": [\n # Override the machina templates, everything else is found with\n # django.template.loaders.app_directories.Loader\n os.path.join(SITE_ROOT, \"grandchallenge/forums/templates/\"),\n MACHINA_MAIN_TEMPLATE_DIR,\n ],\n \"OPTIONS\": {\n \"context_processors\": [\n \"django.contrib.auth.context_processors.auth\",\n \"django.template.context_processors.debug\",\n \"django.template.context_processors.i18n\",\n \"django.template.context_processors.media\",\n \"django.template.context_processors.static\",\n \"django.template.context_processors.tz\",\n \"django.template.context_processors.request\",\n \"django.contrib.messages.context_processors.messages\",\n \"grandchallenge.core.context_processors.challenge\",\n \"grandchallenge.core.context_processors.deployment_info\",\n 
\"grandchallenge.core.context_processors.debug\",\n \"grandchallenge.core.context_processors.sentry_dsn\",\n \"grandchallenge.core.context_processors.footer_links\",\n \"grandchallenge.core.context_processors.help_forum\",\n \"grandchallenge.core.context_processors.about_page\",\n \"grandchallenge.core.context_processors.newsletter_signup\",\n \"grandchallenge.core.context_processors.viewport_names\",\n \"machina.core.context_processors.metadata\",\n ],\n \"loaders\": [\n \"django.template.loaders.filesystem.Loader\",\n \"django.template.loaders.app_directories.Loader\",\n ],\n },\n }\n]\n\nMIDDLEWARE = (\n \"django.middleware.security.SecurityMiddleware\", # Keep security at top\n \"whitenoise.middleware.WhiteNoiseMiddleware\",\n # Keep whitenoise after security and before all else\n \"aws_xray_sdk.ext.django.middleware.XRayMiddleware\", # xray near the top\n \"corsheaders.middleware.CorsMiddleware\", # Keep CORS near the top\n \"django.middleware.common.BrokenLinkEmailsMiddleware\",\n # Keep BrokenLinkEmailsMiddleware near the top\n \"django_permissions_policy.PermissionsPolicyMiddleware\",\n \"django.contrib.sessions.middleware.SessionMiddleware\",\n \"django.middleware.common.CommonMiddleware\",\n \"django.middleware.csrf.CsrfViewMiddleware\",\n \"django.contrib.auth.middleware.AuthenticationMiddleware\",\n \"django.contrib.messages.middleware.MessageMiddleware\",\n \"django.contrib.sites.middleware.CurrentSiteMiddleware\",\n \"django.middleware.clickjacking.XFrameOptionsMiddleware\",\n \"simple_history.middleware.HistoryRequestMiddleware\",\n # subdomain_middleware after CurrentSiteMiddleware\n \"grandchallenge.subdomains.middleware.subdomain_middleware\",\n \"grandchallenge.subdomains.middleware.challenge_subdomain_middleware\",\n \"grandchallenge.subdomains.middleware.subdomain_urlconf_middleware\",\n \"grandchallenge.timezones.middleware.TimezoneMiddleware\",\n \"machina.apps.forum_permission.middleware.ForumPermissionMiddleware\",\n # Flatpage fallback almost last\n \"django.contrib.flatpages.middleware.FlatpageFallbackMiddleware\",\n # Redirects last as they're a last resort\n \"django.contrib.redirects.middleware.RedirectFallbackMiddleware\",\n)\n\n# Python dotted path to the WSGI application used by Django's runserver.\nWSGI_APPLICATION = \"config.wsgi.application\"\n\nDJANGO_APPS = [\n \"django.contrib.auth\",\n \"django.contrib.contenttypes\",\n \"django.contrib.sessions\",\n \"django.contrib.sites\",\n \"django.contrib.messages\",\n \"whitenoise.runserver_nostatic\", # Keep whitenoise above staticfiles\n \"django.contrib.staticfiles\",\n \"django.contrib.humanize\",\n \"django.contrib.admin\",\n \"django.contrib.postgres\",\n \"django.contrib.flatpages\",\n \"django.contrib.sitemaps\",\n \"django.contrib.redirects\",\n]\n\nTHIRD_PARTY_APPS = [\n \"aws_xray_sdk.ext.django\", # tracing\n \"django_celery_results\", # database results backend\n \"django_celery_beat\", # periodic tasks\n \"djcelery_email\", # asynchronous emails\n \"guardian\", # per object permissions\n \"rest_framework\", # provides REST API\n \"knox\", # token auth for REST API\n \"crispy_forms\", # bootstrap forms\n \"django_select2\", # for multiple choice widgets\n \"django_summernote\", # for WYSIWYG page editing\n \"dal\", # for autocompletion of selection fields\n \"dal_select2\", # for autocompletion of selection fields\n \"django_extensions\", # custom extensions\n \"simple_history\", # for object history\n \"corsheaders\", # to allow api communication from subdomains\n \"markdownx\", # for editing 
markdown\n \"stdimage\",\n \"django_filters\",\n \"drf_spectacular\",\n \"allauth\",\n \"allauth.account\",\n \"allauth.socialaccount\",\n \"grandchallenge.profiles.providers.gmail\",\n # Notifications with overrides\n \"actstream\",\n \"grandchallenge.notifications\",\n # django-machina dependencies:\n \"mptt\",\n \"haystack\",\n \"widget_tweaks\",\n # djano-machina apps:\n \"machina\",\n \"machina.apps.forum\",\n \"machina.apps.forum_conversation.forum_attachments\",\n \"machina.apps.forum_conversation.forum_polls\",\n \"machina.apps.forum_feeds\",\n \"machina.apps.forum_moderation\",\n \"machina.apps.forum_search\",\n \"machina.apps.forum_tracking\",\n \"machina.apps.forum_permission\",\n # Overridden apps\n \"grandchallenge.forum_conversation\",\n \"grandchallenge.forum_member\",\n]\n\nLOCAL_APPS = [\n \"grandchallenge.admins\",\n \"grandchallenge.anatomy\",\n \"grandchallenge.api\",\n \"grandchallenge.api_tokens\",\n \"grandchallenge.challenges\",\n \"grandchallenge.core\",\n \"grandchallenge.evaluation\",\n \"grandchallenge.pages\",\n \"grandchallenge.participants\",\n \"grandchallenge.profiles\",\n \"grandchallenge.teams\",\n \"grandchallenge.uploads\",\n \"grandchallenge.cases\",\n \"grandchallenge.algorithms\",\n \"grandchallenge.components\",\n \"grandchallenge.statistics\",\n \"grandchallenge.archives\",\n \"grandchallenge.patients\",\n \"grandchallenge.studies\",\n \"grandchallenge.registrations\",\n \"grandchallenge.annotations\",\n \"grandchallenge.retina_api\",\n \"grandchallenge.workstations\",\n \"grandchallenge.workspaces\",\n \"grandchallenge.reader_studies\",\n \"grandchallenge.workstation_configs\",\n \"grandchallenge.policies\",\n \"grandchallenge.products\",\n \"grandchallenge.serving\",\n \"grandchallenge.blogs\",\n \"grandchallenge.publications\",\n \"grandchallenge.verifications\",\n \"grandchallenge.credits\",\n \"grandchallenge.task_categories\",\n \"grandchallenge.modalities\",\n \"grandchallenge.datatables\",\n \"grandchallenge.organizations\",\n \"grandchallenge.groups\",\n \"grandchallenge.github\",\n \"grandchallenge.codebuild\",\n \"grandchallenge.timezones\",\n \"grandchallenge.documentation\",\n \"grandchallenge.flatpages\",\n \"grandchallenge.emails\",\n \"grandchallenge.hanging_protocols\",\n]\n\nINSTALLED_APPS = DJANGO_APPS + LOCAL_APPS + THIRD_PARTY_APPS\n\nADMIN_URL = f'{os.environ.get(\"DJANGO_ADMIN_URL\", \"django-admin\")}/'\n\nAUTHENTICATION_BACKENDS = [\n \"django.contrib.auth.backends.ModelBackend\",\n \"allauth.account.auth_backends.AuthenticationBackend\",\n \"guardian.backends.ObjectPermissionBackend\",\n]\n\nGOOGLE_ANALYTICS_ID = os.environ.get(\"GOOGLE_ANALYTICS_ID\", \"GA_TRACKING_ID\")\n\n##############################################################################\n#\n# django-allauth\n#\n##############################################################################\n\nACCOUNT_ADAPTER = \"grandchallenge.profiles.adapters.AccountAdapter\"\nACCOUNT_SIGNUP_FORM_CLASS = \"grandchallenge.profiles.forms.SignupForm\"\n\nACCOUNT_EMAIL_CONFIRMATION_COOLDOWN = 30\nACCOUNT_AUTHENTICATION_METHOD = \"username_email\"\nACCOUNT_EMAIL_REQUIRED = True\nACCOUNT_EMAIL_VERIFICATION = \"mandatory\"\nACCOUNT_USERNAME_MIN_LENGTH = 4\nACCOUNT_DEFAULT_HTTP_PROTOCOL = \"https\"\nACCOUNT_LOGIN_ON_EMAIL_CONFIRMATION = True\nACCOUNT_USERNAME_BLACKLIST = USERNAME_DENYLIST\n\nSOCIALACCOUNT_ADAPTER = \"grandchallenge.profiles.adapters.SocialAccountAdapter\"\nSOCIALACCOUNT_AUTO_SIGNUP = False\nSOCIALACCOUNT_STORE_TOKENS = False\nSOCIALACCOUNT_PROVIDERS = {\n 
\"gmail\": {\n \"APP\": {\n \"client_id\": os.environ.get(\"SOCIAL_AUTH_GOOGLE_OAUTH2_KEY\", \"\"),\n \"secret\": os.environ.get(\"SOCIAL_AUTH_GOOGLE_OAUTH2_SECRET\", \"\"),\n }\n }\n}\n\n# Use full paths as view name lookups do not work on subdomains\nLOGIN_URL = \"/accounts/login/\"\nLOGOUT_URL = \"/accounts/logout/\"\nLOGIN_REDIRECT_URL = \"/users/profile/\"\n\n##############################################################################\n#\n# stdimage\n#\n##############################################################################\n\n# Re-render the existing images if these values change\n# https://github.com/codingjoe/django-stdimage#re-rendering-variations\nSTDIMAGE_LOGO_VARIATIONS = {\n # Must be square\n \"full\": (None, None, False),\n \"x20\": (640, 640, True),\n \"x15\": (480, 480, True),\n \"x10\": (320, 320, True),\n \"x02\": (64, 64, True),\n}\nSTDIMAGE_SOCIAL_VARIATIONS = {\n # Values from social sharing\n \"full\": (None, None, False),\n \"x20\": (1280, 640, False),\n \"x15\": (960, 480, False),\n \"x10\": (640, 320, False),\n}\nSTDIMAGE_BANNER_VARIATIONS = {\n # Fixed width, any height\n \"full\": (None, None, False),\n \"x20\": (2220, None, False),\n \"x15\": (1665, None, False),\n \"x10\": (1110, None, False),\n}\n\n##############################################################################\n#\n# actstream\n#\n##############################################################################\n\nACTSTREAM_ENABLE = strtobool(os.environ.get(\"ACTSTREAM_ENABLE\", \"True\"))\nACTSTREAM_SETTINGS = {\n \"MANAGER\": \"actstream.managers.ActionManager\",\n \"FETCH_RELATIONS\": True,\n \"USE_JSONFIELD\": True,\n}\n\n##############################################################################\n#\n# django-summernote\n#\n##############################################################################\n\n# WYSIWYG editing with Summernote\nSUMMERNOTE_THEME = \"bs4\"\nSUMMERNOTE_CONFIG = {\n \"attachment_model\": \"uploads.SummernoteAttachment\",\n \"attachment_require_authentication\": True,\n \"summernote\": {\n \"width\": \"100%\",\n \"toolbar\": [\n [\"style\", [\"style\"]],\n [\n \"font\",\n [\"bold\", \"italic\", \"underline\", \"strikethrough\", \"clear\"],\n ],\n [\"para\", [\"ul\", \"ol\", \"paragraph\"]],\n [\"insert\", [\"link\", \"picture\", \"hr\"]],\n [\"view\", [\"fullscreen\", \"codeview\"]],\n [\"help\", [\"help\"]],\n ],\n },\n}\n\n# Settings for allowed HTML\nBLEACH_ALLOWED_TAGS = [\n \"a\",\n \"abbr\",\n \"acronym\",\n \"b\",\n \"blockquote\",\n \"br\",\n \"code\",\n \"col\",\n \"div\",\n \"em\",\n \"h1\",\n \"h2\",\n \"h3\",\n \"h4\",\n \"h5\",\n \"h6\",\n \"hr\",\n \"i\",\n \"img\",\n \"li\",\n \"ol\",\n \"p\",\n \"pre\",\n \"span\",\n \"strike\",\n \"strong\",\n \"table\",\n \"tbody\",\n \"thead\",\n \"td\",\n \"th\",\n \"tr\",\n \"u\",\n \"ul\",\n \"video\",\n]\nBLEACH_ALLOWED_ATTRIBUTES = {\n \"*\": [\"class\", \"data-toggle\", \"id\", \"style\", \"role\"],\n \"a\": [\"href\", \"title\", \"target\", \"rel\"],\n \"abbr\": [\"title\"],\n \"acronym\": [\"title\"],\n \"img\": [\"height\", \"src\", \"width\"],\n # For bootstrap tables: https://getbootstrap.com/docs/4.3/content/tables/\n \"th\": [\"scope\", \"colspan\"],\n \"td\": [\"colspan\"],\n \"video\": [\"src\", \"loop\", \"controls\", \"poster\"],\n}\nBLEACH_ALLOWED_STYLES = [\"height\", \"margin-left\", \"text-align\", \"width\"]\nBLEACH_ALLOWED_PROTOCOLS = [\"http\", \"https\", \"mailto\"]\nBLEACH_STRIP = strtobool(os.environ.get(\"BLEACH_STRIP\", \"True\"))\n\n# The markdown 
processor\nMARKDOWNX_MEDIA_PATH = datetime.now().strftime(\"i/%Y/%m/%d/\")\nMARKDOWNX_MARKDOWN_EXTENSIONS = [\n \"markdown.extensions.fenced_code\",\n \"markdown.extensions.tables\",\n \"markdown.extensions.sane_lists\",\n \"markdown.extensions.codehilite\",\n \"markdown.extensions.attr_list\",\n BS4Extension(),\n]\nMARKDOWNX_MARKDOWNIFY_FUNCTION = (\n \"grandchallenge.core.templatetags.bleach.md2html\"\n)\nMARKDOWNX_MARKDOWN_EXTENSION_CONFIGS = {}\nMARKDOWNX_IMAGE_MAX_SIZE = {\"size\": (2000, 0), \"quality\": 90}\nMARKDOWNX_EDITOR_RESIZABLE = \"False\"\n\nHAYSTACK_CONNECTIONS = {\n \"default\": {\"ENGINE\": \"haystack.backends.simple_backend.SimpleEngine\"}\n}\n\nFORUMS_CHALLENGE_CATEGORY_NAME = \"Challenges\"\nMACHINA_BASE_TEMPLATE_NAME = \"base.html\"\nMACHINA_PROFILE_AVATARS_ENABLED = False\nMACHINA_FORUM_NAME = \"Grand Challenge Forums\"\n\nAUTH_PASSWORD_VALIDATORS = [\n {\n \"NAME\": \"django.contrib.auth.password_validation.UserAttributeSimilarityValidator\"\n },\n {\"NAME\": \"django.contrib.auth.password_validation.MinimumLengthValidator\"},\n {\n \"NAME\": \"django.contrib.auth.password_validation.CommonPasswordValidator\"\n },\n {\n \"NAME\": \"django.contrib.auth.password_validation.NumericPasswordValidator\"\n },\n]\n\n# A sample logging configuration. More info in configuration can be found at\n# https://docs.djangoproject.com/en/dev/topics/logging/ .\nLOGGING = {\n \"version\": 1,\n \"disable_existing_loggers\": False,\n \"handlers\": {\"console\": {\"class\": \"logging.StreamHandler\"}},\n \"loggers\": {\n \"grandchallenge\": {\n \"level\": os.environ.get(\"GRAND_CHALLENGE_LOG_LEVEL\", \"INFO\"),\n \"handlers\": [\"console\"],\n \"propagate\": True,\n },\n \"django\": {\n \"level\": os.environ.get(\"DJANGO_LOG_LEVEL\", \"INFO\"),\n \"handlers\": [\"console\"],\n \"propagate\": True,\n },\n \"werkzeug\": {\n \"handlers\": [\"console\"],\n \"level\": \"DEBUG\",\n \"propagate\": True,\n },\n # As AWS_XRAY_CONTEXT_MISSING can only be set to LOG_ERROR,\n # silence errors from this sdk as they flood the logs in\n # RedirectFallbackMiddleware\n \"aws_xray_sdk\": {\n \"handlers\": [\"console\"],\n \"level\": \"CRITICAL\",\n \"propagate\": True,\n },\n },\n}\n\n###############################################################################\n# SENTRY\n###############################################################################\n\nSENTRY_DSN = os.environ.get(\"DJANGO_SENTRY_DSN\", \"\")\nSENTRY_ENABLE_JS_REPORTING = strtobool(\n os.environ.get(\"SENTRY_ENABLE_JS_REPORTING\", \"False\")\n)\nWORKSTATION_SENTRY_DSN = os.environ.get(\"WORKSTATION_SENTRY_DSN\", \"\")\n\nif SENTRY_DSN:\n sentry_sdk.init(\n dsn=SENTRY_DSN,\n integrations=[DjangoIntegration(), CeleryIntegration()],\n release=COMMIT_ID,\n traces_sample_rate=float(\n os.environ.get(\"SENTRY_TRACES_SAMPLE_RATE\", \"0.0\")\n ),\n ignore_errors=[PriorStepFailed, ImageImportError],\n )\n ignore_logger(\"django.security.DisallowedHost\")\n ignore_logger(\"aws_xray_sdk\")\n\n###############################################################################\n# XRAY\n###############################################################################\nXRAY_RECORDER = {\n \"AWS_XRAY_CONTEXT_MISSING\": \"LOG_ERROR\",\n \"PLUGINS\": (\"ECSPlugin\",),\n \"AWS_XRAY_TRACING_NAME\": SESSION_COOKIE_DOMAIN.lstrip(\".\"),\n}\n\n###############################################################################\n#\n# django-rest-framework and 
drf-spectacular\n#\n###############################################################################\n\nREST_FRAMEWORK = {\n \"DEFAULT_PERMISSION_CLASSES\": (\"rest_framework.permissions.IsAdminUser\",),\n \"DEFAULT_AUTHENTICATION_CLASSES\": (\n \"knox.auth.TokenAuthentication\",\n \"rest_framework.authentication.SessionAuthentication\",\n ),\n \"DEFAULT_RENDERER_CLASSES\": [\"rest_framework.renderers.JSONRenderer\"],\n \"DEFAULT_PAGINATION_CLASS\": \"grandchallenge.api.pagination.MaxLimit1000OffsetPagination\",\n \"PAGE_SIZE\": 100,\n \"UNAUTHENTICATED_USER\": \"guardian.utils.get_anonymous_user\",\n \"DEFAULT_SCHEMA_CLASS\": \"drf_spectacular.openapi.AutoSchema\",\n}\n\nSPECTACULAR_SETTINGS = {\n \"SCHEMA_PATH_PREFIX\": r\"/api/v[0-9]\",\n \"TITLE\": f\"{SESSION_COOKIE_DOMAIN.lstrip('.')} API\",\n \"DESCRIPTION\": f\"The API for {SESSION_COOKIE_DOMAIN.lstrip('.')}.\",\n \"TOS\": f\"https://{SESSION_COOKIE_DOMAIN.lstrip('.')}/policies/terms-of-service/\",\n \"LICENSE\": {\"name\": \"Apache License 2.0\"},\n \"VERSION\": \"1.0.0\",\n}\n\nREST_KNOX = {\"AUTH_HEADER_PREFIX\": \"Bearer\"}\n\n###############################################################################\n#\n# CORS\n#\n###############################################################################\n\nVALID_SUBDOMAIN_REGEX = r\"[A-Za-z0-9](?:[A-Za-z0-9\\-]{0,61}[A-Za-z0-9])?\"\nCORS_ORIGIN_REGEX_WHITELIST = [\n rf\"^https:\\/\\/{VALID_SUBDOMAIN_REGEX}{re.escape(SESSION_COOKIE_DOMAIN)}$\",\n]\n# SESSION_COOKIE_SAMESITE should be set to \"lax\" so won't send credentials\n# across domains, but this will allow workstations to access the api\nCORS_ALLOW_CREDENTIALS = True\n\n###############################################################################\n#\n# celery\n#\n###############################################################################\n\nCELERY_TASK_DECORATOR_KWARGS = {\n \"acks-late-2xlarge\": {\n # For idempotent tasks that take a long time (<7200s)\n # or require a large amount of memory\n \"acks_late\": True,\n \"reject_on_worker_lost\": True,\n \"queue\": \"acks-late-2xlarge\",\n },\n \"acks-late-micro-short\": {\n # For idempotent tasks that take a short time (<300s)\n # and do not require a large amount of memory\n \"acks_late\": True,\n \"reject_on_worker_lost\": True,\n \"queue\": \"acks-late-micro-short\",\n },\n}\n\nCELERY_RESULT_BACKEND = os.environ.get(\"CELERY_RESULT_BACKEND\", \"django-db\")\nCELERY_RESULT_PERSISTENT = True\nCELERY_RESULT_EXPIRES = timedelta(days=7)\nCELERY_TASK_ACKS_LATE = strtobool(\n os.environ.get(\"CELERY_TASK_ACKS_LATE\", \"False\")\n)\nCELERY_WORKER_PREFETCH_MULTIPLIER = int(\n os.environ.get(\"CELERY_WORKER_PREFETCH_MULTIPLIER\", \"1\")\n)\nCELERY_TASK_SOFT_TIME_LIMIT = int(\n os.environ.get(\"CELERY_TASK_SOFT_TIME_LIMIT\", \"7200\")\n)\nCELERY_TASK_TIME_LIMIT = int(os.environ.get(\"CELERY_TASK_TIME_LIMIT\", \"7260\"))\nCELERY_BROKER_TRANSPORT_OPTIONS = {\n \"visibility_timeout\": int(1.1 * CELERY_TASK_TIME_LIMIT)\n}\nCELERY_BROKER_CONNECTION_MAX_RETRIES = 0\n\nif os.environ.get(\"BROKER_TYPE\", \"\").lower() == \"sqs\":\n CELERY_BROKER_URL = \"sqs://\"\n\n CELERY_WORKER_ENABLE_REMOTE_CONTROL = False\n CELERY_BROKER_USE_SSL = True\n\n CELERY_BROKER_TRANSPORT_OPTIONS.update(\n {\n \"queue_name_prefix\": os.environ.get(\n \"CELERY_BROKER_QUEUE_NAME_PREFIX\", \"gclocalhost-\"\n ),\n \"region\": os.environ.get(\n \"CELERY_BROKER_REGION\", AWS_DEFAULT_REGION\n ),\n \"polling_interval\": int(\n os.environ.get(\"CELERY_BROKER_POLLING_INTERVAL\", \"1\")\n ),\n }\n )\nelse:\n 
CELERY_BROKER_URL = os.environ.get(\"BROKER_URL\", f\"{REDIS_ENDPOINT}/1\")\n\n# Keep results of sent emails\nCELERY_EMAIL_CHUNK_SIZE = 1\nCELERY_EMAIL_TASK_CONFIG = {\"ignore_result\": False}\n\nCOMPONENTS_DEFAULT_BACKEND = os.environ.get(\n \"COMPONENTS_DEFAULT_BACKEND\",\n \"grandchallenge.components.backends.amazon_ecs.AmazonECSExecutor\",\n)\nCOMPONENTS_REGISTRY_URL = os.environ.get(\n \"COMPONENTS_REGISTRY_URL\", \"registry:5000\"\n)\nCOMPONENTS_REGISTRY_PREFIX = os.environ.get(\n \"COMPONENTS_REGISTRY_PREFIX\", SESSION_COOKIE_DOMAIN.lstrip(\".\")\n)\nCOMPONENTS_REGISTRY_INSECURE = strtobool(\n os.environ.get(\"COMPONENTS_REGISTRY_INSECURE\", \"False\")\n)\nCOMPONENTS_SHIM_IMAGES = strtobool(\n os.environ.get(\"COMPONENTS_SHIM_IMAGES\", \"True\")\n)\nCOMPONENTS_CREATE_SAGEMAKER_MODEL = strtobool(\n os.environ.get(\"COMPONENTS_CREATE_SAGEMAKER_MODEL\", \"False\")\n)\nCOMPONENTS_INPUT_BUCKET_NAME = os.environ.get(\n \"COMPONENTS_INPUT_BUCKET_NAME\", \"grand-challenge-components-inputs\"\n)\nCOMPONENTS_OUTPUT_BUCKET_NAME = os.environ.get(\n \"COMPONENTS_OUTPUT_BUCKET_NAME\", \"grand-challenge-components-outputs\"\n)\nCOMPONENTS_MAXIMUM_IMAGE_SIZE = 10 * GIGABYTE\nCOMPONENTS_AMAZON_EFS_BLOCK_SIZE = 16 * MEGABYTE\nCOMPONENTS_AMAZON_EFS_BALANCE_TARGET_BYTES = int(\n os.environ.get(\n \"COMPONENTS_AMAZON_EFS_BALANCE_TARGET_BYTES\", 2.1 * TERABYTE\n )\n)\nCOMPONENTS_AMAZON_EFS_MAX_FILE_SIZE = int(\n os.environ.get(\"COMPONENTS_AMAZON_EFS_MAX_FILE_SIZE\", 100 * GIGABYTE)\n)\n# Minimum of 6 as there is no payback below this\nCOMPONENTS_AMAZON_EFS_TARGET_HOURS = max(\n int(os.environ.get(\"COMPONENTS_AMAZON_EFS_TARGET_HOURS\", 24)), 6\n)\nCOMPONENTS_AMAZON_EFS_FILE_SYSTEM_ID = os.environ.get(\n \"COMPONENTS_AMAZON_EFS_FILE_SYSTEM_ID\"\n)\nCOMPONENTS_AMAZON_ECR_REGION = os.environ.get(\"COMPONENTS_AMAZON_ECR_REGION\")\nCOMPONENTS_AMAZON_ECS_REGION = os.environ.get(\"COMPONENTS_AMAZON_ECS_REGION\")\nCOMPONENTS_AMAZON_ECS_NFS_MOUNT_POINT = os.environ.get(\n \"COMPONENTS_AMAZON_ECS_NFS_MOUNT_POINT\", \"/mnt/aws-batch-nfs/\"\n)\nCOMPONENTS_AMAZON_ECS_LOG_GROUP_NAME = os.environ.get(\n \"COMPONENTS_AMAZON_ECS_LOG_GROUP_NAME\", \"\"\n)\nCOMPONENTS_AMAZON_ECS_LOGS_REGION = os.environ.get(\n \"COMPONENTS_AMAZON_ECS_LOGS_REGION\"\n)\nCOMPONENTS_AMAZON_ECS_CPU_CLUSTER_ARN = os.environ.get(\n \"COMPONENTS_AMAZON_ECS_CPU_CLUSTER_ARN\", \"\"\n)\nCOMPONENTS_AMAZON_ECS_GPU_CLUSTER_ARN = os.environ.get(\n \"COMPONENTS_AMAZON_ECS_GPU_CLUSTER_ARN\", \"\"\n)\nCOMPONENTS_AMAZON_ECS_TASK_ROLE_ARN = os.environ.get(\n \"COMPONENTS_AMAZON_ECS_TASK_ROLE_ARN\", \"\"\n)\nCOMPONENTS_AMAZON_SAGEMAKER_EXECUTION_ROLE_ARN = os.environ.get(\n \"COMPONENTS_AMAZON_SAGEMAKER_EXECUTION_ROLE_ARN\", \"\"\n)\nCOMPONENTS_AMAZON_SAGEMAKER_SECURITY_GROUP_ID = os.environ.get(\n \"COMPONENTS_AMAZON_SAGEMAKER_SECURITY_GROUP_ID\", \"\"\n)\nCOMPONENTS_AMAZON_SAGEMAKER_SUBNETS = os.environ.get(\n \"COMPONENTS_AMAZON_SAGEMAKER_SUBNETS\", \"\"\n).split(\",\")\nCOMPONENTS_DOCKER_NETWORK_NAME = os.environ.get(\n \"COMPONENTS_DOCKER_NETWORK_NAME\", \"grand-challengeorg_components\"\n)\nCOMPONENTS_DOCKER_TASK_SET_AWS_ENV = strtobool(\n os.environ.get(\"COMPONENTS_DOCKER_TASK_SET_AWS_ENV\", \"True\")\n)\nCOMPONENTS_DOCKER_TASK_AWS_ACCESS_KEY_ID = os.environ.get(\n \"COMPONENTS_DOCKER_TASK_AWS_ACCESS_KEY_ID\", \"componentstask\"\n)\nCOMPONENTS_DOCKER_TASK_AWS_SECRET_ACCESS_KEY = os.environ.get(\n \"COMPONENTS_DOCKER_TASK_AWS_SECRET_ACCESS_KEY\", \"componentstask123\"\n)\nCOMPONENTS_PUBLISH_PORTS = strtobool(\n 
os.environ.get(\"COMPONENTS_PUBLISH_PORTS\", \"False\")\n)\nCOMPONENTS_PORT_ADDRESS = os.environ.get(\"COMPONENTS_PORT_ADDRESS\", \"0.0.0.0\")\n\nCOMPONENTS_MEMORY_LIMIT = int(os.environ.get(\"COMPONENTS_MEMORY_LIMIT\", \"4\"))\nCOMPONENTS_SHARED_MEMORY_SIZE = int(\n os.environ.get(\"COMPONENTS_SHARED_MEMORY_SIZE\", \"64\")\n)\nCOMPONENTS_CPU_QUOTA = int(os.environ.get(\"COMPONENTS_CPU_QUOTA\", \"100000\"))\nCOMPONENTS_CPU_PERIOD = int(os.environ.get(\"COMPONENTS_CPU_PERIOD\", \"100000\"))\nCOMPONENTS_PIDS_LIMIT = int(os.environ.get(\"COMPONENTS_PIDS_LIMIT\", \"128\"))\nCOMPONENTS_CPU_SHARES = int(\n os.environ.get(\"COMPONENTS_CPU_SHARES\", \"1024\") # Default weight\n)\nCOMPONENTS_CPUSET_CPUS = str(os.environ.get(\"COMPONENTS_CPUSET_CPUS\", \"\"))\nCOMPONENTS_DOCKER_RUNTIME = os.environ.get(\"COMPONENTS_DOCKER_RUNTIME\", None)\nCOMPONENTS_NVIDIA_VISIBLE_DEVICES = os.environ.get(\n \"COMPONENTS_NVIDIA_VISIBLE_DEVICES\", \"void\"\n)\n\n# Set which template pack to use for forms\nCRISPY_TEMPLATE_PACK = \"bootstrap4\"\n\n# When using bootstrap error messages need to be renamed to danger\nMESSAGE_TAGS = {messages.ERROR: \"danger\"}\n\n# The name of the group whose members will be able to create reader studies\nREADER_STUDY_CREATORS_GROUP_NAME = \"reader_study_creators\"\n\n###############################################################################\n#\n# workspaces\n#\n###############################################################################\n\nWORKBENCH_SECRET_KEY = os.environ.get(\"WORKBENCH_SECRET_KEY\")\nWORKBENCH_API_URL = os.environ.get(\"WORKBENCH_API_URL\")\nWORKBENCH_ADMIN_USERNAME = os.environ.get(\"WORKBENCH_ADMIN_USERNAME\", \"demo\")\n\n###############################################################################\n#\n# workstations\n#\n###############################################################################\n\n# The workstation that is accessible by all authorised users\nDEFAULT_WORKSTATION_SLUG = os.environ.get(\n \"DEFAULT_WORKSTATION_SLUG\", \"cirrus-core\"\n)\nWORKSTATIONS_DNS_RESOLVER = os.environ.get(\n \"WORKSTATIONS_DNS_RESOLVER\", \"1.1.1.1\"\n)\nWORKSTATIONS_BASE_IMAGE_QUERY_PARAM = \"image\"\nWORKSTATIONS_OVERLAY_QUERY_PARAM = \"overlay\"\nWORKSTATIONS_READY_STUDY_QUERY_PARAM = \"readerStudy\"\nWORKSTATIONS_ALGORITHM_JOB_QUERY_PARAM = \"algorithmJob\"\nWORKSTATIONS_ARCHIVE_ITEM_QUERY_PARAM = \"archiveItem\"\nWORKSTATIONS_CONFIG_QUERY_PARAM = \"config\"\nWORKSTATIONS_USER_QUERY_PARAM = \"viewAsUser\"\nWORKSTATIONS_DISPLAY_SET_QUERY_PARAM = \"displaySet\"\n# The name of the network that the workstations will be attached to\nWORKSTATIONS_NETWORK_NAME = os.environ.get(\n \"WORKSTATIONS_NETWORK_NAME\", \"grand-challengeorg_workstations\"\n)\n# The total limit on the number of sessions\nWORKSTATIONS_MAXIMUM_SESSIONS = int(\n os.environ.get(\"WORKSTATIONS_MAXIMUM_SESSIONS\", \"10\")\n)\n# The name of the group whose members will be able to create workstations\nWORKSTATIONS_CREATORS_GROUP_NAME = \"workstation_creators\"\nWORKSTATIONS_SESSION_DURATION_LIMIT = int(\n os.environ.get(\"WORKSTATIONS_SESSION_DURATION_LIMIT\", \"10000\")\n)\n# Which regions are available for workstations to run in\nWORKSTATIONS_ACTIVE_REGIONS = os.environ.get(\n \"WORKSTATIONS_ACTIVE_REGIONS\", AWS_DEFAULT_REGION\n).split(\",\")\nWORKSTATIONS_RENDERING_SUBDOMAINS = {\n # Possible AWS regions\n *[\n \"-\".join(z)\n for z in product(\n [\"us\", \"af\", \"ap\", \"ca\", \"cn\", \"eu\", \"me\", \"sa\"],\n [\n \"east\",\n \"west\",\n \"south\",\n \"north\",\n \"central\",\n 
\"northeast\",\n \"southeast\",\n \"northwest\",\n \"southwest\",\n ],\n [\"1\", \"2\", \"3\"],\n )\n ],\n # User defined regions\n \"eu-nl-1\",\n \"eu-nl-2\",\n}\n# Number of minutes grace period before the container is stopped\nWORKSTATIONS_GRACE_MINUTES = 5\n\nCELERY_BEAT_SCHEDULE = {\n \"ping_google\": {\n \"task\": \"grandchallenge.core.tasks.ping_google\",\n \"schedule\": timedelta(days=1),\n },\n \"update_publication_metadata\": {\n \"task\": \"grandchallenge.publications.tasks.update_publication_metadata\",\n \"schedule\": timedelta(days=1),\n },\n \"send_unread_notification_emails\": {\n \"task\": \"grandchallenge.notifications.tasks.send_unread_notification_emails\",\n \"schedule\": timedelta(days=1),\n },\n \"delete_old_user_uploads\": {\n \"task\": \"grandchallenge.uploads.tasks.delete_old_user_uploads\",\n \"schedule\": timedelta(hours=1),\n },\n \"clear_sessions\": {\n \"task\": \"grandchallenge.core.tasks.clear_sessions\",\n \"schedule\": timedelta(days=1),\n },\n \"update_challenge_results_cache\": {\n \"task\": \"grandchallenge.challenges.tasks.update_challenge_results_cache\",\n \"schedule\": timedelta(minutes=5),\n },\n \"update_associated_challenges\": {\n \"task\": \"grandchallenge.algorithms.tasks.update_associated_challenges\",\n \"schedule\": timedelta(days=1),\n },\n \"update_components_filesystem\": {\n \"task\": \"grandchallenge.components.tasks.update_filesystem\",\n \"schedule\": timedelta(hours=COMPONENTS_AMAZON_EFS_TARGET_HOURS),\n },\n **{\n f\"stop_expired_services_{region}\": {\n \"task\": \"grandchallenge.components.tasks.stop_expired_services\",\n \"kwargs\": {\n \"app_label\": \"workstations\",\n \"model_name\": \"session\",\n \"region\": region,\n },\n \"options\": {\"queue\": f\"workstations-{region}\"},\n \"schedule\": timedelta(minutes=WORKSTATIONS_GRACE_MINUTES),\n }\n for region in WORKSTATIONS_ACTIVE_REGIONS\n },\n}\n\nif strtobool(os.environ.get(\"PUSH_CLOUDWATCH_METRICS\", \"False\")):\n CELERY_BEAT_SCHEDULE[\"push_metrics_to_cloudwatch\"] = {\n \"task\": \"grandchallenge.core.tasks.put_cloudwatch_metrics\",\n \"schedule\": timedelta(seconds=15),\n }\n\n# The name of the group whose members will be able to create algorithms\nALGORITHMS_CREATORS_GROUP_NAME = \"algorithm_creators\"\n# Number of jobs that can be scheduled in one task\nALGORITHMS_JOB_BATCH_LIMIT = 256\n# Maximum and minimum values the user can set for algorithm requirements\n# Current limits of 4g/30g are restrictions from the instance types used on ECS\nALGORITHMS_MIN_MEMORY_GB = 4\nALGORITHMS_MAX_MEMORY_GB = 30\n\n# Disallow some challenge names due to subdomain or media folder clashes\nDISALLOWED_CHALLENGE_NAMES = {\n \"m\",\n IMAGE_FILES_SUBDIRECTORY,\n \"logos\",\n \"banners\",\n \"mugshots\",\n \"docker\",\n EVALUATION_FILES_SUBDIRECTORY,\n \"evaluation-supplementary\",\n \"favicon\",\n \"i\",\n \"cache\",\n \"challenge\",\n \"challenges\",\n *USERNAME_DENYLIST,\n *WORKSTATIONS_RENDERING_SUBDOMAINS,\n}\n\n# Disallow registration from certain domains\nDISALLOWED_EMAIL_DOMAINS = {\n \"qq.com\",\n \"aol.com\",\n \"usa.com\",\n \"yahoo.com\",\n \"yahoo.co.uk\",\n \"yahoo.it\",\n \"seznam.cz\",\n \"web.de\",\n \"gmx.de\",\n \"mail.com\",\n \"mail.ru\",\n \"verizon.net\",\n \"comcast.net\",\n \"nudt.edu.cn\",\n \"ihpc.a-star.edu.sg\",\n \"raysightmed.com\",\n \"csu.edu.cn\",\n \"cerist.dz\",\n \"ciitvehari.edu.pk\",\n \"mail.dcu.ie\",\n *blocklist,\n}\n\n# GitHub App\nGITHUB_APP_INSTALL_URL = os.environ.get(\"GITHUB_APP_INSTALL_URL\", \"\")\nGITHUB_APP_ID = 
os.environ.get(\"GITHUB_APP_ID\", \"\")\nGITHUB_CLIENT_ID = os.environ.get(\"GITHUB_CLIENT_ID\", \"\")\nGITHUB_CLIENT_SECRET = os.environ.get(\"GITHUB_CLIENT_SECRET\", \"\")\nGITHUB_PRIVATE_KEY_BASE64 = os.environ.get(\"GITHUB_PRIVATE_KEY_BASE64\", \"\")\nGITHUB_WEBHOOK_SECRET = os.environ.get(\"GITHUB_WEBHOOK_SECRET\", \"\")\n\nCODEBUILD_PROJECT_NAME = os.environ.get(\"CODEBUILD_PROJECT_NAME\", \"\")\n\n# License keys from https://github.com/licensee/licensee/tree/v9.15.1/vendor/choosealicense.com/_licenses\nOPEN_SOURCE_LICENSES = frozenset(\n (\n \"agpl-3.0\",\n \"apache-2.0\",\n \"bsd-2-clause\",\n \"bsd-3-clause\",\n \"bsd-3-clause-clear\",\n \"bsd-4-clause\",\n \"bsl-1.0\",\n \"gpl-3.0\",\n \"lgpl-3.0\",\n \"mit\",\n \"mpl-2.0\",\n \"unlicense\",\n )\n)\n\n# Set the post processors to use for the image imports\nCASES_POST_PROCESSORS = os.environ.get(\n \"CASES_POST_PROCESSORS\", \"panimg.post_processors.tiff_to_dzi\"\n).split(\",\")\n\n# Maximum file size in bytes to be opened by SimpleITK.ReadImage in cases_tests.utils.get_sitk_image()\nMAX_SITK_FILE_SIZE = 256 * MEGABYTE\n\n# The maximum size of all the files in an upload session in bytes\nUPLOAD_SESSION_MAX_BYTES = 10 * GIGABYTE\n\n# Some forms have a lot of data, such as a reader study update view\n# that can contain reports about the medical images\nDATA_UPLOAD_MAX_MEMORY_SIZE = 16 * MEGABYTE\n\n# Some forms have a lot of fields, such as uploads of images\n# with many slices\nDATA_UPLOAD_MAX_NUMBER_FIELDS = int(\n os.environ.get(\"DATA_UPLOAD_MAX_NUMBER_FIELDS\", \"2048\")\n)\n\n# Retina specific settings\nRETINA_GRADERS_GROUP_NAME = \"retina_graders\"\nRETINA_ADMINS_GROUP_NAME = \"retina_admins\"\n\nENABLE_DEBUG_TOOLBAR = False\n\nif DEBUG:\n # Allow localhost in development\n CORS_ORIGIN_REGEX_WHITELIST += [r\"^http://localhost:8888$\"]\n\n LOGGING[\"loggers\"][\"grandchallenge\"][\"level\"] = \"DEBUG\"\n\n PUBLIC_S3_STORAGE_KWARGS.update({\"secure_urls\": False})\n DEMO_ALGORITHM_IMAGE_PATH = os.path.join(SITE_ROOT, \"algorithm.tar.gz\")\n DEMO_ALGORITHM_SHA256 = \"sha256:5e81cef3738b7dbffc12c101990eb3b97f17642c09a2e0b64d5b3d4dd144e79b\"\n\n if ENABLE_DEBUG_TOOLBAR:\n INSTALLED_APPS += (\"debug_toolbar\",)\n\n MIDDLEWARE = (\n \"debug_toolbar.middleware.DebugToolbarMiddleware\",\n *MIDDLEWARE,\n )\n\n DEBUG_TOOLBAR_CONFIG = {\n \"SHOW_TOOLBAR_CALLBACK\": \"config.toolbar_callback\",\n \"RESULTS_CACHE_SIZE\": 100,\n }\n", "path": "app/config/settings.py" } ]
diff --git a/app/config/settings.py b/app/config/settings.py index c357e8c199..37adb9f2b8 100644 --- a/app/config/settings.py +++ b/app/config/settings.py @@ -716,6 +716,7 @@ ) MARKDOWNX_MARKDOWN_EXTENSION_CONFIGS = {} MARKDOWNX_IMAGE_MAX_SIZE = {"size": (2000, 0), "quality": 90} +MARKDOWNX_EDITOR_RESIZABLE = "False" HAYSTACK_CONNECTIONS = { "default": {"ENGINE": "haystack.backends.simple_backend.SimpleEngine"} diff --git a/app/grandchallenge/core/static/js/markdownx.js b/app/grandchallenge/core/static/js/markdownx.js index ea407db2c8..b9a60b6ff2 100644 --- a/app/grandchallenge/core/static/js/markdownx.js +++ b/app/grandchallenge/core/static/js/markdownx.js @@ -260,7 +260,7 @@ })); xhr.success = function(response) { properties.preview.innerHTML = response; - properties.editor = updateHeight(properties.editor); + properties.editor = properties._editorIsResizable ? (properties.editor) : properties.editor; utils_1.triggerCustomEvent("markdownx.update", properties.parent, [ response ]); }; xhr.error = function(response) {
weni-ai__bothub-engine-199
Ghost Intent
Reported by @IlhasoftPeter in https://github.com/Ilhasoft/bothub/issues/26
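For context, the "ghost intent" symptom is consistent with the `Repository.intents` property in the model file below, which gathers intents with `exclude_deleted=False`, so intents whose examples were all deleted keep appearing in the repository's intent list. A minimal sketch of the kind of change that would address this, assuming that cause — it is not necessarily the exact patch applied in this PR:

```python
class Repository(models.Model):
    ...

    @property
    def intents(self):
        # Only consider examples that still exist in the current update,
        # so deleted examples no longer contribute "ghost" intents
        # (mirrors how current_entities already passes exclude_deleted=True).
        return list(set(
            self.examples(exclude_deleted=True)  # was exclude_deleted=False
                .exclude(intent='')
                .values_list('intent', flat=True)))
```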
[ { "content": "import uuid\nimport base64\nimport requests\n\nfrom django.db import models\nfrom django.utils.translation import gettext as _\nfrom django.utils import timezone\nfrom django.conf import settings\nfrom django.core.validators import RegexValidator, _lazy_re_compile\nfrom django.core.mail import send_mail\nfrom django.template.loader import render_to_string\nfrom django.dispatch import receiver\nfrom django.core.exceptions import ValidationError\n\nfrom bothub.authentication.models import User\n\nfrom . import languages\nfrom .exceptions import RepositoryUpdateAlreadyStartedTraining\nfrom .exceptions import RepositoryUpdateAlreadyTrained\nfrom .exceptions import TrainingNotAllowed\nfrom .exceptions import DoesNotHaveTranslation\n\n\nitem_key_regex = _lazy_re_compile(r'^[-a-z0-9_]+\\Z')\nvalidate_item_key = RegexValidator(\n item_key_regex,\n _('Enter a valid value consisting of lowercase letters, numbers, ' +\n 'underscores or hyphens.'),\n 'invalid'\n)\n\n\ndef can_t_be_other(value):\n if value == 'other':\n raise ValidationError(_('The label can\\'t be named as \"other\"'))\n\n\nclass RepositoryCategory(models.Model):\n class Meta:\n verbose_name = _('repository category')\n verbose_name_plural = _('repository categories')\n\n name = models.CharField(\n _('name'),\n max_length=32)\n\n def __str__(self):\n return self.name # pragma: no cover\n\n\nclass RepositoryQuerySet(models.QuerySet):\n def publics(self):\n return self.filter(is_private=False)\n\n def order_by_relevance(self):\n return self \\\n .annotate(votes_summ=models.Sum('votes__vote')) \\\n .annotate(examples_sum=models.Sum('updates__added')) \\\n .order_by('-votes_summ', '-examples_sum', '-created_at')\n\n\nclass RepositoryManager(models.Manager):\n def get_queryset(self):\n return RepositoryQuerySet(self.model, using=self._db)\n\n\nclass Repository(models.Model):\n class Meta:\n verbose_name = _('repository')\n verbose_name_plural = _('repositories')\n unique_together = ['owner', 'slug']\n\n CATEGORIES_HELP_TEXT = _('Categories for approaching repositories with ' +\n 'the same purpose')\n DESCRIPTION_HELP_TEXT = _('Tell what your bot do!')\n\n uuid = models.UUIDField(\n _('UUID'),\n primary_key=True,\n default=uuid.uuid4,\n editable=False)\n owner = models.ForeignKey(\n User,\n models.CASCADE)\n name = models.CharField(\n _('name'),\n max_length=64,\n help_text=_('Repository display name'))\n slug = models.SlugField(\n _('slug'),\n max_length=32,\n help_text=_('Easy way to found and share repositories'))\n language = models.CharField(\n _('language'),\n max_length=5,\n help_text=_('Repository\\'s examples language. 
The examples can be ' +\n 'translated to other languages.'),\n validators=[\n languages.validate_language,\n ])\n categories = models.ManyToManyField(\n RepositoryCategory,\n help_text=CATEGORIES_HELP_TEXT)\n description = models.TextField(\n _('description'),\n blank=True,\n help_text=DESCRIPTION_HELP_TEXT)\n is_private = models.BooleanField(\n _('private'),\n default=False,\n help_text=_('Your repository can be private, only you can see and' +\n ' use, or can be public and all community can see and ' +\n 'use.'))\n created_at = models.DateTimeField(\n _('created at'),\n auto_now_add=True)\n\n objects = RepositoryManager()\n\n nlp_train_url = '{}train/'.format(settings.BOTHUB_NLP_BASE_URL)\n nlp_analyze_url = '{}parse/'.format(settings.BOTHUB_NLP_BASE_URL)\n\n @classmethod\n def request_nlp_train(cls, user_authorization):\n r = requests.post( # pragma: no cover\n cls.nlp_train_url,\n data={},\n headers={'Authorization': 'Bearer {}'.format(\n user_authorization.uuid)})\n return r # pragma: no cover\n\n @classmethod\n def request_nlp_analyze(cls, user_authorization, data):\n r = requests.post( # pragma: no cover\n cls.nlp_analyze_url,\n data={\n 'text': data.get('text'),\n 'language': data.get('language'),\n },\n headers={'Authorization': 'Bearer {}'.format(\n user_authorization.uuid)})\n return r # pragma: no cover\n\n @property\n def available_languages(self):\n examples = self.examples()\n examples_languages = examples.values_list(\n 'repository_update__language',\n flat=True)\n translations_languages = examples.annotate(\n translations_count=models.Count('translations')).filter(\n translations_count__gt=0).values_list(\n 'translations__language',\n flat=True)\n return list(set(\n [self.language] +\n list(examples_languages) +\n list(translations_languages)))\n\n @property\n def languages_status(self):\n return dict(\n map(\n lambda language: (\n language,\n self.language_status(language)),\n settings.SUPPORTED_LANGUAGES.keys(),\n ))\n\n @property\n def ready_for_train(self):\n updates = self.updates.filter(training_started_at=None)\n\n if RepositoryExample.objects.filter(\n models.Q(repository_update__in=updates) |\n models.Q(deleted_in__in=updates)).exists():\n return True\n\n if RepositoryTranslatedExample.objects.filter(\n repository_update__in=updates).exists():\n return True\n\n return False\n\n @property\n def votes_sum(self):\n return self.votes.aggregate(\n votes_sum=models.Sum('vote')).get('votes_sum')\n\n @property\n def intents(self):\n return list(set(self.examples(\n exclude_deleted=False).exclude(\n intent='').values_list(\n 'intent',\n flat=True)))\n\n @property\n def current_entities(self):\n return self.entities.filter(value__in=self.examples(\n exclude_deleted=True).exclude(\n entities__entity__value__isnull=True).values_list(\n 'entities__entity__value',\n flat=True).distinct())\n\n @property\n def entities_list(self):\n return self.current_entities.values_list(\n 'value',\n flat=True).distinct()\n\n @property\n def current_labels(self):\n return self.labels.filter(entities__value__in=self.examples(\n exclude_deleted=True).exclude(\n entities__entity__value__isnull=True).values_list(\n 'entities__entity__value',\n flat=True).distinct())\n\n @property\n def labels_list(self):\n return self.current_labels.values_list(\n 'value',\n flat=True).distinct()\n\n @property\n def admins(self):\n admins = [self.owner] + [\n authorization.user for authorization in\n self.authorizations.filter(role=RepositoryAuthorization.ROLE_ADMIN)\n ]\n return list(set(admins))\n\n def 
examples(self, language=None, exclude_deleted=True, queryset=None):\n if queryset is None:\n queryset = RepositoryExample.objects\n query = queryset.filter(\n repository_update__repository=self)\n if language:\n query = query.filter(\n repository_update__language=language)\n if exclude_deleted:\n return query.exclude(deleted_in__isnull=False)\n return query\n\n def language_status(self, language):\n is_base_language = self.language == language\n examples = self.examples(language)\n base_examples = self.examples(self.language)\n base_translations = RepositoryTranslatedExample.objects.filter(\n original_example__in=base_examples,\n language=language)\n\n examples_count = examples.count()\n base_examples_count = base_examples.count()\n base_translations_count = base_translations.count()\n base_translations_percentage = (\n base_translations_count / (\n base_examples_count if base_examples_count > 0 else 1)) * 100\n\n return {\n 'is_base_language': is_base_language,\n 'examples': {\n 'count': examples_count,\n 'entities': list(\n set(\n filter(\n lambda x: x,\n examples.values_list(\n 'entities__entity',\n flat=True).distinct()))),\n },\n 'base_translations': {\n 'count': base_translations_count,\n 'percentage': base_translations_percentage,\n },\n }\n\n def current_update(self, language=None):\n language = language or self.language\n repository_update, created = self.updates.get_or_create(\n language=language,\n training_started_at=None)\n return repository_update\n\n def last_trained_update(self, language=None):\n language = language or self.language\n return self.updates.filter(\n language=language,\n by__isnull=False).first()\n\n def get_user_authorization(self, user):\n if user.is_anonymous:\n return RepositoryAuthorization(repository=self)\n get, created = RepositoryAuthorization.objects.get_or_create(\n user=user,\n repository=self)\n return get\n\n def get_absolute_url(self):\n return '{}{}/{}/'.format(\n settings.BOTHUB_WEBAPP_BASE_URL,\n self.owner.nickname,\n self.slug)\n\n\nclass RepositoryUpdate(models.Model):\n class Meta:\n verbose_name = _('repository update')\n verbose_name_plural = _('repository updates')\n ordering = ['-created_at']\n\n repository = models.ForeignKey(\n Repository,\n models.CASCADE,\n related_name='updates')\n language = models.CharField(\n _('language'),\n max_length=5,\n validators=[\n languages.validate_language,\n ])\n created_at = models.DateTimeField(\n _('created at'),\n auto_now_add=True)\n bot_data = models.TextField(\n _('bot data'),\n blank=True,\n editable=False)\n by = models.ForeignKey(\n User,\n models.CASCADE,\n blank=True,\n null=True)\n training_started_at = models.DateTimeField(\n _('training started at'),\n blank=True,\n null=True)\n trained_at = models.DateTimeField(\n _('trained at'),\n blank=True,\n null=True)\n failed_at = models.DateTimeField(\n _('failed at'),\n blank=True,\n null=True)\n\n @property\n def examples(self):\n examples = self.repository.examples(exclude_deleted=False).filter(\n models.Q(repository_update__language=self.language) |\n models.Q(translations__language=self.language))\n if self.training_started_at:\n t_started_at = self.training_started_at\n examples = examples.exclude(\n models.Q(repository_update__created_at__gt=t_started_at) |\n models.Q(deleted_in=self) |\n models.Q(deleted_in__training_started_at__lt=t_started_at))\n else:\n examples = examples.exclude(deleted_in__isnull=False)\n return examples\n\n @property\n def ready_for_train(self):\n if self.added.exists():\n return True\n if 
self.translated_added.exists():\n return True\n if self.deleted.exists():\n return True\n return False\n\n def start_training(self, by):\n if self.trained_at:\n raise RepositoryUpdateAlreadyTrained()\n if self.training_started_at:\n raise RepositoryUpdateAlreadyStartedTraining()\n\n authorization = self.repository.get_user_authorization(by)\n if not authorization.can_write:\n raise TrainingNotAllowed()\n\n self.by = by\n self.training_started_at = timezone.now()\n self.save(\n update_fields=[\n 'by',\n 'training_started_at',\n ])\n\n def save_training(self, bot_data):\n if self.trained_at:\n raise RepositoryUpdateAlreadyTrained()\n\n self.trained_at = timezone.now()\n self.bot_data = base64.b64encode(bot_data).decode('utf8')\n self.save(\n update_fields=[\n 'trained_at',\n 'bot_data',\n ])\n\n def get_bot_data(self):\n return base64.b64decode(self.bot_data)\n\n def train_fail(self):\n self.failed_at = timezone.now() # pragma: no cover\n self.save( # pragma: no cover\n update_fields=[\n 'failed_at',\n ])\n\n\nclass RepositoryExample(models.Model):\n class Meta:\n verbose_name = _('repository example')\n verbose_name_plural = _('repository examples')\n ordering = ['-created_at']\n\n repository_update = models.ForeignKey(\n RepositoryUpdate,\n models.CASCADE,\n related_name='added',\n editable=False)\n deleted_in = models.ForeignKey(\n RepositoryUpdate,\n models.CASCADE,\n related_name='deleted',\n blank=True,\n null=True)\n text = models.TextField(\n _('text'),\n help_text=_('Example text'))\n intent = models.CharField(\n _('intent'),\n max_length=64,\n blank=True,\n help_text=_('Example intent reference'),\n validators=[validate_item_key])\n created_at = models.DateTimeField(\n _('created at'),\n auto_now_add=True)\n\n @property\n def language(self):\n return self.repository_update.language\n\n def has_valid_entities(self, language=None):\n if not language or language == self.repository_update.language:\n return True\n return self.get_translation(language).has_valid_entities\n\n def get_translation(self, language):\n try:\n return self.translations.get(language=language)\n except RepositoryTranslatedExample.DoesNotExist:\n raise DoesNotHaveTranslation()\n\n def get_text(self, language=None):\n if not language or language == self.repository_update.language:\n return self.text\n return self.get_translation(language).text\n\n def get_entities(self, language):\n if not language or language == self.repository_update.language:\n return self.entities.all()\n return self.get_translation(language).entities.all()\n\n def delete(self):\n self.deleted_in = self.repository_update.repository.current_update(\n self.repository_update.language)\n self.save(update_fields=['deleted_in'])\n\n\nclass RepositoryTranslatedExampleManager(models.Manager):\n def create(self, *args, original_example=None, language=None, **kwargs):\n repository = original_example.repository_update.repository\n return super().create(\n *args,\n repository_update=repository.current_update(language),\n original_example=original_example,\n language=language,\n **kwargs)\n\n\nclass RepositoryTranslatedExample(models.Model):\n class Meta:\n verbose_name = _('repository translated example')\n verbose_name_plural = _('repository translated examples')\n unique_together = ['original_example', 'language']\n ordering = ['-created_at']\n\n repository_update = models.ForeignKey(\n RepositoryUpdate,\n models.CASCADE,\n related_name='translated_added',\n editable=False)\n original_example = models.ForeignKey(\n RepositoryExample,\n models.CASCADE,\n 
related_name='translations',\n editable=False,\n help_text=_('Example object'))\n language = models.CharField(\n _('language'),\n max_length=5,\n help_text=_('Translation language'),\n validators=[\n languages.validate_language,\n ])\n text = models.TextField(\n _('text'),\n help_text=_('Translation text'))\n created_at = models.DateTimeField(\n _('created at'),\n auto_now_add=True)\n\n objects = RepositoryTranslatedExampleManager()\n\n def entities_list_lambda_sort(item):\n return item.get('entity')\n\n @classmethod\n def same_entities_validator(cls, a, b):\n a_len = len(a)\n if a_len != len(b):\n return False\n a_sorted = sorted(\n a,\n key=cls.entities_list_lambda_sort)\n b_sorted = sorted(\n b,\n key=cls.entities_list_lambda_sort)\n for i in range(a_len):\n if a_sorted[i].get('entity') != b_sorted[i].get('entity'):\n return False\n return True\n\n @classmethod\n def count_entities(cls, entities_list, to_str=False):\n r = {}\n for e in entities_list:\n r.update({e.get('entity'): r.get('entity', 0) + 1})\n if to_str:\n r = ', '.join(map(\n lambda x: '{} {}'.format(x[1], x[0]),\n r.items())) if entities_list else 'no entities'\n return r\n\n @property\n def has_valid_entities(self):\n original_entities = self.original_example.entities.all()\n my_entities = self.entities.all()\n return RepositoryTranslatedExample.same_entities_validator(\n list(map(lambda x: x.to_dict, original_entities)),\n list(map(lambda x: x.to_dict, my_entities)))\n\n\nclass RepositoryEntityLabelQueryset(models.QuerySet):\n def get(self, repository, value):\n try:\n return super().get(\n repository=repository,\n value=value)\n except self.model.DoesNotExist as e:\n return super().create(\n repository=repository,\n value=value)\n\n\nclass RepositoryEntityLabelManager(models.Manager):\n def get_queryset(self):\n return RepositoryEntityLabelQueryset(self.model, using=self._db)\n\n\nclass RepositoryEntityLabel(models.Model):\n class Meta:\n unique_together = ['repository', 'value']\n\n repository = models.ForeignKey(\n Repository,\n on_delete=models.CASCADE,\n related_name='labels')\n value = models.CharField(\n _('label'),\n max_length=64,\n validators=[\n validate_item_key,\n can_t_be_other,\n ],\n blank=True)\n created_at = models.DateTimeField(\n _('created at'),\n auto_now_add=True)\n\n objects = RepositoryEntityLabelManager()\n\n\nclass RepositoryEntityQueryset(models.QuerySet):\n def get(self, repository, value):\n try:\n return super().get(\n repository=repository,\n value=value)\n except self.model.DoesNotExist as e:\n return super().create(\n repository=repository,\n value=value)\n\n\nclass RepositoryEntityManager(models.Manager):\n def get_queryset(self):\n return RepositoryEntityQueryset(self.model, using=self._db)\n\n\nclass RepositoryEntity(models.Model):\n class Meta:\n unique_together = ['repository', 'value']\n\n repository = models.ForeignKey(\n Repository,\n on_delete=models.CASCADE,\n related_name='entities')\n value = models.CharField(\n _('entity'),\n max_length=64,\n help_text=_('Entity name'),\n validators=[validate_item_key])\n label = models.ForeignKey(\n RepositoryEntityLabel,\n on_delete=models.CASCADE,\n related_name='entities',\n null=True,\n blank=True)\n created_at = models.DateTimeField(\n _('created at'),\n auto_now_add=True)\n\n objects = RepositoryEntityManager()\n\n def set_label(self, value):\n if not value:\n self.label = None\n else:\n self.label = RepositoryEntityLabel.objects.get(\n repository=self.repository,\n value=value)\n\n\nclass EntityBaseQueryset(models.QuerySet):\n def 
create(self, entity, **kwargs):\n if type(entity) is not RepositoryEntity:\n instance = self.model(**kwargs)\n repository = instance.example.repository_update.repository\n entity = RepositoryEntity.objects.get(\n repository=repository,\n value=entity)\n return super().create(\n entity=entity,\n **kwargs)\n\n\nclass EntityBaseManager(models.Manager):\n def get_queryset(self):\n return EntityBaseQueryset(self.model, using=self._db)\n\n\nclass EntityBase(models.Model):\n class Meta:\n verbose_name = _('repository example entity')\n verbose_name_plural = _('repository example entities')\n abstract = True\n\n start = models.PositiveIntegerField(\n _('start'),\n help_text=_('Start index of entity value in example text'))\n end = models.PositiveIntegerField(\n _('end'),\n help_text=_('End index of entity value in example text'))\n entity = models.ForeignKey(\n RepositoryEntity,\n on_delete=models.CASCADE)\n created_at = models.DateTimeField(\n _('created at'),\n auto_now_add=True)\n\n objects = EntityBaseManager()\n\n @property\n def example(self):\n return self.get_example()\n\n @property\n def value(self):\n return self.example.text[self.start:self.end]\n\n @property\n def rasa_nlu_data(self):\n return {\n 'start': self.start,\n 'end': self.end,\n 'value': self.value,\n 'entity': self.entity.value,\n }\n\n @property\n def to_dict(self):\n return self.get_rasa_nlu_data()\n\n def get_example(self):\n pass # pragma: no cover\n\n def get_rasa_nlu_data(self, label_as_entity=False):\n return {\n 'start': self.start,\n 'end': self.end,\n 'entity': self.entity.label.value\n if label_as_entity else self.entity.value,\n }\n\n\nclass RepositoryExampleEntity(EntityBase):\n repository_example = models.ForeignKey(\n RepositoryExample,\n models.CASCADE,\n related_name='entities',\n editable=False,\n help_text=_('Example object'))\n\n def get_example(self):\n return self.repository_example\n\n\nclass RepositoryTranslatedExampleEntity(EntityBase):\n repository_translated_example = models.ForeignKey(\n RepositoryTranslatedExample,\n models.CASCADE,\n related_name='entities',\n editable=False,\n help_text=_('Translated example object'))\n\n def get_example(self):\n return self.repository_translated_example\n\n\nclass RepositoryAuthorization(models.Model):\n class Meta:\n verbose_name = _('repository authorization')\n verbose_name_plural = _('repository authorizations')\n unique_together = ['user', 'repository']\n\n LEVEL_NOTHING = 0\n LEVEL_READER = 1\n LEVEL_CONTRIBUTOR = 2\n LEVEL_ADMIN = 3\n\n ROLE_NOT_SETTED = 0\n ROLE_USER = 1\n ROLE_CONTRIBUTOR = 2\n ROLE_ADMIN = 3\n\n ROLE_CHOICES = [\n (ROLE_NOT_SETTED, _('not set')),\n (ROLE_USER, _('user')),\n (ROLE_CONTRIBUTOR, _('contributor')),\n (ROLE_ADMIN, _('admin')),\n ]\n\n uuid = models.UUIDField(\n _('UUID'),\n primary_key=True,\n default=uuid.uuid4,\n editable=False)\n user = models.ForeignKey(\n User,\n models.CASCADE)\n repository = models.ForeignKey(\n Repository,\n models.CASCADE,\n related_name='authorizations')\n role = models.PositiveIntegerField(\n _('role'),\n choices=ROLE_CHOICES,\n default=ROLE_NOT_SETTED)\n created_at = models.DateTimeField(\n _('created at'),\n auto_now_add=True)\n\n @property\n def level(self):\n try:\n user = self.user\n except User.DoesNotExist:\n user = None\n\n if user and self.repository.owner == user:\n return RepositoryAuthorization.LEVEL_ADMIN\n\n if self.role == RepositoryAuthorization.ROLE_NOT_SETTED:\n if self.repository.is_private:\n return RepositoryAuthorization.LEVEL_NOTHING\n return 
RepositoryAuthorization.LEVEL_READER\n\n if self.role == RepositoryAuthorization.ROLE_USER:\n return RepositoryAuthorization.LEVEL_READER\n\n if self.role == RepositoryAuthorization.ROLE_CONTRIBUTOR:\n return RepositoryAuthorization.LEVEL_CONTRIBUTOR\n\n if self.role == RepositoryAuthorization.ROLE_ADMIN:\n return RepositoryAuthorization.LEVEL_ADMIN\n\n return RepositoryAuthorization.LEVEL_NOTHING\n\n @property\n def can_read(self):\n return self.level in [\n RepositoryAuthorization.LEVEL_READER,\n RepositoryAuthorization.LEVEL_CONTRIBUTOR,\n RepositoryAuthorization.LEVEL_ADMIN,\n ]\n\n @property\n def can_contribute(self):\n return self.level in [\n RepositoryAuthorization.LEVEL_CONTRIBUTOR,\n RepositoryAuthorization.LEVEL_ADMIN,\n ]\n\n @property\n def can_write(self):\n return self.level in [\n RepositoryAuthorization.LEVEL_ADMIN,\n ]\n\n @property\n def is_admin(self):\n return self.level == RepositoryAuthorization.LEVEL_ADMIN\n\n @property\n def is_owner(self):\n try:\n user = self.user\n except User.DoesNotExist:\n return False\n return self.repository.owner == user\n\n @property\n def role_verbose(self):\n return dict(RepositoryAuthorization.ROLE_CHOICES).get(self.role)\n\n def send_new_role_email(self, responsible=None):\n if not settings.SEND_EMAILS:\n return False\n responsible_name = responsible and responsible.name \\\n or self.repository.owner.name\n context = {\n 'responsible_name': responsible_name,\n 'user_name': self.user.name,\n 'repository_name': self.repository.name,\n 'repository_url': self.repository.get_absolute_url(),\n 'new_role': self.role_verbose,\n }\n send_mail(\n _('New role in {}').format(self.repository.name),\n render_to_string(\n 'common/emails/new_role.txt',\n context),\n None,\n [self.user.email],\n html_message=render_to_string(\n 'common/emails/new_role.html',\n context))\n\n\nclass RepositoryVote(models.Model):\n UP_VOTE = 1\n DOWN_VOTE = -1\n NEUTRAL_VOTE = 0\n VOTE_CHOICES = [\n (UP_VOTE, _('Up'),),\n (DOWN_VOTE, _('Down')),\n (NEUTRAL_VOTE, _('Neutral')),\n ]\n\n class Meta:\n verbose_name = _('repository vote')\n verbose_name_plural = _('repository votes')\n unique_together = [\n 'user',\n 'repository',\n ]\n\n user = models.ForeignKey(\n User,\n models.CASCADE,\n related_name='repository_votes')\n repository = models.ForeignKey(\n Repository,\n models.CASCADE,\n related_name='votes')\n vote = models.IntegerField(\n _('vote'),\n choices=VOTE_CHOICES)\n\n\nclass RequestRepositoryAuthorization(models.Model):\n class Meta:\n unique_together = ['user', 'repository']\n\n user = models.ForeignKey(\n User,\n models.CASCADE,\n related_name='requests')\n repository = models.ForeignKey(\n Repository,\n models.CASCADE,\n related_name='requests')\n text = models.CharField(\n _('text'),\n max_length=250)\n approved_by = models.ForeignKey(\n User,\n models.CASCADE,\n blank=True,\n null=True)\n created_at = models.DateTimeField(\n _('created at'),\n auto_now_add=True,\n editable=False)\n\n def send_new_request_email_to_admins(self):\n if not settings.SEND_EMAILS:\n return False\n context = {\n 'user_name': self.user.name,\n 'repository_name': self.repository.name,\n 'text': self.text,\n 'repository_url': self.repository.get_absolute_url(),\n }\n for admin in self.repository.admins:\n send_mail(\n _('New authorization request in {}').format(\n self.repository.name),\n render_to_string(\n 'common/emails/new_request.txt',\n context),\n None,\n [admin.email],\n html_message=render_to_string(\n 'common/emails/new_request.html',\n context))\n\n def 
send_request_rejected_email(self):\n if not settings.SEND_EMAILS:\n return False\n context = {\n 'repository_name': self.repository.name,\n }\n send_mail(\n _('Access denied to {}').format(\n self.repository.name),\n render_to_string(\n 'common/emails/request_rejected.txt',\n context),\n None,\n [self.user.email],\n html_message=render_to_string(\n 'common/emails/request_rejected.html',\n context))\n\n def send_request_approved_email(self):\n if not settings.SEND_EMAILS:\n return False\n context = {\n 'admin_name': self.approved_by.name,\n 'repository_name': self.repository.name,\n }\n send_mail(\n _('Authorization Request Approved to {}').format(\n self.repository.name),\n render_to_string(\n 'common/emails/request_approved.txt',\n context),\n None,\n [self.user.email],\n html_message=render_to_string(\n 'common/emails/request_approved.html',\n context))\n\n\n@receiver(models.signals.pre_save, sender=RequestRepositoryAuthorization)\ndef set_user_role_on_approved(instance, **kwargs):\n current = None\n try:\n current = RequestRepositoryAuthorization.objects.get(pk=instance.pk)\n except RequestRepositoryAuthorization.DoesNotExist as e:\n pass\n\n if not current:\n return False\n\n if current.approved_by is None and \\\n current.approved_by is not instance.approved_by:\n user_authorization = instance.repository.get_user_authorization(\n instance.user)\n user_authorization.role = RepositoryAuthorization.ROLE_USER\n user_authorization.save(update_fields=['role'])\n instance.send_request_approved_email()\n else:\n raise ValidationError(\n _('You can change approved_by just one time.'))\n\n\n@receiver(models.signals.post_save, sender=RequestRepositoryAuthorization)\ndef send_new_request_email_to_admins_on_created(instance, created, **kwargs):\n if created:\n instance.send_new_request_email_to_admins()\n\n\n@receiver(models.signals.post_delete, sender=RequestRepositoryAuthorization)\ndef send_request_rejected_email(instance, **kwargs):\n instance.send_request_rejected_email()\n", "path": "bothub/common/models.py" } ]
[ { "content": "import uuid\nimport base64\nimport requests\n\nfrom django.db import models\nfrom django.utils.translation import gettext as _\nfrom django.utils import timezone\nfrom django.conf import settings\nfrom django.core.validators import RegexValidator, _lazy_re_compile\nfrom django.core.mail import send_mail\nfrom django.template.loader import render_to_string\nfrom django.dispatch import receiver\nfrom django.core.exceptions import ValidationError\n\nfrom bothub.authentication.models import User\n\nfrom . import languages\nfrom .exceptions import RepositoryUpdateAlreadyStartedTraining\nfrom .exceptions import RepositoryUpdateAlreadyTrained\nfrom .exceptions import TrainingNotAllowed\nfrom .exceptions import DoesNotHaveTranslation\n\n\nitem_key_regex = _lazy_re_compile(r'^[-a-z0-9_]+\\Z')\nvalidate_item_key = RegexValidator(\n item_key_regex,\n _('Enter a valid value consisting of lowercase letters, numbers, ' +\n 'underscores or hyphens.'),\n 'invalid'\n)\n\n\ndef can_t_be_other(value):\n if value == 'other':\n raise ValidationError(_('The label can\\'t be named as \"other\"'))\n\n\nclass RepositoryCategory(models.Model):\n class Meta:\n verbose_name = _('repository category')\n verbose_name_plural = _('repository categories')\n\n name = models.CharField(\n _('name'),\n max_length=32)\n\n def __str__(self):\n return self.name # pragma: no cover\n\n\nclass RepositoryQuerySet(models.QuerySet):\n def publics(self):\n return self.filter(is_private=False)\n\n def order_by_relevance(self):\n return self \\\n .annotate(votes_summ=models.Sum('votes__vote')) \\\n .annotate(examples_sum=models.Sum('updates__added')) \\\n .order_by('-votes_summ', '-examples_sum', '-created_at')\n\n\nclass RepositoryManager(models.Manager):\n def get_queryset(self):\n return RepositoryQuerySet(self.model, using=self._db)\n\n\nclass Repository(models.Model):\n class Meta:\n verbose_name = _('repository')\n verbose_name_plural = _('repositories')\n unique_together = ['owner', 'slug']\n\n CATEGORIES_HELP_TEXT = _('Categories for approaching repositories with ' +\n 'the same purpose')\n DESCRIPTION_HELP_TEXT = _('Tell what your bot do!')\n\n uuid = models.UUIDField(\n _('UUID'),\n primary_key=True,\n default=uuid.uuid4,\n editable=False)\n owner = models.ForeignKey(\n User,\n models.CASCADE)\n name = models.CharField(\n _('name'),\n max_length=64,\n help_text=_('Repository display name'))\n slug = models.SlugField(\n _('slug'),\n max_length=32,\n help_text=_('Easy way to found and share repositories'))\n language = models.CharField(\n _('language'),\n max_length=5,\n help_text=_('Repository\\'s examples language. 
The examples can be ' +\n 'translated to other languages.'),\n validators=[\n languages.validate_language,\n ])\n categories = models.ManyToManyField(\n RepositoryCategory,\n help_text=CATEGORIES_HELP_TEXT)\n description = models.TextField(\n _('description'),\n blank=True,\n help_text=DESCRIPTION_HELP_TEXT)\n is_private = models.BooleanField(\n _('private'),\n default=False,\n help_text=_('Your repository can be private, only you can see and' +\n ' use, or can be public and all community can see and ' +\n 'use.'))\n created_at = models.DateTimeField(\n _('created at'),\n auto_now_add=True)\n\n objects = RepositoryManager()\n\n nlp_train_url = '{}train/'.format(settings.BOTHUB_NLP_BASE_URL)\n nlp_analyze_url = '{}parse/'.format(settings.BOTHUB_NLP_BASE_URL)\n\n @classmethod\n def request_nlp_train(cls, user_authorization):\n r = requests.post( # pragma: no cover\n cls.nlp_train_url,\n data={},\n headers={'Authorization': 'Bearer {}'.format(\n user_authorization.uuid)})\n return r # pragma: no cover\n\n @classmethod\n def request_nlp_analyze(cls, user_authorization, data):\n r = requests.post( # pragma: no cover\n cls.nlp_analyze_url,\n data={\n 'text': data.get('text'),\n 'language': data.get('language'),\n },\n headers={'Authorization': 'Bearer {}'.format(\n user_authorization.uuid)})\n return r # pragma: no cover\n\n @property\n def available_languages(self):\n examples = self.examples()\n examples_languages = examples.values_list(\n 'repository_update__language',\n flat=True)\n translations_languages = examples.annotate(\n translations_count=models.Count('translations')).filter(\n translations_count__gt=0).values_list(\n 'translations__language',\n flat=True)\n return list(set(\n [self.language] +\n list(examples_languages) +\n list(translations_languages)))\n\n @property\n def languages_status(self):\n return dict(\n map(\n lambda language: (\n language,\n self.language_status(language)),\n settings.SUPPORTED_LANGUAGES.keys(),\n ))\n\n @property\n def ready_for_train(self):\n updates = self.updates.filter(training_started_at=None)\n\n if RepositoryExample.objects.filter(\n models.Q(repository_update__in=updates) |\n models.Q(deleted_in__in=updates)).exists():\n return True\n\n if RepositoryTranslatedExample.objects.filter(\n repository_update__in=updates).exists():\n return True\n\n return False\n\n @property\n def votes_sum(self):\n return self.votes.aggregate(\n votes_sum=models.Sum('vote')).get('votes_sum')\n\n @property\n def intents(self):\n return list(set(self.examples(\n exclude_deleted=True).exclude(\n intent='').values_list(\n 'intent',\n flat=True)))\n\n @property\n def current_entities(self):\n return self.entities.filter(value__in=self.examples(\n exclude_deleted=True).exclude(\n entities__entity__value__isnull=True).values_list(\n 'entities__entity__value',\n flat=True).distinct())\n\n @property\n def entities_list(self):\n return self.current_entities.values_list(\n 'value',\n flat=True).distinct()\n\n @property\n def current_labels(self):\n return self.labels.filter(entities__value__in=self.examples(\n exclude_deleted=True).exclude(\n entities__entity__value__isnull=True).values_list(\n 'entities__entity__value',\n flat=True).distinct())\n\n @property\n def labels_list(self):\n return self.current_labels.values_list(\n 'value',\n flat=True).distinct()\n\n @property\n def admins(self):\n admins = [self.owner] + [\n authorization.user for authorization in\n self.authorizations.filter(role=RepositoryAuthorization.ROLE_ADMIN)\n ]\n return list(set(admins))\n\n def 
examples(self, language=None, exclude_deleted=True, queryset=None):\n if queryset is None:\n queryset = RepositoryExample.objects\n query = queryset.filter(\n repository_update__repository=self)\n if language:\n query = query.filter(\n repository_update__language=language)\n if exclude_deleted:\n return query.exclude(deleted_in__isnull=False)\n return query\n\n def language_status(self, language):\n is_base_language = self.language == language\n examples = self.examples(language)\n base_examples = self.examples(self.language)\n base_translations = RepositoryTranslatedExample.objects.filter(\n original_example__in=base_examples,\n language=language)\n\n examples_count = examples.count()\n base_examples_count = base_examples.count()\n base_translations_count = base_translations.count()\n base_translations_percentage = (\n base_translations_count / (\n base_examples_count if base_examples_count > 0 else 1)) * 100\n\n return {\n 'is_base_language': is_base_language,\n 'examples': {\n 'count': examples_count,\n 'entities': list(\n set(\n filter(\n lambda x: x,\n examples.values_list(\n 'entities__entity',\n flat=True).distinct()))),\n },\n 'base_translations': {\n 'count': base_translations_count,\n 'percentage': base_translations_percentage,\n },\n }\n\n def current_update(self, language=None):\n language = language or self.language\n repository_update, created = self.updates.get_or_create(\n language=language,\n training_started_at=None)\n return repository_update\n\n def last_trained_update(self, language=None):\n language = language or self.language\n return self.updates.filter(\n language=language,\n by__isnull=False).first()\n\n def get_user_authorization(self, user):\n if user.is_anonymous:\n return RepositoryAuthorization(repository=self)\n get, created = RepositoryAuthorization.objects.get_or_create(\n user=user,\n repository=self)\n return get\n\n def get_absolute_url(self):\n return '{}{}/{}/'.format(\n settings.BOTHUB_WEBAPP_BASE_URL,\n self.owner.nickname,\n self.slug)\n\n\nclass RepositoryUpdate(models.Model):\n class Meta:\n verbose_name = _('repository update')\n verbose_name_plural = _('repository updates')\n ordering = ['-created_at']\n\n repository = models.ForeignKey(\n Repository,\n models.CASCADE,\n related_name='updates')\n language = models.CharField(\n _('language'),\n max_length=5,\n validators=[\n languages.validate_language,\n ])\n created_at = models.DateTimeField(\n _('created at'),\n auto_now_add=True)\n bot_data = models.TextField(\n _('bot data'),\n blank=True,\n editable=False)\n by = models.ForeignKey(\n User,\n models.CASCADE,\n blank=True,\n null=True)\n training_started_at = models.DateTimeField(\n _('training started at'),\n blank=True,\n null=True)\n trained_at = models.DateTimeField(\n _('trained at'),\n blank=True,\n null=True)\n failed_at = models.DateTimeField(\n _('failed at'),\n blank=True,\n null=True)\n\n @property\n def examples(self):\n examples = self.repository.examples(exclude_deleted=False).filter(\n models.Q(repository_update__language=self.language) |\n models.Q(translations__language=self.language))\n if self.training_started_at:\n t_started_at = self.training_started_at\n examples = examples.exclude(\n models.Q(repository_update__created_at__gt=t_started_at) |\n models.Q(deleted_in=self) |\n models.Q(deleted_in__training_started_at__lt=t_started_at))\n else:\n examples = examples.exclude(deleted_in__isnull=False)\n return examples\n\n @property\n def ready_for_train(self):\n if self.added.exists():\n return True\n if 
self.translated_added.exists():\n return True\n if self.deleted.exists():\n return True\n return False\n\n def start_training(self, by):\n if self.trained_at:\n raise RepositoryUpdateAlreadyTrained()\n if self.training_started_at:\n raise RepositoryUpdateAlreadyStartedTraining()\n\n authorization = self.repository.get_user_authorization(by)\n if not authorization.can_write:\n raise TrainingNotAllowed()\n\n self.by = by\n self.training_started_at = timezone.now()\n self.save(\n update_fields=[\n 'by',\n 'training_started_at',\n ])\n\n def save_training(self, bot_data):\n if self.trained_at:\n raise RepositoryUpdateAlreadyTrained()\n\n self.trained_at = timezone.now()\n self.bot_data = base64.b64encode(bot_data).decode('utf8')\n self.save(\n update_fields=[\n 'trained_at',\n 'bot_data',\n ])\n\n def get_bot_data(self):\n return base64.b64decode(self.bot_data)\n\n def train_fail(self):\n self.failed_at = timezone.now() # pragma: no cover\n self.save( # pragma: no cover\n update_fields=[\n 'failed_at',\n ])\n\n\nclass RepositoryExample(models.Model):\n class Meta:\n verbose_name = _('repository example')\n verbose_name_plural = _('repository examples')\n ordering = ['-created_at']\n\n repository_update = models.ForeignKey(\n RepositoryUpdate,\n models.CASCADE,\n related_name='added',\n editable=False)\n deleted_in = models.ForeignKey(\n RepositoryUpdate,\n models.CASCADE,\n related_name='deleted',\n blank=True,\n null=True)\n text = models.TextField(\n _('text'),\n help_text=_('Example text'))\n intent = models.CharField(\n _('intent'),\n max_length=64,\n blank=True,\n help_text=_('Example intent reference'),\n validators=[validate_item_key])\n created_at = models.DateTimeField(\n _('created at'),\n auto_now_add=True)\n\n @property\n def language(self):\n return self.repository_update.language\n\n def has_valid_entities(self, language=None):\n if not language or language == self.repository_update.language:\n return True\n return self.get_translation(language).has_valid_entities\n\n def get_translation(self, language):\n try:\n return self.translations.get(language=language)\n except RepositoryTranslatedExample.DoesNotExist:\n raise DoesNotHaveTranslation()\n\n def get_text(self, language=None):\n if not language or language == self.repository_update.language:\n return self.text\n return self.get_translation(language).text\n\n def get_entities(self, language):\n if not language or language == self.repository_update.language:\n return self.entities.all()\n return self.get_translation(language).entities.all()\n\n def delete(self):\n self.deleted_in = self.repository_update.repository.current_update(\n self.repository_update.language)\n self.save(update_fields=['deleted_in'])\n\n\nclass RepositoryTranslatedExampleManager(models.Manager):\n def create(self, *args, original_example=None, language=None, **kwargs):\n repository = original_example.repository_update.repository\n return super().create(\n *args,\n repository_update=repository.current_update(language),\n original_example=original_example,\n language=language,\n **kwargs)\n\n\nclass RepositoryTranslatedExample(models.Model):\n class Meta:\n verbose_name = _('repository translated example')\n verbose_name_plural = _('repository translated examples')\n unique_together = ['original_example', 'language']\n ordering = ['-created_at']\n\n repository_update = models.ForeignKey(\n RepositoryUpdate,\n models.CASCADE,\n related_name='translated_added',\n editable=False)\n original_example = models.ForeignKey(\n RepositoryExample,\n models.CASCADE,\n 
related_name='translations',\n editable=False,\n help_text=_('Example object'))\n language = models.CharField(\n _('language'),\n max_length=5,\n help_text=_('Translation language'),\n validators=[\n languages.validate_language,\n ])\n text = models.TextField(\n _('text'),\n help_text=_('Translation text'))\n created_at = models.DateTimeField(\n _('created at'),\n auto_now_add=True)\n\n objects = RepositoryTranslatedExampleManager()\n\n def entities_list_lambda_sort(item):\n return item.get('entity')\n\n @classmethod\n def same_entities_validator(cls, a, b):\n a_len = len(a)\n if a_len != len(b):\n return False\n a_sorted = sorted(\n a,\n key=cls.entities_list_lambda_sort)\n b_sorted = sorted(\n b,\n key=cls.entities_list_lambda_sort)\n for i in range(a_len):\n if a_sorted[i].get('entity') != b_sorted[i].get('entity'):\n return False\n return True\n\n @classmethod\n def count_entities(cls, entities_list, to_str=False):\n r = {}\n for e in entities_list:\n r.update({e.get('entity'): r.get('entity', 0) + 1})\n if to_str:\n r = ', '.join(map(\n lambda x: '{} {}'.format(x[1], x[0]),\n r.items())) if entities_list else 'no entities'\n return r\n\n @property\n def has_valid_entities(self):\n original_entities = self.original_example.entities.all()\n my_entities = self.entities.all()\n return RepositoryTranslatedExample.same_entities_validator(\n list(map(lambda x: x.to_dict, original_entities)),\n list(map(lambda x: x.to_dict, my_entities)))\n\n\nclass RepositoryEntityLabelQueryset(models.QuerySet):\n def get(self, repository, value):\n try:\n return super().get(\n repository=repository,\n value=value)\n except self.model.DoesNotExist as e:\n return super().create(\n repository=repository,\n value=value)\n\n\nclass RepositoryEntityLabelManager(models.Manager):\n def get_queryset(self):\n return RepositoryEntityLabelQueryset(self.model, using=self._db)\n\n\nclass RepositoryEntityLabel(models.Model):\n class Meta:\n unique_together = ['repository', 'value']\n\n repository = models.ForeignKey(\n Repository,\n on_delete=models.CASCADE,\n related_name='labels')\n value = models.CharField(\n _('label'),\n max_length=64,\n validators=[\n validate_item_key,\n can_t_be_other,\n ],\n blank=True)\n created_at = models.DateTimeField(\n _('created at'),\n auto_now_add=True)\n\n objects = RepositoryEntityLabelManager()\n\n\nclass RepositoryEntityQueryset(models.QuerySet):\n def get(self, repository, value):\n try:\n return super().get(\n repository=repository,\n value=value)\n except self.model.DoesNotExist as e:\n return super().create(\n repository=repository,\n value=value)\n\n\nclass RepositoryEntityManager(models.Manager):\n def get_queryset(self):\n return RepositoryEntityQueryset(self.model, using=self._db)\n\n\nclass RepositoryEntity(models.Model):\n class Meta:\n unique_together = ['repository', 'value']\n\n repository = models.ForeignKey(\n Repository,\n on_delete=models.CASCADE,\n related_name='entities')\n value = models.CharField(\n _('entity'),\n max_length=64,\n help_text=_('Entity name'),\n validators=[validate_item_key])\n label = models.ForeignKey(\n RepositoryEntityLabel,\n on_delete=models.CASCADE,\n related_name='entities',\n null=True,\n blank=True)\n created_at = models.DateTimeField(\n _('created at'),\n auto_now_add=True)\n\n objects = RepositoryEntityManager()\n\n def set_label(self, value):\n if not value:\n self.label = None\n else:\n self.label = RepositoryEntityLabel.objects.get(\n repository=self.repository,\n value=value)\n\n\nclass EntityBaseQueryset(models.QuerySet):\n def 
create(self, entity, **kwargs):\n if type(entity) is not RepositoryEntity:\n instance = self.model(**kwargs)\n repository = instance.example.repository_update.repository\n entity = RepositoryEntity.objects.get(\n repository=repository,\n value=entity)\n return super().create(\n entity=entity,\n **kwargs)\n\n\nclass EntityBaseManager(models.Manager):\n def get_queryset(self):\n return EntityBaseQueryset(self.model, using=self._db)\n\n\nclass EntityBase(models.Model):\n class Meta:\n verbose_name = _('repository example entity')\n verbose_name_plural = _('repository example entities')\n abstract = True\n\n start = models.PositiveIntegerField(\n _('start'),\n help_text=_('Start index of entity value in example text'))\n end = models.PositiveIntegerField(\n _('end'),\n help_text=_('End index of entity value in example text'))\n entity = models.ForeignKey(\n RepositoryEntity,\n on_delete=models.CASCADE)\n created_at = models.DateTimeField(\n _('created at'),\n auto_now_add=True)\n\n objects = EntityBaseManager()\n\n @property\n def example(self):\n return self.get_example()\n\n @property\n def value(self):\n return self.example.text[self.start:self.end]\n\n @property\n def rasa_nlu_data(self):\n return {\n 'start': self.start,\n 'end': self.end,\n 'value': self.value,\n 'entity': self.entity.value,\n }\n\n @property\n def to_dict(self):\n return self.get_rasa_nlu_data()\n\n def get_example(self):\n pass # pragma: no cover\n\n def get_rasa_nlu_data(self, label_as_entity=False):\n return {\n 'start': self.start,\n 'end': self.end,\n 'entity': self.entity.label.value\n if label_as_entity else self.entity.value,\n }\n\n\nclass RepositoryExampleEntity(EntityBase):\n repository_example = models.ForeignKey(\n RepositoryExample,\n models.CASCADE,\n related_name='entities',\n editable=False,\n help_text=_('Example object'))\n\n def get_example(self):\n return self.repository_example\n\n\nclass RepositoryTranslatedExampleEntity(EntityBase):\n repository_translated_example = models.ForeignKey(\n RepositoryTranslatedExample,\n models.CASCADE,\n related_name='entities',\n editable=False,\n help_text=_('Translated example object'))\n\n def get_example(self):\n return self.repository_translated_example\n\n\nclass RepositoryAuthorization(models.Model):\n class Meta:\n verbose_name = _('repository authorization')\n verbose_name_plural = _('repository authorizations')\n unique_together = ['user', 'repository']\n\n LEVEL_NOTHING = 0\n LEVEL_READER = 1\n LEVEL_CONTRIBUTOR = 2\n LEVEL_ADMIN = 3\n\n ROLE_NOT_SETTED = 0\n ROLE_USER = 1\n ROLE_CONTRIBUTOR = 2\n ROLE_ADMIN = 3\n\n ROLE_CHOICES = [\n (ROLE_NOT_SETTED, _('not set')),\n (ROLE_USER, _('user')),\n (ROLE_CONTRIBUTOR, _('contributor')),\n (ROLE_ADMIN, _('admin')),\n ]\n\n uuid = models.UUIDField(\n _('UUID'),\n primary_key=True,\n default=uuid.uuid4,\n editable=False)\n user = models.ForeignKey(\n User,\n models.CASCADE)\n repository = models.ForeignKey(\n Repository,\n models.CASCADE,\n related_name='authorizations')\n role = models.PositiveIntegerField(\n _('role'),\n choices=ROLE_CHOICES,\n default=ROLE_NOT_SETTED)\n created_at = models.DateTimeField(\n _('created at'),\n auto_now_add=True)\n\n @property\n def level(self):\n try:\n user = self.user\n except User.DoesNotExist:\n user = None\n\n if user and self.repository.owner == user:\n return RepositoryAuthorization.LEVEL_ADMIN\n\n if self.role == RepositoryAuthorization.ROLE_NOT_SETTED:\n if self.repository.is_private:\n return RepositoryAuthorization.LEVEL_NOTHING\n return 
RepositoryAuthorization.LEVEL_READER\n\n if self.role == RepositoryAuthorization.ROLE_USER:\n return RepositoryAuthorization.LEVEL_READER\n\n if self.role == RepositoryAuthorization.ROLE_CONTRIBUTOR:\n return RepositoryAuthorization.LEVEL_CONTRIBUTOR\n\n if self.role == RepositoryAuthorization.ROLE_ADMIN:\n return RepositoryAuthorization.LEVEL_ADMIN\n\n return RepositoryAuthorization.LEVEL_NOTHING\n\n @property\n def can_read(self):\n return self.level in [\n RepositoryAuthorization.LEVEL_READER,\n RepositoryAuthorization.LEVEL_CONTRIBUTOR,\n RepositoryAuthorization.LEVEL_ADMIN,\n ]\n\n @property\n def can_contribute(self):\n return self.level in [\n RepositoryAuthorization.LEVEL_CONTRIBUTOR,\n RepositoryAuthorization.LEVEL_ADMIN,\n ]\n\n @property\n def can_write(self):\n return self.level in [\n RepositoryAuthorization.LEVEL_ADMIN,\n ]\n\n @property\n def is_admin(self):\n return self.level == RepositoryAuthorization.LEVEL_ADMIN\n\n @property\n def is_owner(self):\n try:\n user = self.user\n except User.DoesNotExist:\n return False\n return self.repository.owner == user\n\n @property\n def role_verbose(self):\n return dict(RepositoryAuthorization.ROLE_CHOICES).get(self.role)\n\n def send_new_role_email(self, responsible=None):\n if not settings.SEND_EMAILS:\n return False\n responsible_name = responsible and responsible.name \\\n or self.repository.owner.name\n context = {\n 'responsible_name': responsible_name,\n 'user_name': self.user.name,\n 'repository_name': self.repository.name,\n 'repository_url': self.repository.get_absolute_url(),\n 'new_role': self.role_verbose,\n }\n send_mail(\n _('New role in {}').format(self.repository.name),\n render_to_string(\n 'common/emails/new_role.txt',\n context),\n None,\n [self.user.email],\n html_message=render_to_string(\n 'common/emails/new_role.html',\n context))\n\n\nclass RepositoryVote(models.Model):\n UP_VOTE = 1\n DOWN_VOTE = -1\n NEUTRAL_VOTE = 0\n VOTE_CHOICES = [\n (UP_VOTE, _('Up'),),\n (DOWN_VOTE, _('Down')),\n (NEUTRAL_VOTE, _('Neutral')),\n ]\n\n class Meta:\n verbose_name = _('repository vote')\n verbose_name_plural = _('repository votes')\n unique_together = [\n 'user',\n 'repository',\n ]\n\n user = models.ForeignKey(\n User,\n models.CASCADE,\n related_name='repository_votes')\n repository = models.ForeignKey(\n Repository,\n models.CASCADE,\n related_name='votes')\n vote = models.IntegerField(\n _('vote'),\n choices=VOTE_CHOICES)\n\n\nclass RequestRepositoryAuthorization(models.Model):\n class Meta:\n unique_together = ['user', 'repository']\n\n user = models.ForeignKey(\n User,\n models.CASCADE,\n related_name='requests')\n repository = models.ForeignKey(\n Repository,\n models.CASCADE,\n related_name='requests')\n text = models.CharField(\n _('text'),\n max_length=250)\n approved_by = models.ForeignKey(\n User,\n models.CASCADE,\n blank=True,\n null=True)\n created_at = models.DateTimeField(\n _('created at'),\n auto_now_add=True,\n editable=False)\n\n def send_new_request_email_to_admins(self):\n if not settings.SEND_EMAILS:\n return False\n context = {\n 'user_name': self.user.name,\n 'repository_name': self.repository.name,\n 'text': self.text,\n 'repository_url': self.repository.get_absolute_url(),\n }\n for admin in self.repository.admins:\n send_mail(\n _('New authorization request in {}').format(\n self.repository.name),\n render_to_string(\n 'common/emails/new_request.txt',\n context),\n None,\n [admin.email],\n html_message=render_to_string(\n 'common/emails/new_request.html',\n context))\n\n def 
send_request_rejected_email(self):\n if not settings.SEND_EMAILS:\n return False\n context = {\n 'repository_name': self.repository.name,\n }\n send_mail(\n _('Access denied to {}').format(\n self.repository.name),\n render_to_string(\n 'common/emails/request_rejected.txt',\n context),\n None,\n [self.user.email],\n html_message=render_to_string(\n 'common/emails/request_rejected.html',\n context))\n\n def send_request_approved_email(self):\n if not settings.SEND_EMAILS:\n return False\n context = {\n 'admin_name': self.approved_by.name,\n 'repository_name': self.repository.name,\n }\n send_mail(\n _('Authorization Request Approved to {}').format(\n self.repository.name),\n render_to_string(\n 'common/emails/request_approved.txt',\n context),\n None,\n [self.user.email],\n html_message=render_to_string(\n 'common/emails/request_approved.html',\n context))\n\n\n@receiver(models.signals.pre_save, sender=RequestRepositoryAuthorization)\ndef set_user_role_on_approved(instance, **kwargs):\n current = None\n try:\n current = RequestRepositoryAuthorization.objects.get(pk=instance.pk)\n except RequestRepositoryAuthorization.DoesNotExist as e:\n pass\n\n if not current:\n return False\n\n if current.approved_by is None and \\\n current.approved_by is not instance.approved_by:\n user_authorization = instance.repository.get_user_authorization(\n instance.user)\n user_authorization.role = RepositoryAuthorization.ROLE_USER\n user_authorization.save(update_fields=['role'])\n instance.send_request_approved_email()\n else:\n raise ValidationError(\n _('You can change approved_by just one time.'))\n\n\n@receiver(models.signals.post_save, sender=RequestRepositoryAuthorization)\ndef send_new_request_email_to_admins_on_created(instance, created, **kwargs):\n if created:\n instance.send_new_request_email_to_admins()\n\n\n@receiver(models.signals.post_delete, sender=RequestRepositoryAuthorization)\ndef send_request_rejected_email(instance, **kwargs):\n instance.send_request_rejected_email()\n", "path": "bothub/common/models.py" } ]
diff --git a/bothub/common/models.py b/bothub/common/models.py index c23c9bdf..641b2ecb 100644 --- a/bothub/common/models.py +++ b/bothub/common/models.py @@ -190,7 +190,7 @@ def votes_sum(self): @property def intents(self): return list(set(self.examples( - exclude_deleted=False).exclude( + exclude_deleted=True).exclude( intent='').values_list( 'intent', flat=True))) diff --git a/bothub/common/tests.py b/bothub/common/tests.py index 63cb95d4..70d9348c 100644 --- a/bothub/common/tests.py +++ b/bothub/common/tests.py @@ -292,7 +292,7 @@ def test_intents(self): 'greet', self.repository.intents) - RepositoryExample.objects.create( + example = RepositoryExample.objects.create( repository_update=self.repository.current_update( languages.LANGUAGE_PT), text='tchau', @@ -305,6 +305,12 @@ def test_intents(self): 'bye', self.repository.intents) + example.delete() + + self.assertNotIn( + 'bye', + self.repository.intents) + def test_entities(self): example = RepositoryExample.objects.create( repository_update=self.repository.current_update(
archlinux__archinstall-184
gnome-extra provides WAY too much bloatware

I can't imagine most people wanting all the packages this installs on a new installation. Most of these applications are things like games and advanced tools like dconf-editor that your average user should not be touching. Some of them are nice to have but can be installed later manually instead of during initial installation.
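A leaner selection would cover the typical desktop use case. The sketch below only illustrates the shape such a profile line could take; it reuses the `add_additional_packages` call the existing GNOME profile already makes, but the particular package list shown here is purely an assumption, not a maintainer recommendation.

```python
# Hypothetical trimmed-down GNOME profile line; the package choice here is an
# assumption, only add_additional_packages() comes from the existing profile code.
installation.add_additional_packages("gnome gnome-tweaks gdm")
```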
[ { "content": "import archinstall\n\ninstallation.add_additional_packages(\"gnome gnome-extra gdm\") # We'll create a gnome-minimal later, but for now, we'll avoid issues by giving more than we need.\n# Note: gdm should be part of the gnome group, but adding it here for clarity", "path": "profiles/applications/gnome.py" } ]
[ { "content": "import archinstall\n\ninstallation.add_additional_packages(\"gnome gnome-tweaks gnome-todo gnome-sound-recorder evolution gdm\")\n# Note: gdm should be part of the gnome group, but adding it here for clarity\n", "path": "profiles/applications/gnome.py" } ]
diff --git a/profiles/applications/gnome.py b/profiles/applications/gnome.py index 1f2a20a109..e9fd1d50dd 100644 --- a/profiles/applications/gnome.py +++ b/profiles/applications/gnome.py @@ -1,4 +1,4 @@ import archinstall -installation.add_additional_packages("gnome gnome-extra gdm") # We'll create a gnome-minimal later, but for now, we'll avoid issues by giving more than we need. -# Note: gdm should be part of the gnome group, but adding it here for clarity \ No newline at end of file +installation.add_additional_packages("gnome gnome-tweaks gnome-todo gnome-sound-recorder evolution gdm") +# Note: gdm should be part of the gnome group, but adding it here for clarity
pyinstaller__pyinstaller-4360
Windows: Cannot bundle with debug if pkg_resources is a dependency

This issue happens when I try to bundle my project: it fails in the Analysis.assemble phase, and only when debug is enabled. PyInstaller tries to compile a module that is part of an executable (pyinstaller.exe in this case), which fails because it cannot read the module. This is with Windows 10, Python 3.6.6 (official from python.org) and PyInstaller 3.5.dev0+51429f8fc (which should be the latest develop version as of today).

Here is the traceback:

```
Traceback (most recent call last):
  File "c:\python36-32\Lib\runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "c:\python36-32\Lib\runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "C:\Users\RMYROY~1\VIRTUA~1\CDDA-G~3\Scripts\pyinstaller.exe\__main__.py", line 9, in <module>
  File "c:\users\rmyroy~1\virtua~1\cdda-g~3\lib\site-packages\PyInstaller\__main__.py", line 111, in run
    run_build(pyi_config, spec_file, **vars(args))
  File "c:\users\rmyroy~1\virtua~1\cdda-g~3\lib\site-packages\PyInstaller\__main__.py", line 63, in run_build
    PyInstaller.building.build_main.main(pyi_config, spec_file, **kwargs)
  File "c:\users\rmyroy~1\virtua~1\cdda-g~3\lib\site-packages\PyInstaller\building\build_main.py", line 846, in main
    build(specfile, kw.get('distpath'), kw.get('workpath'), kw.get('clean_build'))
  File "c:\users\rmyroy~1\virtua~1\cdda-g~3\lib\site-packages\PyInstaller\building\build_main.py", line 793, in build
    exec(code, spec_namespace)
  File "launcher.spec", line 17, in <module>
    noarchive=True)
  File "c:\users\rmyroy~1\virtua~1\cdda-g~3\lib\site-packages\PyInstaller\building\build_main.py", line 243, in __init__
    self.__postinit__()
  File "c:\users\rmyroy~1\virtua~1\cdda-g~3\lib\site-packages\PyInstaller\building\datastruct.py", line 158, in __postinit__
    self.assemble()
  File "c:\users\rmyroy~1\virtua~1\cdda-g~3\lib\site-packages\PyInstaller\building\build_main.py", line 599, in assemble
    for name, path, typecode in compile_py_files(new_toc, CONF['workpath']):
  File "c:\users\rmyroy~1\virtua~1\cdda-g~3\lib\site-packages\PyInstaller\utils\misc.py", line 150, in compile_py_files
    with open(obj_fnm, 'rb') as fh:
FileNotFoundError: [Errno 2] No such file or directory: 'C:\\Users\\RMYROY~1\\VIRTUA~1\\CDDA-G~3\\Scripts\\pyinstaller.exe\\__main__.pyo'
```

For some reason, the following entry is added in Analysis.pure:

```python
('__main__.pyc', 'C:\\Users\\RMYROY~1\\VIRTUA~1\\CDDA-G~3\\Scripts\\pyinstaller.exe\\__main__.py', 'PYMODULE')
```

**That entry is incorrect: either it shouldn't have been added to pure, or it shouldn't be compiled in assemble. That is the source of this issue.**

Here is my spec file:

```python
# -*- mode: python ; coding: utf-8 -*-

block_cipher = None

a = Analysis(['cddagl\\launcher.py'],
             pathex=['C:\\Program Files (x86)\\Windows Kits\\10\\Redist\\ucrt\\DLLs\\x86\\', 'C:\\Users\\Rémy Roy\\Projects\\CDDA-Game-Launcher'],
             binaries=[],
             datas=[('alembic', 'alembic'), ('data', 'data'), ('cddagl/resources', 'cddagl/resources'), ('cddagl/VERSION', 'cddagl'), ('C:\\Users\\Rémy Roy\\VirtualEnvs\\CDDA-Game-Launcher\\Scripts\\UnRAR.exe', '.'), ('cddagl/locale/en/LC_MESSAGES/cddagl.mo', 'cddagl/locale/en/LC_MESSAGES'), ('cddagl/locale/fr/LC_MESSAGES/cddagl.mo', 'cddagl/locale/fr/LC_MESSAGES'), ('cddagl/locale/it/LC_MESSAGES/cddagl.mo', 'cddagl/locale/it/LC_MESSAGES'), ('cddagl/locale/ja/LC_MESSAGES/cddagl.mo', 'cddagl/locale/ja/LC_MESSAGES'), ('cddagl/locale/ru/LC_MESSAGES/cddagl.mo', 'cddagl/locale/ru/LC_MESSAGES')],
             hiddenimports=['lxml.cssselect', 'babel.numbers'],
             hookspath=[],
             runtime_hooks=[],
             excludes=[],
             win_no_prefer_redirects=False,
             win_private_assemblies=False,
             cipher=block_cipher,
             noarchive=True)
pyz = PYZ(a.pure, a.zipped_data,
          cipher=block_cipher)
exe = EXE(pyz,
          a.scripts,
          [('v', None, 'OPTION')],
          exclude_binaries=True,
          name='launcher',
          debug=True,
          bootloader_ignore_signals=False,
          strip=False,
          upx=False,
          console=True,
          icon='cddagl\\resources\\launcher.ico')
coll = COLLECT(exe,
               a.binaries,
               a.zipfiles,
               a.datas,
               strip=False,
               upx=False,
               upx_exclude=[],
               name='launcher')
```

You can probably reproduce this issue easily by cloning [my project](https://github.com/remyroy/CDDA-Game-Launcher) and issuing the following command:

```
python setup.py freeze --debug=1
```

Here is the full pyinstaller log output: https://gist.github.com/remyroy/37f7f0a912d5d714a947cddfb78769d4

I'll investigate how that entry is added in Analysis to give more context to this issue.
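For reference, PyInstaller hooks can declare an `excludedimports` list, which tells the analysis to drop a named module from the import graph of the hooked package. A minimal hook sketch along those lines is shown below; it assumes the stray `__main__` entry is being pulled in through pkg_resources, and it is consistent with the hook change recorded in the files further down, though it is only an illustration.

```python
# Sketch of a hook file (e.g. hooks/hook-pkg_resources.py), assuming the
# bogus '__main__' entry comes in through pkg_resources' import graph.
from PyInstaller.utils.hooks import collect_submodules

# pkg_resources keeps vendored modules under pkg_resources._vendor.
hiddenimports = collect_submodules('pkg_resources._vendor')

# Ask the analysis not to follow '__main__' from this package.
excludedimports = ['__main__']
```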
[ { "content": "#-----------------------------------------------------------------------------\n# Copyright (c) 2005-2019, PyInstaller Development Team.\n#\n# Distributed under the terms of the GNU General Public License with exception\n# for distributing bootloader.\n#\n# The full license is in the file COPYING.txt, distributed with this software.\n#-----------------------------------------------------------------------------\nfrom PyInstaller.utils.hooks import collect_submodules\n\n# pkg_resources keeps vendored modules in its _vendor subpackage, and does\n# sys.meta_path based import magic to expose them as pkg_resources.extern.*\nhiddenimports = collect_submodules('pkg_resources._vendor')\n", "path": "PyInstaller/hooks/hook-pkg_resources.py" } ]
[ { "content": "#-----------------------------------------------------------------------------\n# Copyright (c) 2005-2019, PyInstaller Development Team.\n#\n# Distributed under the terms of the GNU General Public License with exception\n# for distributing bootloader.\n#\n# The full license is in the file COPYING.txt, distributed with this software.\n#-----------------------------------------------------------------------------\nfrom PyInstaller.utils.hooks import collect_submodules\n\n# pkg_resources keeps vendored modules in its _vendor subpackage, and does\n# sys.meta_path based import magic to expose them as pkg_resources.extern.*\nhiddenimports = collect_submodules('pkg_resources._vendor')\n\nexcludedimports = ['__main__']\n", "path": "PyInstaller/hooks/hook-pkg_resources.py" } ]
diff --git a/PyInstaller/hooks/hook-pkg_resources.py b/PyInstaller/hooks/hook-pkg_resources.py index 04c588ab79..0f758bd42d 100644 --- a/PyInstaller/hooks/hook-pkg_resources.py +++ b/PyInstaller/hooks/hook-pkg_resources.py @@ -11,3 +11,5 @@ # pkg_resources keeps vendored modules in its _vendor subpackage, and does # sys.meta_path based import magic to expose them as pkg_resources.extern.* hiddenimports = collect_submodules('pkg_resources._vendor') + +excludedimports = ['__main__'] diff --git a/news/4263.hooks.rst b/news/4263.hooks.rst new file mode 100644 index 0000000000..0d8d13c94c --- /dev/null +++ b/news/4263.hooks.rst @@ -0,0 +1 @@ +Exclude imports for pkg_resources to fix bundling issue. diff --git a/news/4360.hooks.rst b/news/4360.hooks.rst new file mode 100644 index 0000000000..0d8d13c94c --- /dev/null +++ b/news/4360.hooks.rst @@ -0,0 +1 @@ +Exclude imports for pkg_resources to fix bundling issue.
conan-io__conan-5334
tools.patch can't create files

To help us debug your issue please explain:

- [x] I've read the [CONTRIBUTING guide](https://github.com/conan-io/conan/blob/develop/.github/CONTRIBUTING.md).
- [x] I've specified the Conan version, operating system version and any tool that can be relevant.
- [x] I've explained the steps to reproduce the error or the motivation/use case of the question/suggestion.

Hello,

I am trying to apply some out-of-tree git patches, and some of them create new files. When a patch contains new files, the tools.patch utility complains that absolute file paths are not allowed (`/dev/null`). Lacking this support, it's cumbersome to apply bug fixes and git commits in general.

Example patch (created with `git format-patch`):

```patch
From d0807313143bb35da65c2b858a2d9e17fd3fbf9e Mon Sep 17 00:00:00 2001
From: Norbert Lange <[email protected]>
Date: Fri, 7 Jun 2019 21:49:19 +0200
Subject: [PATCH] add and remove file

---
 newfile | 1 +
 oldfile | 1 -
 2 files changed, 1 insertion(+), 1 deletion(-)
 create mode 100644 newfile
 delete mode 100644 oldfile

diff --git a/newfile b/newfile
new file mode 100644
index 0000000..fdedddf
--- /dev/null
+++ b/newfile
@@ -0,0 +1 @@
+Hello mean world
diff --git a/oldfile b/oldfile
deleted file mode 100644
index 32332e1..0000000
--- a/oldfile
+++ /dev/null
@@ -1 +0,0 @@
-Old litter
--
2.20.1
```

My environment is:

```
Debian Buster x64
Conan version 1.16.0
```
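To make the failure mode concrete, a rough pre-processing step along the following lines could handle the `/dev/null` hunks before handing the rest of the patch to the underlying python-patch library. The `fromstring`, `source`, `target` and `hunks` attributes are the ones that library exposes; the helper name, the exact prefix stripping and the error handling are assumptions for illustration only, not Conan's actual implementation.

```python
# Rough illustration only (helper name and path handling are assumptions):
# peel off file-creation/deletion hunks (source or target of /dev/null)
# before applying the remaining patch with the python-patch library.
import os
from patch import fromstring


def apply_git_patch(patch_string, base_path="."):
    patchset = fromstring(patch_string.encode())
    if not patchset:
        raise ValueError("failed to parse patch")
    remaining = []
    for p in patchset:
        source = p.source.decode("utf-8")
        target = p.target.decode("utf-8")
        if source.endswith("dev/null"):
            # Newly created file: write out the '+' lines of its single hunk.
            lines = [line.decode("utf-8")[1:] for line in p.hunks[0].text]
            path = os.path.join(base_path, target[2:])  # drop the "b/" prefix
            with open(path, "w") as f:
                f.writelines(lines)
        elif target.endswith("dev/null"):
            # Deleted file: remove it from disk.
            os.unlink(os.path.join(base_path, source[2:]))  # drop the "a/" prefix
        else:
            remaining.append(p)
    patchset.items = remaining
    return patchset.apply(root=base_path)
```

The idea is simply to perform the create/delete operations that the underlying library rejects, and then apply the unchanged hunks as usual.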
[ { "content": "import logging\nimport os\nimport platform\nimport sys\nfrom contextlib import contextmanager\nfrom fnmatch import fnmatch\n\nimport six\nfrom patch import fromfile, fromstring\n\nfrom conans.client.output import ConanOutput\nfrom conans.errors import ConanException\nfrom conans.unicode import get_cwd\nfrom conans.util.fallbacks import default_output\nfrom conans.util.files import (_generic_algorithm_sum, load, save)\n\nUNIT_SIZE = 1000.0\n\n\n@contextmanager\ndef chdir(newdir):\n old_path = get_cwd()\n os.chdir(newdir)\n try:\n yield\n finally:\n os.chdir(old_path)\n\n\ndef human_size(size_bytes):\n \"\"\"\n format a size in bytes into a 'human' file size, e.g. B, KB, MB, GB, TB, PB\n Note that bytes will be reported in whole numbers but KB and above will have\n greater precision. e.g. 43 B, 443 KB, 4.3 MB, 4.43 GB, etc\n \"\"\"\n\n suffixes_table = [('B', 0), ('KB', 1), ('MB', 1), ('GB', 2), ('TB', 2), ('PB', 2)]\n\n num = float(size_bytes)\n for suffix, precision in suffixes_table:\n if num < UNIT_SIZE:\n break\n num /= UNIT_SIZE\n\n if precision == 0:\n formatted_size = \"%d\" % num\n else:\n formatted_size = str(round(num, ndigits=precision))\n\n return \"%s%s\" % (formatted_size, suffix)\n\n\ndef unzip(filename, destination=\".\", keep_permissions=False, pattern=None, output=None):\n \"\"\"\n Unzip a zipped file\n :param filename: Path to the zip file\n :param destination: Destination folder (or file for .gz files)\n :param keep_permissions: Keep the zip permissions. WARNING: Can be\n dangerous if the zip was not created in a NIX system, the bits could\n produce undefined permission schema. Use this option only if you are sure\n that the zip was created correctly.\n :param pattern: Extract only paths matching the pattern. This should be a\n Unix shell-style wildcard, see fnmatch documentation for more details.\n :param output: output\n :return:\n \"\"\"\n output = default_output(output, 'conans.client.tools.files.unzip')\n\n if (filename.endswith(\".tar.gz\") or filename.endswith(\".tgz\") or\n filename.endswith(\".tbz2\") or filename.endswith(\".tar.bz2\") or\n filename.endswith(\".tar\")):\n return untargz(filename, destination, pattern)\n if filename.endswith(\".gz\"):\n import gzip\n with gzip.open(filename, 'rb') as f:\n file_content = f.read()\n target_name = filename[:-3] if destination == \".\" else destination\n save(target_name, file_content)\n return\n if filename.endswith(\".tar.xz\") or filename.endswith(\".txz\"):\n if six.PY2:\n raise ConanException(\"XZ format not supported in Python 2. 
Use Python 3 instead\")\n return untargz(filename, destination, pattern)\n\n import zipfile\n full_path = os.path.normpath(os.path.join(get_cwd(), destination))\n\n if hasattr(sys.stdout, \"isatty\") and sys.stdout.isatty():\n def print_progress(the_size, uncomp_size):\n the_size = (the_size * 100.0 / uncomp_size) if uncomp_size != 0 else 0\n txt_msg = \"Unzipping %d %%\"\n if the_size > print_progress.last_size + 1:\n output.rewrite_line(txt_msg % the_size)\n print_progress.last_size = the_size\n if int(the_size) == 99:\n output.rewrite_line(txt_msg % 100)\n output.writeln(\"\")\n else:\n def print_progress(_, __):\n pass\n\n with zipfile.ZipFile(filename, \"r\") as z:\n if not pattern:\n zip_info = z.infolist()\n else:\n zip_info = [zi for zi in z.infolist() if fnmatch(zi.filename, pattern)]\n uncompress_size = sum((file_.file_size for file_ in zip_info))\n if uncompress_size > 100000:\n output.info(\"Unzipping %s, this can take a while\" % human_size(uncompress_size))\n else:\n output.info(\"Unzipping %s\" % human_size(uncompress_size))\n extracted_size = 0\n\n print_progress.last_size = -1\n if platform.system() == \"Windows\":\n for file_ in zip_info:\n extracted_size += file_.file_size\n print_progress(extracted_size, uncompress_size)\n try:\n z.extract(file_, full_path)\n except Exception as e:\n output.error(\"Error extract %s\\n%s\" % (file_.filename, str(e)))\n else: # duplicated for, to avoid a platform check for each zipped file\n for file_ in zip_info:\n extracted_size += file_.file_size\n print_progress(extracted_size, uncompress_size)\n try:\n z.extract(file_, full_path)\n if keep_permissions:\n # Could be dangerous if the ZIP has been created in a non nix system\n # https://bugs.python.org/issue15795\n perm = file_.external_attr >> 16 & 0xFFF\n os.chmod(os.path.join(full_path, file_.filename), perm)\n except Exception as e:\n output.error(\"Error extract %s\\n%s\" % (file_.filename, str(e)))\n\n\ndef untargz(filename, destination=\".\", pattern=None):\n import tarfile\n with tarfile.TarFile.open(filename, 'r:*') as tarredgzippedFile:\n if not pattern:\n tarredgzippedFile.extractall(destination)\n else:\n members = list(filter(lambda m: fnmatch(m.name, pattern),\n tarredgzippedFile.getmembers()))\n tarredgzippedFile.extractall(destination, members=members)\n\n\ndef check_with_algorithm_sum(algorithm_name, file_path, signature):\n real_signature = _generic_algorithm_sum(file_path, algorithm_name)\n if real_signature != signature.lower():\n raise ConanException(\"%s signature failed for '%s' file. 
\\n\"\n \" Provided signature: %s \\n\"\n \" Computed signature: %s\" % (algorithm_name,\n os.path.basename(file_path),\n signature,\n real_signature))\n\n\ndef check_sha1(file_path, signature):\n check_with_algorithm_sum(\"sha1\", file_path, signature)\n\n\ndef check_md5(file_path, signature):\n check_with_algorithm_sum(\"md5\", file_path, signature)\n\n\ndef check_sha256(file_path, signature):\n check_with_algorithm_sum(\"sha256\", file_path, signature)\n\n\ndef patch(base_path=None, patch_file=None, patch_string=None, strip=0, output=None):\n \"\"\"Applies a diff from file (patch_file) or string (patch_string)\n in base_path directory or current dir if None\"\"\"\n\n class PatchLogHandler(logging.Handler):\n def __init__(self):\n logging.Handler.__init__(self, logging.DEBUG)\n self.output = output or ConanOutput(sys.stdout, sys.stderr, color=True)\n self.patchname = patch_file if patch_file else \"patch\"\n\n def emit(self, record):\n logstr = self.format(record)\n if record.levelno == logging.WARN:\n self.output.warn(\"%s: %s\" % (self.patchname, logstr))\n else:\n self.output.info(\"%s: %s\" % (self.patchname, logstr))\n\n patchlog = logging.getLogger(\"patch\")\n if patchlog:\n patchlog.handlers = []\n patchlog.addHandler(PatchLogHandler())\n\n if not patch_file and not patch_string:\n return\n if patch_file:\n patchset = fromfile(patch_file)\n else:\n patchset = fromstring(patch_string.encode())\n\n if not patchset:\n raise ConanException(\"Failed to parse patch: %s\" % (patch_file if patch_file else \"string\"))\n\n def decode_clean(path, prefix):\n path = path.decode(\"utf-8\").replace(\"\\\\\", \"/\")\n if path.startswith(prefix):\n path = path[2:]\n return path\n\n def strip_path(path):\n tokens = path.split(\"/\")[strip:]\n path = \"/\".join(tokens)\n if base_path:\n path = os.path.join(base_path, path)\n return path\n # account for new and deleted files, upstream dep won't fix them\n items = []\n for p in patchset:\n source = decode_clean(p.source, \"a/\")\n target = decode_clean(p.target, \"b/\")\n if \"dev/null\" in source:\n target = strip_path(target)\n hunks = [s.decode(\"utf-8\") for s in p.hunks[0].text]\n new_file = \"\".join(hunk[1:] for hunk in hunks)\n save(target, new_file)\n elif \"dev/null\" in target:\n source = strip_path(source)\n os.unlink(source)\n else:\n items.append(p)\n patchset.items = items\n\n if not patchset.apply(root=base_path, strip=strip):\n raise ConanException(\"Failed to apply patch: %s\" % patch_file)\n\n\ndef _manage_text_not_found(search, file_path, strict, function_name, output):\n message = \"%s didn't find pattern '%s' in '%s' file.\" % (function_name, search, file_path)\n if strict:\n raise ConanException(message)\n else:\n output.warn(message)\n return False\n\n\ndef replace_in_file(file_path, search, replace, strict=True, output=None):\n output = default_output(output, 'conans.client.tools.files.replace_in_file')\n\n content = load(file_path)\n if -1 == content.find(search):\n _manage_text_not_found(search, file_path, strict, \"replace_in_file\", output=output)\n content = content.replace(search, replace)\n content = content.encode(\"utf-8\")\n with open(file_path, \"wb\") as handle:\n handle.write(content)\n\n\ndef replace_path_in_file(file_path, search, replace, strict=True, windows_paths=None, output=None):\n output = default_output(output, 'conans.client.tools.files.replace_path_in_file')\n\n if windows_paths is False or (windows_paths is None and platform.system() != \"Windows\"):\n return replace_in_file(file_path, search, 
replace, strict=strict, output=output)\n\n def normalized_text(text):\n return text.replace(\"\\\\\", \"/\").lower()\n\n content = load(file_path)\n normalized_content = normalized_text(content)\n normalized_search = normalized_text(search)\n index = normalized_content.find(normalized_search)\n if index == -1:\n return _manage_text_not_found(search, file_path, strict, \"replace_path_in_file\",\n output=output)\n\n while index != -1:\n content = content[:index] + replace + content[index + len(search):]\n normalized_content = normalized_text(content)\n index = normalized_content.find(normalized_search)\n\n content = content.encode(\"utf-8\")\n with open(file_path, \"wb\") as handle:\n handle.write(content)\n\n return True\n\n\ndef replace_prefix_in_pc_file(pc_file, new_prefix):\n content = load(pc_file)\n lines = []\n for line in content.splitlines():\n if line.startswith(\"prefix=\"):\n lines.append('prefix=%s' % new_prefix)\n else:\n lines.append(line)\n save(pc_file, \"\\n\".join(lines))\n\n\ndef _path_equals(path1, path2):\n path1 = os.path.normpath(path1)\n path2 = os.path.normpath(path2)\n if platform.system() == \"Windows\":\n path1 = path1.lower().replace(\"sysnative\", \"system32\")\n path2 = path2.lower().replace(\"sysnative\", \"system32\")\n return path1 == path2\n\n\ndef collect_libs(conanfile, folder=None):\n if not conanfile.package_folder:\n return []\n if folder:\n lib_folders = [os.path.join(conanfile.package_folder, folder)]\n else:\n lib_folders = [os.path.join(conanfile.package_folder, folder)\n for folder in conanfile.cpp_info.libdirs]\n result = []\n for lib_folder in lib_folders:\n if not os.path.exists(lib_folder):\n conanfile.output.warn(\"Lib folder doesn't exist, can't collect libraries: \"\n \"{0}\".format(lib_folder))\n continue\n files = os.listdir(lib_folder)\n for f in files:\n name, ext = os.path.splitext(f)\n if ext in (\".so\", \".lib\", \".a\", \".dylib\", \".bc\"):\n if ext != \".lib\" and name.startswith(\"lib\"):\n name = name[3:]\n if name in result:\n conanfile.output.warn(\"Library '%s' was either already found in a previous \"\n \"'conanfile.cpp_info.libdirs' folder or appears several \"\n \"times with a different file extension\" % name)\n else:\n result.append(name)\n result.sort()\n return result\n\n\ndef which(filename):\n \"\"\" same affect as posix which command or shutil.which from python3 \"\"\"\n def verify(filepath):\n if os.path.isfile(filepath) and os.access(filepath, os.X_OK):\n return os.path.join(path, filename)\n return None\n\n def _get_possible_filenames(filename):\n extensions_win = (os.getenv(\"PATHEXT\", \".COM;.EXE;.BAT;.CMD\").split(\";\")\n if \".\" not in filename else [])\n extensions = [\".sh\"] if platform.system() != \"Windows\" else extensions_win\n extensions.insert(1, \"\") # No extension\n return [\"%s%s\" % (filename, entry.lower()) for entry in extensions]\n\n possible_names = _get_possible_filenames(filename)\n for path in os.environ[\"PATH\"].split(os.pathsep):\n for name in possible_names:\n filepath = os.path.abspath(os.path.join(path, name))\n if verify(filepath):\n return filepath\n if platform.system() == \"Windows\":\n filepath = filepath.lower()\n if \"system32\" in filepath:\n # python return False for os.path.exists of exes in System32 but with SysNative\n trick_path = filepath.replace(\"system32\", \"sysnative\")\n if verify(trick_path):\n return trick_path\n\n return None\n\n\ndef _replace_with_separator(filepath, sep):\n tmp = load(filepath)\n ret = sep.join(tmp.splitlines())\n if 
tmp.endswith(\"\\n\"):\n ret += sep\n save(filepath, ret)\n\n\ndef unix2dos(filepath):\n _replace_with_separator(filepath, \"\\r\\n\")\n\n\ndef dos2unix(filepath):\n _replace_with_separator(filepath, \"\\n\")\n", "path": "conans/client/tools/files.py" } ]
[ { "content": "import logging\nimport os\nimport platform\nimport sys\nfrom contextlib import contextmanager\nfrom fnmatch import fnmatch\n\nimport six\nfrom patch import fromfile, fromstring\n\nfrom conans.client.output import ConanOutput\nfrom conans.errors import ConanException\nfrom conans.unicode import get_cwd\nfrom conans.util.fallbacks import default_output\nfrom conans.util.files import (_generic_algorithm_sum, load, save)\n\nUNIT_SIZE = 1000.0\n\n\n@contextmanager\ndef chdir(newdir):\n old_path = get_cwd()\n os.chdir(newdir)\n try:\n yield\n finally:\n os.chdir(old_path)\n\n\ndef human_size(size_bytes):\n \"\"\"\n format a size in bytes into a 'human' file size, e.g. B, KB, MB, GB, TB, PB\n Note that bytes will be reported in whole numbers but KB and above will have\n greater precision. e.g. 43 B, 443 KB, 4.3 MB, 4.43 GB, etc\n \"\"\"\n\n suffixes_table = [('B', 0), ('KB', 1), ('MB', 1), ('GB', 2), ('TB', 2), ('PB', 2)]\n\n num = float(size_bytes)\n for suffix, precision in suffixes_table:\n if num < UNIT_SIZE:\n break\n num /= UNIT_SIZE\n\n if precision == 0:\n formatted_size = \"%d\" % num\n else:\n formatted_size = str(round(num, ndigits=precision))\n\n return \"%s%s\" % (formatted_size, suffix)\n\n\ndef unzip(filename, destination=\".\", keep_permissions=False, pattern=None, output=None):\n \"\"\"\n Unzip a zipped file\n :param filename: Path to the zip file\n :param destination: Destination folder (or file for .gz files)\n :param keep_permissions: Keep the zip permissions. WARNING: Can be\n dangerous if the zip was not created in a NIX system, the bits could\n produce undefined permission schema. Use this option only if you are sure\n that the zip was created correctly.\n :param pattern: Extract only paths matching the pattern. This should be a\n Unix shell-style wildcard, see fnmatch documentation for more details.\n :param output: output\n :return:\n \"\"\"\n output = default_output(output, 'conans.client.tools.files.unzip')\n\n if (filename.endswith(\".tar.gz\") or filename.endswith(\".tgz\") or\n filename.endswith(\".tbz2\") or filename.endswith(\".tar.bz2\") or\n filename.endswith(\".tar\")):\n return untargz(filename, destination, pattern)\n if filename.endswith(\".gz\"):\n import gzip\n with gzip.open(filename, 'rb') as f:\n file_content = f.read()\n target_name = filename[:-3] if destination == \".\" else destination\n save(target_name, file_content)\n return\n if filename.endswith(\".tar.xz\") or filename.endswith(\".txz\"):\n if six.PY2:\n raise ConanException(\"XZ format not supported in Python 2. 
Use Python 3 instead\")\n return untargz(filename, destination, pattern)\n\n import zipfile\n full_path = os.path.normpath(os.path.join(get_cwd(), destination))\n\n if hasattr(sys.stdout, \"isatty\") and sys.stdout.isatty():\n def print_progress(the_size, uncomp_size):\n the_size = (the_size * 100.0 / uncomp_size) if uncomp_size != 0 else 0\n txt_msg = \"Unzipping %d %%\"\n if the_size > print_progress.last_size + 1:\n output.rewrite_line(txt_msg % the_size)\n print_progress.last_size = the_size\n if int(the_size) == 99:\n output.rewrite_line(txt_msg % 100)\n output.writeln(\"\")\n else:\n def print_progress(_, __):\n pass\n\n with zipfile.ZipFile(filename, \"r\") as z:\n if not pattern:\n zip_info = z.infolist()\n else:\n zip_info = [zi for zi in z.infolist() if fnmatch(zi.filename, pattern)]\n uncompress_size = sum((file_.file_size for file_ in zip_info))\n if uncompress_size > 100000:\n output.info(\"Unzipping %s, this can take a while\" % human_size(uncompress_size))\n else:\n output.info(\"Unzipping %s\" % human_size(uncompress_size))\n extracted_size = 0\n\n print_progress.last_size = -1\n if platform.system() == \"Windows\":\n for file_ in zip_info:\n extracted_size += file_.file_size\n print_progress(extracted_size, uncompress_size)\n try:\n z.extract(file_, full_path)\n except Exception as e:\n output.error(\"Error extract %s\\n%s\" % (file_.filename, str(e)))\n else: # duplicated for, to avoid a platform check for each zipped file\n for file_ in zip_info:\n extracted_size += file_.file_size\n print_progress(extracted_size, uncompress_size)\n try:\n z.extract(file_, full_path)\n if keep_permissions:\n # Could be dangerous if the ZIP has been created in a non nix system\n # https://bugs.python.org/issue15795\n perm = file_.external_attr >> 16 & 0xFFF\n os.chmod(os.path.join(full_path, file_.filename), perm)\n except Exception as e:\n output.error(\"Error extract %s\\n%s\" % (file_.filename, str(e)))\n\n\ndef untargz(filename, destination=\".\", pattern=None):\n import tarfile\n with tarfile.TarFile.open(filename, 'r:*') as tarredgzippedFile:\n if not pattern:\n tarredgzippedFile.extractall(destination)\n else:\n members = list(filter(lambda m: fnmatch(m.name, pattern),\n tarredgzippedFile.getmembers()))\n tarredgzippedFile.extractall(destination, members=members)\n\n\ndef check_with_algorithm_sum(algorithm_name, file_path, signature):\n real_signature = _generic_algorithm_sum(file_path, algorithm_name)\n if real_signature != signature.lower():\n raise ConanException(\"%s signature failed for '%s' file. 
\\n\"\n \" Provided signature: %s \\n\"\n \" Computed signature: %s\" % (algorithm_name,\n os.path.basename(file_path),\n signature,\n real_signature))\n\n\ndef check_sha1(file_path, signature):\n check_with_algorithm_sum(\"sha1\", file_path, signature)\n\n\ndef check_md5(file_path, signature):\n check_with_algorithm_sum(\"md5\", file_path, signature)\n\n\ndef check_sha256(file_path, signature):\n check_with_algorithm_sum(\"sha256\", file_path, signature)\n\n\ndef patch(base_path=None, patch_file=None, patch_string=None, strip=0, output=None):\n \"\"\"Applies a diff from file (patch_file) or string (patch_string)\n in base_path directory or current dir if None\"\"\"\n\n class PatchLogHandler(logging.Handler):\n def __init__(self):\n logging.Handler.__init__(self, logging.DEBUG)\n self.output = output or ConanOutput(sys.stdout, sys.stderr, color=True)\n self.patchname = patch_file if patch_file else \"patch\"\n\n def emit(self, record):\n logstr = self.format(record)\n if record.levelno == logging.WARN:\n self.output.warn(\"%s: %s\" % (self.patchname, logstr))\n else:\n self.output.info(\"%s: %s\" % (self.patchname, logstr))\n\n patchlog = logging.getLogger(\"patch\")\n if patchlog:\n patchlog.handlers = []\n patchlog.addHandler(PatchLogHandler())\n\n if not patch_file and not patch_string:\n return\n if patch_file:\n patchset = fromfile(patch_file)\n else:\n patchset = fromstring(patch_string.encode())\n\n if not patchset:\n raise ConanException(\"Failed to parse patch: %s\" % (patch_file if patch_file else \"string\"))\n\n def decode_clean(path, prefix):\n path = path.decode(\"utf-8\").replace(\"\\\\\", \"/\")\n if path.startswith(prefix):\n path = path[2:]\n return path\n\n def strip_path(path):\n tokens = path.split(\"/\")\n if len(tokens) > 1:\n tokens = tokens[strip:]\n path = \"/\".join(tokens)\n if base_path:\n path = os.path.join(base_path, path)\n return path\n # account for new and deleted files, upstream dep won't fix them\n items = []\n for p in patchset:\n source = decode_clean(p.source, \"a/\")\n target = decode_clean(p.target, \"b/\")\n if \"dev/null\" in source:\n target = strip_path(target)\n hunks = [s.decode(\"utf-8\") for s in p.hunks[0].text]\n new_file = \"\".join(hunk[1:] for hunk in hunks)\n save(target, new_file)\n elif \"dev/null\" in target:\n source = strip_path(source)\n os.unlink(source)\n else:\n items.append(p)\n patchset.items = items\n\n if not patchset.apply(root=base_path, strip=strip):\n raise ConanException(\"Failed to apply patch: %s\" % patch_file)\n\n\ndef _manage_text_not_found(search, file_path, strict, function_name, output):\n message = \"%s didn't find pattern '%s' in '%s' file.\" % (function_name, search, file_path)\n if strict:\n raise ConanException(message)\n else:\n output.warn(message)\n return False\n\n\ndef replace_in_file(file_path, search, replace, strict=True, output=None):\n output = default_output(output, 'conans.client.tools.files.replace_in_file')\n\n content = load(file_path)\n if -1 == content.find(search):\n _manage_text_not_found(search, file_path, strict, \"replace_in_file\", output=output)\n content = content.replace(search, replace)\n content = content.encode(\"utf-8\")\n with open(file_path, \"wb\") as handle:\n handle.write(content)\n\n\ndef replace_path_in_file(file_path, search, replace, strict=True, windows_paths=None, output=None):\n output = default_output(output, 'conans.client.tools.files.replace_path_in_file')\n\n if windows_paths is False or (windows_paths is None and platform.system() != \"Windows\"):\n return 
replace_in_file(file_path, search, replace, strict=strict, output=output)\n\n def normalized_text(text):\n return text.replace(\"\\\\\", \"/\").lower()\n\n content = load(file_path)\n normalized_content = normalized_text(content)\n normalized_search = normalized_text(search)\n index = normalized_content.find(normalized_search)\n if index == -1:\n return _manage_text_not_found(search, file_path, strict, \"replace_path_in_file\",\n output=output)\n\n while index != -1:\n content = content[:index] + replace + content[index + len(search):]\n normalized_content = normalized_text(content)\n index = normalized_content.find(normalized_search)\n\n content = content.encode(\"utf-8\")\n with open(file_path, \"wb\") as handle:\n handle.write(content)\n\n return True\n\n\ndef replace_prefix_in_pc_file(pc_file, new_prefix):\n content = load(pc_file)\n lines = []\n for line in content.splitlines():\n if line.startswith(\"prefix=\"):\n lines.append('prefix=%s' % new_prefix)\n else:\n lines.append(line)\n save(pc_file, \"\\n\".join(lines))\n\n\ndef _path_equals(path1, path2):\n path1 = os.path.normpath(path1)\n path2 = os.path.normpath(path2)\n if platform.system() == \"Windows\":\n path1 = path1.lower().replace(\"sysnative\", \"system32\")\n path2 = path2.lower().replace(\"sysnative\", \"system32\")\n return path1 == path2\n\n\ndef collect_libs(conanfile, folder=None):\n if not conanfile.package_folder:\n return []\n if folder:\n lib_folders = [os.path.join(conanfile.package_folder, folder)]\n else:\n lib_folders = [os.path.join(conanfile.package_folder, folder)\n for folder in conanfile.cpp_info.libdirs]\n result = []\n for lib_folder in lib_folders:\n if not os.path.exists(lib_folder):\n conanfile.output.warn(\"Lib folder doesn't exist, can't collect libraries: \"\n \"{0}\".format(lib_folder))\n continue\n files = os.listdir(lib_folder)\n for f in files:\n name, ext = os.path.splitext(f)\n if ext in (\".so\", \".lib\", \".a\", \".dylib\", \".bc\"):\n if ext != \".lib\" and name.startswith(\"lib\"):\n name = name[3:]\n if name in result:\n conanfile.output.warn(\"Library '%s' was either already found in a previous \"\n \"'conanfile.cpp_info.libdirs' folder or appears several \"\n \"times with a different file extension\" % name)\n else:\n result.append(name)\n result.sort()\n return result\n\n\ndef which(filename):\n \"\"\" same affect as posix which command or shutil.which from python3 \"\"\"\n def verify(filepath):\n if os.path.isfile(filepath) and os.access(filepath, os.X_OK):\n return os.path.join(path, filename)\n return None\n\n def _get_possible_filenames(filename):\n extensions_win = (os.getenv(\"PATHEXT\", \".COM;.EXE;.BAT;.CMD\").split(\";\")\n if \".\" not in filename else [])\n extensions = [\".sh\"] if platform.system() != \"Windows\" else extensions_win\n extensions.insert(1, \"\") # No extension\n return [\"%s%s\" % (filename, entry.lower()) for entry in extensions]\n\n possible_names = _get_possible_filenames(filename)\n for path in os.environ[\"PATH\"].split(os.pathsep):\n for name in possible_names:\n filepath = os.path.abspath(os.path.join(path, name))\n if verify(filepath):\n return filepath\n if platform.system() == \"Windows\":\n filepath = filepath.lower()\n if \"system32\" in filepath:\n # python return False for os.path.exists of exes in System32 but with SysNative\n trick_path = filepath.replace(\"system32\", \"sysnative\")\n if verify(trick_path):\n return trick_path\n\n return None\n\n\ndef _replace_with_separator(filepath, sep):\n tmp = load(filepath)\n ret = 
sep.join(tmp.splitlines())\n if tmp.endswith(\"\\n\"):\n ret += sep\n save(filepath, ret)\n\n\ndef unix2dos(filepath):\n _replace_with_separator(filepath, \"\\r\\n\")\n\n\ndef dos2unix(filepath):\n _replace_with_separator(filepath, \"\\n\")\n", "path": "conans/client/tools/files.py" } ]
diff --git a/conans/client/tools/files.py b/conans/client/tools/files.py index 435afb3e168..fa000b65e20 100644 --- a/conans/client/tools/files.py +++ b/conans/client/tools/files.py @@ -208,7 +208,9 @@ def decode_clean(path, prefix): return path def strip_path(path): - tokens = path.split("/")[strip:] + tokens = path.split("/") + if len(tokens) > 1: + tokens = tokens[strip:] path = "/".join(tokens) if base_path: path = os.path.join(base_path, path) diff --git a/conans/test/unittests/tools/files_patch_test.py b/conans/test/unittests/tools/files_patch_test.py index 6ef2cfc68f5..7fba8c64d82 100644 --- a/conans/test/unittests/tools/files_patch_test.py +++ b/conans/test/unittests/tools/files_patch_test.py @@ -105,6 +105,26 @@ def source(self): client.run("source .") self.assertFalse(os.path.exists(path)) + def test_patch_strip_delete_no_folder(self): + conanfile = dedent(""" + from conans import ConanFile, tools + class PatchConan(ConanFile): + def source(self): + tools.patch(self.source_folder, "example.patch", strip=1)""") + patch = dedent(""" + --- a/oldfile + +++ b/dev/null + @@ -0,1 +0,0 @@ + -legacy code""") + client = TestClient() + client.save({"conanfile.py": conanfile, + "example.patch": patch, + "oldfile": "legacy code"}) + path = os.path.join(client.current_folder, "oldfile") + self.assertTrue(os.path.exists(path)) + client.run("source .") + self.assertFalse(os.path.exists(path)) + def test_patch_new_delete(self): conanfile = base_conanfile + ''' def build(self): @@ -133,6 +153,26 @@ def build(self): client.out) self.assertIn("test/1.9.10@user/testing: OLD FILE=False", client.out) + def test_patch_new_strip(self): + conanfile = base_conanfile + ''' + def build(self): + from conans.tools import load, save + patch_content = """--- /dev/null ++++ b/newfile +@@ -0,0 +0,3 @@ ++New file! ++New file! ++New file! +""" + patch(patch_string=patch_content, strip=1) + self.output.info("NEW FILE=%s" % load("newfile")) +''' + client = TestClient() + client.save({"conanfile.py": conanfile}) + client.run("create . user/testing") + self.assertIn("test/1.9.10@user/testing: NEW FILE=New file!\nNew file!\nNew file!\n", + client.out) + def test_error_patch(self): file_content = base_conanfile + ''' def build(self):
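A small standalone illustration (plain Python, extracted from the `strip_path` hunk above for clarity only, with the `base_path` handling omitted) of the behaviour the diff changes: with the old slicing, a single-component target such as `newfile` (the usual case for files created from `/dev/null`) is stripped to an empty path when `strip=1`, while the guarded version leaves it intact:

```python
# Old behaviour: slice unconditionally.
def strip_path_old(path, strip):
    return "/".join(path.split("/")[strip:])


# New behaviour: only strip when there is more than one path component.
def strip_path_new(path, strip):
    tokens = path.split("/")
    if len(tokens) > 1:  # guard added by the fix
        tokens = tokens[strip:]
    return "/".join(tokens)


print(strip_path_old("newfile", 1))      # '' -> the new file ends up with no usable path
print(strip_path_new("newfile", 1))      # 'newfile'
print(strip_path_new("pkg/newfile", 1))  # 'newfile' -> multi-component paths are still stripped
```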
kivy__kivy-2523
FileChooser Icon view is not scrolled to the top after opening a dir When the FileChooser Icon view is selected and the view is scrolled down before opening a directory with many files and folders, the ScrollView is not reset to the top, as one would expect.
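A possible application-level workaround sketch (not necessarily the upstream fix): reset the inner ScrollView whenever the path changes. The `scrollview` id is an assumption about the FileChooserIconView template and may not exist in every Kivy version:

```python
# Workaround sketch only; 'scrollview' is an assumed id in the
# FileChooserIconView kv template and may differ between Kivy versions.
from kivy.app import App
from kivy.uix.filechooser import FileChooserIconView


class ChooserApp(App):
    def build(self):
        chooser = FileChooserIconView(path='/')
        # path changes whenever a directory is opened
        chooser.bind(path=self._reset_scroll)
        return chooser

    def _reset_scroll(self, chooser, new_path):
        scrollview = chooser.ids.get('scrollview')
        if scrollview is not None:
            scrollview.scroll_y = 1  # 1.0 means scrolled to the top


if __name__ == '__main__':
    ChooserApp().run()
```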
[ { "content": "'''\nFileChooser\n===========\n\n.. versionadded:: 1.0.5\n\n\n.. versionchanged:: 1.2.0\n In the chooser template, the `controller` is not a direct reference anymore\n but a weak-reference.\n You must update all the notation `root.controller.xxx` to\n `root.controller().xxx`.\n\nSimple example\n--------------\n\nmain.py\n\n.. include:: ../../examples/RST_Editor/main.py\n :literal:\n\neditor.kv\n\n.. highlight:: kv\n\n.. include:: ../../examples/RST_Editor/editor.kv\n :literal:\n\n'''\n\n__all__ = ('FileChooserListView', 'FileChooserIconView',\n 'FileChooserController', 'FileChooserProgressBase',\n 'FileSystemAbstract', 'FileSystemLocal')\n\nfrom weakref import ref\nfrom time import time\nfrom kivy.compat import string_types\nfrom kivy.factory import Factory\nfrom kivy.clock import Clock\nfrom kivy.lang import Builder\nfrom kivy.logger import Logger\nfrom kivy.utils import platform as core_platform\nfrom kivy.uix.floatlayout import FloatLayout\nfrom kivy.properties import (\n StringProperty, ListProperty, BooleanProperty, ObjectProperty,\n NumericProperty)\nfrom os import listdir\nfrom os.path import (\n basename, join, sep, normpath, expanduser, altsep,\n splitdrive, realpath, getsize, isdir)\nfrom fnmatch import fnmatch\nimport collections\n\nplatform = core_platform\nfilesize_units = ('B', 'KB', 'MB', 'GB', 'TB')\n\n_have_win32file = False\nif platform == 'win':\n # Import that module here as it's not available on non-windows machines.\n # See http://bit.ly/i9klJE except that the attributes are defined in\n # win32file not win32com (bug on page).\n # Note: For some reason this doesn't work after a os.chdir(), no matter to\n # what directory you change from where. Windows weirdness.\n try:\n from win32file import FILE_ATTRIBUTE_HIDDEN, GetFileAttributesExW, error\n _have_win32file = True\n except ImportError:\n Logger.error('filechooser: win32file module is missing')\n Logger.error('filechooser: we cant check if a file is hidden or not')\n\n\ndef alphanumeric_folders_first(files, filesystem):\n return (sorted(f for f in files if filesystem.is_dir(f)) +\n sorted(f for f in files if not filesystem.is_dir(f)))\n\n\nclass FileSystemAbstract(object):\n '''Class for implementing a File System view that can be used with the\n :class:`FileChooser`.:attr:`~FileChooser.file_system`.\n\n .. versionadded:: 1.8.0\n '''\n\n def listdir(self, fn):\n '''Return the list of files in the directory `fn`\n '''\n pass\n\n def getsize(self, fn):\n '''Return the size in bytes of a file\n '''\n pass\n\n def is_hidden(self, fn):\n '''Return True if the file is hidden\n '''\n pass\n\n def is_dir(self, fn):\n '''Return True if the argument passed to this method is a directory\n '''\n pass\n\n\nclass FileSystemLocal(FileSystemAbstract):\n '''Implementation of :class:`FileSystemAbstract` for local files\n\n .. versionadded:: 1.8.0\n '''\n\n def listdir(self, fn):\n return listdir(fn)\n\n def getsize(self, fn):\n return getsize(fn)\n\n def is_hidden(self, fn):\n if platform == 'win':\n if not _have_win32file:\n return False\n try:\n return GetFileAttributesExW(fn)[0] & FILE_ATTRIBUTE_HIDDEN\n except error:\n # This error can occured when a file is already accessed by\n # someone else. 
So don't return to True, because we have lot\n # of chances to not being able to do anything with it.\n Logger.exception('unable to access to <%s>' % fn)\n return True\n\n return basename(fn).startswith('.')\n\n def is_dir(self, fn):\n return isdir(fn)\n\n\nclass FileChooserProgressBase(FloatLayout):\n '''Base for implementing a progress view. This view is used when too many\n entries need to be created and are delayed over multiple frames.\n\n .. versionadded:: 1.2.0\n '''\n\n path = StringProperty('')\n '''Current path of the FileChooser, read-only.\n '''\n\n index = NumericProperty(0)\n '''Current index of :attr:`total` entries to be loaded.\n '''\n\n total = NumericProperty(1)\n '''Total number of entries to load.\n '''\n\n def cancel(self, *largs):\n '''Cancel any action from the FileChooserController.\n '''\n if self.parent:\n self.parent.cancel()\n\n def on_touch_down(self, touch):\n if self.collide_point(*touch.pos):\n super(FileChooserProgressBase, self).on_touch_down(touch)\n return True\n\n def on_touch_move(self, touch):\n if self.collide_point(*touch.pos):\n super(FileChooserProgressBase, self).on_touch_move(touch)\n return True\n\n def on_touch_up(self, touch):\n if self.collide_point(*touch.pos):\n super(FileChooserProgressBase, self).on_touch_up(touch)\n return True\n\n\nclass FileChooserProgress(FileChooserProgressBase):\n pass\n\n\nclass FileChooserController(FloatLayout):\n '''Base for implementing a FileChooser. Don't use this class directly, but\n prefer using an implementation such as the :class:`FileChooserListView` or\n :class:`FileChooserIconView`.\n\n :Events:\n `on_entry_added`: entry, parent\n Fired when a root-level entry is added to the file list.\n `on_entries_cleared`\n Fired when the the entries list is cleared, usually when the\n root is refreshed.\n `on_subentry_to_entry`: entry, parent\n Fired when a sub-entry is added to an existing entry.\n `on_remove_subentry`: entry, parent\n Fired when entries are removed from an entry, usually when\n a node is closed.\n `on_submit`: selection, touch\n Fired when a file has been selected with a double-tap.\n '''\n _ENTRY_TEMPLATE = None\n\n path = StringProperty(u'/')\n '''\n :class:`~kivy.properties.StringProperty`, defaults to the current working\n directory as a unicode string. It specifies the path on the filesystem that\n this controller should refer to.\n\n .. warning::\n\n If a unicode path is specified, all the files returned will be in\n unicode allowing the display of unicode files and paths. If a bytes\n path is specified, only files and paths with ascii names will be\n displayed properly: non-ascii filenames will be displayed and listed\n with questions marks (?) instead of their unicode characters.\n '''\n\n filters = ListProperty([])\n ''':class:`~kivy.properties.ListProperty`, defaults to [], equal to '\\*'.\n Specifies the filters to be applied to the files in the directory.\n\n The filters are not reset when the path changes. You need to do that\n yourself if desired.\n\n There are two kinds of filters: patterns and callbacks.\n\n #. Patterns\n\n e.g. ['\\*.png'].\n You can use the following patterns:\n\n ========== =================================\n Pattern Meaning\n ========== =================================\n \\* matches everything\n ? matches any single character\n [seq] matches any character in seq\n [!seq] matches any character not in seq\n ========== =================================\n\n #. Callbacks\n\n You can specify a function that will be called for each file. 
The\n callback will be passed the folder and file name as the first\n and second parameters respectively. It should return True to\n indicate a match and False otherwise.\n\n .. versionchanged:: 1.4.0\n If the filter is a callable (function or method), it will be called\n with the path and the file name as arguments for each file in the\n directory.\n The callable should returns True to indicate a match and False\n overwise.\n '''\n\n filter_dirs = BooleanProperty(False)\n '''\n :class:`~kivy.properties.BooleanProperty`, defaults to False.\n Indicates whether filters should also apply to directories.\n '''\n\n sort_func = ObjectProperty(alphanumeric_folders_first)\n '''\n :class:`~kivy.properties.ObjectProperty`.\n Provides a function to be called with a list of filenames, and the\n filesystem implementation as the second argument.\n Returns a list of filenames sorted for display in the view.\n\n .. versionchanged:: 1.8.0\n\n The signature needs now 2 arguments: first the list of files,\n second the filesystem class to use.\n '''\n\n files = ListProperty([])\n '''\n Read-only :class:`~kivy.properties.ListProperty`.\n The list of files in the directory specified by path after applying the\n filters.\n '''\n\n show_hidden = BooleanProperty(False)\n '''\n :class:`~kivy.properties.BooleanProperty`, defaults to False.\n Determines whether hidden files and folders should be shown.\n '''\n\n selection = ListProperty([])\n '''\n Read-only :class:`~kivy.properties.ListProperty`.\n Contains the list of files that are currently selected.\n '''\n\n multiselect = BooleanProperty(False)\n '''\n :class:`~kivy.properties.BooleanProperty`, defaults to False.\n Determines whether the user is able to select multiple files or not.\n '''\n\n dirselect = BooleanProperty(False)\n '''\n :class:`~kivy.properties.BooleanProperty`, defaults to False.\n Determines whether directories are valid selections or not.\n\n .. versionadded:: 1.1.0\n '''\n\n rootpath = StringProperty(None, allownone=True)\n '''\n Root path to use instead of the system root path. If set, it will not show\n a \"..\" directory to go up to the root path. For example, if you set\n rootpath to /users/foo, the user will be unable to go to /users or to any\n other directory not starting with /users/foo.\n\n .. versionadded:: 1.2.0\n\n :class:`~kivy.properties.StringProperty`, defaults to None.\n\n .. note::\n\n Similar to :attr:`path`, if `rootpath` is specified, whether it's a\n bytes or unicode string determines the type of the filenames and paths\n read.\n '''\n\n progress_cls = ObjectProperty(FileChooserProgress)\n '''Class to use for displaying a progress indicator for filechooser\n loading.\n\n .. versionadded:: 1.2.0\n\n :class:`~kivy.properties.ObjectProperty`, defaults to\n :class:`FileChooserProgress`.\n\n .. versionchanged:: 1.8.0\n\n If you set a string, the :class:`~kivy.factory.Factory` will be used to\n resolve the class.\n\n '''\n\n file_encodings = ListProperty(['utf-8', 'latin1', 'cp1252'])\n '''Possible encodings for decoding a filename to unicode. In the case that\n the user has a weird filename, undecodable without knowing it's\n initial encoding, we have no other choice than to guess it.\n\n Please note that if you encounter an issue because of a missing encoding\n here, we'll be glad to add it to this list.\n\n .. versionadded:: 1.3.0\n\n .. 
deprecated:: 1.8.0\n This property is no longer used as the filechooser no longer decodes\n the file names.\n\n file_encodings is a :class:`~kivy.properties.ListProperty` and defaults to\n ['utf-8', 'latin1', 'cp1252'],\n '''\n\n file_system = ObjectProperty(FileSystemLocal(),\n baseclass=FileSystemAbstract)\n '''Implementation to access the file system. Must be an instance of\n FileSystemAbstract.\n\n .. versionadded:: 1.8.0\n\n :class:`~kivy.properties.ObjectProperty`, defaults to\n :class:`FileSystemLocal()`\n '''\n\n __events__ = ('on_entry_added', 'on_entries_cleared',\n 'on_subentry_to_entry', 'on_remove_subentry', 'on_submit')\n\n def __init__(self, **kwargs):\n self._progress = None\n super(FileChooserController, self).__init__(**kwargs)\n\n self._items = []\n self.bind(selection=self._update_item_selection)\n\n self._previous_path = [self.path]\n self.bind(path=self._save_previous_path)\n self.bind(path=self._trigger_update,\n filters=self._trigger_update,\n rootpath=self._trigger_update)\n self._trigger_update()\n\n def on_touch_down(self, touch):\n # don't respond to touchs outside self\n if not self.collide_point(*touch.pos):\n return\n if self.disabled:\n return True\n return super(FileChooserController, self).on_touch_down(touch)\n\n def on_touch_up(self, touch):\n # don't respond to touchs outside self\n if not self.collide_point(*touch.pos):\n return True\n if self.disabled:\n return True\n return super(FileChooserController, self).on_touch_up(touch)\n\n def _update_item_selection(self, *args):\n for item in self._items:\n item.selected = item.path in self.selection\n\n def _save_previous_path(self, instance, value):\n self._previous_path.append(value)\n self._previous_path = self._previous_path[-2:]\n\n def _trigger_update(self, *args):\n Clock.unschedule(self._update_files)\n Clock.schedule_once(self._update_files)\n\n def on_entry_added(self, node, parent=None):\n pass\n\n def on_entries_cleared(self):\n pass\n\n def on_subentry_to_entry(self, subentry, entry):\n pass\n\n def on_remove_subentry(self, subentry, entry):\n pass\n\n def on_submit(self, selected, touch=None):\n pass\n\n def entry_touched(self, entry, touch):\n '''(internal) This method must be called by the template when an entry\n is touched by the user.\n '''\n if (\n 'button' in touch.profile and touch.button in (\n 'scrollup', 'scrolldown', 'scrollleft', 'scrollright')):\n return False\n\n _dir = self.file_system.is_dir(entry.path)\n dirselect = self.dirselect\n\n if _dir and dirselect and touch.is_double_tap:\n self.open_entry(entry)\n return\n\n if self.multiselect:\n if entry.path in self.selection:\n self.selection.remove(entry.path)\n else:\n if _dir and not self.dirselect:\n self.open_entry(entry)\n return\n self.selection.append(entry.path)\n else:\n if _dir and not self.dirselect:\n self.open_entry\n return\n self.selection = [entry.path, ]\n\n def entry_released(self, entry, touch):\n '''(internal) This method must be called by the template when an entry\n is touched by the user.\n\n .. 
versionadded:: 1.1.0\n '''\n if (\n 'button' in touch.profile and touch.button in (\n 'scrollup', 'scrolldown', 'scrollleft', 'scrollright')):\n return False\n if not self.multiselect:\n if self.file_system.is_dir(entry.path) and not self.dirselect:\n self.open_entry(entry)\n elif touch.is_double_tap:\n if self.dirselect and self.file_system.is_dir(entry.path):\n self.open_entry(entry)\n else:\n self.dispatch('on_submit', self.selection, touch)\n\n def open_entry(self, entry):\n try:\n # Just check if we can list the directory. This is also what\n # _add_file does, so if it fails here, it would also fail later\n # on. Do the check here to prevent setting path to an invalid\n # directory that we cannot list.\n self.file_system.listdir(entry.path)\n except OSError:\n entry.locked = True\n else:\n self.path = join(self.path, entry.path)\n self.selection = []\n\n def _apply_filters(self, files):\n if not self.filters:\n return files\n filtered = []\n for filt in self.filters:\n if isinstance(filt, collections.Callable):\n filtered.extend([fn for fn in files if filt(self.path, fn)])\n else:\n filtered.extend([fn for fn in files if fnmatch(fn, filt)])\n if not self.filter_dirs:\n dirs = [fn for fn in files if self.file_system.is_dir(fn)]\n filtered.extend(dirs)\n return list(set(filtered))\n\n def get_nice_size(self, fn):\n '''Pass the filepath. Returns the size in the best human readable\n format or '' if it is a directory (Don't recursively calculate size.).\n '''\n if self.file_system.is_dir(fn):\n return ''\n try:\n size = self.file_system.getsize(fn)\n except OSError:\n return '--'\n\n for unit in filesize_units:\n if size < 1024.0:\n return '%1.0f %s' % (size, unit)\n size /= 1024.0\n\n def _update_files(self, *args, **kwargs):\n # trigger to start gathering the files in the new directory\n # we'll start a timer that will do the job, 10 times per frames\n # (default)\n self._gitems = []\n self._gitems_parent = kwargs.get('parent', None)\n self._gitems_gen = self._generate_file_entries(\n path=kwargs.get('path', self.path),\n parent=self._gitems_parent)\n\n # cancel any previous clock if exist\n Clock.unschedule(self._create_files_entries)\n\n # show the progression screen\n self._hide_progress()\n if self._create_files_entries():\n # not enough for creating all the entries, all a clock to continue\n # start a timer for the next 100 ms\n Clock.schedule_interval(self._create_files_entries, .1)\n\n def _create_files_entries(self, *args):\n # create maximum entries during 50ms max, or 10 minimum (slow system)\n # (on a \"fast system\" (core i7 2700K), we can create up to 40 entries\n # in 50 ms. 
So 10 is fine for low system.\n start = time()\n finished = False\n index = total = count = 1\n while time() - start < 0.05 or count < 10:\n try:\n index, total, item = next(self._gitems_gen)\n self._gitems.append(item)\n count += 1\n except StopIteration:\n finished = True\n break\n except TypeError: # in case _gitems_gen is None\n finished = True\n break\n\n # if this wasn't enough for creating all the entries, show a progress\n # bar, and report the activity to the user.\n if not finished:\n self._show_progress()\n self._progress.total = total\n self._progress.index = index\n return True\n\n # we created all the files, now push them on the view\n self._items = items = self._gitems\n parent = self._gitems_parent\n if parent is None:\n self.dispatch('on_entries_cleared')\n for entry in items:\n self.dispatch('on_entry_added', entry, parent)\n else:\n parent.entries[:] = items\n for entry in items:\n self.dispatch('on_subentry_to_entry', entry, parent)\n self.files[:] = [file.path for file in items]\n\n # stop the progression / creation\n self._hide_progress()\n self._gitems = None\n self._gitems_gen = None\n Clock.unschedule(self._create_files_entries)\n return False\n\n def cancel(self, *largs):\n '''Cancel any background action started by filechooser, such as loading\n a new directory.\n\n .. versionadded:: 1.2.0\n '''\n Clock.unschedule(self._create_files_entries)\n self._hide_progress()\n if len(self._previous_path) > 1:\n # if we cancel any action, the path will be set same as the\n # previous one, so we can safely cancel the update of the previous\n # path.\n self.path = self._previous_path[-2]\n Clock.unschedule(self._update_files)\n\n def _show_progress(self):\n if self._progress:\n return\n cls = self.progress_cls\n if isinstance(cls, string_types):\n cls = Factory.get(cls)\n self._progress = cls(path=self.path)\n self._progress.value = 0\n self.add_widget(self._progress)\n\n def _hide_progress(self):\n if self._progress:\n self.remove_widget(self._progress)\n self._progress = None\n\n def _generate_file_entries(self, *args, **kwargs):\n # Generator that will create all the files entries.\n # the generator is used via _update_files() and _create_files_entries()\n # don't use it directly.\n is_root = False\n path = kwargs.get('path', self.path)\n have_parent = kwargs.get('parent', None) is not None\n\n # Add the components that are always needed\n if self.rootpath:\n rootpath = realpath(self.rootpath)\n path = realpath(path)\n if not path.startswith(rootpath):\n self.path = rootpath\n return\n elif path == rootpath:\n is_root = True\n else:\n if platform == 'win':\n is_root = splitdrive(path)[1] in (sep, altsep)\n elif platform in ('macosx', 'linux', 'android', 'ios'):\n is_root = normpath(expanduser(path)) == sep\n else:\n # Unknown fs, just always add the .. entry but also log\n Logger.warning('Filechooser: Unsupported OS: %r' % platform)\n # generate an entries to go back to previous\n if not is_root and not have_parent:\n back = '..' 
+ sep\n pardir = Builder.template(self._ENTRY_TEMPLATE, **dict(\n name=back, size='', path=back, controller=ref(self),\n isdir=True, parent=None, sep=sep, get_nice_size=lambda: ''))\n yield 0, 1, pardir\n\n # generate all the entries for files\n try:\n for index, total, item in self._add_files(path):\n yield index, total, item\n except OSError:\n Logger.exception('Unable to open directory <%s>' % self.path)\n self.files[:] = []\n\n def _add_files(self, path, parent=None):\n path = expanduser(path)\n\n files = []\n fappend = files.append\n for f in self.file_system.listdir(path):\n try:\n # In the following, use fully qualified filenames\n fappend(normpath(join(path, f)))\n except UnicodeDecodeError:\n Logger.exception('unable to decode <{}>'.format(f))\n except UnicodeEncodeError:\n Logger.exception('unable to encode <{}>'.format(f))\n # Apply filename filters\n files = self._apply_filters(files)\n # Sort the list of files\n files = self.sort_func(files, self.file_system)\n is_hidden = self.file_system.is_hidden\n if not self.show_hidden:\n files = [x for x in files if not is_hidden(x)]\n self.files[:] = files\n total = len(files)\n wself = ref(self)\n for index, fn in enumerate(files):\n\n def get_nice_size():\n # Use a closure for lazy-loading here\n return self.get_nice_size(fn)\n\n ctx = {'name': basename(fn),\n 'get_nice_size': get_nice_size,\n 'path': fn,\n 'controller': wself,\n 'isdir': self.file_system.is_dir(fn),\n 'parent': parent,\n 'sep': sep}\n entry = Builder.template(self._ENTRY_TEMPLATE, **ctx)\n yield index, total, entry\n\n def entry_subselect(self, entry):\n if not self.file_system.is_dir(entry.path):\n return\n self._update_files(path=entry.path, parent=entry)\n\n def close_subselection(self, entry):\n for subentry in entry.entries:\n self.dispatch('on_remove_subentry', subentry, entry)\n\n\nclass FileChooserListView(FileChooserController):\n '''Implementation of :class:`FileChooserController` using a list view.\n '''\n _ENTRY_TEMPLATE = 'FileListEntry'\n\n\nclass FileChooserIconView(FileChooserController):\n '''Implementation of :class:`FileChooserController` using an icon view.\n '''\n _ENTRY_TEMPLATE = 'FileIconEntry'\n\n\nif __name__ == '__main__':\n from kivy.app import App\n from pprint import pprint\n import sys\n\n class FileChooserApp(App):\n\n def build(self):\n view = FileChooserListView\n\n if len(sys.argv) > 1:\n v = view(path=sys.argv[1])\n else:\n v = view()\n\n v.bind(selection=lambda *x: pprint(\"selection: %s\" % x[1:]))\n v.bind(path=lambda *x: pprint(\"path: %s\" % x[1:]))\n return v\n\n FileChooserApp().run()\n", "path": "kivy/uix/filechooser.py" } ]
[ { "content": "'''\nFileChooser\n===========\n\n.. versionadded:: 1.0.5\n\n\n.. versionchanged:: 1.2.0\n In the chooser template, the `controller` is not a direct reference anymore\n but a weak-reference.\n You must update all the notation `root.controller.xxx` to\n `root.controller().xxx`.\n\nSimple example\n--------------\n\nmain.py\n\n.. include:: ../../examples/RST_Editor/main.py\n :literal:\n\neditor.kv\n\n.. highlight:: kv\n\n.. include:: ../../examples/RST_Editor/editor.kv\n :literal:\n\n'''\n\n__all__ = ('FileChooserListView', 'FileChooserIconView',\n 'FileChooserController', 'FileChooserProgressBase',\n 'FileSystemAbstract', 'FileSystemLocal')\n\nfrom weakref import ref\nfrom time import time\nfrom kivy.compat import string_types\nfrom kivy.factory import Factory\nfrom kivy.clock import Clock\nfrom kivy.lang import Builder\nfrom kivy.logger import Logger\nfrom kivy.utils import platform as core_platform\nfrom kivy.uix.floatlayout import FloatLayout\nfrom kivy.properties import (\n StringProperty, ListProperty, BooleanProperty, ObjectProperty,\n NumericProperty)\nfrom os import listdir\nfrom os.path import (\n basename, join, sep, normpath, expanduser, altsep,\n splitdrive, realpath, getsize, isdir)\nfrom fnmatch import fnmatch\nimport collections\n\nplatform = core_platform\nfilesize_units = ('B', 'KB', 'MB', 'GB', 'TB')\n\n_have_win32file = False\nif platform == 'win':\n # Import that module here as it's not available on non-windows machines.\n # See http://bit.ly/i9klJE except that the attributes are defined in\n # win32file not win32com (bug on page).\n # Note: For some reason this doesn't work after a os.chdir(), no matter to\n # what directory you change from where. Windows weirdness.\n try:\n from win32file import FILE_ATTRIBUTE_HIDDEN, GetFileAttributesExW, error\n _have_win32file = True\n except ImportError:\n Logger.error('filechooser: win32file module is missing')\n Logger.error('filechooser: we cant check if a file is hidden or not')\n\n\ndef alphanumeric_folders_first(files, filesystem):\n return (sorted(f for f in files if filesystem.is_dir(f)) +\n sorted(f for f in files if not filesystem.is_dir(f)))\n\n\nclass FileSystemAbstract(object):\n '''Class for implementing a File System view that can be used with the\n :class:`FileChooser`.:attr:`~FileChooser.file_system`.\n\n .. versionadded:: 1.8.0\n '''\n\n def listdir(self, fn):\n '''Return the list of files in the directory `fn`\n '''\n pass\n\n def getsize(self, fn):\n '''Return the size in bytes of a file\n '''\n pass\n\n def is_hidden(self, fn):\n '''Return True if the file is hidden\n '''\n pass\n\n def is_dir(self, fn):\n '''Return True if the argument passed to this method is a directory\n '''\n pass\n\n\nclass FileSystemLocal(FileSystemAbstract):\n '''Implementation of :class:`FileSystemAbstract` for local files\n\n .. versionadded:: 1.8.0\n '''\n\n def listdir(self, fn):\n return listdir(fn)\n\n def getsize(self, fn):\n return getsize(fn)\n\n def is_hidden(self, fn):\n if platform == 'win':\n if not _have_win32file:\n return False\n try:\n return GetFileAttributesExW(fn)[0] & FILE_ATTRIBUTE_HIDDEN\n except error:\n # This error can occured when a file is already accessed by\n # someone else. 
So don't return to True, because we have lot\n # of chances to not being able to do anything with it.\n Logger.exception('unable to access to <%s>' % fn)\n return True\n\n return basename(fn).startswith('.')\n\n def is_dir(self, fn):\n return isdir(fn)\n\n\nclass FileChooserProgressBase(FloatLayout):\n '''Base for implementing a progress view. This view is used when too many\n entries need to be created and are delayed over multiple frames.\n\n .. versionadded:: 1.2.0\n '''\n\n path = StringProperty('')\n '''Current path of the FileChooser, read-only.\n '''\n\n index = NumericProperty(0)\n '''Current index of :attr:`total` entries to be loaded.\n '''\n\n total = NumericProperty(1)\n '''Total number of entries to load.\n '''\n\n def cancel(self, *largs):\n '''Cancel any action from the FileChooserController.\n '''\n if self.parent:\n self.parent.cancel()\n\n def on_touch_down(self, touch):\n if self.collide_point(*touch.pos):\n super(FileChooserProgressBase, self).on_touch_down(touch)\n return True\n\n def on_touch_move(self, touch):\n if self.collide_point(*touch.pos):\n super(FileChooserProgressBase, self).on_touch_move(touch)\n return True\n\n def on_touch_up(self, touch):\n if self.collide_point(*touch.pos):\n super(FileChooserProgressBase, self).on_touch_up(touch)\n return True\n\n\nclass FileChooserProgress(FileChooserProgressBase):\n pass\n\n\nclass FileChooserController(FloatLayout):\n '''Base for implementing a FileChooser. Don't use this class directly, but\n prefer using an implementation such as the :class:`FileChooserListView` or\n :class:`FileChooserIconView`.\n\n :Events:\n `on_entry_added`: entry, parent\n Fired when a root-level entry is added to the file list.\n `on_entries_cleared`\n Fired when the the entries list is cleared, usually when the\n root is refreshed.\n `on_subentry_to_entry`: entry, parent\n Fired when a sub-entry is added to an existing entry.\n `on_remove_subentry`: entry, parent\n Fired when entries are removed from an entry, usually when\n a node is closed.\n `on_submit`: selection, touch\n Fired when a file has been selected with a double-tap.\n '''\n _ENTRY_TEMPLATE = None\n\n path = StringProperty(u'/')\n '''\n :class:`~kivy.properties.StringProperty`, defaults to the current working\n directory as a unicode string. It specifies the path on the filesystem that\n this controller should refer to.\n\n .. warning::\n\n If a unicode path is specified, all the files returned will be in\n unicode allowing the display of unicode files and paths. If a bytes\n path is specified, only files and paths with ascii names will be\n displayed properly: non-ascii filenames will be displayed and listed\n with questions marks (?) instead of their unicode characters.\n '''\n\n filters = ListProperty([])\n ''':class:`~kivy.properties.ListProperty`, defaults to [], equal to '\\*'.\n Specifies the filters to be applied to the files in the directory.\n\n The filters are not reset when the path changes. You need to do that\n yourself if desired.\n\n There are two kinds of filters: patterns and callbacks.\n\n #. Patterns\n\n e.g. ['\\*.png'].\n You can use the following patterns:\n\n ========== =================================\n Pattern Meaning\n ========== =================================\n \\* matches everything\n ? matches any single character\n [seq] matches any character in seq\n [!seq] matches any character not in seq\n ========== =================================\n\n #. Callbacks\n\n You can specify a function that will be called for each file. 
The\n callback will be passed the folder and file name as the first\n and second parameters respectively. It should return True to\n indicate a match and False otherwise.\n\n .. versionchanged:: 1.4.0\n If the filter is a callable (function or method), it will be called\n with the path and the file name as arguments for each file in the\n directory.\n The callable should returns True to indicate a match and False\n overwise.\n '''\n\n filter_dirs = BooleanProperty(False)\n '''\n :class:`~kivy.properties.BooleanProperty`, defaults to False.\n Indicates whether filters should also apply to directories.\n '''\n\n sort_func = ObjectProperty(alphanumeric_folders_first)\n '''\n :class:`~kivy.properties.ObjectProperty`.\n Provides a function to be called with a list of filenames, and the\n filesystem implementation as the second argument.\n Returns a list of filenames sorted for display in the view.\n\n .. versionchanged:: 1.8.0\n\n The signature needs now 2 arguments: first the list of files,\n second the filesystem class to use.\n '''\n\n files = ListProperty([])\n '''\n Read-only :class:`~kivy.properties.ListProperty`.\n The list of files in the directory specified by path after applying the\n filters.\n '''\n\n show_hidden = BooleanProperty(False)\n '''\n :class:`~kivy.properties.BooleanProperty`, defaults to False.\n Determines whether hidden files and folders should be shown.\n '''\n\n selection = ListProperty([])\n '''\n Read-only :class:`~kivy.properties.ListProperty`.\n Contains the list of files that are currently selected.\n '''\n\n multiselect = BooleanProperty(False)\n '''\n :class:`~kivy.properties.BooleanProperty`, defaults to False.\n Determines whether the user is able to select multiple files or not.\n '''\n\n dirselect = BooleanProperty(False)\n '''\n :class:`~kivy.properties.BooleanProperty`, defaults to False.\n Determines whether directories are valid selections or not.\n\n .. versionadded:: 1.1.0\n '''\n\n rootpath = StringProperty(None, allownone=True)\n '''\n Root path to use instead of the system root path. If set, it will not show\n a \"..\" directory to go up to the root path. For example, if you set\n rootpath to /users/foo, the user will be unable to go to /users or to any\n other directory not starting with /users/foo.\n\n .. versionadded:: 1.2.0\n\n :class:`~kivy.properties.StringProperty`, defaults to None.\n\n .. note::\n\n Similar to :attr:`path`, if `rootpath` is specified, whether it's a\n bytes or unicode string determines the type of the filenames and paths\n read.\n '''\n\n progress_cls = ObjectProperty(FileChooserProgress)\n '''Class to use for displaying a progress indicator for filechooser\n loading.\n\n .. versionadded:: 1.2.0\n\n :class:`~kivy.properties.ObjectProperty`, defaults to\n :class:`FileChooserProgress`.\n\n .. versionchanged:: 1.8.0\n\n If you set a string, the :class:`~kivy.factory.Factory` will be used to\n resolve the class.\n\n '''\n\n file_encodings = ListProperty(['utf-8', 'latin1', 'cp1252'])\n '''Possible encodings for decoding a filename to unicode. In the case that\n the user has a weird filename, undecodable without knowing it's\n initial encoding, we have no other choice than to guess it.\n\n Please note that if you encounter an issue because of a missing encoding\n here, we'll be glad to add it to this list.\n\n .. versionadded:: 1.3.0\n\n .. 
deprecated:: 1.8.0\n This property is no longer used as the filechooser no longer decodes\n the file names.\n\n file_encodings is a :class:`~kivy.properties.ListProperty` and defaults to\n ['utf-8', 'latin1', 'cp1252'],\n '''\n\n file_system = ObjectProperty(FileSystemLocal(),\n baseclass=FileSystemAbstract)\n '''Implementation to access the file system. Must be an instance of\n FileSystemAbstract.\n\n .. versionadded:: 1.8.0\n\n :class:`~kivy.properties.ObjectProperty`, defaults to\n :class:`FileSystemLocal()`\n '''\n\n __events__ = ('on_entry_added', 'on_entries_cleared',\n 'on_subentry_to_entry', 'on_remove_subentry', 'on_submit')\n\n def __init__(self, **kwargs):\n self._progress = None\n super(FileChooserController, self).__init__(**kwargs)\n\n self._items = []\n self.bind(selection=self._update_item_selection)\n\n self._previous_path = [self.path]\n self.bind(path=self._save_previous_path)\n self.bind(path=self._trigger_update,\n filters=self._trigger_update,\n rootpath=self._trigger_update)\n self._trigger_update()\n\n def on_touch_down(self, touch):\n # don't respond to touchs outside self\n if not self.collide_point(*touch.pos):\n return\n if self.disabled:\n return True\n return super(FileChooserController, self).on_touch_down(touch)\n\n def on_touch_up(self, touch):\n # don't respond to touchs outside self\n if not self.collide_point(*touch.pos):\n return True\n if self.disabled:\n return True\n return super(FileChooserController, self).on_touch_up(touch)\n\n def _update_item_selection(self, *args):\n for item in self._items:\n item.selected = item.path in self.selection\n\n def _save_previous_path(self, instance, value):\n self._previous_path.append(value)\n self._previous_path = self._previous_path[-2:]\n\n def _trigger_update(self, *args):\n Clock.unschedule(self._update_files)\n Clock.schedule_once(self._update_files)\n\n def on_entry_added(self, node, parent=None):\n pass\n\n def on_entries_cleared(self):\n pass\n\n def on_subentry_to_entry(self, subentry, entry):\n pass\n\n def on_remove_subentry(self, subentry, entry):\n pass\n\n def on_submit(self, selected, touch=None):\n pass\n\n def entry_touched(self, entry, touch):\n '''(internal) This method must be called by the template when an entry\n is touched by the user.\n '''\n if (\n 'button' in touch.profile and touch.button in (\n 'scrollup', 'scrolldown', 'scrollleft', 'scrollright')):\n return False\n\n _dir = self.file_system.is_dir(entry.path)\n dirselect = self.dirselect\n\n if _dir and dirselect and touch.is_double_tap:\n self.open_entry(entry)\n return\n\n if self.multiselect:\n if entry.path in self.selection:\n self.selection.remove(entry.path)\n else:\n if _dir and not self.dirselect:\n self.open_entry(entry)\n return\n self.selection.append(entry.path)\n else:\n if _dir and not self.dirselect:\n self.open_entry\n return\n self.selection = [entry.path, ]\n\n def entry_released(self, entry, touch):\n '''(internal) This method must be called by the template when an entry\n is touched by the user.\n\n .. 
versionadded:: 1.1.0\n '''\n if (\n 'button' in touch.profile and touch.button in (\n 'scrollup', 'scrolldown', 'scrollleft', 'scrollright')):\n return False\n if not self.multiselect:\n if self.file_system.is_dir(entry.path) and not self.dirselect:\n self.open_entry(entry)\n elif touch.is_double_tap:\n if self.dirselect and self.file_system.is_dir(entry.path):\n self.open_entry(entry)\n else:\n self.dispatch('on_submit', self.selection, touch)\n\n def open_entry(self, entry):\n try:\n # Just check if we can list the directory. This is also what\n # _add_file does, so if it fails here, it would also fail later\n # on. Do the check here to prevent setting path to an invalid\n # directory that we cannot list.\n self.file_system.listdir(entry.path)\n except OSError:\n entry.locked = True\n else:\n self.path = join(self.path, entry.path)\n self.selection = []\n\n def _apply_filters(self, files):\n if not self.filters:\n return files\n filtered = []\n for filt in self.filters:\n if isinstance(filt, collections.Callable):\n filtered.extend([fn for fn in files if filt(self.path, fn)])\n else:\n filtered.extend([fn for fn in files if fnmatch(fn, filt)])\n if not self.filter_dirs:\n dirs = [fn for fn in files if self.file_system.is_dir(fn)]\n filtered.extend(dirs)\n return list(set(filtered))\n\n def get_nice_size(self, fn):\n '''Pass the filepath. Returns the size in the best human readable\n format or '' if it is a directory (Don't recursively calculate size.).\n '''\n if self.file_system.is_dir(fn):\n return ''\n try:\n size = self.file_system.getsize(fn)\n except OSError:\n return '--'\n\n for unit in filesize_units:\n if size < 1024.0:\n return '%1.0f %s' % (size, unit)\n size /= 1024.0\n\n def _update_files(self, *args, **kwargs):\n # trigger to start gathering the files in the new directory\n # we'll start a timer that will do the job, 10 times per frames\n # (default)\n self._gitems = []\n self._gitems_parent = kwargs.get('parent', None)\n self._gitems_gen = self._generate_file_entries(\n path=kwargs.get('path', self.path),\n parent=self._gitems_parent)\n\n # cancel any previous clock if exist\n Clock.unschedule(self._create_files_entries)\n\n # show the progression screen\n self._hide_progress()\n if self._create_files_entries():\n # not enough for creating all the entries, all a clock to continue\n # start a timer for the next 100 ms\n Clock.schedule_interval(self._create_files_entries, .1)\n\n def _create_files_entries(self, *args):\n # create maximum entries during 50ms max, or 10 minimum (slow system)\n # (on a \"fast system\" (core i7 2700K), we can create up to 40 entries\n # in 50 ms. 
So 10 is fine for low system.\n start = time()\n finished = False\n index = total = count = 1\n while time() - start < 0.05 or count < 10:\n try:\n index, total, item = next(self._gitems_gen)\n self._gitems.append(item)\n count += 1\n except StopIteration:\n finished = True\n break\n except TypeError: # in case _gitems_gen is None\n finished = True\n break\n\n # if this wasn't enough for creating all the entries, show a progress\n # bar, and report the activity to the user.\n if not finished:\n self._show_progress()\n self._progress.total = total\n self._progress.index = index\n return True\n\n # we created all the files, now push them on the view\n self._items = items = self._gitems\n parent = self._gitems_parent\n if parent is None:\n self.dispatch('on_entries_cleared')\n for entry in items:\n self.dispatch('on_entry_added', entry, parent)\n else:\n parent.entries[:] = items\n for entry in items:\n self.dispatch('on_subentry_to_entry', entry, parent)\n self.files[:] = [file.path for file in items]\n\n # stop the progression / creation\n self._hide_progress()\n self._gitems = None\n self._gitems_gen = None\n Clock.unschedule(self._create_files_entries)\n return False\n\n def cancel(self, *largs):\n '''Cancel any background action started by filechooser, such as loading\n a new directory.\n\n .. versionadded:: 1.2.0\n '''\n Clock.unschedule(self._create_files_entries)\n self._hide_progress()\n if len(self._previous_path) > 1:\n # if we cancel any action, the path will be set same as the\n # previous one, so we can safely cancel the update of the previous\n # path.\n self.path = self._previous_path[-2]\n Clock.unschedule(self._update_files)\n\n def _show_progress(self):\n if self._progress:\n return\n cls = self.progress_cls\n if isinstance(cls, string_types):\n cls = Factory.get(cls)\n self._progress = cls(path=self.path)\n self._progress.value = 0\n self.add_widget(self._progress)\n\n def _hide_progress(self):\n if self._progress:\n self.remove_widget(self._progress)\n self._progress = None\n\n def _generate_file_entries(self, *args, **kwargs):\n # Generator that will create all the files entries.\n # the generator is used via _update_files() and _create_files_entries()\n # don't use it directly.\n is_root = False\n path = kwargs.get('path', self.path)\n have_parent = kwargs.get('parent', None) is not None\n\n # Add the components that are always needed\n if self.rootpath:\n rootpath = realpath(self.rootpath)\n path = realpath(path)\n if not path.startswith(rootpath):\n self.path = rootpath\n return\n elif path == rootpath:\n is_root = True\n else:\n if platform == 'win':\n is_root = splitdrive(path)[1] in (sep, altsep)\n elif platform in ('macosx', 'linux', 'android', 'ios'):\n is_root = normpath(expanduser(path)) == sep\n else:\n # Unknown fs, just always add the .. entry but also log\n Logger.warning('Filechooser: Unsupported OS: %r' % platform)\n # generate an entries to go back to previous\n if not is_root and not have_parent:\n back = '..' 
+ sep\n pardir = Builder.template(self._ENTRY_TEMPLATE, **dict(\n name=back, size='', path=back, controller=ref(self),\n isdir=True, parent=None, sep=sep, get_nice_size=lambda: ''))\n yield 0, 1, pardir\n\n # generate all the entries for files\n try:\n for index, total, item in self._add_files(path):\n yield index, total, item\n except OSError:\n Logger.exception('Unable to open directory <%s>' % self.path)\n self.files[:] = []\n\n def _add_files(self, path, parent=None):\n path = expanduser(path)\n\n files = []\n fappend = files.append\n for f in self.file_system.listdir(path):\n try:\n # In the following, use fully qualified filenames\n fappend(normpath(join(path, f)))\n except UnicodeDecodeError:\n Logger.exception('unable to decode <{}>'.format(f))\n except UnicodeEncodeError:\n Logger.exception('unable to encode <{}>'.format(f))\n # Apply filename filters\n files = self._apply_filters(files)\n # Sort the list of files\n files = self.sort_func(files, self.file_system)\n is_hidden = self.file_system.is_hidden\n if not self.show_hidden:\n files = [x for x in files if not is_hidden(x)]\n self.files[:] = files\n total = len(files)\n wself = ref(self)\n for index, fn in enumerate(files):\n\n def get_nice_size():\n # Use a closure for lazy-loading here\n return self.get_nice_size(fn)\n\n ctx = {'name': basename(fn),\n 'get_nice_size': get_nice_size,\n 'path': fn,\n 'controller': wself,\n 'isdir': self.file_system.is_dir(fn),\n 'parent': parent,\n 'sep': sep}\n entry = Builder.template(self._ENTRY_TEMPLATE, **ctx)\n yield index, total, entry\n\n def entry_subselect(self, entry):\n if not self.file_system.is_dir(entry.path):\n return\n self._update_files(path=entry.path, parent=entry)\n\n def close_subselection(self, entry):\n for subentry in entry.entries:\n self.dispatch('on_remove_subentry', subentry, entry)\n\n\nclass FileChooserListView(FileChooserController):\n '''Implementation of :class:`FileChooserController` using a list view.\n '''\n _ENTRY_TEMPLATE = 'FileListEntry'\n\n\nclass FileChooserIconView(FileChooserController):\n '''Implementation of :class:`FileChooserController` using an icon view.\n '''\n _ENTRY_TEMPLATE = 'FileIconEntry'\n\n def __init__(self, **kwargs):\n super(FileChooserIconView, self).__init__(**kwargs)\n self.bind(on_entries_cleared=self.scroll_to_top)\n \n def scroll_to_top(self, *args):\n self.ids.scrollview.scroll_y = 1.0\n\n\nif __name__ == '__main__':\n from kivy.app import App\n from pprint import pprint\n import sys\n\n class FileChooserApp(App):\n\n def build(self):\n view = FileChooserListView\n\n if len(sys.argv) > 1:\n v = view(path=sys.argv[1])\n else:\n v = view()\n\n v.bind(selection=lambda *x: pprint(\"selection: %s\" % x[1:]))\n v.bind(path=lambda *x: pprint(\"path: %s\" % x[1:]))\n return v\n\n FileChooserApp().run()\n", "path": "kivy/uix/filechooser.py" } ]
diff --git a/kivy/uix/filechooser.py b/kivy/uix/filechooser.py index 7242f14533..1e86dd5414 100644 --- a/kivy/uix/filechooser.py +++ b/kivy/uix/filechooser.py @@ -717,6 +717,13 @@ class FileChooserIconView(FileChooserController): ''' _ENTRY_TEMPLATE = 'FileIconEntry' + def __init__(self, **kwargs): + super(FileChooserIconView, self).__init__(**kwargs) + self.bind(on_entries_cleared=self.scroll_to_top) + + def scroll_to_top(self, *args): + self.ids.scrollview.scroll_y = 1.0 + if __name__ == '__main__': from kivy.app import App
mosaicml__composer-496
Move `ComposerTrainer` to top-level imports Our most heavily used objects should be easily importable from `composer` via: ``` from composer import Trainer, ComposerModel ``` rather than remembering their submodule: ``` from composer.models import ComposerModel ``` Especially the last one: it's tricky to remember whether it's `models` or `model`.
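A minimal sketch of the re-export pattern this request implies is below; it assumes the package keeps the explicit `as` aliasing already used for `Trainer` in `composer/__init__.py` (the before/after files and diff that follow show the actual change).

```python
# composer/__init__.py (sketch, not the full file): re-export the most used
# objects at the package top level so callers no longer need the submodule path.
from composer.models import ComposerModel as ComposerModel
from composer.trainer import Trainer as Trainer

# Downstream code can then use the shorter spelling the issue asks for:
#   from composer import Trainer, ComposerModel
```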
[ { "content": "# Copyright 2021 MosaicML. All Rights Reserved.\n\nfrom composer import algorithms as algorithms\nfrom composer import callbacks as callbacks\nfrom composer import datasets as datasets\nfrom composer import loggers as loggers\nfrom composer import models as models\nfrom composer import optim as optim\nfrom composer import profiler as profiler\nfrom composer import trainer as trainer\nfrom composer import utils as utils\nfrom composer.core import Algorithm as Algorithm\nfrom composer.core import Callback as Callback\nfrom composer.core import DataSpec as DataSpec\nfrom composer.core import Engine as Engine\nfrom composer.core import Event as Event\nfrom composer.core import Logger as Logger\nfrom composer.core import State as State\nfrom composer.core import Time as Time\nfrom composer.core import Timer as Timer\nfrom composer.core import TimeUnit as TimeUnit\nfrom composer.core import types as types\nfrom composer.trainer import Trainer as Trainer\n\n__version__ = \"0.3.1\"\n", "path": "composer/__init__.py" } ]
[ { "content": "# Copyright 2021 MosaicML. All Rights Reserved.\n\nfrom composer import algorithms as algorithms\nfrom composer import callbacks as callbacks\nfrom composer import datasets as datasets\nfrom composer import loggers as loggers\nfrom composer import models as models\nfrom composer import optim as optim\nfrom composer import profiler as profiler\nfrom composer import trainer as trainer\nfrom composer import utils as utils\nfrom composer.core import Algorithm as Algorithm\nfrom composer.core import Callback as Callback\nfrom composer.core import DataSpec as DataSpec\nfrom composer.core import Engine as Engine\nfrom composer.core import Event as Event\nfrom composer.core import Logger as Logger\nfrom composer.core import State as State\nfrom composer.core import Time as Time\nfrom composer.core import Timer as Timer\nfrom composer.core import TimeUnit as TimeUnit\nfrom composer.core import types as types\nfrom composer.models import ComposerModel as ComposerModel\nfrom composer.trainer import Trainer as Trainer\n\n__version__ = \"0.3.1\"\n", "path": "composer/__init__.py" } ]
diff --git a/composer/__init__.py b/composer/__init__.py index ee6694915f..3e0c87c78c 100644 --- a/composer/__init__.py +++ b/composer/__init__.py @@ -20,6 +20,7 @@ from composer.core import Timer as Timer from composer.core import TimeUnit as TimeUnit from composer.core import types as types +from composer.models import ComposerModel as ComposerModel from composer.trainer import Trainer as Trainer __version__ = "0.3.1"
ethereum__web3.py-1107
Backport 1094 to v4 branch ### What was wrong? https://github.com/ethereum/web3.py/issues/1094#issuecomment-428259232 needs to be backported to the v4 branch.
[ { "content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\nfrom setuptools import (\n find_packages,\n setup,\n)\n\nextras_require = {\n 'tester': [\n \"eth-tester[py-evm]==0.1.0-beta.33\",\n \"py-geth>=2.0.1,<3.0.0\",\n ],\n 'testrpc': [\"eth-testrpc>=1.3.3,<2.0.0\"],\n 'linter': [\n \"flake8==3.4.1\",\n \"isort>=4.2.15,<5\",\n ],\n 'docs': [\n \"mock\",\n \"sphinx-better-theme>=0.1.4\",\n \"click>=5.1\",\n \"configparser==3.5.0\",\n \"contextlib2>=0.5.4\",\n #\"eth-testrpc>=0.8.0\",\n #\"ethereum-tester-client>=1.1.0\",\n \"ethtoken\",\n \"py-geth>=1.4.0\",\n \"py-solc>=0.4.0\",\n \"pytest>=2.7.2\",\n \"sphinx\",\n \"sphinx_rtd_theme>=0.1.9\",\n \"toposort>=1.4\",\n \"urllib3\",\n \"web3>=2.1.0\",\n \"wheel\"\n ],\n 'dev': [\n \"bumpversion\",\n \"flaky>=3.3.0\",\n \"hypothesis>=3.31.2\",\n \"pytest>=3.5.0,<4\",\n \"pytest-mock==1.*\",\n \"pytest-pythonpath>=0.3\",\n \"pytest-watch==4.*\",\n \"pytest-xdist==1.*\",\n \"tox>=1.8.0\",\n \"tqdm\",\n \"when-changed\"\n ]\n}\n\nextras_require['dev'] = (\n extras_require['tester'] +\n extras_require['linter'] +\n extras_require['docs'] +\n extras_require['dev']\n)\n\nsetup(\n name='web3',\n # *IMPORTANT*: Don't manually change the version here. Use the 'bumpversion' utility.\n version='4.7.2',\n description=\"\"\"Web3.py\"\"\",\n long_description_markdown_filename='README.md',\n author='Piper Merriam',\n author_email='[email protected]',\n url='https://github.com/ethereum/web3.py',\n include_package_data=True,\n install_requires=[\n \"toolz>=0.9.0,<1.0.0;implementation_name=='pypy'\",\n \"cytoolz>=0.9.0,<1.0.0;implementation_name=='cpython'\",\n \"eth-abi>=1.2.0,<2.0.0\",\n \"eth-account>=0.2.1,<0.4.0\",\n \"eth-utils>=1.2.0,<2.0.0\",\n \"hexbytes>=0.1.0,<1.0.0\",\n \"lru-dict>=1.1.6,<2.0.0\",\n \"eth-hash[pycryptodome]>=0.2.0,<1.0.0\",\n \"requests>=2.16.0,<3.0.0\",\n \"websockets>=6.0.0,<7.0.0\",\n \"pypiwin32>=223;platform_system=='Windows'\",\n ],\n setup_requires=['setuptools-markdown'],\n python_requires='>=3.5, <4',\n extras_require=extras_require,\n py_modules=['web3', 'ens'],\n license=\"MIT\",\n zip_safe=False,\n keywords='ethereum',\n packages=find_packages(exclude=[\"tests\", \"tests.*\"]),\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: MIT License',\n 'Natural Language :: English',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n ],\n)\n", "path": "setup.py" } ]
[ { "content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\nfrom setuptools import (\n find_packages,\n setup,\n)\n\nextras_require = {\n 'tester': [\n \"eth-tester[py-evm]==0.1.0-beta.33\",\n \"py-geth>=2.0.1,<3.0.0\",\n ],\n 'testrpc': [\"eth-testrpc>=1.3.3,<2.0.0\"],\n 'linter': [\n \"flake8==3.4.1\",\n \"isort>=4.2.15,<5\",\n ],\n 'docs': [\n \"mock\",\n \"sphinx-better-theme>=0.1.4\",\n \"click>=5.1\",\n \"configparser==3.5.0\",\n \"contextlib2>=0.5.4\",\n #\"eth-testrpc>=0.8.0\",\n #\"ethereum-tester-client>=1.1.0\",\n \"ethtoken\",\n \"py-geth>=1.4.0\",\n \"py-solc>=0.4.0\",\n \"pytest>=2.7.2\",\n \"sphinx\",\n \"sphinx_rtd_theme>=0.1.9\",\n \"toposort>=1.4\",\n \"urllib3\",\n \"web3>=2.1.0\",\n \"wheel\"\n ],\n 'dev': [\n \"bumpversion\",\n \"flaky>=3.3.0\",\n \"hypothesis>=3.31.2\",\n \"pytest>=3.5.0,<4\",\n \"pytest-mock==1.*\",\n \"pytest-pythonpath>=0.3\",\n \"pytest-watch==4.*\",\n \"pytest-xdist==1.*\",\n \"tox>=1.8.0\",\n \"tqdm\",\n \"when-changed\"\n ]\n}\n\nextras_require['dev'] = (\n extras_require['tester'] +\n extras_require['linter'] +\n extras_require['docs'] +\n extras_require['dev']\n)\n\nsetup(\n name='web3',\n # *IMPORTANT*: Don't manually change the version here. Use the 'bumpversion' utility.\n version='4.7.2',\n description=\"\"\"Web3.py\"\"\",\n long_description_markdown_filename='README.md',\n author='Piper Merriam',\n author_email='[email protected]',\n url='https://github.com/ethereum/web3.py',\n include_package_data=True,\n install_requires=[\n \"toolz>=0.9.0,<1.0.0;implementation_name=='pypy'\",\n \"cytoolz>=0.9.0,<1.0.0;implementation_name=='cpython'\",\n \"eth-abi>=1.2.0,<2.0.0\",\n \"eth-account>=0.2.1,<0.4.0\",\n \"eth-utils>=1.2.0,<2.0.0\",\n \"hexbytes>=0.1.0,<1.0.0\",\n \"lru-dict>=1.1.6,<2.0.0\",\n \"eth-hash[pycryptodome]>=0.2.0,<1.0.0\",\n \"requests>=2.16.0,<3.0.0\",\n \"websockets>=6.0.0,<7.0.0\",\n \"pypiwin32>=223;platform_system=='Windows'\",\n ],\n setup_requires=['setuptools-markdown'],\n python_requires='>=3.5.3,<4',\n extras_require=extras_require,\n py_modules=['web3', 'ens'],\n license=\"MIT\",\n zip_safe=False,\n keywords='ethereum',\n packages=find_packages(exclude=[\"tests\", \"tests.*\"]),\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: MIT License',\n 'Natural Language :: English',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n ],\n)\n", "path": "setup.py" } ]
diff --git a/setup.py b/setup.py index e5ba56e5ff..e9c00c9986 100644 --- a/setup.py +++ b/setup.py @@ -80,7 +80,7 @@ "pypiwin32>=223;platform_system=='Windows'", ], setup_requires=['setuptools-markdown'], - python_requires='>=3.5, <4', + python_requires='>=3.5.3,<4', extras_require=extras_require, py_modules=['web3', 'ens'], license="MIT",
django-cms__django-filer-491
CircularDependencyError when using custom Image model 093d07357ee13d4ea830db136ef037180824ddae added a migration dependency on the swappable `Image` model. But a custom `Image` model inherits from `File`, so `filer.0001_initial` needs to be applied before the custom `Image` model's initial migration. This, of course, leads to a `CircularDependencyError`. The solution is to remove that dependency: no django-filer model depends on `Image`, so it can be removed safely.
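To make the cycle concrete, here is an illustrative sketch; `myapp` is a hypothetical project app that sets `FILER_IMAGE_MODEL` to its own `Image` model inheriting from `filer.File`. The before/after files and diff that follow apply exactly this removal.

```python
from django.db import migrations


class Migration(migrations.Migration):
    # myapp/migrations/0001_initial.py (hypothetical): because myapp.Image
    # subclasses filer.File, Django must apply filer's initial migration first.
    dependencies = [
        ("filer", "0001_initial"),
    ]
    operations = []

# Before the fix, filer's 0001_initial in turn declared
#   migrations.swappable_dependency(FILER_IMAGE_MODEL)
# which points back at myapp.0001_initial, so the migration resolver sees
#   myapp.0001 -> filer.0001 -> myapp.0001
# and raises CircularDependencyError. Removing that dependency (nothing in
# filer's own models references Image) breaks the cycle.
```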
[ { "content": "# -*- coding: utf-8 -*-\nfrom __future__ import unicode_literals\n\nfrom django.db import models, migrations\nimport filer.fields.multistorage_file\nimport filer.models.mixins\nfrom filer.settings import FILER_IMAGE_MODEL\nfrom django.conf import settings\n\n\nclass Migration(migrations.Migration):\n\n dependencies = [\n ('auth', '0001_initial'),\n migrations.swappable_dependency(settings.AUTH_USER_MODEL),\n migrations.swappable_dependency(FILER_IMAGE_MODEL or 'filer.models.imagemodels.Image'),\n ('contenttypes', '0001_initial'),\n ]\n\n operations = [\n migrations.CreateModel(\n name='Clipboard',\n fields=[\n ('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),\n ],\n options={\n 'verbose_name': 'clipboard',\n 'verbose_name_plural': 'clipboards',\n },\n bases=(models.Model,),\n ),\n migrations.CreateModel(\n name='ClipboardItem',\n fields=[\n ('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),\n ('clipboard', models.ForeignKey(verbose_name='clipboard', to='filer.Clipboard')),\n ],\n options={\n 'verbose_name': 'clipboard item',\n 'verbose_name_plural': 'clipboard items',\n },\n bases=(models.Model,),\n ),\n migrations.CreateModel(\n name='File',\n fields=[\n ('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),\n ('file', filer.fields.multistorage_file.MultiStorageFileField(max_length=255, upload_to=filer.fields.multistorage_file.generate_filename_multistorage, null=True, verbose_name='file', blank=True)),\n ('_file_size', models.IntegerField(null=True, verbose_name='file size', blank=True)),\n ('sha1', models.CharField(default='', max_length=40, verbose_name='sha1', blank=True)),\n ('has_all_mandatory_data', models.BooleanField(default=False, verbose_name='has all mandatory data', editable=False)),\n ('original_filename', models.CharField(max_length=255, null=True, verbose_name='original filename', blank=True)),\n ('name', models.CharField(default='', max_length=255, verbose_name='name', blank=True)),\n ('description', models.TextField(null=True, verbose_name='description', blank=True)),\n ('uploaded_at', models.DateTimeField(auto_now_add=True, verbose_name='uploaded at')),\n ('modified_at', models.DateTimeField(auto_now=True, verbose_name='modified at')),\n ('is_public', models.BooleanField(default=True, help_text='Disable any permission checking for this file. 
File will be publicly accessible to anyone.', verbose_name='Permissions disabled')),\n ],\n options={\n 'verbose_name': 'file',\n 'verbose_name_plural': 'files',\n },\n bases=(models.Model, filer.models.mixins.IconsMixin),\n ),\n migrations.CreateModel(\n name='Folder',\n fields=[\n ('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),\n ('name', models.CharField(max_length=255, verbose_name='name')),\n ('uploaded_at', models.DateTimeField(auto_now_add=True, verbose_name='uploaded at')),\n ('created_at', models.DateTimeField(auto_now_add=True, verbose_name='created at')),\n ('modified_at', models.DateTimeField(auto_now=True, verbose_name='modified at')),\n ('lft', models.PositiveIntegerField(editable=False, db_index=True)),\n ('rght', models.PositiveIntegerField(editable=False, db_index=True)),\n ('tree_id', models.PositiveIntegerField(editable=False, db_index=True)),\n ('level', models.PositiveIntegerField(editable=False, db_index=True)),\n ('owner', models.ForeignKey(related_name='filer_owned_folders', verbose_name='owner', blank=True, to=settings.AUTH_USER_MODEL, null=True)),\n ('parent', models.ForeignKey(related_name='children', verbose_name='parent', blank=True, to='filer.Folder', null=True)),\n ],\n options={\n 'ordering': ('name',),\n 'verbose_name': 'Folder',\n 'verbose_name_plural': 'Folders',\n 'permissions': (('can_use_directory_listing', 'Can use directory listing'),),\n },\n bases=(models.Model, filer.models.mixins.IconsMixin),\n ),\n migrations.CreateModel(\n name='FolderPermission',\n fields=[\n ('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),\n ('type', models.SmallIntegerField(default=0, verbose_name='type', choices=[(0, 'all items'), (1, 'this item only'), (2, 'this item and all children')])),\n ('everybody', models.BooleanField(default=False, verbose_name='everybody')),\n ('can_edit', models.SmallIntegerField(default=None, null=True, verbose_name='can edit', blank=True, choices=[(1, 'allow'), (0, 'deny')])),\n ('can_read', models.SmallIntegerField(default=None, null=True, verbose_name='can read', blank=True, choices=[(1, 'allow'), (0, 'deny')])),\n ('can_add_children', models.SmallIntegerField(default=None, null=True, verbose_name='can add children', blank=True, choices=[(1, 'allow'), (0, 'deny')])),\n ('folder', models.ForeignKey(verbose_name='folder', blank=True, to='filer.Folder', null=True)),\n ('group', models.ForeignKey(related_name='filer_folder_permissions', verbose_name='group', blank=True, to='auth.Group', null=True)),\n ('user', models.ForeignKey(related_name='filer_folder_permissions', verbose_name='user', blank=True, to=settings.AUTH_USER_MODEL, null=True)),\n ],\n options={\n 'verbose_name': 'folder permission',\n 'verbose_name_plural': 'folder permissions',\n },\n bases=(models.Model,),\n ),\n migrations.AlterUniqueTogether(\n name='folder',\n unique_together=set([('parent', 'name')]),\n ),\n migrations.AddField(\n model_name='file',\n name='folder',\n field=models.ForeignKey(related_name='all_files', verbose_name='folder', blank=True, to='filer.Folder', null=True),\n preserve_default=True,\n ),\n migrations.AddField(\n model_name='file',\n name='owner',\n field=models.ForeignKey(related_name='owned_files', verbose_name='owner', blank=True, to=settings.AUTH_USER_MODEL, null=True),\n preserve_default=True,\n ),\n migrations.AddField(\n model_name='file',\n name='polymorphic_ctype',\n field=models.ForeignKey(related_name='polymorphic_filer.file_set', 
editable=False, to='contenttypes.ContentType', null=True),\n preserve_default=True,\n ),\n migrations.AddField(\n model_name='clipboarditem',\n name='file',\n field=models.ForeignKey(verbose_name='file', to='filer.File'),\n preserve_default=True,\n ),\n migrations.AddField(\n model_name='clipboard',\n name='files',\n field=models.ManyToManyField(related_name='in_clipboards', verbose_name='files', through='filer.ClipboardItem', to='filer.File'),\n preserve_default=True,\n ),\n migrations.AddField(\n model_name='clipboard',\n name='user',\n field=models.ForeignKey(related_name='filer_clipboards', verbose_name='user', to=settings.AUTH_USER_MODEL),\n preserve_default=True,\n ),\n ]\n if not FILER_IMAGE_MODEL:\n operations.append(\n migrations.CreateModel(\n name='Image',\n fields=[\n ('file_ptr', models.OneToOneField(serialize=False, auto_created=True, to='filer.File', primary_key=True, parent_link=True)),\n ('_height', models.IntegerField(null=True, blank=True)),\n ('_width', models.IntegerField(null=True, blank=True)),\n ('date_taken', models.DateTimeField(verbose_name='date taken', null=True, editable=False, blank=True)),\n ('default_alt_text', models.CharField(max_length=255, null=True, verbose_name='default alt text', blank=True)),\n ('default_caption', models.CharField(max_length=255, null=True, verbose_name='default caption', blank=True)),\n ('author', models.CharField(max_length=255, null=True, verbose_name='author', blank=True)),\n ('must_always_publish_author_credit', models.BooleanField(default=False, verbose_name='must always publish author credit')),\n ('must_always_publish_copyright', models.BooleanField(default=False, verbose_name='must always publish copyright')),\n ('subject_location', models.CharField(default=None, max_length=64, null=True, verbose_name='subject location', blank=True)),\n ],\n options={\n 'verbose_name': 'image',\n 'verbose_name_plural': 'images',\n },\n bases=('filer.file',),\n )\n )\n", "path": "filer/migrations_django/0001_initial.py" } ]
[ { "content": "# -*- coding: utf-8 -*-\nfrom __future__ import unicode_literals\n\nfrom django.db import models, migrations\nimport filer.fields.multistorage_file\nimport filer.models.mixins\nfrom filer.settings import FILER_IMAGE_MODEL\nfrom django.conf import settings\n\n\nclass Migration(migrations.Migration):\n\n dependencies = [\n ('auth', '0001_initial'),\n migrations.swappable_dependency(settings.AUTH_USER_MODEL),\n ('contenttypes', '0001_initial'),\n ]\n\n operations = [\n migrations.CreateModel(\n name='Clipboard',\n fields=[\n ('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),\n ],\n options={\n 'verbose_name': 'clipboard',\n 'verbose_name_plural': 'clipboards',\n },\n bases=(models.Model,),\n ),\n migrations.CreateModel(\n name='ClipboardItem',\n fields=[\n ('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),\n ('clipboard', models.ForeignKey(verbose_name='clipboard', to='filer.Clipboard')),\n ],\n options={\n 'verbose_name': 'clipboard item',\n 'verbose_name_plural': 'clipboard items',\n },\n bases=(models.Model,),\n ),\n migrations.CreateModel(\n name='File',\n fields=[\n ('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),\n ('file', filer.fields.multistorage_file.MultiStorageFileField(max_length=255, upload_to=filer.fields.multistorage_file.generate_filename_multistorage, null=True, verbose_name='file', blank=True)),\n ('_file_size', models.IntegerField(null=True, verbose_name='file size', blank=True)),\n ('sha1', models.CharField(default='', max_length=40, verbose_name='sha1', blank=True)),\n ('has_all_mandatory_data', models.BooleanField(default=False, verbose_name='has all mandatory data', editable=False)),\n ('original_filename', models.CharField(max_length=255, null=True, verbose_name='original filename', blank=True)),\n ('name', models.CharField(default='', max_length=255, verbose_name='name', blank=True)),\n ('description', models.TextField(null=True, verbose_name='description', blank=True)),\n ('uploaded_at', models.DateTimeField(auto_now_add=True, verbose_name='uploaded at')),\n ('modified_at', models.DateTimeField(auto_now=True, verbose_name='modified at')),\n ('is_public', models.BooleanField(default=True, help_text='Disable any permission checking for this file. 
File will be publicly accessible to anyone.', verbose_name='Permissions disabled')),\n ],\n options={\n 'verbose_name': 'file',\n 'verbose_name_plural': 'files',\n },\n bases=(models.Model, filer.models.mixins.IconsMixin),\n ),\n migrations.CreateModel(\n name='Folder',\n fields=[\n ('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),\n ('name', models.CharField(max_length=255, verbose_name='name')),\n ('uploaded_at', models.DateTimeField(auto_now_add=True, verbose_name='uploaded at')),\n ('created_at', models.DateTimeField(auto_now_add=True, verbose_name='created at')),\n ('modified_at', models.DateTimeField(auto_now=True, verbose_name='modified at')),\n ('lft', models.PositiveIntegerField(editable=False, db_index=True)),\n ('rght', models.PositiveIntegerField(editable=False, db_index=True)),\n ('tree_id', models.PositiveIntegerField(editable=False, db_index=True)),\n ('level', models.PositiveIntegerField(editable=False, db_index=True)),\n ('owner', models.ForeignKey(related_name='filer_owned_folders', verbose_name='owner', blank=True, to=settings.AUTH_USER_MODEL, null=True)),\n ('parent', models.ForeignKey(related_name='children', verbose_name='parent', blank=True, to='filer.Folder', null=True)),\n ],\n options={\n 'ordering': ('name',),\n 'verbose_name': 'Folder',\n 'verbose_name_plural': 'Folders',\n 'permissions': (('can_use_directory_listing', 'Can use directory listing'),),\n },\n bases=(models.Model, filer.models.mixins.IconsMixin),\n ),\n migrations.CreateModel(\n name='FolderPermission',\n fields=[\n ('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),\n ('type', models.SmallIntegerField(default=0, verbose_name='type', choices=[(0, 'all items'), (1, 'this item only'), (2, 'this item and all children')])),\n ('everybody', models.BooleanField(default=False, verbose_name='everybody')),\n ('can_edit', models.SmallIntegerField(default=None, null=True, verbose_name='can edit', blank=True, choices=[(1, 'allow'), (0, 'deny')])),\n ('can_read', models.SmallIntegerField(default=None, null=True, verbose_name='can read', blank=True, choices=[(1, 'allow'), (0, 'deny')])),\n ('can_add_children', models.SmallIntegerField(default=None, null=True, verbose_name='can add children', blank=True, choices=[(1, 'allow'), (0, 'deny')])),\n ('folder', models.ForeignKey(verbose_name='folder', blank=True, to='filer.Folder', null=True)),\n ('group', models.ForeignKey(related_name='filer_folder_permissions', verbose_name='group', blank=True, to='auth.Group', null=True)),\n ('user', models.ForeignKey(related_name='filer_folder_permissions', verbose_name='user', blank=True, to=settings.AUTH_USER_MODEL, null=True)),\n ],\n options={\n 'verbose_name': 'folder permission',\n 'verbose_name_plural': 'folder permissions',\n },\n bases=(models.Model,),\n ),\n migrations.AlterUniqueTogether(\n name='folder',\n unique_together=set([('parent', 'name')]),\n ),\n migrations.AddField(\n model_name='file',\n name='folder',\n field=models.ForeignKey(related_name='all_files', verbose_name='folder', blank=True, to='filer.Folder', null=True),\n preserve_default=True,\n ),\n migrations.AddField(\n model_name='file',\n name='owner',\n field=models.ForeignKey(related_name='owned_files', verbose_name='owner', blank=True, to=settings.AUTH_USER_MODEL, null=True),\n preserve_default=True,\n ),\n migrations.AddField(\n model_name='file',\n name='polymorphic_ctype',\n field=models.ForeignKey(related_name='polymorphic_filer.file_set', 
editable=False, to='contenttypes.ContentType', null=True),\n preserve_default=True,\n ),\n migrations.AddField(\n model_name='clipboarditem',\n name='file',\n field=models.ForeignKey(verbose_name='file', to='filer.File'),\n preserve_default=True,\n ),\n migrations.AddField(\n model_name='clipboard',\n name='files',\n field=models.ManyToManyField(related_name='in_clipboards', verbose_name='files', through='filer.ClipboardItem', to='filer.File'),\n preserve_default=True,\n ),\n migrations.AddField(\n model_name='clipboard',\n name='user',\n field=models.ForeignKey(related_name='filer_clipboards', verbose_name='user', to=settings.AUTH_USER_MODEL),\n preserve_default=True,\n ),\n ]\n if not FILER_IMAGE_MODEL:\n operations.append(\n migrations.CreateModel(\n name='Image',\n fields=[\n ('file_ptr', models.OneToOneField(serialize=False, auto_created=True, to='filer.File', primary_key=True, parent_link=True)),\n ('_height', models.IntegerField(null=True, blank=True)),\n ('_width', models.IntegerField(null=True, blank=True)),\n ('date_taken', models.DateTimeField(verbose_name='date taken', null=True, editable=False, blank=True)),\n ('default_alt_text', models.CharField(max_length=255, null=True, verbose_name='default alt text', blank=True)),\n ('default_caption', models.CharField(max_length=255, null=True, verbose_name='default caption', blank=True)),\n ('author', models.CharField(max_length=255, null=True, verbose_name='author', blank=True)),\n ('must_always_publish_author_credit', models.BooleanField(default=False, verbose_name='must always publish author credit')),\n ('must_always_publish_copyright', models.BooleanField(default=False, verbose_name='must always publish copyright')),\n ('subject_location', models.CharField(default=None, max_length=64, null=True, verbose_name='subject location', blank=True)),\n ],\n options={\n 'verbose_name': 'image',\n 'verbose_name_plural': 'images',\n },\n bases=('filer.file',),\n )\n )\n", "path": "filer/migrations_django/0001_initial.py" } ]
diff --git a/filer/migrations_django/0001_initial.py b/filer/migrations_django/0001_initial.py index c35e9ff8e..b8d70adf5 100644 --- a/filer/migrations_django/0001_initial.py +++ b/filer/migrations_django/0001_initial.py @@ -13,7 +13,6 @@ class Migration(migrations.Migration): dependencies = [ ('auth', '0001_initial'), migrations.swappable_dependency(settings.AUTH_USER_MODEL), - migrations.swappable_dependency(FILER_IMAGE_MODEL or 'filer.models.imagemodels.Image'), ('contenttypes', '0001_initial'), ]
optuna__optuna-4964
Use `__future__.annotations` everywhere in the Optuna code base ### Motivation Optuna drops Python 3.6 from v3.1, so we can use `__future__.annotations`, which simplifies the code base. See [PEP 563](https://peps.python.org/pep-0563/), [PEP584](https://peps.python.org/pep-0584/), [PEP 585](https://peps.python.org/pep-0585/), and [PEP 604](https://peps.python.org/pep-0604/) for more details. This issue suggests to use the module and simplifies the code base. ### Suggestion Use `__future__.annotations` for each file and simplify the type annotations. The list of classes whose type annotations can be simplified is [here](https://peps.python.org/pep-0585/#implementation). The list of files where the `__future__.annotations` can be used is as follows. In order to reduce review costs and to encourage more contributors to work on it, please, as a rule, fix one file per PR. - [x] optuna/_convert_positional_args.py - [x] optuna/visualization/_optimization_history.py - [x] optuna/visualization/_hypervolume_history.py - [x] optuna/visualization/_edf.py - [x] optuna/visualization/_pareto_front.py - [x] optuna/visualization/matplotlib/_optimization_history.py - [x] optuna/visualization/matplotlib/_hypervolume_history.py - [x] optuna/visualization/matplotlib/_edf.py - [x] optuna/visualization/matplotlib/_pareto_front.py - [x] optuna/visualization/matplotlib/_contour.py - [x] optuna/visualization/_utils.py - [x] optuna/logging.py - [ ] optuna/storages/_base.py - [ ] optuna/storages/_cached_storage.py - [ ] optuna/storages/__init__.py - [ ] optuna/storages/_heartbeat.py - [ ] optuna/storages/_in_memory.py - [ ] optuna/storages/_rdb/models.py - [ ] optuna/storages/_rdb/storage.py - [ ] optuna/storages/_rdb/alembic/versions/v3.0.0.c.py - [ ] optuna/storages/_rdb/alembic/versions/v3.0.0.d.py - [ ] optuna/storages/_rdb/alembic/versions/v3.0.0.a.py - [ ] optuna/storages/_journal/file.py - [ ] optuna/storages/_journal/redis.py - [ ] optuna/storages/_journal/storage.py - [ ] optuna/storages/_journal/base.py - [ ] optuna/study/_dataframe.py - [ ] optuna/study/_optimize.py - [ ] optuna/study/_tell.py - [ ] optuna/study/_multi_objective.py - [ ] optuna/study/_frozen.py - [ ] optuna/study/study.py - [ ] optuna/study/_study_summary.py - [ ] optuna/search_space/group_decomposed.py - [ ] optuna/search_space/intersection.py - [ ] optuna/_typing.py - [ ] optuna/_deprecated.py - [ ] optuna/pruners/_hyperband.py - [ ] optuna/pruners/_patient.py - [ ] optuna/pruners/_successive_halving.py - [ ] optuna/pruners/_percentile.py - [ ] optuna/pruners/_threshold.py - [ ] optuna/trial/_base.py - [ ] optuna/trial/_fixed.py - [ ] optuna/trial/_trial.py - [ ] optuna/trial/_frozen.py - [ ] optuna/integration/cma.py - [ ] optuna/integration/shap.py - [ ] optuna/integration/lightgbm.py - [ ] optuna/integration/pytorch_distributed.py - [ ] optuna/integration/_lightgbm_tuner/optimize.py - [ ] optuna/integration/_lightgbm_tuner/alias.py - [ ] optuna/integration/mlflow.py - [ ] optuna/integration/wandb.py - [ ] optuna/integration/catboost.py - [ ] optuna/integration/skopt.py - [ ] optuna/integration/botorch.py - [ ] optuna/integration/dask.py - [x] optuna/integration/sklearn.py - [ ] optuna/integration/tensorboard.py - [ ] optuna/terminator/callback.py - [ ] optuna/terminator/terminator.py - [ ] optuna/terminator/improvement/_preprocessing.py - [ ] optuna/terminator/improvement/gp/botorch.py - [ ] optuna/terminator/improvement/gp/base.py - [ ] optuna/terminator/improvement/evaluator.py - [ ] optuna/importance/_base.py - [ ] 
optuna/importance/_mean_decrease_impurity.py - [ ] optuna/importance/__init__.py - [ ] optuna/importance/_fanova/_fanova.py - [ ] optuna/importance/_fanova/_evaluator.py - [ ] optuna/importance/_fanova/_tree.py - [ ] optuna/_imports.py - [ ] optuna/testing/tempfile_pool.py - [ ] optuna/testing/threading.py - [ ] optuna/testing/distributions.py - [ ] optuna/testing/samplers.py - [ ] optuna/testing/storages.py - [ ] optuna/distributions.py - [ ] optuna/cli.py - [ ] optuna/multi_objective/visualization/_pareto_front.py - [ ] optuna/multi_objective/trial.py - [ ] optuna/multi_objective/samplers/_base.py - [ ] optuna/multi_objective/samplers/_nsga2.py - [ ] optuna/multi_objective/samplers/_adapter.py - [ ] optuna/multi_objective/samplers/_random.py - [ ] optuna/multi_objective/samplers/_motpe.py - [ ] optuna/multi_objective/study.py - [ ] optuna/_experimental.py - [ ] optuna/samplers/_base.py - [ ] optuna/samplers/nsgaii/_crossovers/_undx.py - [ ] optuna/samplers/nsgaii/_crossovers/_spx.py - [ ] optuna/samplers/nsgaii/_crossovers/_sbx.py - [ ] optuna/samplers/nsgaii/_crossovers/_vsbx.py - [ ] optuna/samplers/nsgaii/_sampler.py - [ ] optuna/samplers/nsgaii/_crossover.py - [ ] optuna/samplers/_search_space/intersection.py - [ ] optuna/samplers/_qmc.py - [ ] optuna/samplers/_tpe/probability_distributions.py - [ ] optuna/samplers/_tpe/_truncnorm.py - [ ] optuna/samplers/_tpe/multi_objective_sampler.py - [ ] optuna/samplers/_tpe/parzen_estimator.py - [ ] optuna/samplers/_tpe/sampler.py - [ ] optuna/samplers/_random.py - [ ] optuna/samplers/_cmaes.py - [ ] optuna/samplers/_partial_fixed.py - [ ] optuna/samplers/_brute_force.py - [ ] optuna/samplers/_nsgaiii.py - [ ] optuna/samplers/_grid.py - [ ] optuna/_hypervolume/wfg.py - [ ] optuna/_hypervolume/hssp.py - [ ] optuna/progress_bar.py - [ ] optuna/_transform.py - [ ] optuna/_callbacks.py - [ ] tests/multi_objective_tests/test_study.py - [ ] tests/multi_objective_tests/samplers_tests/test_motpe.py - [ ] tests/multi_objective_tests/samplers_tests/test_nsga2.py - [ ] tests/multi_objective_tests/test_trial.py - [ ] tests/multi_objective_tests/visualization_tests/test_pareto_front.py - [ ] tests/trial_tests/test_frozen.py - [ ] tests/trial_tests/test_trials.py - [ ] tests/trial_tests/test_trial.py - [ ] tests/pruners_tests/test_percentile.py - [ ] tests/pruners_tests/test_median.py - [ ] tests/pruners_tests/test_patient.py - [ ] tests/pruners_tests/test_successive_halving.py - [ ] tests/study_tests/test_optimize.py - [ ] tests/study_tests/test_study.py - [ ] tests/hypervolume_tests/test_hssp.py - [x] tests/integration_tests/test_skopt.py - [x] tests/integration_tests/test_pytorch_lightning.py - [ ] tests/integration_tests/test_shap.py - [ ] tests/integration_tests/test_cma.py - [ ] tests/integration_tests/test_pytorch_distributed.py - [ ] tests/integration_tests/lightgbm_tuner_tests/test_optimize.py - [ ] tests/integration_tests/lightgbm_tuner_tests/test_alias.py - [ ] tests/integration_tests/test_botorch.py - [ ] tests/integration_tests/test_mlflow.py - [ ] tests/integration_tests/test_mxnet.py - [ ] tests/integration_tests/test_wandb.py - [ ] tests/importance_tests/fanova_tests/test_tree.py - [ ] tests/importance_tests/test_mean_decrease_impurity.py - [ ] tests/importance_tests/test_fanova.py - [ ] tests/importance_tests/test_init.py - [ ] tests/test_convert_positional_args.py - [ ] tests/test_deprecated.py - [ ] tests/storages_tests/test_journal.py - [ ] tests/storages_tests/test_heartbeat.py - [ ] tests/storages_tests/test_storages.py - [ ] 
tests/storages_tests/rdb_tests/test_storage.py - [ ] tests/storages_tests/rdb_tests/create_db.py - [ ] tests/storages_tests/test_with_server.py - [ ] tests/samplers_tests/test_grid.py - [ ] tests/samplers_tests/tpe_tests/test_parzen_estimator.py - [ ] tests/samplers_tests/tpe_tests/test_multi_objective_sampler.py - [ ] tests/samplers_tests/tpe_tests/test_sampler.py - [ ] tests/samplers_tests/test_cmaes.py - [ ] tests/samplers_tests/test_samplers.py - [x] tests/samplers_tests/test_nsgaii.py - [x] tests/samplers_tests/test_nsgaiii.py - [ ] tests/samplers_tests/test_qmc.py - [ ] tests/test_distributions.py - [ ] tests/test_multi_objective.py - [ ] tests/test_cli.py - [ ] tests/visualization_tests/test_hypervolume_history.py - [ ] tests/visualization_tests/test_pareto_front.py - [ ] tests/terminator_tests/improvement_tests/test_evaluator.py - [ ] benchmarks/kurobako/problems/wfg/transformation_functions.py - [ ] benchmarks/bayesmark/report_bayesmark.py - [ ] benchmarks/bayesmark/optuna_optimizer.py ### Additional context (optional) The above list is generated by the following script. <details> <summary>script</summary> ```python import os import pathlib PATTERS = [ "from typing import Union", "from typing import Optional", "from typing import Tuple", "from typing import List", "from typing import Dict", "from typing import Set", "from typing import FrozenSet", "from typing import Type", "from typing import FrozenSet", "from typing import Sequence", ] def get_filenames_to_be_simplified(dir_path): ret = [] for f in os.listdir(dir_path): file_path = os.path.join(dir_path, f) if not os.path.isfile(file_path): ret.extend(get_filenames_to_be_simplified(file_path)) else: try: with open(file_path) as fd: contents = fd.read() if any([s in contents for s in PATTERS]): ret.append(str(file_path)) except UnicodeDecodeError as e: pass return ret def main(): dirs = ["optuna", "tests", "benchmarks"] for dir_name in dirs: filenames = get_filenames_to_be_simplified(pathlib.Path(dir_name)) for filename in filenames: print(f"- [ ] {filename}") if __name__ == "__main__": main() ``` </details>
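As a minimal, self-contained sketch (not taken from the Optuna code base; the `summarize` function is purely illustrative) of the simplification the future import enables under PEP 563/585/604:

```python
from __future__ import annotations

# Old spelling, needed while Python 3.6 was supported:
#   from typing import Dict, List, Optional
#   def summarize(values: Optional[List[float]]) -> Dict[str, float]: ...
#
# With the future import, annotations are stored as strings (PEP 563), so the
# builtin generics of PEP 585 and the X | Y union syntax of PEP 604 can be
# written in annotations even on interpreters that predate their runtime
# support (e.g. Python 3.7/3.8).


def summarize(values: list[float] | None) -> dict[str, float]:
    """Return simple statistics for a list of objective values."""
    if not values:
        return {}
    return {"best": min(values), "mean": sum(values) / len(values)}


print(summarize([0.3, 0.1, 0.2]))
```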
[ { "content": "from __future__ import annotations\n\nfrom enum import Enum\nimport math\nfrom typing import Callable\nfrom typing import cast\nfrom typing import NamedTuple\nfrom typing import Sequence\n\nimport numpy as np\n\nfrom optuna.logging import get_logger\nfrom optuna.samplers._base import _CONSTRAINTS_KEY\nfrom optuna.study import Study\nfrom optuna.study._study_direction import StudyDirection\nfrom optuna.trial import FrozenTrial\nfrom optuna.trial import TrialState\nfrom optuna.visualization._plotly_imports import _imports\nfrom optuna.visualization._utils import _check_plot_args\n\n\nif _imports.is_successful():\n from optuna.visualization._plotly_imports import go\n\n_logger = get_logger(__name__)\n\n\nclass _ValueState(Enum):\n Feasible = 0\n Infeasible = 1\n Incomplete = 2\n\n\nclass _ValuesInfo(NamedTuple):\n values: list[float]\n stds: list[float] | None\n label_name: str\n states: list[_ValueState]\n\n\nclass _OptimizationHistoryInfo(NamedTuple):\n trial_numbers: list[int]\n values_info: _ValuesInfo\n best_values_info: _ValuesInfo | None\n\n\ndef _get_optimization_history_info_list(\n study: Study | Sequence[Study],\n target: Callable[[FrozenTrial], float] | None,\n target_name: str,\n error_bar: bool,\n) -> list[_OptimizationHistoryInfo]:\n _check_plot_args(study, target, target_name)\n if isinstance(study, Study):\n studies = [study]\n else:\n studies = list(study)\n\n info_list: list[_OptimizationHistoryInfo] = []\n for study in studies:\n trials = study.get_trials()\n label_name = target_name if len(studies) == 1 else f\"{target_name} of {study.study_name}\"\n values = []\n value_states = []\n for trial in trials:\n if trial.state != TrialState.COMPLETE:\n values.append(float(\"nan\"))\n value_states.append(_ValueState.Incomplete)\n continue\n constraints = trial.system_attrs.get(_CONSTRAINTS_KEY)\n if constraints is None or all([x <= 0.0 for x in constraints]):\n value_states.append(_ValueState.Feasible)\n else:\n value_states.append(_ValueState.Infeasible)\n if target is not None:\n values.append(target(trial))\n else:\n values.append(cast(float, trial.value))\n if target is not None:\n # We don't calculate best for user-defined target function since we cannot tell\n # which direction is better.\n best_values_info: _ValuesInfo | None = None\n else:\n feasible_best_values = []\n if study.direction == StudyDirection.MINIMIZE:\n feasible_best_values = [\n v if s == _ValueState.Feasible else float(\"inf\")\n for v, s in zip(values, value_states)\n ]\n best_values = list(np.minimum.accumulate(feasible_best_values))\n else:\n feasible_best_values = [\n v if s == _ValueState.Feasible else -float(\"inf\")\n for v, s in zip(values, value_states)\n ]\n best_values = list(np.maximum.accumulate(feasible_best_values))\n best_label_name = (\n \"Best Value\" if len(studies) == 1 else f\"Best Value of {study.study_name}\"\n )\n best_values_info = _ValuesInfo(best_values, None, best_label_name, value_states)\n info_list.append(\n _OptimizationHistoryInfo(\n trial_numbers=[t.number for t in trials],\n values_info=_ValuesInfo(values, None, label_name, value_states),\n best_values_info=best_values_info,\n )\n )\n\n if len(info_list) == 0:\n _logger.warning(\"There are no studies.\")\n\n feasible_trial_count = sum(\n info.values_info.states.count(_ValueState.Feasible) for info in info_list\n )\n infeasible_trial_count = sum(\n info.values_info.states.count(_ValueState.Infeasible) for info in info_list\n )\n if feasible_trial_count + infeasible_trial_count == 0:\n 
_logger.warning(\"There are no complete trials.\")\n info_list.clear()\n\n if not error_bar:\n return info_list\n\n # When error_bar=True, a list of 0 or 1 element is returned.\n if len(info_list) == 0:\n return []\n if feasible_trial_count == 0:\n _logger.warning(\"There are no feasible trials.\")\n return []\n\n all_trial_numbers = [number for info in info_list for number in info.trial_numbers]\n max_num_trial = max(all_trial_numbers) + 1\n\n def _aggregate(label_name: str, use_best_value: bool) -> tuple[list[int], _ValuesInfo]:\n # Calculate mean and std of values for each trial number.\n values: list[list[float]] = [[] for _ in range(max_num_trial)]\n states: list[list[_ValueState]] = [[] for _ in range(max_num_trial)]\n assert info_list is not None\n for trial_numbers, values_info, best_values_info in info_list:\n if use_best_value:\n assert best_values_info is not None\n values_info = best_values_info\n for n, v, s in zip(trial_numbers, values_info.values, values_info.states):\n if not math.isinf(v):\n if not use_best_value and s == _ValueState.Feasible:\n values[n].append(v)\n elif use_best_value:\n values[n].append(v)\n states[n].append(s)\n trial_numbers_union: list[int] = []\n value_states: list[_ValueState] = []\n value_means: list[float] = []\n value_stds: list[float] = []\n for i in range(max_num_trial):\n if len(states[i]) > 0 and _ValueState.Feasible in states[i]:\n value_states.append(_ValueState.Feasible)\n trial_numbers_union.append(i)\n value_means.append(np.mean(values[i]).item())\n value_stds.append(np.std(values[i]).item())\n else:\n value_states.append(_ValueState.Infeasible)\n return trial_numbers_union, _ValuesInfo(value_means, value_stds, label_name, value_states)\n\n eb_trial_numbers, eb_values_info = _aggregate(target_name, False)\n eb_best_values_info: _ValuesInfo | None = None\n if target is None:\n _, eb_best_values_info = _aggregate(\"Best Value\", True)\n return [_OptimizationHistoryInfo(eb_trial_numbers, eb_values_info, eb_best_values_info)]\n\n\ndef plot_optimization_history(\n study: Study | Sequence[Study],\n *,\n target: Callable[[FrozenTrial], float] | None = None,\n target_name: str = \"Objective Value\",\n error_bar: bool = False,\n) -> \"go.Figure\":\n \"\"\"Plot optimization history of all trials in a study.\n\n Example:\n\n The following code snippet shows how to plot optimization history.\n\n .. plotly::\n\n import optuna\n\n\n def objective(trial):\n x = trial.suggest_float(\"x\", -100, 100)\n y = trial.suggest_categorical(\"y\", [-1, 0, 1])\n return x ** 2 + y\n\n\n sampler = optuna.samplers.TPESampler(seed=10)\n study = optuna.create_study(sampler=sampler)\n study.optimize(objective, n_trials=10)\n\n fig = optuna.visualization.plot_optimization_history(study)\n fig.show()\n\n Args:\n study:\n A :class:`~optuna.study.Study` object whose trials are plotted for their target values.\n You can pass multiple studies if you want to compare those optimization histories.\n target:\n A function to specify the value to display. If it is :obj:`None` and ``study`` is being\n used for single-objective optimization, the objective values are plotted.\n\n .. 
note::\n Specify this argument if ``study`` is being used for multi-objective optimization.\n target_name:\n Target's name to display on the axis label and the legend.\n error_bar:\n A flag to show the error bar.\n\n Returns:\n A :class:`plotly.graph_objs.Figure` object.\n \"\"\"\n\n _imports.check()\n\n info_list = _get_optimization_history_info_list(study, target, target_name, error_bar)\n return _get_optimization_history_plot(info_list, target_name)\n\n\ndef _get_optimization_history_plot(\n info_list: list[_OptimizationHistoryInfo],\n target_name: str,\n) -> \"go.Figure\":\n layout = go.Layout(\n title=\"Optimization History Plot\",\n xaxis={\"title\": \"Trial\"},\n yaxis={\"title\": target_name},\n )\n\n traces = []\n for trial_numbers, values_info, best_values_info in info_list:\n infeasible_trial_numbers = [\n n for n, s in zip(trial_numbers, values_info.states) if s == _ValueState.Infeasible\n ]\n if values_info.stds is None:\n error_y = None\n feasible_trial_numbers = [\n num\n for num, s in zip(trial_numbers, values_info.states)\n if s == _ValueState.Feasible\n ]\n feasible_trial_values = []\n for num in feasible_trial_numbers:\n feasible_trial_values.append(values_info.values[num])\n infeasible_trial_values = []\n for num in infeasible_trial_numbers:\n infeasible_trial_values.append(values_info.values[num])\n else:\n if (\n _ValueState.Infeasible in values_info.states\n or _ValueState.Incomplete in values_info.states\n ):\n _logger.warning(\n \"Your study contains infeasible trials. \"\n \"In optimization history plot, \"\n \"error bars are calculated for only feasible trial values.\"\n )\n error_y = {\"type\": \"data\", \"array\": values_info.stds, \"visible\": True}\n feasible_trial_numbers = trial_numbers\n feasible_trial_values = values_info.values\n infeasible_trial_values = []\n traces.append(\n go.Scatter(\n x=feasible_trial_numbers,\n y=feasible_trial_values,\n error_y=error_y,\n mode=\"markers\",\n name=values_info.label_name,\n )\n )\n if best_values_info is not None:\n traces.append(\n go.Scatter(\n x=trial_numbers,\n y=best_values_info.values,\n name=best_values_info.label_name,\n mode=\"lines\",\n )\n )\n if best_values_info.stds is not None:\n upper = np.array(best_values_info.values) + np.array(best_values_info.stds)\n traces.append(\n go.Scatter(\n x=trial_numbers,\n y=upper,\n mode=\"lines\",\n line=dict(width=0.01),\n showlegend=False,\n )\n )\n lower = np.array(best_values_info.values) - np.array(best_values_info.stds)\n traces.append(\n go.Scatter(\n x=trial_numbers,\n y=lower,\n mode=\"none\",\n showlegend=False,\n fill=\"tonexty\",\n fillcolor=\"rgba(255,0,0,0.2)\",\n )\n )\n traces.append(\n go.Scatter(\n x=infeasible_trial_numbers,\n y=infeasible_trial_values,\n error_y=error_y,\n mode=\"markers\",\n name=\"Infeasible Trial\",\n marker={\"color\": \"#cccccc\"},\n showlegend=False,\n )\n )\n return go.Figure(data=traces, layout=layout)\n", "path": "optuna/visualization/_optimization_history.py" } ]
[ { "content": "from __future__ import annotations\n\nfrom collections.abc import Callable\nfrom collections.abc import Sequence\nfrom enum import Enum\nimport math\nfrom typing import cast\nfrom typing import NamedTuple\n\nimport numpy as np\n\nfrom optuna.logging import get_logger\nfrom optuna.samplers._base import _CONSTRAINTS_KEY\nfrom optuna.study import Study\nfrom optuna.study._study_direction import StudyDirection\nfrom optuna.trial import FrozenTrial\nfrom optuna.trial import TrialState\nfrom optuna.visualization._plotly_imports import _imports\nfrom optuna.visualization._utils import _check_plot_args\n\n\nif _imports.is_successful():\n from optuna.visualization._plotly_imports import go\n\n_logger = get_logger(__name__)\n\n\nclass _ValueState(Enum):\n Feasible = 0\n Infeasible = 1\n Incomplete = 2\n\n\nclass _ValuesInfo(NamedTuple):\n values: list[float]\n stds: list[float] | None\n label_name: str\n states: list[_ValueState]\n\n\nclass _OptimizationHistoryInfo(NamedTuple):\n trial_numbers: list[int]\n values_info: _ValuesInfo\n best_values_info: _ValuesInfo | None\n\n\ndef _get_optimization_history_info_list(\n study: Study | Sequence[Study],\n target: Callable[[FrozenTrial], float] | None,\n target_name: str,\n error_bar: bool,\n) -> list[_OptimizationHistoryInfo]:\n _check_plot_args(study, target, target_name)\n if isinstance(study, Study):\n studies = [study]\n else:\n studies = list(study)\n\n info_list: list[_OptimizationHistoryInfo] = []\n for study in studies:\n trials = study.get_trials()\n label_name = target_name if len(studies) == 1 else f\"{target_name} of {study.study_name}\"\n values = []\n value_states = []\n for trial in trials:\n if trial.state != TrialState.COMPLETE:\n values.append(float(\"nan\"))\n value_states.append(_ValueState.Incomplete)\n continue\n constraints = trial.system_attrs.get(_CONSTRAINTS_KEY)\n if constraints is None or all([x <= 0.0 for x in constraints]):\n value_states.append(_ValueState.Feasible)\n else:\n value_states.append(_ValueState.Infeasible)\n if target is not None:\n values.append(target(trial))\n else:\n values.append(cast(float, trial.value))\n if target is not None:\n # We don't calculate best for user-defined target function since we cannot tell\n # which direction is better.\n best_values_info: _ValuesInfo | None = None\n else:\n feasible_best_values = []\n if study.direction == StudyDirection.MINIMIZE:\n feasible_best_values = [\n v if s == _ValueState.Feasible else float(\"inf\")\n for v, s in zip(values, value_states)\n ]\n best_values = list(np.minimum.accumulate(feasible_best_values))\n else:\n feasible_best_values = [\n v if s == _ValueState.Feasible else -float(\"inf\")\n for v, s in zip(values, value_states)\n ]\n best_values = list(np.maximum.accumulate(feasible_best_values))\n best_label_name = (\n \"Best Value\" if len(studies) == 1 else f\"Best Value of {study.study_name}\"\n )\n best_values_info = _ValuesInfo(best_values, None, best_label_name, value_states)\n info_list.append(\n _OptimizationHistoryInfo(\n trial_numbers=[t.number for t in trials],\n values_info=_ValuesInfo(values, None, label_name, value_states),\n best_values_info=best_values_info,\n )\n )\n\n if len(info_list) == 0:\n _logger.warning(\"There are no studies.\")\n\n feasible_trial_count = sum(\n info.values_info.states.count(_ValueState.Feasible) for info in info_list\n )\n infeasible_trial_count = sum(\n info.values_info.states.count(_ValueState.Infeasible) for info in info_list\n )\n if feasible_trial_count + infeasible_trial_count == 0:\n 
_logger.warning(\"There are no complete trials.\")\n info_list.clear()\n\n if not error_bar:\n return info_list\n\n # When error_bar=True, a list of 0 or 1 element is returned.\n if len(info_list) == 0:\n return []\n if feasible_trial_count == 0:\n _logger.warning(\"There are no feasible trials.\")\n return []\n\n all_trial_numbers = [number for info in info_list for number in info.trial_numbers]\n max_num_trial = max(all_trial_numbers) + 1\n\n def _aggregate(label_name: str, use_best_value: bool) -> tuple[list[int], _ValuesInfo]:\n # Calculate mean and std of values for each trial number.\n values: list[list[float]] = [[] for _ in range(max_num_trial)]\n states: list[list[_ValueState]] = [[] for _ in range(max_num_trial)]\n assert info_list is not None\n for trial_numbers, values_info, best_values_info in info_list:\n if use_best_value:\n assert best_values_info is not None\n values_info = best_values_info\n for n, v, s in zip(trial_numbers, values_info.values, values_info.states):\n if not math.isinf(v):\n if not use_best_value and s == _ValueState.Feasible:\n values[n].append(v)\n elif use_best_value:\n values[n].append(v)\n states[n].append(s)\n trial_numbers_union: list[int] = []\n value_states: list[_ValueState] = []\n value_means: list[float] = []\n value_stds: list[float] = []\n for i in range(max_num_trial):\n if len(states[i]) > 0 and _ValueState.Feasible in states[i]:\n value_states.append(_ValueState.Feasible)\n trial_numbers_union.append(i)\n value_means.append(np.mean(values[i]).item())\n value_stds.append(np.std(values[i]).item())\n else:\n value_states.append(_ValueState.Infeasible)\n return trial_numbers_union, _ValuesInfo(value_means, value_stds, label_name, value_states)\n\n eb_trial_numbers, eb_values_info = _aggregate(target_name, False)\n eb_best_values_info: _ValuesInfo | None = None\n if target is None:\n _, eb_best_values_info = _aggregate(\"Best Value\", True)\n return [_OptimizationHistoryInfo(eb_trial_numbers, eb_values_info, eb_best_values_info)]\n\n\ndef plot_optimization_history(\n study: Study | Sequence[Study],\n *,\n target: Callable[[FrozenTrial], float] | None = None,\n target_name: str = \"Objective Value\",\n error_bar: bool = False,\n) -> \"go.Figure\":\n \"\"\"Plot optimization history of all trials in a study.\n\n Example:\n\n The following code snippet shows how to plot optimization history.\n\n .. plotly::\n\n import optuna\n\n\n def objective(trial):\n x = trial.suggest_float(\"x\", -100, 100)\n y = trial.suggest_categorical(\"y\", [-1, 0, 1])\n return x ** 2 + y\n\n\n sampler = optuna.samplers.TPESampler(seed=10)\n study = optuna.create_study(sampler=sampler)\n study.optimize(objective, n_trials=10)\n\n fig = optuna.visualization.plot_optimization_history(study)\n fig.show()\n\n Args:\n study:\n A :class:`~optuna.study.Study` object whose trials are plotted for their target values.\n You can pass multiple studies if you want to compare those optimization histories.\n target:\n A function to specify the value to display. If it is :obj:`None` and ``study`` is being\n used for single-objective optimization, the objective values are plotted.\n\n .. 
note::\n Specify this argument if ``study`` is being used for multi-objective optimization.\n target_name:\n Target's name to display on the axis label and the legend.\n error_bar:\n A flag to show the error bar.\n\n Returns:\n A :class:`plotly.graph_objs.Figure` object.\n \"\"\"\n\n _imports.check()\n\n info_list = _get_optimization_history_info_list(study, target, target_name, error_bar)\n return _get_optimization_history_plot(info_list, target_name)\n\n\ndef _get_optimization_history_plot(\n info_list: list[_OptimizationHistoryInfo],\n target_name: str,\n) -> \"go.Figure\":\n layout = go.Layout(\n title=\"Optimization History Plot\",\n xaxis={\"title\": \"Trial\"},\n yaxis={\"title\": target_name},\n )\n\n traces = []\n for trial_numbers, values_info, best_values_info in info_list:\n infeasible_trial_numbers = [\n n for n, s in zip(trial_numbers, values_info.states) if s == _ValueState.Infeasible\n ]\n if values_info.stds is None:\n error_y = None\n feasible_trial_numbers = [\n num\n for num, s in zip(trial_numbers, values_info.states)\n if s == _ValueState.Feasible\n ]\n feasible_trial_values = []\n for num in feasible_trial_numbers:\n feasible_trial_values.append(values_info.values[num])\n infeasible_trial_values = []\n for num in infeasible_trial_numbers:\n infeasible_trial_values.append(values_info.values[num])\n else:\n if (\n _ValueState.Infeasible in values_info.states\n or _ValueState.Incomplete in values_info.states\n ):\n _logger.warning(\n \"Your study contains infeasible trials. \"\n \"In optimization history plot, \"\n \"error bars are calculated for only feasible trial values.\"\n )\n error_y = {\"type\": \"data\", \"array\": values_info.stds, \"visible\": True}\n feasible_trial_numbers = trial_numbers\n feasible_trial_values = values_info.values\n infeasible_trial_values = []\n traces.append(\n go.Scatter(\n x=feasible_trial_numbers,\n y=feasible_trial_values,\n error_y=error_y,\n mode=\"markers\",\n name=values_info.label_name,\n )\n )\n if best_values_info is not None:\n traces.append(\n go.Scatter(\n x=trial_numbers,\n y=best_values_info.values,\n name=best_values_info.label_name,\n mode=\"lines\",\n )\n )\n if best_values_info.stds is not None:\n upper = np.array(best_values_info.values) + np.array(best_values_info.stds)\n traces.append(\n go.Scatter(\n x=trial_numbers,\n y=upper,\n mode=\"lines\",\n line=dict(width=0.01),\n showlegend=False,\n )\n )\n lower = np.array(best_values_info.values) - np.array(best_values_info.stds)\n traces.append(\n go.Scatter(\n x=trial_numbers,\n y=lower,\n mode=\"none\",\n showlegend=False,\n fill=\"tonexty\",\n fillcolor=\"rgba(255,0,0,0.2)\",\n )\n )\n traces.append(\n go.Scatter(\n x=infeasible_trial_numbers,\n y=infeasible_trial_values,\n error_y=error_y,\n mode=\"markers\",\n name=\"Infeasible Trial\",\n marker={\"color\": \"#cccccc\"},\n showlegend=False,\n )\n )\n return go.Figure(data=traces, layout=layout)\n", "path": "optuna/visualization/_optimization_history.py" } ]
diff --git a/optuna/visualization/_optimization_history.py b/optuna/visualization/_optimization_history.py index b489a86c9a..3561c38073 100644 --- a/optuna/visualization/_optimization_history.py +++ b/optuna/visualization/_optimization_history.py @@ -1,11 +1,11 @@ from __future__ import annotations +from collections.abc import Callable +from collections.abc import Sequence from enum import Enum import math -from typing import Callable from typing import cast from typing import NamedTuple -from typing import Sequence import numpy as np
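As a concrete illustration of the optuna record above: the heart of the plotting change is that completed trials are partitioned into feasible and infeasible groups (via the private `_ValueState` enum) before any scatter traces are built. Below is a minimal, self-contained sketch of that partition, assuming nothing beyond the standard library; `ValueState` and `split_by_feasibility` are illustrative names, not optuna's API.

```python
from enum import Enum


class ValueState(Enum):
    FEASIBLE = 0
    INFEASIBLE = 1
    INCOMPLETE = 2


def split_by_feasibility(trial_numbers, values, states):
    # mirror of the partition done before the scatter traces are built:
    # feasible trials feed the main trace, infeasible ones the grey trace,
    # and incomplete trials are dropped from both
    feasible, infeasible = [], []
    for number, value, state in zip(trial_numbers, values, states):
        if state is ValueState.FEASIBLE:
            feasible.append((number, value))
        elif state is ValueState.INFEASIBLE:
            infeasible.append((number, value))
    return feasible, infeasible


numbers = [0, 1, 2]
values = [1.5, 0.2, float("nan")]  # nan stands in for a trial without a value
states = [ValueState.FEASIBLE, ValueState.INFEASIBLE, ValueState.INCOMPLETE]
print(split_by_feasibility(numbers, values, states))
# ([(0, 1.5)], [(1, 0.2)])
```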
docker__docker-py-1250
attach is causing an "Invalid Argument" exception from os.read ``` python stream = client.attach(container, stream=True, stdout=True, stderr=True) for chunk in stream: pass ``` Results in: ``` File "/Users/michael/work/oss/marina/marina/build.py", line 695, in watcher for chunk in stream: File ".venv/lib/python3.5/site-packages/docker/utils/socket.py", line 67, in frames_iter yield read(socket, n) File ".venv/lib/python3.5/site-packages/docker/utils/socket.py", line 25, in read return os.read(socket.fileno(), n) OSError: [Errno 22] Invalid argument ``` Using docker-py 1.10.2 on OS X 10.11.6 with docker for mac 1.12.0-rc3. Reverting to 1.9.0 fixes the issue.
[ { "content": "import errno\nimport os\nimport select\nimport struct\n\nimport six\n\ntry:\n from ..transport import NpipeSocket\nexcept ImportError:\n NpipeSocket = type(None)\n\n\nclass SocketError(Exception):\n pass\n\n\ndef read(socket, n=4096):\n \"\"\"\n Reads at most n bytes from socket\n \"\"\"\n\n recoverable_errors = (errno.EINTR, errno.EDEADLK, errno.EWOULDBLOCK)\n\n # wait for data to become available\n if not isinstance(socket, NpipeSocket):\n select.select([socket], [], [])\n\n try:\n if hasattr(socket, 'recv'):\n return socket.recv(n)\n return os.read(socket.fileno(), n)\n except EnvironmentError as e:\n if e.errno not in recoverable_errors:\n raise\n\n\ndef read_exactly(socket, n):\n \"\"\"\n Reads exactly n bytes from socket\n Raises SocketError if there isn't enough data\n \"\"\"\n data = six.binary_type()\n while len(data) < n:\n next_data = read(socket, n - len(data))\n if not next_data:\n raise SocketError(\"Unexpected EOF\")\n data += next_data\n return data\n\n\ndef next_frame_size(socket):\n \"\"\"\n Returns the size of the next frame of data waiting to be read from socket,\n according to the protocol defined here:\n\n https://docs.docker.com/engine/reference/api/docker_remote_api_v1.24/#/attach-to-a-container\n \"\"\"\n try:\n data = read_exactly(socket, 8)\n except SocketError:\n return 0\n\n _, actual = struct.unpack('>BxxxL', data)\n return actual\n\n\ndef frames_iter(socket):\n \"\"\"\n Returns a generator of frames read from socket\n \"\"\"\n n = next_frame_size(socket)\n while n > 0:\n yield read(socket, n)\n n = next_frame_size(socket)\n", "path": "docker/utils/socket.py" } ]
[ { "content": "import errno\nimport os\nimport select\nimport struct\n\nimport six\n\ntry:\n from ..transport import NpipeSocket\nexcept ImportError:\n NpipeSocket = type(None)\n\n\nclass SocketError(Exception):\n pass\n\n\ndef read(socket, n=4096):\n \"\"\"\n Reads at most n bytes from socket\n \"\"\"\n\n recoverable_errors = (errno.EINTR, errno.EDEADLK, errno.EWOULDBLOCK)\n\n # wait for data to become available\n if not isinstance(socket, NpipeSocket):\n select.select([socket], [], [])\n\n try:\n if hasattr(socket, 'recv'):\n return socket.recv(n)\n return os.read(socket.fileno(), n)\n except EnvironmentError as e:\n if e.errno not in recoverable_errors:\n raise\n\n\ndef read_exactly(socket, n):\n \"\"\"\n Reads exactly n bytes from socket\n Raises SocketError if there isn't enough data\n \"\"\"\n data = six.binary_type()\n while len(data) < n:\n next_data = read(socket, n - len(data))\n if not next_data:\n raise SocketError(\"Unexpected EOF\")\n data += next_data\n return data\n\n\ndef next_frame_size(socket):\n \"\"\"\n Returns the size of the next frame of data waiting to be read from socket,\n according to the protocol defined here:\n\n https://docs.docker.com/engine/reference/api/docker_remote_api_v1.24/#/attach-to-a-container\n \"\"\"\n try:\n data = read_exactly(socket, 8)\n except SocketError:\n return 0\n\n _, actual = struct.unpack('>BxxxL', data)\n return actual\n\n\ndef frames_iter(socket):\n \"\"\"\n Returns a generator of frames read from socket\n \"\"\"\n while True:\n n = next_frame_size(socket)\n if n == 0:\n break\n while n > 0:\n result = read(socket, n)\n n -= len(result)\n yield result\n", "path": "docker/utils/socket.py" } ]
diff --git a/docker/utils/socket.py b/docker/utils/socket.py index 164b845af..4080f253f 100644 --- a/docker/utils/socket.py +++ b/docker/utils/socket.py @@ -69,7 +69,11 @@ def frames_iter(socket): """ Returns a generator of frames read from socket """ - n = next_frame_size(socket) - while n > 0: - yield read(socket, n) + while True: n = next_frame_size(socket) + if n == 0: + break + while n > 0: + result = read(socket, n) + n -= len(result) + yield result
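The patch above rests on one invariant: a read from the attach socket may return fewer bytes than the frame header announced, so the generator has to keep reading and decrementing by `len(result)` until the frame is drained, and stop cleanly once the next frame size is 0. Here is a standalone sketch of that loop under stated assumptions: it uses an ordinary pipe instead of a Docker attach socket, and `drain_frame` is an illustrative name rather than docker-py's API.

```python
import os


def drain_frame(fileno, frame_size, chunk_size=4096):
    # os.read may return fewer bytes than requested, so keep reading and
    # subtracting what actually arrived until the announced frame is exhausted
    remaining = frame_size
    while remaining > 0:
        chunk = os.read(fileno, min(chunk_size, remaining))
        if not chunk:
            break  # EOF mid-frame; the caller decides whether that is an error
        remaining -= len(chunk)
        yield chunk


# tiny demo on an ordinary pipe rather than a Docker attach socket
r, w = os.pipe()
os.write(w, b"hello world")
os.close(w)
print(b"".join(drain_frame(r, 11, chunk_size=4)))  # b'hello world'
os.close(r)
```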
opendatacube__datacube-core-1374
Incompatibilities with xarray > 2022.03

### Expected behaviour

ODC should work with the current version of `xarray`. In `setup.py` there's an exclusion of `2022.6.0`, but I don't think that's sufficient. It'd be worth digging up the commit/PR that made that change.

### Actual behaviour

Tests are failing.

```
FAILED tests/api/test_grid_workflow.py::test_gridworkflow_with_time_depth - AssertionError
FAILED tests/api/test_virtual.py::test_aggregate - ValueError: time already exists as coordinate or variable name.
```

### Steps to reproduce the behaviour

`pytest tests/`

### Environment information

* Which ``datacube --version`` are you using?

  `develop` branch at `af59377327c363b9c52b55000b4024a0b3fbaa8b`

* What datacube deployment/environment are you running against?

  - Mambaforge
  - conda-forge
  - Python 3.10
[ { "content": "#!/usr/bin/env python\n\nfrom setuptools import setup, find_packages\n\ntests_require = [\n 'hypothesis',\n 'pycodestyle',\n 'pylint',\n 'pytest',\n 'pytest-cov',\n 'pytest-timeout',\n 'pytest-httpserver',\n 'moto',\n]\ndoc_require = [\n 'Sphinx',\n 'sphinx_rtd_theme',\n 'sphinx_autodoc_typehints', # Propagate mypy info into docs\n 'sphinx-click',\n 'recommonmark',\n 'setuptools', # version related dependencies\n 'setuptools_scm[toml]',\n]\n\nextras_require = {\n 'performance': ['ciso8601', 'bottleneck'],\n 'distributed': ['distributed', 'dask[distributed]'],\n 'doc': doc_require,\n 's3': ['boto3', 'botocore'],\n 'test': tests_require,\n 'cf': ['compliance-checker>=4.0.0'],\n}\n\nextras_require['dev'] = sorted(set(sum([extras_require[k] for k in [\n 'test',\n 'doc',\n 'performance',\n 's3',\n 'distributed',\n]], [])))\n\n# An 'all' option, following ipython naming conventions.\nextras_require['all'] = sorted(set(sum(extras_require.values(), [])))\n\nextra_plugins = dict(read=[], write=[], index=[])\n\nsetup(\n name='datacube',\n python_requires='>=3.8.0',\n\n url='https://github.com/opendatacube/datacube-core',\n author='Open Data Cube',\n maintainer='Open Data Cube',\n maintainer_email='',\n description='An analysis environment for satellite and other earth observation data',\n long_description=open('README.rst').read(),\n long_description_content_type='text/x-rst',\n license='Apache License 2.0',\n classifiers=[\n \"Development Status :: 4 - Beta\",\n \"Intended Audience :: Developers\",\n \"Intended Audience :: Science/Research\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Natural Language :: English\",\n \"Operating System :: MacOS :: MacOS X\",\n \"Operating System :: POSIX\",\n \"Operating System :: POSIX :: BSD\",\n \"Operating System :: POSIX :: Linux\",\n \"Operating System :: Microsoft :: Windows\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: 3.10\",\n \"Topic :: Scientific/Engineering :: GIS\",\n \"Topic :: Scientific/Engineering :: Information Analysis\",\n ],\n\n packages=find_packages(\n exclude=('tests', 'tests.*',\n 'integration_tests', 'integration_tests.*')\n ),\n package_data={\n '': ['*.yaml', '*/*.yaml'],\n 'datacube': ['py.typed'],\n },\n scripts=[],\n install_requires=[\n 'affine',\n 'pyproj>=2.5',\n 'shapely>=1.6.4',\n 'cachetools',\n 'click>=5.0',\n 'cloudpickle>=0.4',\n 'dask[array]',\n 'distributed',\n 'jsonschema',\n 'netcdf4',\n 'numpy',\n 'psycopg2',\n 'lark',\n 'pandas',\n 'python-dateutil',\n 'pyyaml',\n 'rasterio>=1.3.2', # Warping broken in 1.3.0 and 1.3.1\n 'sqlalchemy',\n 'GeoAlchemy2',\n 'toolz',\n 'xarray>=0.9,!=2022.6.0', # >0.9 fixes most problems with `crs` attributes being lost\n ],\n extras_require=extras_require,\n tests_require=tests_require,\n\n entry_points={\n 'console_scripts': [\n 'datacube = datacube.scripts.cli_app:cli',\n 'datacube-search = datacube.scripts.search_tool:cli',\n 'datacube-worker = datacube.execution.worker:main',\n ],\n 'datacube.plugins.io.read': [\n 'netcdf = datacube.drivers.netcdf.driver:reader_driver_init',\n *extra_plugins['read'],\n ],\n 'datacube.plugins.io.write': [\n 'netcdf = datacube.drivers.netcdf.driver:writer_driver_init',\n *extra_plugins['write'],\n ],\n 'datacube.plugins.index': [\n 'default = datacube.index.postgres.index:index_driver_init',\n 'null = datacube.index.null.index:index_driver_init',\n 'memory = 
datacube.index.memory.index:index_driver_init',\n 'postgis = datacube.index.postgis.index:index_driver_init',\n *extra_plugins['index'],\n ],\n },\n)\n", "path": "setup.py" } ]
[ { "content": "#!/usr/bin/env python\n\nfrom setuptools import setup, find_packages\n\ntests_require = [\n 'hypothesis',\n 'pycodestyle',\n 'pylint',\n 'pytest',\n 'pytest-cov',\n 'pytest-timeout',\n 'pytest-httpserver',\n 'moto',\n]\ndoc_require = [\n 'Sphinx',\n 'sphinx_rtd_theme',\n 'sphinx_autodoc_typehints', # Propagate mypy info into docs\n 'sphinx-click',\n 'recommonmark',\n 'setuptools', # version related dependencies\n 'setuptools_scm[toml]',\n]\n\nextras_require = {\n 'performance': ['ciso8601', 'bottleneck'],\n 'distributed': ['distributed', 'dask[distributed]'],\n 'doc': doc_require,\n 's3': ['boto3', 'botocore'],\n 'test': tests_require,\n 'cf': ['compliance-checker>=4.0.0'],\n}\n\nextras_require['dev'] = sorted(set(sum([extras_require[k] for k in [\n 'test',\n 'doc',\n 'performance',\n 's3',\n 'distributed',\n]], [])))\n\n# An 'all' option, following ipython naming conventions.\nextras_require['all'] = sorted(set(sum(extras_require.values(), [])))\n\nextra_plugins = dict(read=[], write=[], index=[])\n\nsetup(\n name='datacube',\n python_requires='>=3.8.0',\n\n url='https://github.com/opendatacube/datacube-core',\n author='Open Data Cube',\n maintainer='Open Data Cube',\n maintainer_email='',\n description='An analysis environment for satellite and other earth observation data',\n long_description=open('README.rst').read(),\n long_description_content_type='text/x-rst',\n license='Apache License 2.0',\n classifiers=[\n \"Development Status :: 4 - Beta\",\n \"Intended Audience :: Developers\",\n \"Intended Audience :: Science/Research\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Natural Language :: English\",\n \"Operating System :: MacOS :: MacOS X\",\n \"Operating System :: POSIX\",\n \"Operating System :: POSIX :: BSD\",\n \"Operating System :: POSIX :: Linux\",\n \"Operating System :: Microsoft :: Windows\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: 3.10\",\n \"Topic :: Scientific/Engineering :: GIS\",\n \"Topic :: Scientific/Engineering :: Information Analysis\",\n ],\n\n packages=find_packages(\n exclude=('tests', 'tests.*',\n 'integration_tests', 'integration_tests.*')\n ),\n package_data={\n '': ['*.yaml', '*/*.yaml'],\n 'datacube': ['py.typed'],\n },\n scripts=[],\n install_requires=[\n 'affine',\n 'pyproj>=2.5',\n 'shapely>=1.6.4',\n 'cachetools',\n 'click>=5.0',\n 'cloudpickle>=0.4',\n 'dask[array]',\n 'distributed',\n 'jsonschema',\n 'netcdf4',\n 'numpy',\n 'psycopg2',\n 'lark',\n 'pandas',\n 'python-dateutil',\n 'pyyaml',\n 'rasterio>=1.3.2', # Warping broken in 1.3.0 and 1.3.1\n 'sqlalchemy',\n 'GeoAlchemy2',\n 'toolz',\n 'xarray>=0.9,<2022.6', # >0.9 fixes most problems with `crs` attributes being lost\n ],\n extras_require=extras_require,\n tests_require=tests_require,\n\n entry_points={\n 'console_scripts': [\n 'datacube = datacube.scripts.cli_app:cli',\n 'datacube-search = datacube.scripts.search_tool:cli',\n 'datacube-worker = datacube.execution.worker:main',\n ],\n 'datacube.plugins.io.read': [\n 'netcdf = datacube.drivers.netcdf.driver:reader_driver_init',\n *extra_plugins['read'],\n ],\n 'datacube.plugins.io.write': [\n 'netcdf = datacube.drivers.netcdf.driver:writer_driver_init',\n *extra_plugins['write'],\n ],\n 'datacube.plugins.index': [\n 'default = datacube.index.postgres.index:index_driver_init',\n 'null = datacube.index.null.index:index_driver_init',\n 'memory = 
datacube.index.memory.index:index_driver_init',\n 'postgis = datacube.index.postgis.index:index_driver_init',\n *extra_plugins['index'],\n ],\n },\n)\n", "path": "setup.py" } ]
diff --git a/setup.py b/setup.py index 2721f9506a..019a9e2dbe 100755 --- a/setup.py +++ b/setup.py @@ -106,7 +106,7 @@ 'sqlalchemy', 'GeoAlchemy2', 'toolz', - 'xarray>=0.9,!=2022.6.0', # >0.9 fixes most problems with `crs` attributes being lost + 'xarray>=0.9,<2022.6', # >0.9 fixes most problems with `crs` attributes being lost ], extras_require=extras_require, tests_require=tests_require,
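To make the effect of the new pin concrete, the small check below shows which xarray versions `>=0.9,<2022.6` admits. It assumes the third-party `packaging` library, which is not among datacube's dependencies in the record above.

```python
from packaging.specifiers import SpecifierSet
from packaging.version import Version

pin = SpecifierSet(">=0.9,<2022.6")  # the constraint introduced by the diff above
for candidate in ("0.9", "2022.3.0", "2022.6.0"):
    print(candidate, Version(candidate) in pin)
# 0.9 True, 2022.3.0 True, 2022.6.0 False — the release previously excluded
# with != now falls outside the allowed range entirely
```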
ethereum__web3.py-1095
Disallow python 3.5.1

### What was wrong?

It looks like `typing.NewType` may not be available in python 3.5.1
https://github.com/ethereum/web3.py/issues/1091

### How can it be fixed?

Check what version `NewType` was added and restrict our python versions as declared in `setup.py` to be `>=` that version.
[ { "content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\nfrom setuptools import (\n find_packages,\n setup,\n)\n\nextras_require = {\n 'tester': [\n \"eth-tester[py-evm]==0.1.0-beta.32\",\n \"py-geth>=2.0.1,<3.0.0\",\n ],\n 'testrpc': [\"eth-testrpc>=1.3.3,<2.0.0\"],\n 'linter': [\n \"flake8==3.4.1\",\n \"isort>=4.2.15,<5\",\n ],\n 'docs': [\n \"mock\",\n \"sphinx-better-theme>=0.1.4\",\n \"click>=5.1\",\n \"configparser==3.5.0\",\n \"contextlib2>=0.5.4\",\n #\"eth-testrpc>=0.8.0\",\n #\"ethereum-tester-client>=1.1.0\",\n \"ethtoken\",\n \"py-geth>=1.4.0\",\n \"py-solc>=0.4.0\",\n \"pytest>=2.7.2\",\n \"sphinx\",\n \"sphinx_rtd_theme>=0.1.9\",\n \"toposort>=1.4\",\n \"urllib3\",\n \"web3>=2.1.0\",\n \"wheel\"\n ],\n 'dev': [\n \"bumpversion\",\n \"flaky>=3.3.0\",\n \"hypothesis>=3.31.2\",\n \"pytest>=3.5.0,<4\",\n \"pytest-mock==1.*\",\n \"pytest-pythonpath>=0.3\",\n \"pytest-watch==4.*\",\n \"pytest-xdist==1.*\",\n \"setuptools>=36.2.0\",\n \"tox>=1.8.0\",\n \"tqdm\",\n \"when-changed\"\n ]\n}\n\nextras_require['dev'] = (\n extras_require['tester'] +\n extras_require['linter'] +\n extras_require['docs'] +\n extras_require['dev']\n)\n\nsetup(\n name='web3',\n # *IMPORTANT*: Don't manually change the version here. Use the 'bumpversion' utility.\n version='4.7.1',\n description=\"\"\"Web3.py\"\"\",\n long_description_markdown_filename='README.md',\n author='Piper Merriam',\n author_email='[email protected]',\n url='https://github.com/ethereum/web3.py',\n include_package_data=True,\n install_requires=[\n \"toolz>=0.9.0,<1.0.0;implementation_name=='pypy'\",\n \"cytoolz>=0.9.0,<1.0.0;implementation_name=='cpython'\",\n \"eth-abi>=1.2.0,<2.0.0\",\n \"eth-account>=0.2.1,<0.4.0\",\n \"eth-utils>=1.0.1,<2.0.0\",\n \"hexbytes>=0.1.0,<1.0.0\",\n \"lru-dict>=1.1.6,<2.0.0\",\n \"eth-hash[pycryptodome]>=0.2.0,<1.0.0\",\n \"requests>=2.16.0,<3.0.0\",\n \"websockets>=6.0.0,<7.0.0\",\n \"pypiwin32>=223;platform_system=='Windows'\",\n ],\n setup_requires=['setuptools-markdown'],\n python_requires='>=3.5, <4',\n extras_require=extras_require,\n py_modules=['web3', 'ens'],\n license=\"MIT\",\n zip_safe=False,\n keywords='ethereum',\n packages=find_packages(exclude=[\"tests\", \"tests.*\"]),\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: MIT License',\n 'Natural Language :: English',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n ],\n)\n", "path": "setup.py" } ]
[ { "content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\nfrom setuptools import (\n find_packages,\n setup,\n)\n\nextras_require = {\n 'tester': [\n \"eth-tester[py-evm]==0.1.0-beta.32\",\n \"py-geth>=2.0.1,<3.0.0\",\n ],\n 'testrpc': [\"eth-testrpc>=1.3.3,<2.0.0\"],\n 'linter': [\n \"flake8==3.4.1\",\n \"isort>=4.2.15,<5\",\n ],\n 'docs': [\n \"mock\",\n \"sphinx-better-theme>=0.1.4\",\n \"click>=5.1\",\n \"configparser==3.5.0\",\n \"contextlib2>=0.5.4\",\n #\"eth-testrpc>=0.8.0\",\n #\"ethereum-tester-client>=1.1.0\",\n \"ethtoken\",\n \"py-geth>=1.4.0\",\n \"py-solc>=0.4.0\",\n \"pytest>=2.7.2\",\n \"sphinx\",\n \"sphinx_rtd_theme>=0.1.9\",\n \"toposort>=1.4\",\n \"urllib3\",\n \"web3>=2.1.0\",\n \"wheel\"\n ],\n 'dev': [\n \"bumpversion\",\n \"flaky>=3.3.0\",\n \"hypothesis>=3.31.2\",\n \"pytest>=3.5.0,<4\",\n \"pytest-mock==1.*\",\n \"pytest-pythonpath>=0.3\",\n \"pytest-watch==4.*\",\n \"pytest-xdist==1.*\",\n \"setuptools>=36.2.0\",\n \"tox>=1.8.0\",\n \"tqdm\",\n \"when-changed\"\n ]\n}\n\nextras_require['dev'] = (\n extras_require['tester'] +\n extras_require['linter'] +\n extras_require['docs'] +\n extras_require['dev']\n)\n\nsetup(\n name='web3',\n # *IMPORTANT*: Don't manually change the version here. Use the 'bumpversion' utility.\n version='4.7.1',\n description=\"\"\"Web3.py\"\"\",\n long_description_markdown_filename='README.md',\n author='Piper Merriam',\n author_email='[email protected]',\n url='https://github.com/ethereum/web3.py',\n include_package_data=True,\n install_requires=[\n \"toolz>=0.9.0,<1.0.0;implementation_name=='pypy'\",\n \"cytoolz>=0.9.0,<1.0.0;implementation_name=='cpython'\",\n \"eth-abi>=1.2.0,<2.0.0\",\n \"eth-account>=0.2.1,<0.4.0\",\n \"eth-utils>=1.0.1,<2.0.0\",\n \"hexbytes>=0.1.0,<1.0.0\",\n \"lru-dict>=1.1.6,<2.0.0\",\n \"eth-hash[pycryptodome]>=0.2.0,<1.0.0\",\n \"requests>=2.16.0,<3.0.0\",\n \"websockets>=6.0.0,<7.0.0\",\n \"pypiwin32>=223;platform_system=='Windows'\",\n ],\n setup_requires=['setuptools-markdown'],\n python_requires='>=3.5.2, <4',\n extras_require=extras_require,\n py_modules=['web3', 'ens'],\n license=\"MIT\",\n zip_safe=False,\n keywords='ethereum',\n packages=find_packages(exclude=[\"tests\", \"tests.*\"]),\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: MIT License',\n 'Natural Language :: English',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n ],\n)\n", "path": "setup.py" } ]
diff --git a/setup.py b/setup.py index 87e5defc7d..8c4a8a4ede 100644 --- a/setup.py +++ b/setup.py @@ -81,7 +81,7 @@ "pypiwin32>=223;platform_system=='Windows'", ], setup_requires=['setuptools-markdown'], - python_requires='>=3.5, <4', + python_requires='>=3.5.2, <4', extras_require=extras_require, py_modules=['web3', 'ens'], license="MIT",
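The tightened `python_requires` floor matches where `typing.NewType` first appeared (CPython 3.5.2, per the typing changelog). Below is a small sketch of the runtime failure mode that floor prevents; `BlockNumber` is a made-up alias used only to exercise `NewType`, not a web3.py type.

```python
import sys

# the stricter floor from the diff above: typing.NewType is documented as new in 3.5.2
if sys.version_info < (3, 5, 2):
    raise RuntimeError("Python >= 3.5.2 is required (typing.NewType is unavailable earlier)")

from typing import NewType

BlockNumber = NewType("BlockNumber", int)  # made-up alias, only to exercise NewType
print(BlockNumber(12345))  # 12345 — NewType is an identity callable at runtime
```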